DPDK patches and discussions
* Re: [PATCH] version: 22.03-rc0
  2021-11-30 15:35  0% ` Thomas Monjalon
@ 2021-11-30 19:51  3%   ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-11-30 19:51 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, Aaron Conole, Michael Santana, Dodji Seketeli

On Tue, Nov 30, 2021 at 4:35 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 29/11/2021 14:16, David Marchand:
> > Start a new release cycle with empty release notes.
> > Bump version and ABI minor.
> > Enable ABI checks using latest libabigail.
> >
> > Signed-off-by: David Marchand <david.marchand@redhat.com>
> [...]
> > -      LIBABIGAIL_VERSION: libabigail-1.8
> > +      LIBABIGAIL_VERSION: libabigail-2.0
>
> What is the reason for this update? Can we still use the old version?

Nothing prevents using the old version; I just used this chance to bump
the version.

I talked with Dodji, 2.0 is the version used in Fedora for ABI checks.
This version comes with enhancements and at least a fix for a bug we
hit when writing exception rules in DPDK:
https://sourceware.org/bugzilla/show_bug.cgi?id=28060


-- 
David Marchand



* Re: [PATCH] version: 22.03-rc0
  2021-11-29 13:16 11% [PATCH] version: 22.03-rc0 David Marchand
@ 2021-11-30 15:35  0% ` Thomas Monjalon
  2021-11-30 19:51  3%   ` David Marchand
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-11-30 15:35 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Aaron Conole, Michael Santana

29/11/2021 14:16, David Marchand:
> Start a new release cycle with empty release notes.
> Bump version and ABI minor.
> Enable ABI checks using latest libabigail.
> 
> Signed-off-by: David Marchand <david.marchand@redhat.com>
[...]
> -      LIBABIGAIL_VERSION: libabigail-1.8
> +      LIBABIGAIL_VERSION: libabigail-2.0

What is the reason for this update? Can we still use the old version?
Maybe add a small comment in the commit log.

Acked-by: Thomas Monjalon <thomas@monjalon.net>

Thanks




* [PATCH] version: 22.03-rc0
@ 2021-11-29 13:16 11% David Marchand
  2021-11-30 15:35  0% ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-11-29 13:16 UTC (permalink / raw)
  To: dev; +Cc: Aaron Conole, Michael Santana

Start a new release cycle with empty release notes.
Bump version and ABI minor.
Enable ABI checks using latest libabigail.

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 .github/workflows/build.yml            |   6 +-
 .travis.yml                            |  23 ++++-
 ABI_VERSION                            |   2 +-
 VERSION                                |   2 +-
 doc/guides/rel_notes/index.rst         |   1 +
 doc/guides/rel_notes/release_22_03.rst | 138 +++++++++++++++++++++++++
 6 files changed, 165 insertions(+), 7 deletions(-)
 create mode 100644 doc/guides/rel_notes/release_22_03.rst

diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 2e9c4be6d0..1a29e107be 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -20,10 +20,10 @@ jobs:
       BUILD_DOCS: ${{ contains(matrix.config.checks, 'doc') }}
       CC: ccache ${{ matrix.config.compiler }}
       DEF_LIB: ${{ matrix.config.library }}
-      LIBABIGAIL_VERSION: libabigail-1.8
+      LIBABIGAIL_VERSION: libabigail-2.0
       MINI: ${{ matrix.config.mini != '' }}
       PPC64LE: ${{ matrix.config.cross == 'ppc64le' }}
-      REF_GIT_TAG: none
+      REF_GIT_TAG: v21.11
       RUN_TESTS: ${{ contains(matrix.config.checks, 'tests') }}
 
     strategy:
@@ -40,7 +40,7 @@ jobs:
           - os: ubuntu-18.04
             compiler: gcc
             library: shared
-            checks: doc+tests
+            checks: abi+doc+tests
           - os: ubuntu-18.04
             compiler: clang
             library: static
diff --git a/.travis.yml b/.travis.yml
index 4bb5bf629e..da5273048f 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -41,8 +41,8 @@ script: ./.ci/${TRAVIS_OS_NAME}-build.sh
 
 env:
   global:
-    - LIBABIGAIL_VERSION=libabigail-1.8
-    - REF_GIT_TAG=none
+    - LIBABIGAIL_VERSION=libabigail-2.0
+    - REF_GIT_TAG=v21.11
 
 jobs:
   include:
@@ -61,6 +61,14 @@ jobs:
         packages:
           - *required_packages
           - *doc_packages
+  - env: DEF_LIB="shared" ABI_CHECKS=true
+    arch: amd64
+    compiler: gcc
+    addons:
+      apt:
+        packages:
+          - *required_packages
+          - *libabigail_build_packages
   # x86_64 clang jobs
   - env: DEF_LIB="static"
     arch: amd64
@@ -137,6 +145,17 @@ jobs:
         packages:
           - *required_packages
           - *doc_packages
+  - env: DEF_LIB="shared" ABI_CHECKS=true
+    dist: focal
+    arch: arm64-graviton2
+    virt: vm
+    group: edge
+    compiler: gcc
+    addons:
+      apt:
+        packages:
+          - *required_packages
+          - *libabigail_build_packages
   # aarch64 clang jobs
   - env: DEF_LIB="static"
     dist: focal
diff --git a/ABI_VERSION b/ABI_VERSION
index b090fe57f6..70a91e23ec 100644
--- a/ABI_VERSION
+++ b/ABI_VERSION
@@ -1 +1 @@
-22.0
+22.1
diff --git a/VERSION b/VERSION
index b570734337..25bb269237 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-21.11.0
+22.03.0-rc0
diff --git a/doc/guides/rel_notes/index.rst b/doc/guides/rel_notes/index.rst
index 78861ee57b..876ffd28f6 100644
--- a/doc/guides/rel_notes/index.rst
+++ b/doc/guides/rel_notes/index.rst
@@ -8,6 +8,7 @@ Release Notes
     :maxdepth: 1
     :numbered:
 
+    release_22_03
     release_21_11
     release_21_08
     release_21_05
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
new file mode 100644
index 0000000000..6d99d1eaa9
--- /dev/null
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -0,0 +1,138 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright 2021 The DPDK contributors
+
+.. include:: <isonum.txt>
+
+DPDK Release 22.03
+==================
+
+.. **Read this first.**
+
+   The text in the sections below explains how to update the release notes.
+
+   Use proper spelling, capitalization and punctuation in all sections.
+
+   Variable and config names should be quoted as fixed width text:
+   ``LIKE_THIS``.
+
+   Build the docs and view the output file to ensure the changes are correct::
+
+      ninja -C build doc
+      xdg-open build/doc/guides/html/rel_notes/release_22_03.html
+
+
+New Features
+------------
+
+.. This section should contain new features added in this release.
+   Sample format:
+
+   * **Add a title in the past tense with a full stop.**
+
+     Add a short 1-2 sentence description in the past tense.
+     The description should be enough to allow someone scanning
+     the release notes to understand the new feature.
+
+     If the feature adds a lot of sub-features you can use a bullet list
+     like this:
+
+     * Added feature foo to do something.
+     * Enhanced feature bar to do something else.
+
+     Refer to the previous release notes for examples.
+
+     Suggested order in release notes items:
+     * Core libs (EAL, mempool, ring, mbuf, buses)
+     * Device abstraction libs and PMDs (ordered alphabetically by vendor name)
+       - ethdev (lib, PMDs)
+       - cryptodev (lib, PMDs)
+       - eventdev (lib, PMDs)
+       - etc
+     * Other libs
+     * Apps, Examples, Tools (if significant)
+
+     This section is a comment. Do not overwrite or remove it.
+     Also, make sure to start the actual text at the margin.
+     =======================================================
+
+
+Removed Items
+-------------
+
+.. This section should contain removed items in this release. Sample format:
+
+   * Add a short 1-2 sentence description of the removed item
+     in the past tense.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
+
+
+API Changes
+-----------
+
+.. This section should contain API changes. Sample format:
+
+   * sample: Add a short 1-2 sentence description of the API change
+     which was announced in the previous releases and made in this release.
+     Start with a scope label like "ethdev:".
+     Use fixed width quotes for ``function_names`` or ``struct_names``.
+     Use the past tense.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
+
+
+ABI Changes
+-----------
+
+.. This section should contain ABI changes. Sample format:
+
+   * sample: Add a short 1-2 sentence description of the ABI change
+     which was announced in the previous releases and made in this release.
+     Start with a scope label like "ethdev:".
+     Use fixed width quotes for ``function_names`` or ``struct_names``.
+     Use the past tense.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
+
+* No ABI change that would break compatibility with 21.11.
+
+
+Known Issues
+------------
+
+.. This section should contain new known issues in this release. Sample format:
+
+   * **Add title in present tense with full stop.**
+
+     Add a short 1-2 sentence description of the known issue
+     in the present tense. Add information on any known workarounds.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
+
+
+Tested Platforms
+----------------
+
+.. This section should contain a list of platforms that were tested
+   with this release.
+
+   The format is:
+
+   * <vendor> platform with <vendor> <type of devices> combinations
+
+     * List of CPU
+     * List of OS
+     * List of devices
+     * Other relevant details...
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
-- 
2.23.0



* DPDK 21.11 released!
@ 2021-11-26 20:34  4% David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-11-26 20:34 UTC (permalink / raw)
  To: announce; +Cc: Thomas Monjalon

A new major release is available:
    https://fast.dpdk.org/rel/dpdk-21.11.tar.xz

This is a big DPDK release.
    1875 commits from 204 authors
    2413 files changed, 259559 insertions(+), 87876 deletions(-)

The branch 21.11 should be supported for at least two years,
making it recommended for system integration and deployment.

The new major ABI version is 22.
The next releases 22.03 and 22.07 will be ABI compatible with 21.11.
As you probably noticed, the year 2022 will see only two intermediate
releases before the next 22.11 LTS.

Below are some new features, grouped by category.
* General
    - hugetlbfs subdirectories
    - AddressSanitizer (ASan) integration for debug
    - mempool flag for non-IO usages
    - device class for DMA accelerators and drivers for
      HiSilicon, Intel DSA, Intel IOAT, Marvell CNXK and NXP DPAA
    - device class for GPU devices and driver for NVIDIA CUDA
    - Toeplitz hash using Galois Fields New Instructions (GFNI)
* Networking
    - MTU handling rework
    - get all MAC addresses of a port
    - RSS based on L3/L4 checksum fields
    - flow match on L2TPv2 and PPP
    - flow flex parser for custom header
    - control delivery of HW Rx metadata
    - transfer flows API rework
    - shared Rx queue
    - Windows support of Intel e1000, ixgbe and iavf
    - driver for NXP ENETFEC
    - vDPA driver for Xilinx devices
    - virtio RSS
    - vhost power monitor wakeup
    - testpmd multi-process
    - pcapng library and dumpcap tool
* API/ABI
    - API namespace improvements and cleanups
    - API internals hidden
    - flags check for future ABI compatibility

More details in the release notes:
    http://doc.dpdk.org/guides/rel_notes/release_21_11.html


There are 55 new contributors (including authors, reviewers and testers).
Welcome to Abhijit Sinha, Ady Agbarih, Alexander Bechikov, Alice Michael,
Artur Tyminski, Ben Magistro, Ben Pfaff, Charles Brett, Chengfeng Ye,
Christopher Pau, Daniel Martin Buckley, Danny Patel, Dariusz Sosnowski,
David George, Elena Agostini, Ganapati Kundapura, Georg Sauthoff,
Hanumanth Reddy Pothula, Harneet Singh, Huichao Cai, Idan Hackmon,
Ilyes Ben Hamouda, Jilei Chen, Jonathan Erb, Kumara Parameshwaran,
Lewei Yang, Liang Longfeng, Longfeng Liang, Maciej Fijalkowski,
Maciej Paczkowski, Maciej Szwed, Marcin Domagala, Miao Li,
Michal Berger, Michal Michalik, Mihai Pogonaru, Mohamad Noor Alim Hussin,
Nikhil Vasoya, Pawel Malinowski, Pei Zhang, Pravin Pathak,
Przemyslaw Zegan, Qiming Chen, Rashmi Shetty, Richard Eklycke,
Sean Zhang, Siddaraju DH, Steve Rempe, Sylwester Dziedziuch,
Volodymyr Fialko, Wojciech Drewek, Wojciech Liguzinski, Xingguang He,
Yu Wenjun, Yvonne Yang.


Below is the number of commits per employer (with authors count):
    525    Intel (64)
    331    NVIDIA (29)
    312    Marvell (28)
    155    OKTET Labs (5)
     91    Huawei (7)
     89    Red Hat (6)
     75    Broadcom (11)
     67    NXP (8)
     49    Arm (5)
     34    Trustnet (1)
     29    Microsoft (4)
     13    6WIND (2)
     10    Xilinx (1)


A big thank you to all the courageous people who took on the unrewarding
task of reviewing others' work.
Based on Reviewed-by and Acked-by tags, the top non-PMD reviewers are:
    113    Akhil Goyal <gakhil@marvell.com>
     83    Ferruh Yigit <ferruh.yigit@intel.com>
     70    Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
     51    Ray Kinsella <mdr@ashroe.eu>
     50    Konstantin Ananyev <konstantin.ananyev@intel.com>
     47    Bruce Richardson <bruce.richardson@intel.com>
     46    Conor Walsh <conor.walsh@intel.com>
     45    David Marchand <david.marchand@redhat.com>
     39    Ruifeng Wang <ruifeng.wang@arm.com>
     37    Jerin Jacob <jerinj@marvell.com>
     36    Olivier Matz <olivier.matz@6wind.com>
     36    Fan Zhang <roy.fan.zhang@intel.com>
     32    Chenbo Xia <chenbo.xia@intel.com>
     32    Ajit Khaparde <ajit.khaparde@broadcom.com>
     25    Ori Kam <orika@nvidia.com>
     23    Kevin Laatz <kevin.laatz@intel.com>
     22    Ciara Power <ciara.power@intel.com>
     20    Thomas Monjalon <thomas@monjalon.net>
     19    Xiaoyun Li <xiaoyun.li@intel.com>
     18    Maxime Coquelin <maxime.coquelin@redhat.com>


The new features for 22.03 may be submitted during the next 4 weeks so
that we can all enjoy a good break at the end of this year.
2022 will see a change of pace in release timing; let's make the best
of it to do good reviews.

DPDK 22.03 is scheduled for early March:
        http://core.dpdk.org/roadmap#dates
Please share your roadmap.

Thanks everyone!


-- 
David Marchand



* Re: [PATCH v3] ethdev: deprecate header fields and metadata flow actions
  2021-11-25 12:31  4%   ` Ferruh Yigit
@ 2021-11-25 12:50  0%     ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-11-25 12:50 UTC (permalink / raw)
  To: Ray Kinsella, Ori Kam, Ferruh Yigit
  Cc: dev, Viacheslav Ovsiienko, Andrew Rybchenko, David Marchand

25/11/2021 13:31, Ferruh Yigit:
> On 11/24/2021 3:37 PM, Viacheslav Ovsiienko wrote:
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index 6d087c64ef..d04a606b7d 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -101,6 +101,20 @@ Deprecation Notices
> >     is deprecated as ambiguous with respect to the embedded switch. The use of
> >     these attributes will become invalid starting from DPDK 22.11.
> >   
> > +* ethdev: Actions ``OF_SET_MPLS_TTL``, ``OF_DEC_MPLS_TTL``, ``OF_SET_NW_TTL``,
> > +  ``OF_COPY_TTL_OUT``, ``OF_COPY_TTL_IN`` are deprecated as not supported by
> > +  PMDs, will be removed in DPDK 22.11.
> > +
> > +* ethdev: Actions ``OF_DEC_NW_TTL``, ``SET_IPV4_SRC``, ``SET_IPV4_DST``,
> > +  ``SET_IPV6_SRC``, ``SET_IPV6_DST``, ``SET_TP_SRC``, ``SET_TP_DST``,
> > +  ``DEC_TTL``, ``SET_TTL``, ``SET_MAC_SRC``, ``SET_MAC_DST``, ``INC_TCP_SEQ``,
> > +  ``DEC_TCP_SEQ``, ``INC_TCP_ACK``, ``DEC_TCP_ACK``, ``SET_IPV4_DSCP``,
> > +  ``SET_IPV6_DSCP``, ``SET_TAG``, ``SET_META`` are deprecated as superseded
> > +  by generic MODIFY_FIELD action, will be removed in DPDK 22.11.
> > +
> > +* ethdev: Actions ``OF_SET_VLAN_VID``, ``OF_SET_VLAN_PCP`` are deprecated
> > +  as superseded by generic MODIFY_FIELD action.
> > +
> 
> 
> I have a question about an ABI/API related issue for rte_flow support.
> 
> If a driver removes support for a flow API item/action, it directly impacts
> the user application. An application that previously worked may stop working
> and require a code update; this is something we want to prevent with the
> ABI policy. And this kind of change is not caught by our tools.
> 
> Do we have a process to deprecate/remove support for a flow API item/action?
> For example, that it can only be removed in an ABI-breaking release...

If possible, we should avoid removing them, or dropping support in a driver.
I think removing a feature could be considered only if not too many drivers
use it, or if it becomes a real burden to maintain.
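
As an illustration, here is a minimal sketch of migrating from the
deprecated SET_IPV4_SRC action to the generic MODIFY_FIELD action
(this assumes the 21.11 layout of struct rte_flow_action_modify_field;
check the headers of your release before relying on the exact fields):

  #include <rte_flow.h>

  /* Set the IPv4 source address to 192.0.2.1 via MODIFY_FIELD instead
   * of the deprecated RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC. */
  static const struct rte_flow_action_modify_field set_ipv4_src = {
  	.operation = RTE_FLOW_MODIFY_SET,
  	.dst = {
  		.field = RTE_FLOW_FIELD_IPV4_SRC,
  	},
  	.src = {
  		.field = RTE_FLOW_FIELD_VALUE,
  		.value = { 192, 0, 2, 1 },	/* immediate source data */
  	},
  	.width = 32,	/* rewrite the full 32-bit address */
  };

  static const struct rte_flow_action actions[] = {
  	{ .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, .conf = &set_ipv4_src },
  	{ .type = RTE_FLOW_ACTION_TYPE_END },
  };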




* Re: [PATCH v3] ethdev: deprecate header fields and metadata flow actions
  @ 2021-11-25 12:31  4%   ` Ferruh Yigit
  2021-11-25 12:50  0%     ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-11-25 12:31 UTC (permalink / raw)
  To: Ray Kinsella, Thomas Monjalon, Ori Kam
  Cc: thomas, dev, Viacheslav Ovsiienko, Andrew Rybchenko, David Marchand

On 11/24/2021 3:37 PM, Viacheslav Ovsiienko wrote:
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 6d087c64ef..d04a606b7d 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -101,6 +101,20 @@ Deprecation Notices
>     is deprecated as ambiguous with respect to the embedded switch. The use of
>     these attributes will become invalid starting from DPDK 22.11.
>   
> +* ethdev: Actions ``OF_SET_MPLS_TTL``, ``OF_DEC_MPLS_TTL``, ``OF_SET_NW_TTL``,
> +  ``OF_COPY_TTL_OUT``, ``OF_COPY_TTL_IN`` are deprecated as not supported by
> +  PMDs, will be removed in DPDK 22.11.
> +
> +* ethdev: Actions ``OF_DEC_NW_TTL``, ``SET_IPV4_SRC``, ``SET_IPV4_DST``,
> +  ``SET_IPV6_SRC``, ``SET_IPV6_DST``, ``SET_TP_SRC``, ``SET_TP_DST``,
> +  ``DEC_TTL``, ``SET_TTL``, ``SET_MAC_SRC``, ``SET_MAC_DST``, ``INC_TCP_SEQ``,
> +  ``DEC_TCP_SEQ``, ``INC_TCP_ACK``, ``DEC_TCP_ACK``, ``SET_IPV4_DSCP``,
> +  ``SET_IPV6_DSCP``, ``SET_TAG``, ``SET_META`` are deprecated as superseded
> +  by generic MODIFY_FIELD action, will be removed in DPDK 22.11.
> +
> +* ethdev: Actions ``OF_SET_VLAN_VID``, ``OF_SET_VLAN_PCP`` are deprecated
> +  as superseded by generic MODIFY_FIELD action.
> +


I have a question about an ABI/API related issue for rte_flow support.

If a driver removes support for a flow API item/action, it directly impacts
the user application. An application that previously worked may stop working
and require a code update; this is something we want to prevent with the
ABI policy. And this kind of change is not caught by our tools.

Do we have a process to deprecate/remove support for a flow API item/action?
For example, that it can only be removed in an ABI-breaking release...


Thanks,
ferruh



* Re: [PATCH v1] gpudev: return EINVAL if invalid input pointer for free and unregister
  2021-11-24 17:24  3%       ` Tyler Retzlaff
@ 2021-11-24 18:04  0%         ` Bruce Richardson
  0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2021-11-24 18:04 UTC (permalink / raw)
  To: Tyler Retzlaff
  Cc: Thomas Monjalon, eagostini, techboard, dev, Andrew Rybchenko,
	David Marchand, Ferruh Yigit

On Wed, Nov 24, 2021 at 09:24:42AM -0800, Tyler Retzlaff wrote:
> On Fri, Nov 19, 2021 at 10:56:36AM +0100, Thomas Monjalon wrote:
> > 19/11/2021 10:34, Ferruh Yigit:
> > > >> +	if (ptr == NULL) {
> > > >> +		rte_errno = EINVAL;
> > > >> +		return -rte_errno;
> > > >> +	}
> > > > 
> > > > in general dpdk has real problems with how it indicates that an error
> > > > occurred and what error occurred consistently.
> > > > 
> > > > some api's return 0 on success
> > > >    and maybe return -errno if ! 0
> > > >    and maybe return errno if ! 0
> > 
> > Which function returns a positive errno?
> 
> i may have misspoken about this variant, it may be something i recall
> seeing in a posted patch that was resolved before integration.
> 
> > 
> > > >    and maybe set rte_errno if ! 0
> > > > 
> > > > some api's return -1 on failure
> > > >    and set rte_errno if -1
> > > > 
> > > > some api's return < 0 on failure
> > > >    and maybe set rte_errno
> > > >    and maybe return -errno
> > > >    and maybe set rte_errno and return -rte_errno
> > > 
> > > This is a generic comment, cc'ed a few more folks to make the comment more
> > > visible.
> > > 
> > > > this isn't isolated to only this change but since additions and context
> > > > in this patch highlight it maybe it's a good time to bring it up.
> > > > 
> > > > it's frustrating to have to carefully read the implementation every time
> > > > you want to make a function call to make sure you're handling the flavor
> > > > of error reporting for a particular function.
> > > > 
> > > > if this is new code could we please clearly identify the current best
> > > > practice and follow it as a standard going forward for all new public
> > > > apis.
> > 
> > I think this patch is following the best practice.
> > 1/ Return negative value in case of error
> > 2/ Set rte_errno
> > 3/ Set same absolute value in rte_errno and return code
> 
> with the approach proposed as best practice above, it results in at least the
> application code variations as follows.
> 
> int rv = rte_func_call();
> 
> 1. if (rv < 0 && rte_errno == EAGAIN)
> 
> 2. if (rv == -1 && rte_errno == EAGAIN)
> 
> 3. if (rv < 0 && -rv == EAGAIN)
> 
> 4. if (rv < 0 && rv == -EAGAIN)
> 
> (and incorrectly)
> 
> 5. // ignore rv
>   if (rte_errno == EAGAIN)
> 
> it might be better practice if the indication that an error occurred is
> signaled distinctly from the error that occurred. otherwise why use
> rte_errno at all instead of returning -rte_errno always?
> 
> this philosophy would align better with modern posix / unix platform
> apis. often documented in the RETURN VALUE section of the manpage as:
> 
>     ``Upon successful completion, somefunction() shall return 0;
>       otherwise, -1 shall be returned and errno set to indicate the
>       error.''
> 
> therefore returning a value outside of the set {0, -1} is an abi break.
 
I like using this standard, because it also allows consistent behaviour for
non-integer returning functions, e.g. object creation functions returning
pointers.

  if (ret < 0 && rte_errno == EAGAIN)

becomes for a pointer:

  if (ret == NULL && rte_errno == EAGAIN)
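
As a minimal sketch of that convention (the function and type names here
are hypothetical, not an existing DPDK API), both flavors would look like:

  #include <errno.h>
  #include <stdlib.h>
  #include <rte_errno.h>

  struct thing { unsigned int level; };	/* hypothetical object */

  /* Returns 0 on success; on error returns -1 and sets rte_errno. */
  int
  thing_configure(struct thing *t, unsigned int level)
  {
  	if (t == NULL || level > 7) {
  		rte_errno = EINVAL;
  		return -1;
  	}
  	t->level = level;
  	return 0;
  }

  /* Returns a valid pointer on success; on error returns NULL and sets
   * rte_errno, so callers test errors the same way in both flavors. */
  struct thing *
  thing_create(void)
  {
  	struct thing *t = calloc(1, sizeof(*t));

  	if (t == NULL) {
  		rte_errno = ENOMEM;
  		return NULL;
  	}
  	return t;
  }

and the caller never has to decode the error from the return value itself:

  struct thing *t = thing_create();

  if (t == NULL && rte_errno == ENOMEM)
  	handle_oom();	/* hypothetical error handler */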

Regards,
/Bruce


* Re: [PATCH v1] gpudev: return EINVAL if invalid input pointer for free and unregister
  @ 2021-11-24 17:24  3%       ` Tyler Retzlaff
  2021-11-24 18:04  0%         ` Bruce Richardson
  0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2021-11-24 17:24 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: eagostini, techboard, dev, Andrew Rybchenko, David Marchand,
	Ferruh Yigit

On Fri, Nov 19, 2021 at 10:56:36AM +0100, Thomas Monjalon wrote:
> 19/11/2021 10:34, Ferruh Yigit:
> > >> +	if (ptr == NULL) {
> > >> +		rte_errno = EINVAL;
> > >> +		return -rte_errno;
> > >> +	}
> > > 
> > > in general dpdk has real problems with how it indicates that an error
> > > occurred and what error occurred consistently.
> > > 
> > > some api's return 0 on success
> > >    and maybe return -errno if ! 0
> > >    and maybe return errno if ! 0
> 
> Which function returns a positive errno?

i may have misspoken about this variant, it may be something i recall
seeing in a posted patch that was resolved before integration.

> 
> > >    and maybe set rte_errno if ! 0
> > > 
> > > some api's return -1 on failure
> > >    and set rte_errno if -1
> > > 
> > > some api's return < 0 on failure
> > >    and maybe set rte_errno
> > >    and maybe return -errno
> > >    and maybe set rte_errno and return -rte_errno
> > 
> > This is a generic comment, cc'ed a few more folks to make the comment more
> > visible.
> > 
> > > this isn't isolated to only this change but since additions and context
> > > in this patch highlight it maybe it's a good time to bring it up.
> > > 
> > > it's frustrating to have to carefully read the implementation every time
> > > you want to make a function call to make sure you're handling the flavor
> > > of error reporting for a particular function.
> > > 
> > > if this is new code could we please clearly identify the current best
> > > practice and follow it as a standard going forward for all new public
> > > apis.
> 
> I think this patch is following the best practice.
> 1/ Return negative value in case of error
> 2/ Set rte_errno
> 3/ Set same absolute value in rte_errno and return code

with the approach proposed as best practice above, it results in at least the
application code variations as follows.

int rv = rte_func_call();

1. if (rv < 0 && rte_errno == EAGAIN)

2. if (rv == -1 && rte_errno == EAGAIN)

3. if (rv < 0 && -rv == EAGAIN)

4. if (rv < 0 && rv == -EAGAIN)

(and incorrectly)

5. // ignore rv
  if (rte_errno == EAGAIN)

it might be better practice if the indication that an error occurred is
signaled distinctly from the error that occurred. otherwise why use
rte_errno at all instead of returning -rte_errno always?

this philosophy would align better with modern posix / unix platform
apis. often documented in the RETURN VALUE section of the manpage as:

    ``Upon successful completion, somefunction() shall return 0;
      otherwise, -1 shall be returned and errno set to indicate the
      error.''

therefore returning a value outside of the set {0, -1} is an abi break.

separately i have misgivings about how many patches have been integrated
and in some instances backported to dpdk stable that have resulted in
new return values and / or new rte_errno values outside of the set of
values initially possible when the dpdk release was made.


* [PATCH v3 2/2] doc: announce KNI deprecation
  @ 2021-11-24 17:16  5%   ` Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-11-24 17:16 UTC (permalink / raw)
  To: Ray Kinsella
  Cc: Ferruh Yigit, dev, Olivier Matz, David Marchand,
	Stephen Hemminger, Elad Nachman, Igor Ryzhov, Dan Gora

Announce the KNI kernel module move out of the dpdk repo and announce
the long term plan to deprecate KNI.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
Cc: Olivier Matz <olivier.matz@6wind.com>
Cc: David Marchand <david.marchand@redhat.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: Elad Nachman <eladv6@gmail.com>
Cc: Igor Ryzhov <iryzhov@nfware.com>
Cc: Dan Gora <dg@adax.com>

Dates are not discussed before, the patch aims to trigger a discussion
for the dates.
---
 doc/guides/prog_guide/kernel_nic_interface.rst | 2 ++
 doc/guides/rel_notes/deprecation.rst           | 6 ++++++
 2 files changed, 8 insertions(+)

diff --git a/doc/guides/prog_guide/kernel_nic_interface.rst b/doc/guides/prog_guide/kernel_nic_interface.rst
index f5a8b7c0782c..d1c5ccd0851d 100644
--- a/doc/guides/prog_guide/kernel_nic_interface.rst
+++ b/doc/guides/prog_guide/kernel_nic_interface.rst
@@ -7,6 +7,8 @@ Kernel NIC Interface
 ====================
 
 .. Note::
+   KNI kernel module will be moved from the main git repository to `dpdk-kmods <https://git.dpdk.org/dpdk-kmods/>`_ repository.
+   There is a long term plan to deprecate the KNI. See :doc:`../rel_notes/deprecation`.
 
    :ref:`virtio_user_as_exceptional_path` alternative is the preferred way for
    interfacing with the Linux network stack as it is an in-kernel solution and
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 2262b8de6093..f20852504319 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -48,6 +48,12 @@ Deprecation Notices
   in the header will not be considered as ABI anymore. This change is inspired
   by the RFC https://patchwork.dpdk.org/project/dpdk/list/?series=17176.
 
+* kni: KNI kernel module will be moved to `dpdk-kmods <https://git.dpdk.org/dpdk-kmods/>`_
+  repository by the `DPDK technical board decision
+  <https://mails.dpdk.org/archives/dev/2021-January/197077.html>`_, in v22.11.
+* kni: will be deprecated; the KNI library, kernel module and example code
+  will be removed in v23.11.
+
 * lib: will fix extending some enum/define breaking the ABI. There are multiple
   samples in DPDK that enum/define terminated with a ``.*MAX.*`` value which is
   used by iterators, and arrays holding these values are sized with this
-- 
2.31.1



* Minutes of Technical Board Meeting, 2021-11-17
@ 2021-11-24 13:00  4% Olivier Matz
  0 siblings, 0 replies; 200+ results
From: Olivier Matz @ 2021-11-24 13:00 UTC (permalink / raw)
  To: dev

Members Attending
-----------------

- Aaron
- Bruce
- Ferruh
- Honnappa
- Jerin
- Kevin
- Konstantin
- Maxime
- Olivier (Chair)
- Stephen
- Thomas

NOTE: The technical board meets every second Wednesday at
https://meet.jit.si/DPDK at 3 pm UTC.
Meetings are public, and DPDK community members are welcome to attend.

NOTE: Next meeting will be on Wednesday 2021-12-01 @3pm UTC, and will
be chaired by Stephen.

1. Switch to 3 releases per year instead of 4
=============================================

Reference: http://inbox.dpdk.org/dev/5786413.XMpytKYiJR@thomas

Only good feedback on the mailing list up to now.

This proposal is therefore accepted - so DPDK will only have 3 releases
in 2022 - unless there is strong opposition, with suitable
justification, raised on the DPDK Dev mailing list ahead of the final
DPDK 21.11 release.

2. Raise the maximum number of lcores
=====================================

References:

- https://inbox.dpdk.org/dev/1902057.C4l9sbjloW@thomas/
- https://inbox.dpdk.org/dev/CAJFAV8z-5amvEnr3mazkTqH-7SZX_C6EqCua6UdMXXHgrcmT6g@mail.gmail.com/

Modifying this value is an ABI change and has an impact on memory
consumption. There is no identified use-case where a single
application requires more than 128 lcores.

- Ideally, this configuration should be dynamic at runtime, but it would
  require a lot of changes
- It is possible with the --lcores EAL option to bind up to 128 lcores to
  any lcore id (even higher than 128). If "-l 129" is passed to EAL, a
  message giving the alternative syntax ("--lcores 0@129") is
  displayed. An option to rebind automatically could help for usability.
- If a use-case exists for a single application that uses
  more than 128 lcores, the TB is OK to update the default config value.
  Note that it is already possible to change the value at compilation
  time with -Dmax_lcores in meson.

3. New threading API
====================

References:

- https://patches.dpdk.org/project/dpdk/list/?series=20472&state=*
- https://inbox.dpdk.org/dev/1636594425-9692-1-git-send-email-navasile@linux.microsoft.com/

DPDK relies on the pthread interface for EAL threads, which is not
supported on Windows. Windows DPDK code currently emulates pthread. A
patchset has been proposed which, among other things:

- makes the EAL thread API rely on OS-specific implementations
- removes direct calls to pthread in DPDK

This patchset (not for 21.11) needs more reviews. People from the TB
should take a look at it.

The TB provided some guidelines:
- the EAL thread API level should be similar to the pthread API
  (it would mostly be a namespace change for posix)
- the API/ABI should remain compatible. It is possible to make use of
  rte_function_versioning.h for that
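
As a rough illustration of these guidelines (names and signatures below
are hypothetical and do not necessarily match the proposed patchset),
the POSIX side of such an API would mostly be a thin namespace wrapper:

  /* POSIX-only sketch; a Windows build would provide the same
   * rte_thread_* names on top of the Win32 threading API instead. */
  #include <pthread.h>

  typedef pthread_t rte_thread_t;

  static inline int
  rte_thread_create(rte_thread_t *th, void *(*fn)(void *), void *arg)
  {
  	return pthread_create(th, NULL, fn, arg);
  }

  static inline int
  rte_thread_join(rte_thread_t th, void **retval)
  {
  	return pthread_join(th, retval);
  }

The versioning mechanism mentioned above would then let existing
pthread-based symbols co-exist with the new ones during a transition.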

4. DTS Co-maintenance
=====================

Owen Hilyard from UNH has proposed himself as co-maintainer for DTS.
This would, for instance, help ensure that the interface between CI
and DTS remains stable.

The TB welcomes this proposition, as long as there is no opposition from
the current DTS maintainer and the DTS community.

By the way, the TB asks for volunteers to help make the transition of
DTS to the DPDK repository.

5. Spell checking in the CI infrastructure and patchwork
========================================================

The spell checking was done with aspell on the documentation. The problem
is that the check is done on everything, including code and acronyms,
resulting in constant failures.

The TB recommends focusing on per-patch checks, on rst files
first. A tool should be provided in dpdk/devtools, so it can also be
used by developers.

Spelling errors should be considered as warnings, given that code or
acronyms may trigger false positives.


* [PATCH v2 2/2] doc: announce KNI deprecation
  @ 2021-11-23 12:08  5%   ` Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-11-23 12:08 UTC (permalink / raw)
  To: dev, Ray Kinsella
  Cc: Ferruh Yigit, Olivier Matz, David Marchand, Stephen Hemminger,
	Elad Nachman, Igor Ryzhov, Dan Gora

Announce the KNI kernel module move out of the dpdk repo and announce
the long term plan to deprecate KNI.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
Cc: Olivier Matz <olivier.matz@6wind.com>
Cc: David Marchand <david.marchand@redhat.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: Elad Nachman <eladv6@gmail.com>
Cc: Igor Ryzhov <iryzhov@nfware.com>
Cc: Dan Gora <dg@adax.com>

Dates are not discussed before, the patch aims to trigger a discussion
for the dates.
---
 doc/guides/prog_guide/kernel_nic_interface.rst | 2 ++
 doc/guides/rel_notes/deprecation.rst           | 6 ++++++
 2 files changed, 8 insertions(+)

diff --git a/doc/guides/prog_guide/kernel_nic_interface.rst b/doc/guides/prog_guide/kernel_nic_interface.rst
index 70e92687d711..276014fe28bb 100644
--- a/doc/guides/prog_guide/kernel_nic_interface.rst
+++ b/doc/guides/prog_guide/kernel_nic_interface.rst
@@ -7,6 +7,8 @@ Kernel NIC Interface
 ====================
 
 .. Note::
+   KNI kernel module will be moved from the main git repository to `dpdk-kmods <https://git.dpdk.org/dpdk-kmods/>`_ repository.
+   There is a long term plan to deprecate the KNI. See :doc:`../rel_notes/deprecation`.
 
    :ref:`virtio_user_as_exceptional_path` alternative is the preferred way for
    interfacing with the Linux network stack as it is an in-kernel solution and
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 6d087c64ef28..62fd991e4eb4 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -48,6 +48,12 @@ Deprecation Notices
   in the header will not be considered as ABI anymore. This change is inspired
   by the RFC https://patchwork.dpdk.org/project/dpdk/list/?series=17176.
 
+* kni: KNI kernel module will be moved to `dpdk-kmods <https://git.dpdk.org/dpdk-kmods/>`_
+  repository by the `DPDK technical board decision
+  <https://mails.dpdk.org/archives/dev/2021-January/197077.html>`_, in v22.11.
+* kni: will be deprecated; the KNI library, kernel module and example code
+  will be removed in v23.11.
+
 * lib: will fix extending some enum/define breaking the ABI. There are multiple
   samples in DPDK that enum/define terminated with a ``.*MAX.*`` value which is
   used by iterators, and arrays holding these values are sized with this
-- 
2.31.1



* Re: [PATCH v1] doc: update release notes for 21.11
  2021-11-22 17:00 12% [PATCH v1] doc: update release notes for 21.11 John McNamara
@ 2021-11-22 17:05  0% ` Ajit Khaparde
  0 siblings, 0 replies; 200+ results
From: Ajit Khaparde @ 2021-11-22 17:05 UTC (permalink / raw)
  To: John McNamara; +Cc: dpdk-dev, Thomas Monjalon

On Mon, Nov 22, 2021 at 9:01 AM John McNamara <john.mcnamara@intel.com> wrote:
>
> Fix grammar, spelling and formatting of DPDK 21.11 release notes.
>
> Signed-off-by: John McNamara <john.mcnamara@intel.com>

Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

> ---
>  doc/guides/rel_notes/release_21_11.rst | 123 +++++++++++++------------
>  1 file changed, 65 insertions(+), 58 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 4d8c59472a..7008c5e907 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -57,14 +57,14 @@ New Features
>
>  * **Enabled new devargs parser.**
>
> -  * Enabled devargs syntax
> -    ``bus=X,paramX=x/class=Y,paramY=y/driver=Z,paramZ=z``
> +  * Enabled devargs syntax:
> +    ``bus=X,paramX=x/class=Y,paramY=y/driver=Z,paramZ=z``.
>    * Added bus-level parsing of the devargs syntax.
>    * Kept compatibility with the legacy syntax as parsing fallback.
>
>  * **Updated EAL hugetlbfs mount handling for Linux.**
>
> -  * Modified to allow ``--huge-dir`` option to specify a sub-directory
> +  * Modified EAL to allow ``--huge-dir`` option to specify a sub-directory
>      within a hugetlbfs mountpoint.
>
>  * **Added dmadev library.**
> @@ -82,7 +82,7 @@ New Features
>
>  * **Added IDXD dmadev driver implementation.**
>
> -  The IDXD dmadev driver provide device drivers for the Intel DSA devices.
> +  The IDXD dmadev driver provides device drivers for the Intel DSA devices.
>    This device driver can be used through the generic dmadev API.
>
>  * **Added IOAT dmadev driver implementation.**
> @@ -98,29 +98,34 @@ New Features
>
>  * **Added NXP DPAA DMA driver.**
>
> -  Added a new dmadev driver for NXP DPAA platform.
> +  Added a new dmadev driver for the NXP DPAA platform.
>
>  * **Added support to get all MAC addresses of a device.**
>
> -  Added ``rte_eth_macaddrs_get`` to allow user to retrieve all Ethernet
> -  addresses assigned to given ethernet port.
> +  Added ``rte_eth_macaddrs_get`` to allow a user to retrieve all Ethernet
> +  addresses assigned to a given Ethernet port.
>
> -* **Introduced GPU device class with first features:**
> +* **Introduced GPU device class.**
>
> -  * Device information
> -  * Memory management
> -  * Communication flag & list
> +  Introduced the GPU device class with initial features:
> +
> +  * Device information.
> +  * Memory management.
> +  * Communication flag and list.
>
>  * **Added NVIDIA GPU driver implemented with CUDA library.**
>
> +  Added NVIDIA GPU driver implemented with CUDA library under the new
> +  GPU device interface.
> +
>  * **Added new RSS offload types for IPv4/L4 checksum in RSS flow.**
>
> -  Added macros ETH_RSS_IPV4_CHKSUM and ETH_RSS_L4_CHKSUM, now IPv4 and
> -  TCP/UDP/SCTP header checksum field can be used as input set for RSS.
> +  Added macros ``ETH_RSS_IPV4_CHKSUM`` and ``ETH_RSS_L4_CHKSUM``. The IPv4 and
> +  TCP/UDP/SCTP header checksum field can now be used as input set for RSS.
>
>  * **Added L2TPv2 and PPP protocol support in flow API.**
>
> -  Added flow pattern items and header formats of L2TPv2 and PPP protocol.
> +  Added flow pattern items and header formats for the L2TPv2 and PPP protocols.
>
>  * **Added flow flex item.**
>
> @@ -146,11 +151,11 @@ New Features
>
>    * Added new device capability flag and Rx domain field to switch info.
>    * Added share group and share queue ID to Rx queue configuration.
> -  * Added testpmd support and dedicate forwarding engine.
> +  * Added testpmd support and dedicated forwarding engine.
>
>  * **Updated af_packet ethdev driver.**
>
> -  * Default VLAN strip behavior was changed. VLAN tag won't be stripped
> +  * The default VLAN strip behavior has changed. The VLAN tag won't be stripped
>      unless ``DEV_RX_OFFLOAD_VLAN_STRIP`` offload is enabled.
>
>  * **Added API to get device configuration in ethdev.**
> @@ -159,28 +164,30 @@ New Features
>
>  * **Updated AF_XDP PMD.**
>
> -  * Disabled secondary process support.
> +  * Disabled secondary process support due to insufficient state shared
> +    between processes which causes a crash. This will be fixed/re-enabled
> +    in the next release.
>
>  * **Updated Amazon ENA PMD.**
>
>    Updated the Amazon ENA PMD. The new driver version (v2.5.0) introduced
>    bug fixes and improvements, including:
>
> -  * Support for the tx_free_thresh and rx_free_thresh configuration parameters.
> +  * Support for the ``tx_free_thresh`` and ``rx_free_thresh`` configuration parameters.
>    * NUMA aware allocations for the queue helper structures.
> -  * Watchdog's feature which is checking for missing Tx completions.
> +  * A Watchdog feature which is checking for missing Tx completions.
>
>  * **Updated Broadcom bnxt PMD.**
>
>    * Added flow offload support for Thor.
>    * Added TruFlow and AFM SRAM partitioning support.
> -  * Implement support for tunnel offload.
> +  * Implemented support for tunnel offload.
>    * Updated HWRM API to version 1.10.2.68.
> -  * Added NAT support for dest IP and port combination.
> +  * Added NAT support for destination IP and port combination.
>    * Added support for socket redirection.
>    * Added wildcard match support for ingress flows.
>    * Added support for inner IP header for GRE tunnel flows.
> -  * Updated support for RSS action in flow rule.
> +  * Updated support for RSS action in flow rules.
>    * Removed devargs option for stats accumulation.
>
>  * **Updated Cisco enic driver.**
> @@ -202,9 +209,9 @@ New Features
>
>    * Added protocol agnostic flow offloading support in Flow Director.
>    * Added protocol agnostic flow offloading support in RSS hash.
> -  * Added 1PPS out support by a devargs.
> +  * Added 1PPS out support via devargs.
>    * Added IPv4 and L4 (TCP/UDP/SCTP) checksum hash support in RSS flow.
> -  * Added DEV_RX_OFFLOAD_TIMESTAMP support.
> +  * Added ``DEV_RX_OFFLOAD_TIMESTAMP`` support.
>    * Added timesync API support under scalar path.
>    * Added DCF reset API support.
>
> @@ -225,7 +232,7 @@ New Features
>    Updated the Mellanox mlx5 driver with new features and improvements, including:
>
>    * Added implicit mempool registration to avoid data path hiccups (opt-out).
> -  * Added delay drop support for Rx queue.
> +  * Added delay drop support for Rx queues.
>    * Added NIC offloads for the PMD on Windows (TSO, VLAN strip, CRC keep).
>    * Added socket direct mode bonding support.
>
> @@ -275,7 +282,7 @@ New Features
>    Added a new Xilinx vDPA  (``sfc_vdpa``) PMD.
>    See the :doc:`../vdpadevs/sfc` guide for more details on this driver.
>
> -* **Added telemetry callbacks to cryptodev library.**
> +* **Added telemetry callbacks to the cryptodev library.**
>
>    Added telemetry callback functions which allow a list of crypto devices,
>    stats for a crypto device, and other device information to be queried.
> @@ -300,7 +307,7 @@ New Features
>
>  * **Added support for event crypto adapter on Marvell CN10K and CN9K.**
>
> -  * Added event crypto adapter OP_FORWARD mode support.
> +  * Added event crypto adapter ``OP_FORWARD`` mode support.
>
>  * **Updated Mellanox mlx5 crypto driver.**
>
> @@ -309,7 +316,7 @@ New Features
>
>  * **Updated NXP dpaa_sec crypto PMD.**
>
> -  * Added DES-CBC, AES-XCBC-MAC, AES-CMAC and non-HMAC algo support.
> +  * Added DES-CBC, AES-XCBC-MAC, AES-CMAC and non-HMAC algorithm support.
>    * Added PDCP short MAC-I support.
>    * Added raw vector datapath API support.
>
> @@ -322,16 +329,16 @@ New Features
>
>    * The IPsec_MB framework was added to share common code between Intel
>      SW Crypto PMDs that depend on the intel-ipsec-mb library.
> -  * Multiprocess support was added for the consolidated PMDs,
> +  * Multiprocess support was added for the consolidated PMDs
>      which requires v1.1 of the intel-ipsec-mb library.
> -  * The following PMDs were moved into a single source folder,
> -    however their usage and EAL options remain unchanged.
> +  * The following PMDs were moved into a single source folder
> +    while their usage and EAL options remain unchanged.
>      * AESNI_MB PMD.
>      * AESNI_GCM PMD.
>      * KASUMI PMD.
>      * SNOW3G PMD.
>      * ZUC PMD.
> -    * CHACHA20_POLY1305 - A new PMD added.
> +    * CHACHA20_POLY1305 - a new PMD.
>
>  * **Updated the aesni_mb crypto PMD.**
>
> @@ -381,7 +388,7 @@ New Features
>  * **Added multi-process support for testpmd.**
>
>    Added command-line options to specify total number of processes and
> -  current process ID. Each process owns subset of Rx and Tx queues.
> +  current process ID. Each process owns a subset of Rx and Tx queues.
>
>  * **Updated test-crypto-perf application with new cases.**
>
> @@ -404,8 +411,8 @@ New Features
>
>  * **Updated l3fwd sample application.**
>
> -  * Increased number of routes to 16 for all lookup modes (LPM, EM and FIB),
> -    this helps in validating SoC with many ethernet devices.
> +  * Increased number of routes to 16 for all lookup modes (LPM, EM and FIB).
> +    This helps in validating SoC with many Ethernet devices.
>    * Updated EM mode to use RFC2544 reserved IP address space with RFC863
>      UDP discard protocol.
>
> @@ -431,8 +438,8 @@ New Features
>
>  * **Added ASan support.**
>
> -  `AddressSanitizer
> -  <https://github.com/google/sanitizers/wiki/AddressSanitizer>`_ (ASan)
> +  Added ASan/AddressSanitizer support. `AddressSanitizer
> +  <https://github.com/google/sanitizers/wiki/AddressSanitizer>`_
>    is a widely-used debugging tool to detect memory access errors.
>    It helps to detect issues like use-after-free, various kinds of buffer
>    overruns in C/C++ programs, and other similar errors, as well as
> @@ -454,12 +461,12 @@ Removed Items
>  * eal: Removed the deprecated function ``rte_get_master_lcore()``
>    and the iterator macro ``RTE_LCORE_FOREACH_SLAVE``.
>
> -* eal: The old api arguments that were deprecated for
> +* eal: The old API arguments that were deprecated for
>    blacklist/whitelist are removed. Users must use the new
>    block/allow list arguments.
>
>  * mbuf: Removed offload flag ``PKT_RX_EIP_CKSUM_BAD``.
> -  ``PKT_RX_OUTER_IP_CKSUM_BAD`` should be used as a replacement.
> +  The ``PKT_RX_OUTER_IP_CKSUM_BAD`` flag should be used as a replacement.
>
>  * ethdev: Removed the port mirroring API. A more fine-grain flow API
>    action ``RTE_FLOW_ACTION_TYPE_SAMPLE`` should be used instead.
> @@ -468,9 +475,9 @@ Removed Items
>    ``rte_eth_mirror_rule_reset`` along with the associated macros
>    ``ETH_MIRROR_*`` are removed.
>
> -* ethdev: Removed ``rte_eth_rx_descriptor_done`` API function and its
> +* ethdev: Removed the ``rte_eth_rx_descriptor_done()`` API function and its
>    driver callback. It is replaced by the more complete function
> -  ``rte_eth_rx_descriptor_status``.
> +  ``rte_eth_rx_descriptor_status()``.
>
>  * ethdev: Removed deprecated ``shared`` attribute of the
>    ``struct rte_flow_action_count``. Shared counters should be managed
> @@ -548,21 +555,21 @@ API Changes
>
>  * ethdev: ``rte_flow_action_modify_data`` structure updated, immediate data
>    array is extended, data pointer field is explicitly added to union, the
> -  action behavior is defined in more strict fashion and documentation updated.
> +  action behavior is defined in a more strict fashion and documentation updated.
>    The immediate value behavior has been changed, the entire immediate field
>    should be provided, and offset for immediate source bitfield is assigned
> -  from destination one.
> +  from the destination one.
>
>  * vhost: ``rte_vdpa_register_device``, ``rte_vdpa_unregister_device``,
>    ``rte_vhost_host_notifier_ctrl`` and ``rte_vdpa_relay_vring_used`` vDPA
>    driver interface are marked as internal.
>
> -* cryptodev: The API rte_cryptodev_pmd_is_valid_dev is modified to
> -  rte_cryptodev_is_valid_dev as it can be used by the application as
> -  well as PMD to check whether the device is valid or not.
> +* cryptodev: The API ``rte_cryptodev_pmd_is_valid_dev()`` is modified to
> +  ``rte_cryptodev_is_valid_dev()`` as it can be used by the application as
> +  well as the PMD to check whether the device is valid or not.
>
> -* cryptodev: The rte_cryptodev_pmd.* files are renamed as cryptodev_pmd.*
> -  as it is for drivers only and should be private to DPDK, and not
> +* cryptodev: The ``rte_cryptodev_pmd.*`` files are renamed to ``cryptodev_pmd.*``
> +  since they are for drivers only and should be private to DPDK, and not
>    installed for app use.
>
>  * cryptodev: A ``reserved`` byte from structure ``rte_crypto_op`` was
> @@ -590,8 +597,8 @@ API Changes
>  * ip_frag: All macros updated to have ``RTE_IP_FRAG_`` prefix.
>    Obsolete macros are kept for compatibility.
>    DPDK components updated to use new names.
> -  Experimental function ``rte_frag_table_del_expired_entries`` was renamed
> -  to ``rte_ip_frag_table_del_expired_entries``
> +  Experimental function ``rte_frag_table_del_expired_entries()`` was renamed
> +  to ``rte_ip_frag_table_del_expired_entries()``
>    to comply with other public API naming convention.
>
>
> @@ -610,14 +617,14 @@ ABI Changes
>     Also, make sure to start the actual text at the margin.
>     =======================================================
>
> -* ethdev: All enums & macros updated to have ``RTE_ETH`` prefix and structures
> +* ethdev: All enums and macros updated to have ``RTE_ETH`` prefix and structures
>    updated to have ``rte_eth`` prefix. DPDK components updated to use new names.
>
> -* ethdev: Input parameters for ``eth_rx_queue_count_t`` was changed.
> -  Instead of pointer to ``rte_eth_dev`` and queue index, now it accepts pointer
> -  to internal queue data as input parameter. While this change is transparent
> -  to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
> -  is used by  public inline function ``rte_eth_rx_queue_count``.
> +* ethdev: The input parameters for ``eth_rx_queue_count_t`` were changed.
> +  Instead of a pointer to ``rte_eth_dev`` and queue index, it now accepts a pointer
> +  to internal queue data as an input parameter. While this change is transparent
> +  to the user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
> +  is used by the public inline function ``rte_eth_rx_queue_count``.
>
>  * ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
>    private data structures. ``rte_eth_devices[]`` can't be accessed directly
> @@ -663,7 +670,7 @@ ABI Changes
>
>  * security: A new structure ``esn`` was added in structure
>    ``rte_security_ipsec_xform`` to set an initial ESN value. This permits
> -  application to start from an arbitrary ESN value for debug and SA lifetime
> +  applications to start from an arbitrary ESN value for debug and SA lifetime
>    enforcement purposes.
>
>  * security: A new structure ``udp`` was added in structure
> @@ -689,7 +696,7 @@ ABI Changes
>    ``RTE_LIBRTE_IP_FRAG_MAX_FRAG`` from ``4`` to ``8``.
>    This parameter controls maximum number of fragments per packet
>    in IP reassembly table. Increasing this value from ``4`` to ``8``
> -  will allow to cover common case with jumbo packet size of ``9KB``
> +  will allow covering the common case with jumbo packet size of ``9000B``
>    and fragments with default frame size ``(1500B)``.
>
>
> --
> 2.25.1
>


* [PATCH v1] doc: update release notes for 21.11
@ 2021-11-22 17:00 12% John McNamara
  2021-11-22 17:05  0% ` Ajit Khaparde
  0 siblings, 1 reply; 200+ results
From: John McNamara @ 2021-11-22 17:00 UTC (permalink / raw)
  To: dev; +Cc: thomas, John McNamara

Fix grammar, spelling and formatting of DPDK 21.11 release notes.

Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst | 123 +++++++++++++------------
 1 file changed, 65 insertions(+), 58 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 4d8c59472a..7008c5e907 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -57,14 +57,14 @@ New Features
 
 * **Enabled new devargs parser.**
 
-  * Enabled devargs syntax
-    ``bus=X,paramX=x/class=Y,paramY=y/driver=Z,paramZ=z``
+  * Enabled devargs syntax:
+    ``bus=X,paramX=x/class=Y,paramY=y/driver=Z,paramZ=z``.
   * Added bus-level parsing of the devargs syntax.
   * Kept compatibility with the legacy syntax as parsing fallback.
 
 * **Updated EAL hugetlbfs mount handling for Linux.**
 
-  * Modified to allow ``--huge-dir`` option to specify a sub-directory
+  * Modified EAL to allow ``--huge-dir`` option to specify a sub-directory
     within a hugetlbfs mountpoint.
 
 * **Added dmadev library.**
@@ -82,7 +82,7 @@ New Features
 
 * **Added IDXD dmadev driver implementation.**
 
-  The IDXD dmadev driver provide device drivers for the Intel DSA devices.
+  The IDXD dmadev driver provides device drivers for the Intel DSA devices.
   This device driver can be used through the generic dmadev API.
 
 * **Added IOAT dmadev driver implementation.**
@@ -98,29 +98,34 @@ New Features
 
 * **Added NXP DPAA DMA driver.**
 
-  Added a new dmadev driver for NXP DPAA platform.
+  Added a new dmadev driver for the NXP DPAA platform.
 
 * **Added support to get all MAC addresses of a device.**
 
-  Added ``rte_eth_macaddrs_get`` to allow user to retrieve all Ethernet
-  addresses assigned to given ethernet port.
+  Added ``rte_eth_macaddrs_get`` to allow a user to retrieve all Ethernet
+  addresses assigned to a given Ethernet port.
 
-* **Introduced GPU device class with first features:**
+* **Introduced GPU device class.**
 
-  * Device information
-  * Memory management
-  * Communication flag & list
+  Introduced the GPU device class with initial features:
+
+  * Device information.
+  * Memory management.
+  * Communication flag and list.
 
 * **Added NVIDIA GPU driver implemented with CUDA library.**
 
+  Added NVIDIA GPU driver implemented with CUDA library under the new
+  GPU device interface.
+
 * **Added new RSS offload types for IPv4/L4 checksum in RSS flow.**
 
-  Added macros ETH_RSS_IPV4_CHKSUM and ETH_RSS_L4_CHKSUM, now IPv4 and
-  TCP/UDP/SCTP header checksum field can be used as input set for RSS.
+  Added macros ``ETH_RSS_IPV4_CHKSUM`` and ``ETH_RSS_L4_CHKSUM``. The IPv4 and
+  TCP/UDP/SCTP header checksum field can now be used as input set for RSS.
 
 * **Added L2TPv2 and PPP protocol support in flow API.**
 
-  Added flow pattern items and header formats of L2TPv2 and PPP protocol.
+  Added flow pattern items and header formats for the L2TPv2 and PPP protocols.
 
 * **Added flow flex item.**
 
@@ -146,11 +151,11 @@ New Features
 
   * Added new device capability flag and Rx domain field to switch info.
   * Added share group and share queue ID to Rx queue configuration.
-  * Added testpmd support and dedicate forwarding engine.
+  * Added testpmd support and dedicated forwarding engine.
 
 * **Updated af_packet ethdev driver.**
 
-  * Default VLAN strip behavior was changed. VLAN tag won't be stripped
+  * The default VLAN strip behavior has changed. The VLAN tag won't be stripped
     unless ``DEV_RX_OFFLOAD_VLAN_STRIP`` offload is enabled.
 
 * **Added API to get device configuration in ethdev.**
@@ -159,28 +164,30 @@ New Features
 
 * **Updated AF_XDP PMD.**
 
-  * Disabled secondary process support.
+  * Disabled secondary process support due to insufficient state shared
+    between processes which causes a crash. This will be fixed/re-enabled
+    in the next release.
 
 * **Updated Amazon ENA PMD.**
 
   Updated the Amazon ENA PMD. The new driver version (v2.5.0) introduced
   bug fixes and improvements, including:
 
-  * Support for the tx_free_thresh and rx_free_thresh configuration parameters.
+  * Support for the ``tx_free_thresh`` and ``rx_free_thresh`` configuration parameters.
   * NUMA aware allocations for the queue helper structures.
-  * Watchdog's feature which is checking for missing Tx completions.
+  * A Watchdog feature which is checking for missing Tx completions.
 
 * **Updated Broadcom bnxt PMD.**
 
   * Added flow offload support for Thor.
   * Added TruFlow and AFM SRAM partitioning support.
-  * Implement support for tunnel offload.
+  * Implemented support for tunnel offload.
   * Updated HWRM API to version 1.10.2.68.
-  * Added NAT support for dest IP and port combination.
+  * Added NAT support for destination IP and port combination.
   * Added support for socket redirection.
   * Added wildcard match support for ingress flows.
   * Added support for inner IP header for GRE tunnel flows.
-  * Updated support for RSS action in flow rule.
+  * Updated support for RSS action in flow rules.
   * Removed devargs option for stats accumulation.
 
 * **Updated Cisco enic driver.**
@@ -202,9 +209,9 @@ New Features
 
   * Added protocol agnostic flow offloading support in Flow Director.
   * Added protocol agnostic flow offloading support in RSS hash.
-  * Added 1PPS out support by a devargs.
+  * Added 1PPS out support via devargs.
   * Added IPv4 and L4 (TCP/UDP/SCTP) checksum hash support in RSS flow.
-  * Added DEV_RX_OFFLOAD_TIMESTAMP support.
+  * Added ``DEV_RX_OFFLOAD_TIMESTAMP`` support.
   * Added timesync API support under scalar path.
   * Added DCF reset API support.
 
@@ -225,7 +232,7 @@ New Features
   Updated the Mellanox mlx5 driver with new features and improvements, including:
 
   * Added implicit mempool registration to avoid data path hiccups (opt-out).
-  * Added delay drop support for Rx queue.
+  * Added delay drop support for Rx queues.
   * Added NIC offloads for the PMD on Windows (TSO, VLAN strip, CRC keep).
   * Added socket direct mode bonding support.
 
@@ -275,7 +282,7 @@ New Features
   Added a new Xilinx vDPA  (``sfc_vdpa``) PMD.
   See the :doc:`../vdpadevs/sfc` guide for more details on this driver.
 
-* **Added telemetry callbacks to cryptodev library.**
+* **Added telemetry callbacks to the cryptodev library.**
 
   Added telemetry callback functions which allow a list of crypto devices,
   stats for a crypto device, and other device information to be queried.
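  A minimal sketch of exercising the new callbacks with the standard
  telemetry client; the endpoint names below follow the cryptodev telemetry
  documentation and should be treated as indicative:

    $ ./usertools/dpdk-telemetry.py
    --> /cryptodev/list
    --> /cryptodev/stats,0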
@@ -300,7 +307,7 @@ New Features
 
 * **Added support for event crypto adapter on Marvell CN10K and CN9K.**
 
-  * Added event crypto adapter OP_FORWARD mode support.
+  * Added event crypto adapter ``OP_FORWARD`` mode support.
 
 * **Updated Mellanox mlx5 crypto driver.**
 
@@ -309,7 +316,7 @@ New Features
 
 * **Updated NXP dpaa_sec crypto PMD.**
 
-  * Added DES-CBC, AES-XCBC-MAC, AES-CMAC and non-HMAC algo support.
+  * Added DES-CBC, AES-XCBC-MAC, AES-CMAC and non-HMAC algorithm support.
   * Added PDCP short MAC-I support.
   * Added raw vector datapath API support.
 
@@ -322,16 +329,16 @@ New Features
 
   * The IPsec_MB framework was added to share common code between Intel
     SW Crypto PMDs that depend on the intel-ipsec-mb library.
-  * Multiprocess support was added for the consolidated PMDs,
+  * Multiprocess support was added for the consolidated PMDs
     which requires v1.1 of the intel-ipsec-mb library.
-  * The following PMDs were moved into a single source folder,
-    however their usage and EAL options remain unchanged.
+  * The following PMDs were moved into a single source folder
+    while their usage and EAL options remain unchanged.
     * AESNI_MB PMD.
     * AESNI_GCM PMD.
     * KASUMI PMD.
     * SNOW3G PMD.
     * ZUC PMD.
-    * CHACHA20_POLY1305 - A new PMD added.
+    * CHACHA20_POLY1305 - a new PMD.
 
 * **Updated the aesni_mb crypto PMD.**
 
@@ -381,7 +388,7 @@ New Features
 * **Added multi-process support for testpmd.**
 
   Added command-line options to specify total number of processes and
-  current process ID. Each process owns subset of Rx and Tx queues.
+  current process ID. Each process owns a subset of Rx and Tx queues.
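  For illustration, assuming the new options are ``--num-procs`` and
  ``--proc-id`` as introduced with this feature, two cooperating processes
  could be started as:

    $ dpdk-testpmd -l 0-1 --proc-type=auto -- -i --num-procs=2 --proc-id=0
    $ dpdk-testpmd -l 2-3 --proc-type=auto -- -i --num-procs=2 --proc-id=1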
 
 * **Updated test-crypto-perf application with new cases.**
 
@@ -404,8 +411,8 @@ New Features
 
 * **Updated l3fwd sample application.**
 
-  * Increased number of routes to 16 for all lookup modes (LPM, EM and FIB),
-    this helps in validating SoC with many ethernet devices.
+  * Increased number of routes to 16 for all lookup modes (LPM, EM and FIB).
+    This helps in validating SoCs with many Ethernet devices.
   * Updated EM mode to use RFC2544 reserved IP address space with RFC863
     UDP discard protocol.
 
@@ -431,8 +438,8 @@ New Features
 
 * **Added ASan support.**
 
-  `AddressSanitizer
-  <https://github.com/google/sanitizers/wiki/AddressSanitizer>`_ (ASan)
+  Added ASan/AddressSanitizer support. `AddressSanitizer
+  <https://github.com/google/sanitizers/wiki/AddressSanitizer>`_
   is a widely-used debugging tool to detect memory access errors.
   It helps to detect issues like use-after-free, various kinds of buffer
   overruns in C/C++ programs, and other similar errors, as well as
@@ -454,12 +461,12 @@ Removed Items
 * eal: Removed the deprecated function ``rte_get_master_lcore()``
   and the iterator macro ``RTE_LCORE_FOREACH_SLAVE``.
 
-* eal: The old api arguments that were deprecated for
+* eal: The old API arguments that were deprecated for
   blacklist/whitelist are removed. Users must use the new
   block/allow list arguments.
 
 * mbuf: Removed offload flag ``PKT_RX_EIP_CKSUM_BAD``.
-  ``PKT_RX_OUTER_IP_CKSUM_BAD`` should be used as a replacement.
+  The ``PKT_RX_OUTER_IP_CKSUM_BAD`` flag should be used as a replacement.
 
 * ethdev: Removed the port mirroring API. A more fine-grain flow API
   action ``RTE_FLOW_ACTION_TYPE_SAMPLE`` should be used instead.
@@ -468,9 +475,9 @@ Removed Items
   ``rte_eth_mirror_rule_reset`` along with the associated macros
   ``ETH_MIRROR_*`` are removed.
 
-* ethdev: Removed ``rte_eth_rx_descriptor_done`` API function and its
+* ethdev: Removed the ``rte_eth_rx_descriptor_done()`` API function and its
   driver callback. It is replaced by the more complete function
-  ``rte_eth_rx_descriptor_status``.
+  ``rte_eth_rx_descriptor_status()``.
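  A minimal migration sketch, assuming the caller previously polled the
  removed function with the same ``port_id``/``queue_id``/``offset``
  arguments:

    /* old: done = rte_eth_rx_descriptor_done(port_id, queue_id, offset); */
    int st = rte_eth_rx_descriptor_status(port_id, queue_id, offset);

    if (st == RTE_ETH_RX_DESC_DONE) {
            /* descriptor holds a received packet */
    } else if (st == RTE_ETH_RX_DESC_AVAIL) {
            /* descriptor is free, awaiting hardware */
    }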
 
 * ethdev: Removed deprecated ``shared`` attribute of the
   ``struct rte_flow_action_count``. Shared counters should be managed
@@ -548,21 +555,21 @@ API Changes
 
 * ethdev: ``rte_flow_action_modify_data`` structure updated, immediate data
   array is extended, data pointer field is explicitly added to union, the
-  action behavior is defined in more strict fashion and documentation updated.
+  action behavior is defined in a stricter fashion and the documentation updated.
   The immediate value behavior has been changed, the entire immediate field
   should be provided, and offset for immediate source bitfield is assigned
-  from destination one.
+  from the destination one.
 
 * vhost: ``rte_vdpa_register_device``, ``rte_vdpa_unregister_device``,
   ``rte_vhost_host_notifier_ctrl`` and ``rte_vdpa_relay_vring_used`` vDPA
   driver interface are marked as internal.
 
-* cryptodev: The API rte_cryptodev_pmd_is_valid_dev is modified to
-  rte_cryptodev_is_valid_dev as it can be used by the application as
-  well as PMD to check whether the device is valid or not.
+* cryptodev: The API ``rte_cryptodev_pmd_is_valid_dev()`` is modified to
+  ``rte_cryptodev_is_valid_dev()`` as it can be used by the application as
+  well as the PMD to check whether the device is valid or not.
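  The rename is mechanical; a caller sketch, assuming the function still
  returns non-zero for a valid device id:

    /* old: if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) ... */
    if (!rte_cryptodev_is_valid_dev(dev_id))
            return -ENODEV;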
 
-* cryptodev: The rte_cryptodev_pmd.* files are renamed as cryptodev_pmd.*
-  as it is for drivers only and should be private to DPDK, and not
+* cryptodev: The ``rte_cryptodev_pmd.*`` files are renamed to ``cryptodev_pmd.*``
+  since they are for drivers only and should be private to DPDK, and not
   installed for app use.
 
 * cryptodev: A ``reserved`` byte from structure ``rte_crypto_op`` was
@@ -590,8 +597,8 @@ API Changes
 * ip_frag: All macros updated to have ``RTE_IP_FRAG_`` prefix.
   Obsolete macros are kept for compatibility.
   DPDK components updated to use new names.
-  Experimental function ``rte_frag_table_del_expired_entries`` was renamed
-  to ``rte_ip_frag_table_del_expired_entries``
+  Experimental function ``rte_frag_table_del_expired_entries()`` was renamed
+  to ``rte_ip_frag_table_del_expired_entries()``
   to comply with other public API naming convention.
 
 
@@ -610,14 +617,14 @@ ABI Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
-* ethdev: All enums & macros updated to have ``RTE_ETH`` prefix and structures
+* ethdev: All enums and macros updated to have ``RTE_ETH`` prefix and structures
   updated to have ``rte_eth`` prefix. DPDK components updated to use new names.
 
-* ethdev: Input parameters for ``eth_rx_queue_count_t`` was changed.
-  Instead of pointer to ``rte_eth_dev`` and queue index, now it accepts pointer
-  to internal queue data as input parameter. While this change is transparent
-  to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
-  is used by  public inline function ``rte_eth_rx_queue_count``.
+* ethdev: The input parameters for ``eth_rx_queue_count_t`` were changed.
+  Instead of a pointer to ``rte_eth_dev`` and queue index, it now accepts a pointer
+  to internal queue data as an input parameter. While this change is transparent
+  to the user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
+  is used by the public inline function ``rte_eth_rx_queue_count``.
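  Application code needs no change; a typical call still looks like the
  following (``high_watermark`` is a hypothetical application threshold):

    int used = rte_eth_rx_queue_count(port_id, queue_id);

    if (used >= 0 && (unsigned int)used > high_watermark) {
            /* Rx queue is filling up; drain it more aggressively */
    }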
 
 * ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
   private data structures. ``rte_eth_devices[]`` can't be accessed directly
@@ -663,7 +670,7 @@ ABI Changes
 
 * security: A new structure ``esn`` was added in structure
   ``rte_security_ipsec_xform`` to set an initial ESN value. This permits
-  application to start from an arbitrary ESN value for debug and SA lifetime
+  applications to start from an arbitrary ESN value for debug and SA lifetime
   enforcement purposes.
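  A brief sketch of the new field, assuming it is set through its 64-bit
  ``value`` member before session creation:

    struct rte_security_ipsec_xform ipsec_xform = { 0 };

    /* start the SA from an arbitrary ESN instead of 0 */
    ipsec_xform.esn.value = 0x100000001ULL;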
 
 * security: A new structure ``udp`` was added in structure
@@ -689,7 +696,7 @@ ABI Changes
   ``RTE_LIBRTE_IP_FRAG_MAX_FRAG`` from ``4`` to ``8``.
   This parameter controls maximum number of fragments per packet
   in IP reassembly table. Increasing this value from ``4`` to ``8``
-  will allow to cover common case with jumbo packet size of ``9KB``
+  will allow covering the common case with jumbo packet size of ``9000B``
   and fragments with default frame size ``(1500B)``.
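  Roughly: with 1500B frames each fragment carries about 1480B of payload
  (1500B minus the 20B IPv4 header), so a 9000B jumbo datagram splits into
  ceil(9000 / 1480) = 7 fragments, which fits in the new limit of 8 but not
  in the old limit of 4.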
 
 
-- 
2.25.1


^ permalink raw reply	[relevance 12%]

* [PATCH v2 1/3] fix PMD wording typo
  @ 2021-11-22 10:50  1%   ` Sean Morrissey
  0 siblings, 0 replies; 200+ results
From: Sean Morrissey @ 2021-11-22 10:50 UTC (permalink / raw)
  To: Xiaoyun Li, Nicolas Chautru, Jay Zhou, Ciara Loftus, Qi Zhang,
	Steven Webster, Matt Peters, Apeksha Gupta, Sachin Saxena,
	Xiao Wang, Haiyue Wang, Beilei Xing, Stephen Hemminger, Long Li,
	Heinrich Kuhn, Jerin Jacob, Maciej Czekaj, Maxime Coquelin,
	Chenbo Xia, Konstantin Ananyev, Andrew Rybchenko, Fiona Trahe,
	Ashish Gupta, John Griffin, Deepak Kumar Jain, Ziyang Xuan,
	Xiaoyun Wang, Guoyang Zhou, Min Hu (Connor),
	Yisen Zhuang, Lijun Ou, Rosen Xu, Tianfei zhang, Akhil Goyal,
	Declan Doherty, Chengwen Feng, Kevin Laatz, Bruce Richardson,
	Thomas Monjalon, Ferruh Yigit
  Cc: dev, Sean Morrissey, Conor Fogarty, John McNamara, Conor Walsh

Removing the use of driver following PMD as it's
unnecessary.

Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
Signed-off-by: Conor Fogarty <conor.fogarty@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test-pmd/cmdline.c                        |  4 +--
 doc/guides/bbdevs/turbo_sw.rst                |  2 +-
 doc/guides/cryptodevs/virtio.rst              |  2 +-
 doc/guides/linux_gsg/build_sample_apps.rst    |  2 +-
 doc/guides/nics/af_packet.rst                 |  2 +-
 doc/guides/nics/af_xdp.rst                    |  2 +-
 doc/guides/nics/avp.rst                       |  4 +--
 doc/guides/nics/enetfec.rst                   |  2 +-
 doc/guides/nics/fm10k.rst                     |  4 +--
 doc/guides/nics/intel_vf.rst                  |  2 +-
 doc/guides/nics/netvsc.rst                    |  2 +-
 doc/guides/nics/nfp.rst                       |  2 +-
 doc/guides/nics/thunderx.rst                  |  2 +-
 doc/guides/nics/virtio.rst                    |  4 +--
 .../prog_guide/writing_efficient_code.rst     |  4 +--
 doc/guides/rel_notes/known_issues.rst         |  2 +-
 doc/guides/rel_notes/release_16_04.rst        |  2 +-
 doc/guides/rel_notes/release_19_05.rst        |  6 ++--
 doc/guides/rel_notes/release_19_11.rst        |  2 +-
 doc/guides/rel_notes/release_20_11.rst        |  4 +--
 doc/guides/rel_notes/release_21_02.rst        |  2 +-
 doc/guides/rel_notes/release_21_05.rst        |  2 +-
 doc/guides/rel_notes/release_21_08.rst        |  2 +-
 doc/guides/rel_notes/release_21_11.rst        |  2 +-
 doc/guides/rel_notes/release_2_2.rst          |  4 +--
 doc/guides/sample_app_ug/bbdev_app.rst        |  2 +-
 .../sample_app_ug/l3_forward_access_ctrl.rst  |  2 +-
 doc/guides/tools/testeventdev.rst             |  2 +-
 drivers/common/sfc_efx/efsys.h                |  2 +-
 drivers/compress/qat/qat_comp_pmd.h           |  2 +-
 drivers/crypto/qat/qat_asym_pmd.h             |  2 +-
 drivers/crypto/qat/qat_sym_pmd.h              |  2 +-
 drivers/net/fm10k/fm10k_ethdev.c              |  2 +-
 drivers/net/hinic/base/hinic_pmd_cmdq.h       |  2 +-
 drivers/net/hns3/hns3_ethdev.c                |  6 ++--
 drivers/net/hns3/hns3_ethdev.h                |  8 +++---
 drivers/net/hns3/hns3_ethdev_vf.c             | 28 +++++++++----------
 drivers/net/hns3/hns3_rss.c                   |  4 +--
 drivers/net/hns3/hns3_rxtx.c                  |  8 +++---
 drivers/net/hns3/hns3_rxtx.h                  |  4 +--
 drivers/net/i40e/i40e_ethdev.c                |  2 +-
 drivers/net/nfp/nfp_common.h                  |  2 +-
 drivers/net/nfp/nfp_ethdev.c                  |  2 +-
 drivers/net/nfp/nfp_ethdev_vf.c               |  2 +-
 drivers/raw/ifpga/base/README                 |  2 +-
 lib/bbdev/rte_bbdev.h                         | 12 ++++----
 lib/compressdev/rte_compressdev_pmd.h         |  2 +-
 lib/cryptodev/cryptodev_pmd.h                 |  2 +-
 lib/dmadev/rte_dmadev_core.h                  |  2 +-
 lib/eal/include/rte_dev.h                     |  2 +-
 lib/eal/include/rte_devargs.h                 |  4 +--
 lib/ethdev/rte_ethdev.h                       | 18 ++++++------
 52 files changed, 98 insertions(+), 98 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index c43c85c591..6e10afeedd 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -2701,7 +2701,7 @@ cmd_config_rxtx_queue_parsed(void *parsed_result,
 		ret = rte_eth_dev_tx_queue_stop(res->portid, res->qid);
 
 	if (ret == -ENOTSUP)
-		fprintf(stderr, "Function not supported in PMD driver\n");
+		fprintf(stderr, "Function not supported in PMD\n");
 }
 
 cmdline_parse_token_string_t cmd_config_rxtx_queue_port =
@@ -14700,7 +14700,7 @@ cmd_ddp_info_parsed(
 		free(proto);
 #endif
 	if (ret == -ENOTSUP)
-		fprintf(stderr, "Function not supported in PMD driver\n");
+		fprintf(stderr, "Function not supported in PMD\n");
 	close_file(pkg);
 }
 
diff --git a/doc/guides/bbdevs/turbo_sw.rst b/doc/guides/bbdevs/turbo_sw.rst
index 43c5129fd7..1e23e37027 100644
--- a/doc/guides/bbdevs/turbo_sw.rst
+++ b/doc/guides/bbdevs/turbo_sw.rst
@@ -149,7 +149,7 @@ Example:
 
 * For AVX512 machines with SDK libraries installed then both 4G and 5G can be enabled for full real time FEC capability.
   For AVX2 machines it is possible to only enable the 4G libraries and the PMD capabilities will be limited to 4G FEC.
-  If no library is present then the PMD driver will still build but its capabilities will be limited accordingly.
+  If no library is present then the PMD will still build but its capabilities will be limited accordingly.
 
 
 To use the PMD in an application, user must:
diff --git a/doc/guides/cryptodevs/virtio.rst b/doc/guides/cryptodevs/virtio.rst
index 8b96446ff2..ce4d43519a 100644
--- a/doc/guides/cryptodevs/virtio.rst
+++ b/doc/guides/cryptodevs/virtio.rst
@@ -73,7 +73,7 @@ number of the virtio-crypto device:
     echo -n 0000:00:04.0 > /sys/bus/pci/drivers/virtio-pci/unbind
     echo "1af4 1054" > /sys/bus/pci/drivers/uio_pci_generic/new_id
 
-Finally the front-end virtio crypto PMD driver can be installed.
+Finally the front-end virtio crypto PMD can be installed.
 
 Tests
 -----
diff --git a/doc/guides/linux_gsg/build_sample_apps.rst b/doc/guides/linux_gsg/build_sample_apps.rst
index efd2dd23f1..4f99617233 100644
--- a/doc/guides/linux_gsg/build_sample_apps.rst
+++ b/doc/guides/linux_gsg/build_sample_apps.rst
@@ -66,7 +66,7 @@ The EAL options are as follows:
 
 * ``-d``:
   Add a driver or driver directory to be loaded.
-  The application should use this option to load the pmd drivers
+  The application should use this option to load the PMDs
   that are built as shared libraries.
 
 * ``-m MB``:
diff --git a/doc/guides/nics/af_packet.rst b/doc/guides/nics/af_packet.rst
index 54feffdef4..8292369141 100644
--- a/doc/guides/nics/af_packet.rst
+++ b/doc/guides/nics/af_packet.rst
@@ -5,7 +5,7 @@ AF_PACKET Poll Mode Driver
 ==========================
 
 The AF_PACKET socket in Linux allows an application to receive and send raw
-packets. This Linux-specific PMD driver binds to an AF_PACKET socket and allows
+packets. This Linux-specific PMD binds to an AF_PACKET socket and allows
 a DPDK application to send and receive raw packets through the Kernel.
 
 In order to improve Rx and Tx performance this implementation makes use of
diff --git a/doc/guides/nics/af_xdp.rst b/doc/guides/nics/af_xdp.rst
index 8bf40b5f0f..c9d0e1ad6c 100644
--- a/doc/guides/nics/af_xdp.rst
+++ b/doc/guides/nics/af_xdp.rst
@@ -12,7 +12,7 @@ For the full details behind AF_XDP socket, you can refer to
 `AF_XDP documentation in the Kernel
 <https://www.kernel.org/doc/Documentation/networking/af_xdp.rst>`_.
 
-This Linux-specific PMD driver creates the AF_XDP socket and binds it to a
+This Linux-specific PMD creates the AF_XDP socket and binds it to a
 specific netdev queue, it allows a DPDK application to send and receive raw
 packets through the socket which would bypass the kernel network stack.
 Current implementation only supports single queue, multi-queues feature will
diff --git a/doc/guides/nics/avp.rst b/doc/guides/nics/avp.rst
index 1a194fc23c..a749f2a0f6 100644
--- a/doc/guides/nics/avp.rst
+++ b/doc/guides/nics/avp.rst
@@ -35,7 +35,7 @@ to another with minimal packet loss.
 Features and Limitations of the AVP PMD
 ---------------------------------------
 
-The AVP PMD driver provides the following functionality.
+The AVP PMD provides the following functionality.
 
 *   Receive and transmit of both simple and chained mbuf packets,
 
@@ -74,7 +74,7 @@ Launching a VM with an AVP type network attachment
 The following example will launch a VM with three network attachments.  The
 first attachment will have a default vif-model of "virtio".  The next two
 network attachments will have a vif-model of "avp" and may be used with a DPDK
-application which is built to include the AVP PMD driver.
+application which is built to include the AVP PMD.
 
 .. code-block:: console
 
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index a64e72fdd6..381635e627 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -65,7 +65,7 @@ The diagram below shows a system level overview of ENETFEC:
                         | PHY |
                         +-----+
 
-ENETFEC Ethernet driver is traditional DPDK PMD driver running in userspace.
+ENETFEC Ethernet driver is a traditional DPDK PMD running in userspace.
 'fec-uio' is the kernel driver.
 The MAC and PHY are the hardware blocks.
 ENETFEC PMD uses standard UIO interface to access kernel
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index bba53f5a64..d6efac0917 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -114,9 +114,9 @@ Switch manager
 ~~~~~~~~~~~~~~
 
 The Intel FM10000 family of NICs integrate a hardware switch and multiple host
-interfaces. The FM10000 PMD driver only manages host interfaces. For the
+interfaces. The FM10000 PMD only manages host interfaces. For the
 switch component another switch driver has to be loaded prior to the
-FM10000 PMD driver. The switch driver can be acquired from Intel support.
+FM10000 PMD. The switch driver can be acquired from Intel support.
 Only Testpoint is validated with DPDK, the latest version that has been
 validated with DPDK is 4.1.6.
 
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index fd235e1463..648af39c22 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -571,7 +571,7 @@ Fast Host-based Packet Processing
 
 Software Defined Network (SDN) trends are demanding fast host-based packet handling.
 In a virtualization environment,
-the DPDK VF PMD driver performs the same throughput result as a non-VT native environment.
+the DPDK VF PMD performs the same throughput result as a non-VT native environment.
 
 With such host instance fast packet processing, lots of services such as filtering, QoS,
 DPI can be offloaded on the host fast path.
diff --git a/doc/guides/nics/netvsc.rst b/doc/guides/nics/netvsc.rst
index c0e218c743..77efe1dc91 100644
--- a/doc/guides/nics/netvsc.rst
+++ b/doc/guides/nics/netvsc.rst
@@ -14,7 +14,7 @@ checksum and segmentation offloads.
 Features and Limitations of Hyper-V PMD
 ---------------------------------------
 
-In this release, the hyper PMD driver provides the basic functionality of packet reception and transmission.
+In this release, the hyper PMD provides the basic functionality of packet reception and transmission.
 
 *   It supports merge-able buffers per packet when receiving packets and scattered buffer per packet
     when transmitting packets. The packet size supported is from 64 to 65536.
diff --git a/doc/guides/nics/nfp.rst b/doc/guides/nics/nfp.rst
index bf8be723b0..30cdc69202 100644
--- a/doc/guides/nics/nfp.rst
+++ b/doc/guides/nics/nfp.rst
@@ -14,7 +14,7 @@ This document explains how to use DPDK with the Netronome Poll Mode
 Driver (PMD) supporting Netronome's Network Flow Processor 6xxx
 (NFP-6xxx) and Netronome's Flow Processor 4xxx (NFP-4xxx).
 
-NFP is a SRIOV capable device and the PMD driver supports the physical
+NFP is a SRIOV capable device and the PMD supports the physical
 function (PF) and the virtual functions (VFs).
 
 Dependencies
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index 98f23a2b2a..d96395dafa 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -199,7 +199,7 @@ Each port consists of a primary VF and n secondary VF(s). Each VF provides 8 Tx/
 When a given port is configured to use more than 8 queues, it requires one (or more) secondary VF.
 Each secondary VF adds 8 additional queues to the queue set.
 
-During PMD driver initialization, the primary VF's are enumerated by checking the
+During PMD initialization, the primary VF's are enumerated by checking the
 specific flag (see sqs message in DPDK boot log - sqs indicates secondary queue set).
 They are at the beginning of VF list (the remain ones are secondary VF's).
 
diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
index 98e0d012b7..7c0ae2b3af 100644
--- a/doc/guides/nics/virtio.rst
+++ b/doc/guides/nics/virtio.rst
@@ -17,7 +17,7 @@ With this enhancement, virtio could achieve quite promising performance.
 For basic qemu-KVM installation and other Intel EM poll mode driver in guest VM,
 please refer to Chapter "Driver for VM Emulated Devices".
 
-In this chapter, we will demonstrate usage of virtio PMD driver with two backends,
+In this chapter, we will demonstrate usage of virtio PMD with two backends,
 standard qemu vhost back end and vhost kni back end.
 
 Virtio Implementation in DPDK
@@ -40,7 +40,7 @@ end if necessary.
 Features and Limitations of virtio PMD
 --------------------------------------
 
-In this release, the virtio PMD driver provides the basic functionality of packet reception and transmission.
+In this release, the virtio PMD provides the basic functionality of packet reception and transmission.
 
 *   It supports merge-able buffers per packet when receiving packets and scattered buffer per packet
     when transmitting packets. The packet size supported is from 64 to 1518.
diff --git a/doc/guides/prog_guide/writing_efficient_code.rst b/doc/guides/prog_guide/writing_efficient_code.rst
index a61e8320ae..e6c26efdd3 100644
--- a/doc/guides/prog_guide/writing_efficient_code.rst
+++ b/doc/guides/prog_guide/writing_efficient_code.rst
@@ -119,8 +119,8 @@ The code algorithm that dequeues messages may be something similar to the follow
         my_process_bulk(obj_table, count);
    }
 
-PMD Driver
-----------
+PMD
+---
 
 The DPDK Poll Mode Driver (PMD) is also able to work in bulk/burst mode,
 allowing the factorization of some code for each call in the send or receive function.
diff --git a/doc/guides/rel_notes/known_issues.rst b/doc/guides/rel_notes/known_issues.rst
index beea877bad..187d9c942e 100644
--- a/doc/guides/rel_notes/known_issues.rst
+++ b/doc/guides/rel_notes/known_issues.rst
@@ -250,7 +250,7 @@ PMD does not work with --no-huge EAL command line parameter
 
 **Description**:
    Currently, the DPDK does not store any information about memory allocated by ``malloc()` (for example, NUMA node,
-   physical address), hence PMD drivers do not work when the ``--no-huge`` command line parameter is supplied to EAL.
+   physical address), hence PMDs do not work when the ``--no-huge`` command line parameter is supplied to EAL.
 
 **Implication**:
    Sending and receiving data with PMD will not work.
diff --git a/doc/guides/rel_notes/release_16_04.rst b/doc/guides/rel_notes/release_16_04.rst
index b7d07834e1..ac18e1dddb 100644
--- a/doc/guides/rel_notes/release_16_04.rst
+++ b/doc/guides/rel_notes/release_16_04.rst
@@ -56,7 +56,7 @@ New Features
 
 * **Enabled Virtio 1.0 support.**
 
-  Enabled Virtio 1.0 support for Virtio pmd driver.
+  Enabled Virtio 1.0 support for Virtio PMD.
 
 * **Supported Virtio for ARM.**
 
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 30f704e204..89ae425bdb 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -46,13 +46,13 @@ New Features
   Updated the KNI kernel module to set the ``max_mtu`` according to the given
   initial MTU size. Without it, the maximum MTU was 1500.
 
-  Updated the KNI PMD driver to set the ``mbuf_size`` and MTU based on
+  Updated the KNI PMD to set the ``mbuf_size`` and MTU based on
   the given mb-pool. This provide the ability to pass jumbo frames
   if the mb-pool contains a suitable buffer size.
 
 * **Added the AF_XDP PMD.**
 
-  Added a Linux-specific PMD driver for AF_XDP. This PMD can create an AF_XDP socket
+  Added a Linux-specific PMD for AF_XDP. This PMD can create an AF_XDP socket
   and bind it to a specific netdev queue. It allows a DPDK application to send
   and receive raw packets through the socket which would bypass the kernel
   network stack to achieve high performance packet processing.
@@ -240,7 +240,7 @@ ABI Changes
 
   The ``rte_eth_dev_info`` structure has had two extra fields
   added: ``min_mtu`` and ``max_mtu``. Each of these are of type ``uint16_t``.
-  The values of these fields can be set specifically by the PMD drivers as
+  The values of these fields can be set specifically by the PMDs as
   supported values can vary from device to device.
 
 * cryptodev: in 18.08 a new structure ``rte_crypto_asym_op`` was introduced and
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index b509a6dd28..302b3e5f37 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -189,7 +189,7 @@ New Features
 
 * **Added Marvell OCTEON TX2 crypto PMD.**
 
-  Added a new PMD driver for hardware crypto offload block on ``OCTEON TX2``
+  Added a new PMD for hardware crypto offload block on ``OCTEON TX2``
   SoC.
 
   See :doc:`../cryptodevs/octeontx2` for more details
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 90cc3ed680..af7ce90ba3 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -192,7 +192,7 @@ New Features
 
 * **Added Wangxun txgbe PMD.**
 
-  Added a new PMD driver for Wangxun 10 Gigabit Ethernet NICs.
+  Added a new PMD for Wangxun 10 Gigabit Ethernet NICs.
 
   See the :doc:`../nics/txgbe` for more details.
 
@@ -288,7 +288,7 @@ New Features
 
 * **Added Marvell OCTEON TX2 regex PMD.**
 
-  Added a new PMD driver for the hardware regex offload block for OCTEON TX2 SoC.
+  Added a new PMD for the hardware regex offload block for OCTEON TX2 SoC.
 
   See the :doc:`../regexdevs/octeontx2` for more details.
 
diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
index 9d5e17758f..5fbf5b3d43 100644
--- a/doc/guides/rel_notes/release_21_02.rst
+++ b/doc/guides/rel_notes/release_21_02.rst
@@ -135,7 +135,7 @@ New Features
 
 * **Added mlx5 compress PMD.**
 
-  Added a new compress PMD driver for Bluefield 2 adapters.
+  Added a new compress PMD for Bluefield 2 adapters.
 
   See the :doc:`../compressdevs/mlx5` for more details.
 
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index 8adb225a4d..49044ed422 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -78,7 +78,7 @@ New Features
   * Updated ena_com (HAL) to the latest version.
   * Added indication of the RSS hash presence in the mbuf.
 
-* **Updated Arkville PMD driver.**
+* **Updated Arkville PMD.**
 
   Updated Arkville net driver with new features and improvements, including:
 
diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
index 6fb4e43346..ac1c081903 100644
--- a/doc/guides/rel_notes/release_21_08.rst
+++ b/doc/guides/rel_notes/release_21_08.rst
@@ -67,7 +67,7 @@ New Features
 
 * **Added Wangxun ngbe PMD.**
 
-  Added a new PMD driver for Wangxun 1Gb Ethernet NICs.
+  Added a new PMD for Wangxun 1Gb Ethernet NICs.
   See the :doc:`../nics/ngbe` for more details.
 
 * **Added inflight packets clear API in vhost library.**
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 4d8c59472a..1d6774afc1 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -354,7 +354,7 @@ New Features
 
 * **Added NXP LA12xx baseband PMD.**
 
-  * Added a new baseband PMD driver for NXP LA12xx Software defined radio.
+  * Added a new baseband PMD for NXP LA12xx Software defined radio.
   * See the :doc:`../bbdevs/la12xx` for more details.
 
 * **Updated Mellanox compress driver.**
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 8273473ff4..029b758e90 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -10,8 +10,8 @@ New Features
 * **Introduce ARMv7 and ARMv8 architectures.**
 
   * It is now possible to build DPDK for the ARMv7 and ARMv8 platforms.
-  * ARMv7 can be tested with virtual PMD drivers.
-  * ARMv8 can be tested with virtual and physical PMD drivers.
+  * ARMv7 can be tested with virtual PMDs.
+  * ARMv8 can be tested with virtual and physical PMDs.
 
 * **Enabled freeing of ring.**
 
diff --git a/doc/guides/sample_app_ug/bbdev_app.rst b/doc/guides/sample_app_ug/bbdev_app.rst
index 45e69e36e2..7f02f0ed90 100644
--- a/doc/guides/sample_app_ug/bbdev_app.rst
+++ b/doc/guides/sample_app_ug/bbdev_app.rst
@@ -31,7 +31,7 @@ Limitations
 Compiling the Application
 -------------------------
 
-DPDK needs to be built with ``baseband_turbo_sw`` PMD driver enabled along
+DPDK needs to be built with ``baseband_turbo_sw`` PMD enabled along
 with ``FLEXRAN SDK`` Libraries. Refer to *SW Turbo Poll Mode Driver*
 documentation for more details on this.
 
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index 486247ac2e..ecb1c857c4 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -220,7 +220,7 @@ Once the application starts, it transitions through three phases:
 
 *   **Final Phase** - Perform the following tasks:
 
-    Calls the EAL, PMD driver and ACL library to free resource, then quits.
+    Calls the EAL, PMD and ACL library to free resources, then quits.
 
 Compiling the Application
 -------------------------
diff --git a/doc/guides/tools/testeventdev.rst b/doc/guides/tools/testeventdev.rst
index 7b4cdeb43f..48efb9ea6e 100644
--- a/doc/guides/tools/testeventdev.rst
+++ b/doc/guides/tools/testeventdev.rst
@@ -239,7 +239,7 @@ to the ordered queue. The worker receives the events from ordered queue and
 forwards to atomic queue. Since the events from an ordered queue can be
 processed in parallel on the different workers, the ingress order of events
 might have changed on the downstream atomic queue enqueue. On enqueue to the
-atomic queue, the eventdev PMD driver reorders the event to the original
+atomic queue, the eventdev PMD reorders the event to the original
 ingress order(i.e producer ingress order).
 
 When the event is dequeued from the atomic queue by the worker, this test
diff --git a/drivers/common/sfc_efx/efsys.h b/drivers/common/sfc_efx/efsys.h
index b2109bf3c0..3860c2835a 100644
--- a/drivers/common/sfc_efx/efsys.h
+++ b/drivers/common/sfc_efx/efsys.h
@@ -609,7 +609,7 @@ typedef struct efsys_bar_s {
 /* DMA SYNC */
 
 /*
- * DPDK does not provide any DMA syncing API, and no PMD drivers
+ * DPDK does not provide any DMA syncing API, and no PMDs
  * have any traces of explicit DMA syncing.
  * DMA mapping is assumed to be coherent.
  */
diff --git a/drivers/compress/qat/qat_comp_pmd.h b/drivers/compress/qat/qat_comp_pmd.h
index 86317a513c..3c8682a768 100644
--- a/drivers/compress/qat/qat_comp_pmd.h
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -13,7 +13,7 @@
 #include "qat_device.h"
 #include "qat_comp.h"
 
-/**< Intel(R) QAT Compression PMD driver name */
+/**< Intel(R) QAT Compression PMD name */
 #define COMPRESSDEV_NAME_QAT_PMD	compress_qat
 
 /* Private data structure for a QAT compression device capability. */
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index fd6b406248..f988d646e5 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -10,7 +10,7 @@
 #include "qat_crypto.h"
 #include "qat_device.h"
 
-/** Intel(R) QAT Asymmetric Crypto PMD driver name */
+/** Intel(R) QAT Asymmetric Crypto PMD name */
 #define CRYPTODEV_NAME_QAT_ASYM_PMD	crypto_qat_asym
 
 
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index 0dc0c6f0d9..59fbdefa12 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -16,7 +16,7 @@
 #include "qat_crypto.h"
 #include "qat_device.h"
 
-/** Intel(R) QAT Symmetric Crypto PMD driver name */
+/** Intel(R) QAT Symmetric Crypto PMD name */
 #define CRYPTODEV_NAME_QAT_SYM_PMD	crypto_qat
 
 /* Internal capabilities */
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 7c85a05746..43e1d13431 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -255,7 +255,7 @@ rx_queue_clean(struct fm10k_rx_queue *q)
 	for (i = 0; i < q->nb_fake_desc; ++i)
 		q->hw_ring[q->nb_desc + i] = zero;
 
-	/* vPMD driver has a different way of releasing mbufs. */
+	/* vPMD has a different way of releasing mbufs. */
 	if (q->rx_using_sse) {
 		fm10k_rx_queue_release_mbufs_vec(q);
 		return;
diff --git a/drivers/net/hinic/base/hinic_pmd_cmdq.h b/drivers/net/hinic/base/hinic_pmd_cmdq.h
index 0d5e380123..58a1fbda71 100644
--- a/drivers/net/hinic/base/hinic_pmd_cmdq.h
+++ b/drivers/net/hinic/base/hinic_pmd_cmdq.h
@@ -9,7 +9,7 @@
 
 #define HINIC_SCMD_DATA_LEN		16
 
-/* pmd driver uses 64, kernel l2nic use 4096 */
+/* PMD uses 64, kernel l2nic uses 4096 */
 #define	HINIC_CMDQ_DEPTH		64
 
 #define	HINIC_CMDQ_BUF_SIZE		2048U
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 847e660f44..0bd12907d8 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -1060,7 +1060,7 @@ hns3_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on)
 		return ret;
 	/*
 	 * Only in HNS3_SW_SHIFT_AND_MODE the PVID related operation in Tx/Rx
-	 * need be processed by PMD driver.
+	 * need to be processed by PMD.
 	 */
 	if (pvid_en_state_change &&
 	    hw->vlan_mode == HNS3_SW_SHIFT_AND_DISCARD_MODE)
@@ -2592,7 +2592,7 @@ hns3_parse_cfg(struct hns3_cfg *cfg, struct hns3_cmd_desc *desc)
 	 * Field ext_rss_size_max obtained from firmware will be more flexible
 	 * for future changes and expansions, which is an exponent of 2, instead
 	 * of reading out directly. If this field is not zero, hns3 PF PMD
-	 * driver uses it as rss_size_max under one TC. Device, whose revision
+	 * uses it as rss_size_max under one TC. Device, whose revision
 	 * id is greater than or equal to PCI_REVISION_ID_HIP09_A, obtains the
 	 * maximum number of queues supported under a TC through this field.
 	 */
@@ -6311,7 +6311,7 @@ hns3_fec_set(struct rte_eth_dev *dev, uint32_t mode)
 	if (ret < 0)
 		return ret;
 
-	/* HNS3 PMD driver only support one bit set mode, e.g. 0x1, 0x4 */
+	/* HNS3 PMD only supports one bit set mode, e.g. 0x1, 0x4 */
 	if (!is_fec_mode_one_bit_set(mode)) {
 		hns3_err(hw, "FEC mode(0x%x) not supported in HNS3 PMD, "
 			     "FEC mode should be only one bit set", mode);
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 6d30125dcc..aa45b31261 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -465,8 +465,8 @@ struct hns3_queue_intr {
 	 *     enable Rx interrupt.
 	 *
 	 *  - HNS3_INTR_MAPPING_VEC_ALL
-	 *     PMD driver can map/unmmap all interrupt vectors with queues When
-	 *     Rx interrupt in enabled.
+	 *     PMD can map/unmap all interrupt vectors with queues when
+	 *     Rx interrupt is enabled.
 	 */
 	uint8_t mapping_mode;
 	/*
@@ -575,14 +575,14 @@ struct hns3_hw {
 	 *
 	 *  - HNS3_SW_SHIFT_AND_DISCARD_MODE
 	 *     For some versions of hardware network engine, because of the
-	 *     hardware limitation, PMD driver needs to detect the PVID status
+	 *     hardware limitation, PMD needs to detect the PVID status
 	 *     to work with haredware to implement PVID-related functions.
 	 *     For example, driver need discard the stripped PVID tag to ensure
 	 *     the PVID will not report to mbuf and shift the inserted VLAN tag
 	 *     to avoid port based VLAN covering it.
 	 *
 	 *  - HNS3_HW_SHIT_AND_DISCARD_MODE
-	 *     PMD driver does not need to process PVID-related functions in
+	 *     PMD does not need to process PVID-related functions in
 	 *     I/O process, Hardware will adjust the sequence between port based
 	 *     VLAN tag and BD VLAN tag automatically and VLAN tag stripped by
 	 *     PVID will be invisible to driver. And in this mode, hns3 is able
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index d8a99693e0..805abd4543 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -232,7 +232,7 @@ hns3vf_set_default_mac_addr(struct rte_eth_dev *dev,
 				HNS3_TWO_ETHER_ADDR_LEN, true, NULL, 0);
 	if (ret) {
 		/*
-		 * The hns3 VF PMD driver depends on the hns3 PF kernel ethdev
+		 * The hns3 VF PMD depends on the hns3 PF kernel ethdev
 		 * driver. When user has configured a MAC address for VF device
 		 * by "ip link set ..." command based on the PF device, the hns3
 		 * PF kernel ethdev driver does not allow VF driver to request
@@ -312,9 +312,9 @@ hns3vf_set_promisc_mode(struct hns3_hw *hw, bool en_bc_pmc,
 	req = (struct hns3_mbx_vf_to_pf_cmd *)desc.data;
 
 	/*
-	 * The hns3 VF PMD driver depends on the hns3 PF kernel ethdev driver,
+	 * The hns3 VF PMD depends on the hns3 PF kernel ethdev driver,
 	 * so there are some features for promiscuous/allmulticast mode in hns3
-	 * VF PMD driver as below:
+	 * VF PMD as below:
 	 * 1. The promiscuous/allmulticast mode can be configured successfully
 	 *    only based on the trusted VF device. If based on the non trusted
 	 *    VF device, configuring promiscuous/allmulticast mode will fail.
@@ -322,14 +322,14 @@ hns3vf_set_promisc_mode(struct hns3_hw *hw, bool en_bc_pmc,
 	 *    kernel ethdev driver on the host by the following command:
 	 *      "ip link set <eth num> vf <vf id> turst on"
 	 * 2. After the promiscuous mode is configured successfully, hns3 VF PMD
-	 *    driver can receive the ingress and outgoing traffic. In the words,
+	 *    can receive the ingress and outgoing traffic. This includes
 	 *    all the ingress packets, all the packets sent from the PF and
 	 *    other VFs on the same physical port.
 	 * 3. Note: Because of the hardware constraints, By default vlan filter
 	 *    is enabled and couldn't be turned off based on VF device, so vlan
 	 *    filter is still effective even in promiscuous mode. If upper
 	 *    applications don't call rte_eth_dev_vlan_filter API function to
-	 *    set vlan based on VF device, hns3 VF PMD driver will can't receive
+	 *    set vlan based on VF device, hns3 VF PMD will not be able to receive
 	 *    the packets with vlan tag in promiscuoue mode.
 	 */
 	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false);
@@ -553,9 +553,9 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	/*
 	 * The hns3 PF/VF devices on the same port share the hardware MTU
 	 * configuration. Currently, we send mailbox to inform hns3 PF kernel
-	 * ethdev driver to finish hardware MTU configuration in hns3 VF PMD
-	 * driver, there is no need to stop the port for hns3 VF device, and the
-	 * MTU value issued by hns3 VF PMD driver must be less than or equal to
+	 * ethdev driver to finish hardware MTU configuration in hns3 VF PMD,
+	 * there is no need to stop the port for hns3 VF device, and the
+	 * MTU value issued by hns3 VF PMD must be less than or equal to
 	 * PF's MTU.
 	 */
 	if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED)) {
@@ -565,8 +565,8 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	/*
 	 * when Rx of scattered packets is off, we have some possibility of
-	 * using vector Rx process function or simple Rx functions in hns3 PMD
-	 * driver. If the input MTU is increased and the maximum length of
+	 * using vector Rx process function or simple Rx functions in hns3 PMD.
+	 * If the input MTU is increased and the maximum length of
 	 * received packets is greater than the length of a buffer for Rx
 	 * packet, the hardware network engine needs to use multiple BDs and
 	 * buffers to store these packets. This will cause problems when still
@@ -2075,7 +2075,7 @@ hns3vf_check_default_mac_change(struct hns3_hw *hw)
 	 * ethdev driver sets the MAC address for VF device after the
 	 * initialization of the related VF device, the PF driver will notify
 	 * VF driver to reset VF device to make the new MAC address effective
-	 * immediately. The hns3 VF PMD driver should check whether the MAC
+	 * immediately. The hns3 VF PMD should check whether the MAC
 	 * address has been changed by the PF kernel ethdev driver, if changed
 	 * VF driver should configure hardware using the new MAC address in the
 	 * recovering hardware configuration stage of the reset process.
@@ -2416,12 +2416,12 @@ hns3vf_dev_init(struct rte_eth_dev *eth_dev)
 	/*
 	 * The hns3 PF ethdev driver in kernel support setting VF MAC address
 	 * on the host by "ip link set ..." command. To avoid some incorrect
-	 * scenes, for example, hns3 VF PMD driver fails to receive and send
+	 * scenes, for example, hns3 VF PMD fails to receive and send
 	 * packets after user configure the MAC address by using the
-	 * "ip link set ..." command, hns3 VF PMD driver keep the same MAC
+	 * "ip link set ..." command, hns3 VF PMD keep the same MAC
 	 * address strategy as the hns3 kernel ethdev driver in the
 	 * initialization. If user configure a MAC address by the ip command
-	 * for VF device, then hns3 VF PMD driver will start with it, otherwise
+	 * for VF device, then hns3 VF PMD will start with it, otherwise
 	 * start with a random MAC address in the initialization.
 	 */
 	if (rte_is_zero_ether_addr((struct rte_ether_addr *)hw->mac.mac_addr))
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index 85495bbe89..3a4b699ae2 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -667,7 +667,7 @@ hns3_rss_set_default_args(struct hns3_hw *hw)
 }
 
 /*
- * RSS initialization for hns3 pmd driver.
+ * RSS initialization for hns3 PMD.
  */
 int
 hns3_config_rss(struct hns3_adapter *hns)
@@ -739,7 +739,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 }
 
 /*
- * RSS uninitialization for hns3 pmd driver.
+ * RSS uninitialization for hns3 PMD.
  */
 void
 hns3_rss_uninit(struct hns3_adapter *hns)
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 40cc4e9c1a..f365daadf8 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1899,8 +1899,8 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	/*
 	 * For hns3 PF device, if the VLAN mode is HW_SHIFT_AND_DISCARD_MODE,
 	 * the pvid_sw_discard_en in the queue struct should not be changed,
-	 * because PVID-related operations do not need to be processed by PMD
-	 * driver. For hns3 VF device, whether it needs to process PVID depends
+	 * because PVID-related operations do not need to be processed by PMD.
+	 * For hns3 VF device, whether it needs to process PVID depends
 	 * on the configuration of PF kernel mode netdevice driver. And the
 	 * related PF configuration is delivered through the mailbox and finally
 	 * reflectd in port_base_vlan_cfg.
@@ -3039,8 +3039,8 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	/*
 	 * For hns3 PF device, if the VLAN mode is HW_SHIFT_AND_DISCARD_MODE,
 	 * the pvid_sw_shift_en in the queue struct should not be changed,
-	 * because PVID-related operations do not need to be processed by PMD
-	 * driver. For hns3 VF device, whether it needs to process PVID depends
+	 * because PVID-related operations do not need to be processed by PMD.
+	 * For hns3 VF device, whether it needs to process PVID depends
 	 * on the configuration of PF kernel mode netdev driver. And the
 	 * related PF configuration is delivered through the mailbox and finally
 	 * reflectd in port_base_vlan_cfg.
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index df731856ef..5423568cd0 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -318,7 +318,7 @@ struct hns3_rx_queue {
 	 * should not be transitted to the upper-layer application. For hardware
 	 * network engine whose vlan mode is HNS3_HW_SHIFT_AND_DISCARD_MODE,
 	 * such as kunpeng 930, PVID will not be reported to the BDs. So, PMD
-	 * driver does not need to perform PVID-related operation in Rx. At this
+	 * does not need to perform PVID-related operation in Rx. At this
 	 * point, the pvid_sw_discard_en will be false.
 	 */
 	uint8_t pvid_sw_discard_en:1;
@@ -490,7 +490,7 @@ struct hns3_tx_queue {
 	 * PVID will overwrite the outer VLAN field of Tx BD. For the hardware
 	 * network engine whose vlan mode is HNS3_HW_SHIFT_AND_DISCARD_MODE,
 	 * such as kunpeng 930, if the PVID is set, the hardware will shift the
-	 * VLAN field automatically. So, PMD driver does not need to do
+	 * VLAN field automatically. So, PMD does not need to do
 	 * PVID-related operations in Tx. And pvid_sw_shift_en will be false at
 	 * this point.
 	 */
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 344cbd25d3..c0bfff43ee 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1922,7 +1922,7 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 		goto err;
 
 	/* VMDQ setup.
-	 *  General PMD driver call sequence are NIC init, configure,
+	 *  General PMD call sequence is NIC init, configure,
 	 *  rx/tx_queue_setup and dev_start. In rx/tx_queue_setup() function, it
 	 *  will try to lookup the VSI that specific queue belongs to if VMDQ
 	 *  applicable. So, VMDQ setting has to be done before
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 3556c9cd17..8b35fa119c 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -8,7 +8,7 @@
  *
  * @file dpdk/pmd/nfp_net_pmd.h
  *
- * Netronome NFP_NET PMD driver
+ * Netronome NFP_NET PMD
  */
 
 #ifndef _NFP_COMMON_H_
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 830863af28..8e81cc498f 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -342,7 +342,7 @@ nfp_net_close(struct rte_eth_dev *dev)
 				     (void *)dev);
 
 	/*
-	 * The ixgbe PMD driver disables the pcie master on the
+	 * The ixgbe PMD disables the pcie master on the
 	 * device. The i40e does not...
 	 */
 
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 5557a1e002..303ef72b1b 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -238,7 +238,7 @@ nfp_netvf_close(struct rte_eth_dev *dev)
 			     (void *)dev);
 
 	/*
-	 * The ixgbe PMD driver disables the pcie master on the
+	 * The ixgbe PMD disables the pcie master on the
 	 * device. The i40e does not...
 	 */
 
diff --git a/drivers/raw/ifpga/base/README b/drivers/raw/ifpga/base/README
index 6b2b171b01..55d92d590a 100644
--- a/drivers/raw/ifpga/base/README
+++ b/drivers/raw/ifpga/base/README
@@ -42,5 +42,5 @@ Some features added in this version:
 3. Add altera SPI master driver and Intel MAX10 device driver.
 4. Add Altera I2C master driver and AT24 eeprom driver.
 5. Add Device Tree support to get the configuration from card.
-6. Instruding and exposing APIs to DPDK PMD driver to access networking
+6. Introducing and exposing APIs to DPDK PMD to access networking
 functionality.
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index ff193f2d65..1dbcf73b0e 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -164,7 +164,7 @@ rte_bbdev_queue_configure(uint16_t dev_id, uint16_t queue_id,
  *
  * @return
  *   - 0 on success
- *   - negative value on failure - as returned from PMD driver
+ *   - negative value on failure - as returned from PMD
  */
 int
 rte_bbdev_start(uint16_t dev_id);
@@ -207,7 +207,7 @@ rte_bbdev_close(uint16_t dev_id);
  *
  * @return
  *   - 0 on success
- *   - negative value on failure - as returned from PMD driver
+ *   - negative value on failure - as returned from PMD
  */
 int
 rte_bbdev_queue_start(uint16_t dev_id, uint16_t queue_id);
@@ -222,7 +222,7 @@ rte_bbdev_queue_start(uint16_t dev_id, uint16_t queue_id);
  *
  * @return
  *   - 0 on success
- *   - negative value on failure - as returned from PMD driver
+ *   - negative value on failure - as returned from PMD
  */
 int
 rte_bbdev_queue_stop(uint16_t dev_id, uint16_t queue_id);
@@ -782,7 +782,7 @@ rte_bbdev_callback_unregister(uint16_t dev_id, enum rte_bbdev_event_type event,
  *
  * @return
  *   - 0 on success
- *   - negative value on failure - as returned from PMD driver
+ *   - negative value on failure - as returned from PMD
  */
 int
 rte_bbdev_queue_intr_enable(uint16_t dev_id, uint16_t queue_id);
@@ -798,7 +798,7 @@ rte_bbdev_queue_intr_enable(uint16_t dev_id, uint16_t queue_id);
  *
  * @return
  *   - 0 on success
- *   - negative value on failure - as returned from PMD driver
+ *   - negative value on failure - as returned from PMD
  */
 int
 rte_bbdev_queue_intr_disable(uint16_t dev_id, uint16_t queue_id);
@@ -825,7 +825,7 @@ rte_bbdev_queue_intr_disable(uint16_t dev_id, uint16_t queue_id);
  * @return
  *   - 0 on success
  *   - ENOTSUP if interrupts are not supported by the identified device
- *   - negative value on failure - as returned from PMD driver
+ *   - negative value on failure - as returned from PMD
  */
 int
 rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
diff --git a/lib/compressdev/rte_compressdev_pmd.h b/lib/compressdev/rte_compressdev_pmd.h
index 16b6bc6b35..945a991fd6 100644
--- a/lib/compressdev/rte_compressdev_pmd.h
+++ b/lib/compressdev/rte_compressdev_pmd.h
@@ -319,7 +319,7 @@ rte_compressdev_pmd_release_device(struct rte_compressdev *dev);
  * PMD assist function to parse initialisation arguments for comp driver
  * when creating a new comp PMD device instance.
  *
- * PMD driver should set default values for that PMD before calling function,
+ * PMD should set default values for that PMD before calling this function,
  * these default values will be over-written with successfully parsed values
  * from args string.
  *
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index 89bf2af399..a6b25d297b 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -483,7 +483,7 @@ rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev);
  * PMD assist function to parse initialisation arguments for crypto driver
  * when creating a new crypto PMD device instance.
  *
- * PMD driver should set default values for that PMD before calling function,
+ * PMD should set default values for that PMD before calling this function,
  * these default values will be over-written with successfully parsed values
  * from args string.
  *
diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
index e42d8739ab..064785686f 100644
--- a/lib/dmadev/rte_dmadev_core.h
+++ b/lib/dmadev/rte_dmadev_core.h
@@ -59,7 +59,7 @@ typedef uint16_t (*rte_dma_burst_capacity_t)(const void *dev_private, uint16_t v
  * functions.
  *
  * The 'dev_private' field was placed in the first cache line to optimize
- * performance because the PMD driver mainly depends on this field.
+ * performance because the PMD mainly depends on this field.
  */
 struct rte_dma_fp_object {
 	/** PMD-specific private data. The driver should copy
diff --git a/lib/eal/include/rte_dev.h b/lib/eal/include/rte_dev.h
index 6c3f774672..448a41cb0e 100644
--- a/lib/eal/include/rte_dev.h
+++ b/lib/eal/include/rte_dev.h
@@ -8,7 +8,7 @@
 /**
  * @file
  *
- * RTE PMD Driver Registration Interface
+ * RTE PMD Registration Interface
  *
  * This file manages the list of device drivers.
  */
diff --git a/lib/eal/include/rte_devargs.h b/lib/eal/include/rte_devargs.h
index 71c8af9df3..37a0f042ab 100644
--- a/lib/eal/include/rte_devargs.h
+++ b/lib/eal/include/rte_devargs.h
@@ -35,7 +35,7 @@ extern "C" {
 /**
  * Class type key in global devargs syntax.
  *
- * Legacy devargs parser doesn't parse class type. PMD driver is
+ * Legacy devargs parser doesn't parse class type. PMD is
  * encouraged to use this key to resolve class type.
  */
 #define RTE_DEVARGS_KEY_CLASS "class"
@@ -43,7 +43,7 @@ extern "C" {
 /**
  * Driver type key in global devargs syntax.
  *
- * Legacy devargs parser doesn't parse driver type. PMD driver is
+ * Legacy devargs parser doesn't parse driver type. PMD is
  * encouraged to use this key to resolve driver type.
  */
 #define RTE_DEVARGS_KEY_DRIVER "driver"
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 096b676fc1..fa299c8ad7 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -2610,7 +2610,7 @@ int rte_eth_tx_hairpin_queue_setup
  *   - (-EINVAL) if bad parameter.
  *   - (-ENODEV) if *port_id* invalid
  *   - (-ENOTSUP) if hardware doesn't support.
- *   - Others detailed errors from PMD drivers.
+ *   - Other detailed errors from PMDs.
  */
 __rte_experimental
 int rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports,
@@ -2636,7 +2636,7 @@ int rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports,
  *   - (-ENODEV) if Tx port ID is invalid.
  *   - (-EBUSY) if device is not in started state.
  *   - (-ENOTSUP) if hardware doesn't support.
- *   - Others detailed errors from PMD drivers.
+ *   - Other detailed errors from PMDs.
  */
 __rte_experimental
 int rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port);
@@ -2663,7 +2663,7 @@ int rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port);
  *   - (-ENODEV) if Tx port ID is invalid.
  *   - (-EBUSY) if device is in stopped state.
  *   - (-ENOTSUP) if hardware doesn't support.
- *   - Others detailed errors from PMD drivers.
+ *   - Other detailed errors from PMDs.
  */
 __rte_experimental
 int rte_eth_hairpin_unbind(uint16_t tx_port, uint16_t rx_port);
@@ -2706,7 +2706,7 @@ int rte_eth_dev_is_valid_port(uint16_t port_id);
  *   - -ENODEV: if *port_id* is invalid.
  *   - -EINVAL: The queue_id out of range or belong to hairpin.
  *   - -EIO: if device is removed.
- *   - -ENOTSUP: The function not supported in PMD driver.
+ *   - -ENOTSUP: The function not supported in PMD.
  */
 int rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id);
 
@@ -2724,7 +2724,7 @@ int rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id);
  *   - -ENODEV: if *port_id* is invalid.
  *   - -EINVAL: The queue_id out of range or belong to hairpin.
  *   - -EIO: if device is removed.
- *   - -ENOTSUP: The function not supported in PMD driver.
+ *   - -ENOTSUP: The function not supported in PMD.
  */
 int rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id);
 
@@ -2743,7 +2743,7 @@ int rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id);
  *   - -ENODEV: if *port_id* is invalid.
  *   - -EINVAL: The queue_id out of range or belong to hairpin.
  *   - -EIO: if device is removed.
- *   - -ENOTSUP: The function not supported in PMD driver.
+ *   - -ENOTSUP: The function not supported in PMD.
  */
 int rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id);
 
@@ -2761,7 +2761,7 @@ int rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id);
  *   - -ENODEV: if *port_id* is invalid.
  *   - -EINVAL: The queue_id out of range or belong to hairpin.
  *   - -EIO: if device is removed.
- *   - -ENOTSUP: The function not supported in PMD driver.
+ *   - -ENOTSUP: The function not supported in PMD.
  */
 int rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id);
 
@@ -2963,7 +2963,7 @@ int rte_eth_allmulticast_get(uint16_t port_id);
  *   Link information written back.
  * @return
  *   - (0) if successful.
- *   - (-ENOTSUP) if the function is not supported in PMD driver.
+ *   - (-ENOTSUP) if the function is not supported in PMD.
  *   - (-ENODEV) if *port_id* invalid.
  *   - (-EINVAL) if bad parameter.
  */
@@ -2979,7 +2979,7 @@ int rte_eth_link_get(uint16_t port_id, struct rte_eth_link *link);
  *   Link information written back.
  * @return
  *   - (0) if successful.
- *   - (-ENOTSUP) if the function is not supported in PMD driver.
+ *   - (-ENOTSUP) if the function is not supported in PMD.
  *   - (-ENODEV) if *port_id* invalid.
  *   - (-EINVAL) if bad parameter.
  */
-- 
2.25.1


^ permalink raw reply	[relevance 1%]

* [PATCH v1 1/3] fix PMD wording typo
  @ 2021-11-18 14:46  1% ` Sean Morrissey
    1 sibling, 0 replies; 200+ results
From: Sean Morrissey @ 2021-11-18 14:46 UTC (permalink / raw)
  To: Xiaoyun Li, Nicolas Chautru, Jay Zhou, Ciara Loftus, Qi Zhang,
	Steven Webster, Matt Peters, Apeksha Gupta, Sachin Saxena,
	Xiao Wang, Haiyue Wang, Beilei Xing, Stephen Hemminger, Long Li,
	Heinrich Kuhn, Jerin Jacob, Maciej Czekaj, Maxime Coquelin,
	Chenbo Xia, Konstantin Ananyev, Andrew Rybchenko, Fiona Trahe,
	Ashish Gupta, John Griffin, Deepak Kumar Jain, Ziyang Xuan,
	Xiaoyun Wang, Guoyang Zhou, Min Hu (Connor),
	Yisen Zhuang, Lijun Ou, Rosen Xu, Tianfei zhang, Akhil Goyal,
	Declan Doherty, Chengwen Feng, Kevin Laatz, Bruce Richardson,
	Thomas Monjalon, Ferruh Yigit
  Cc: dev, Sean Morrissey, Conor Fogarty

Removing the use of "driver" following "PMD" as it's
unnecessary.

Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
Signed-off-by: Conor Fogarty <conor.fogarty@intel.com>
---
 app/test-pmd/cmdline.c                        |  4 +--
 doc/guides/bbdevs/turbo_sw.rst                |  2 +-
 doc/guides/cryptodevs/virtio.rst              |  2 +-
 doc/guides/linux_gsg/build_sample_apps.rst    |  2 +-
 doc/guides/nics/af_packet.rst                 |  2 +-
 doc/guides/nics/af_xdp.rst                    |  2 +-
 doc/guides/nics/avp.rst                       |  4 +--
 doc/guides/nics/enetfec.rst                   |  2 +-
 doc/guides/nics/fm10k.rst                     |  4 +--
 doc/guides/nics/intel_vf.rst                  |  2 +-
 doc/guides/nics/netvsc.rst                    |  2 +-
 doc/guides/nics/nfp.rst                       |  2 +-
 doc/guides/nics/thunderx.rst                  |  2 +-
 doc/guides/nics/virtio.rst                    |  4 +--
 .../prog_guide/writing_efficient_code.rst     |  4 +--
 doc/guides/rel_notes/known_issues.rst         |  2 +-
 doc/guides/rel_notes/release_16_04.rst        |  2 +-
 doc/guides/rel_notes/release_19_05.rst        |  6 ++--
 doc/guides/rel_notes/release_19_11.rst        |  2 +-
 doc/guides/rel_notes/release_20_11.rst        |  4 +--
 doc/guides/rel_notes/release_21_02.rst        |  2 +-
 doc/guides/rel_notes/release_21_05.rst        |  2 +-
 doc/guides/rel_notes/release_21_08.rst        |  2 +-
 doc/guides/rel_notes/release_21_11.rst        |  2 +-
 doc/guides/rel_notes/release_2_2.rst          |  4 +--
 doc/guides/sample_app_ug/bbdev_app.rst        |  2 +-
 .../sample_app_ug/l3_forward_access_ctrl.rst  |  2 +-
 doc/guides/tools/testeventdev.rst             |  2 +-
 drivers/common/sfc_efx/efsys.h                |  2 +-
 drivers/compress/qat/qat_comp_pmd.h           |  2 +-
 drivers/crypto/qat/qat_asym_pmd.h             |  2 +-
 drivers/crypto/qat/qat_sym_pmd.h              |  2 +-
 drivers/net/fm10k/fm10k_ethdev.c              |  2 +-
 drivers/net/hinic/base/hinic_pmd_cmdq.h       |  2 +-
 drivers/net/hns3/hns3_ethdev.c                |  6 ++--
 drivers/net/hns3/hns3_ethdev.h                |  6 ++--
 drivers/net/hns3/hns3_ethdev_vf.c             | 28 +++++++++----------
 drivers/net/hns3/hns3_rss.c                   |  4 +--
 drivers/net/hns3/hns3_rxtx.c                  |  8 +++---
 drivers/net/hns3/hns3_rxtx.h                  |  4 +--
 drivers/net/i40e/i40e_ethdev.c                |  2 +-
 drivers/net/nfp/nfp_common.h                  |  2 +-
 drivers/net/nfp/nfp_ethdev.c                  |  2 +-
 drivers/net/nfp/nfp_ethdev_vf.c               |  2 +-
 drivers/raw/ifpga/base/README                 |  2 +-
 lib/bbdev/rte_bbdev.h                         | 12 ++++----
 lib/compressdev/rte_compressdev_pmd.h         |  2 +-
 lib/cryptodev/cryptodev_pmd.h                 |  2 +-
 lib/dmadev/rte_dmadev_core.h                  |  2 +-
 lib/eal/include/rte_dev.h                     |  2 +-
 lib/eal/include/rte_devargs.h                 |  4 +--
 lib/ethdev/rte_ethdev.h                       | 18 ++++++------
 52 files changed, 97 insertions(+), 97 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index c43c85c591..6e10afeedd 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -2701,7 +2701,7 @@ cmd_config_rxtx_queue_parsed(void *parsed_result,
 		ret = rte_eth_dev_tx_queue_stop(res->portid, res->qid);
 
 	if (ret == -ENOTSUP)
-		fprintf(stderr, "Function not supported in PMD driver\n");
+		fprintf(stderr, "Function not supported in PMD\n");
 }
 
 cmdline_parse_token_string_t cmd_config_rxtx_queue_port =
@@ -14700,7 +14700,7 @@ cmd_ddp_info_parsed(
 		free(proto);
 #endif
 	if (ret == -ENOTSUP)
-		fprintf(stderr, "Function not supported in PMD driver\n");
+		fprintf(stderr, "Function not supported in PMD\n");
 	close_file(pkg);
 }
 
diff --git a/doc/guides/bbdevs/turbo_sw.rst b/doc/guides/bbdevs/turbo_sw.rst
index 43c5129fd7..1e23e37027 100644
--- a/doc/guides/bbdevs/turbo_sw.rst
+++ b/doc/guides/bbdevs/turbo_sw.rst
@@ -149,7 +149,7 @@ Example:
 
 * For AVX512 machines with SDK libraries installed then both 4G and 5G can be enabled for full real time FEC capability.
   For AVX2 machines it is possible to only enable the 4G libraries and the PMD capabilities will be limited to 4G FEC.
-  If no library is present then the PMD driver will still build but its capabilities will be limited accordingly.
+  If no library is present then the PMD will still build but its capabilities will be limited accordingly.
 
 
 To use the PMD in an application, user must:
diff --git a/doc/guides/cryptodevs/virtio.rst b/doc/guides/cryptodevs/virtio.rst
index 8b96446ff2..ce4d43519a 100644
--- a/doc/guides/cryptodevs/virtio.rst
+++ b/doc/guides/cryptodevs/virtio.rst
@@ -73,7 +73,7 @@ number of the virtio-crypto device:
     echo -n 0000:00:04.0 > /sys/bus/pci/drivers/virtio-pci/unbind
     echo "1af4 1054" > /sys/bus/pci/drivers/uio_pci_generic/new_id
 
-Finally the front-end virtio crypto PMD driver can be installed.
+Finally the front-end virtio crypto PMD can be installed.
 
 Tests
 -----
diff --git a/doc/guides/linux_gsg/build_sample_apps.rst b/doc/guides/linux_gsg/build_sample_apps.rst
index efd2dd23f1..4f99617233 100644
--- a/doc/guides/linux_gsg/build_sample_apps.rst
+++ b/doc/guides/linux_gsg/build_sample_apps.rst
@@ -66,7 +66,7 @@ The EAL options are as follows:
 
 * ``-d``:
   Add a driver or driver directory to be loaded.
-  The application should use this option to load the pmd drivers
+  The application should use this option to load the PMDs
   that are built as shared libraries.
 
 * ``-m MB``:
diff --git a/doc/guides/nics/af_packet.rst b/doc/guides/nics/af_packet.rst
index 54feffdef4..8292369141 100644
--- a/doc/guides/nics/af_packet.rst
+++ b/doc/guides/nics/af_packet.rst
@@ -5,7 +5,7 @@ AF_PACKET Poll Mode Driver
 ==========================
 
 The AF_PACKET socket in Linux allows an application to receive and send raw
-packets. This Linux-specific PMD driver binds to an AF_PACKET socket and allows
+packets. This Linux-specific PMD binds to an AF_PACKET socket and allows
 a DPDK application to send and receive raw packets through the Kernel.
 
 In order to improve Rx and Tx performance this implementation makes use of
diff --git a/doc/guides/nics/af_xdp.rst b/doc/guides/nics/af_xdp.rst
index 8bf40b5f0f..c9d0e1ad6c 100644
--- a/doc/guides/nics/af_xdp.rst
+++ b/doc/guides/nics/af_xdp.rst
@@ -12,7 +12,7 @@ For the full details behind AF_XDP socket, you can refer to
 `AF_XDP documentation in the Kernel
 <https://www.kernel.org/doc/Documentation/networking/af_xdp.rst>`_.
 
-This Linux-specific PMD driver creates the AF_XDP socket and binds it to a
+This Linux-specific PMD creates the AF_XDP socket and binds it to a
 specific netdev queue, it allows a DPDK application to send and receive raw
 packets through the socket which would bypass the kernel network stack.
 Current implementation only supports single queue, multi-queues feature will
diff --git a/doc/guides/nics/avp.rst b/doc/guides/nics/avp.rst
index 1a194fc23c..a749f2a0f6 100644
--- a/doc/guides/nics/avp.rst
+++ b/doc/guides/nics/avp.rst
@@ -35,7 +35,7 @@ to another with minimal packet loss.
 Features and Limitations of the AVP PMD
 ---------------------------------------
 
-The AVP PMD driver provides the following functionality.
+The AVP PMD provides the following functionality.
 
 *   Receive and transmit of both simple and chained mbuf packets,
 
@@ -74,7 +74,7 @@ Launching a VM with an AVP type network attachment
 The following example will launch a VM with three network attachments.  The
 first attachment will have a default vif-model of "virtio".  The next two
 network attachments will have a vif-model of "avp" and may be used with a DPDK
-application which is built to include the AVP PMD driver.
+application which is built to include the AVP PMD.
 
 .. code-block:: console
 
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index a64e72fdd6..381635e627 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -65,7 +65,7 @@ The diagram below shows a system level overview of ENETFEC:
                         | PHY |
                         +-----+
 
-ENETFEC Ethernet driver is traditional DPDK PMD driver running in userspace.
+ENETFEC Ethernet driver is traditional DPDK PMD running in userspace.
 'fec-uio' is the kernel driver.
 The MAC and PHY are the hardware blocks.
 ENETFEC PMD uses standard UIO interface to access kernel
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index bba53f5a64..d6efac0917 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -114,9 +114,9 @@ Switch manager
 ~~~~~~~~~~~~~~
 
 The Intel FM10000 family of NICs integrate a hardware switch and multiple host
-interfaces. The FM10000 PMD driver only manages host interfaces. For the
+interfaces. The FM10000 PMD only manages host interfaces. For the
 switch component another switch driver has to be loaded prior to the
-FM10000 PMD driver. The switch driver can be acquired from Intel support.
+FM10000 PMD. The switch driver can be acquired from Intel support.
 Only Testpoint is validated with DPDK, the latest version that has been
 validated with DPDK is 4.1.6.
 
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index fd235e1463..648af39c22 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -571,7 +571,7 @@ Fast Host-based Packet Processing
 
 Software Defined Network (SDN) trends are demanding fast host-based packet handling.
 In a virtualization environment,
-the DPDK VF PMD driver performs the same throughput result as a non-VT native environment.
+the DPDK VF PMD performs the same throughput result as a non-VT native environment.
 
 With such host instance fast packet processing, lots of services such as filtering, QoS,
 DPI can be offloaded on the host fast path.
diff --git a/doc/guides/nics/netvsc.rst b/doc/guides/nics/netvsc.rst
index c0e218c743..77efe1dc91 100644
--- a/doc/guides/nics/netvsc.rst
+++ b/doc/guides/nics/netvsc.rst
@@ -14,7 +14,7 @@ checksum and segmentation offloads.
 Features and Limitations of Hyper-V PMD
 ---------------------------------------
 
-In this release, the hyper PMD driver provides the basic functionality of packet reception and transmission.
+In this release, the hyper PMD provides the basic functionality of packet reception and transmission.
 
 *   It supports merge-able buffers per packet when receiving packets and scattered buffer per packet
     when transmitting packets. The packet size supported is from 64 to 65536.
diff --git a/doc/guides/nics/nfp.rst b/doc/guides/nics/nfp.rst
index bf8be723b0..30cdc69202 100644
--- a/doc/guides/nics/nfp.rst
+++ b/doc/guides/nics/nfp.rst
@@ -14,7 +14,7 @@ This document explains how to use DPDK with the Netronome Poll Mode
 Driver (PMD) supporting Netronome's Network Flow Processor 6xxx
 (NFP-6xxx) and Netronome's Flow Processor 4xxx (NFP-4xxx).
 
-NFP is a SRIOV capable device and the PMD driver supports the physical
+NFP is a SRIOV capable device and the PMD supports the physical
 function (PF) and the virtual functions (VFs).
 
 Dependencies
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index 98f23a2b2a..d96395dafa 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -199,7 +199,7 @@ Each port consists of a primary VF and n secondary VF(s). Each VF provides 8 Tx/
 When a given port is configured to use more than 8 queues, it requires one (or more) secondary VF.
 Each secondary VF adds 8 additional queues to the queue set.
 
-During PMD driver initialization, the primary VF's are enumerated by checking the
+During PMD initialization, the primary VF's are enumerated by checking the
 specific flag (see sqs message in DPDK boot log - sqs indicates secondary queue set).
 They are at the beginning of VF list (the remain ones are secondary VF's).
 
diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
index 98e0d012b7..7c0ae2b3af 100644
--- a/doc/guides/nics/virtio.rst
+++ b/doc/guides/nics/virtio.rst
@@ -17,7 +17,7 @@ With this enhancement, virtio could achieve quite promising performance.
 For basic qemu-KVM installation and other Intel EM poll mode driver in guest VM,
 please refer to Chapter "Driver for VM Emulated Devices".
 
-In this chapter, we will demonstrate usage of virtio PMD driver with two backends,
+In this chapter, we will demonstrate usage of virtio PMD with two backends,
 standard qemu vhost back end and vhost kni back end.
 
 Virtio Implementation in DPDK
@@ -40,7 +40,7 @@ end if necessary.
 Features and Limitations of virtio PMD
 --------------------------------------
 
-In this release, the virtio PMD driver provides the basic functionality of packet reception and transmission.
+In this release, the virtio PMD provides the basic functionality of packet reception and transmission.
 
 *   It supports merge-able buffers per packet when receiving packets and scattered buffer per packet
     when transmitting packets. The packet size supported is from 64 to 1518.
diff --git a/doc/guides/prog_guide/writing_efficient_code.rst b/doc/guides/prog_guide/writing_efficient_code.rst
index a61e8320ae..e6c26efdd3 100644
--- a/doc/guides/prog_guide/writing_efficient_code.rst
+++ b/doc/guides/prog_guide/writing_efficient_code.rst
@@ -119,8 +119,8 @@ The code algorithm that dequeues messages may be something similar to the follow
         my_process_bulk(obj_table, count);
    }
 
-PMD Driver
-----------
+PMD
+---
 
 The DPDK Poll Mode Driver (PMD) is also able to work in bulk/burst mode,
 allowing the factorization of some code for each call in the send or receive function.
diff --git a/doc/guides/rel_notes/known_issues.rst b/doc/guides/rel_notes/known_issues.rst
index beea877bad..187d9c942e 100644
--- a/doc/guides/rel_notes/known_issues.rst
+++ b/doc/guides/rel_notes/known_issues.rst
@@ -250,7 +250,7 @@ PMD does not work with --no-huge EAL command line parameter
 
 **Description**:
    Currently, the DPDK does not store any information about memory allocated by ``malloc()` (for example, NUMA node,
-   physical address), hence PMD drivers do not work when the ``--no-huge`` command line parameter is supplied to EAL.
+   physical address), hence PMDs do not work when the ``--no-huge`` command line parameter is supplied to EAL.
 
 **Implication**:
    Sending and receiving data with PMD will not work.
diff --git a/doc/guides/rel_notes/release_16_04.rst b/doc/guides/rel_notes/release_16_04.rst
index b7d07834e1..ac18e1dddb 100644
--- a/doc/guides/rel_notes/release_16_04.rst
+++ b/doc/guides/rel_notes/release_16_04.rst
@@ -56,7 +56,7 @@ New Features
 
 * **Enabled Virtio 1.0 support.**
 
-  Enabled Virtio 1.0 support for Virtio pmd driver.
+  Enabled Virtio 1.0 support for Virtio PMD.
 
 * **Supported Virtio for ARM.**
 
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 30f704e204..89ae425bdb 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -46,13 +46,13 @@ New Features
   Updated the KNI kernel module to set the ``max_mtu`` according to the given
   initial MTU size. Without it, the maximum MTU was 1500.
 
-  Updated the KNI PMD driver to set the ``mbuf_size`` and MTU based on
+  Updated the KNI PMD to set the ``mbuf_size`` and MTU based on
   the given mb-pool. This provide the ability to pass jumbo frames
   if the mb-pool contains a suitable buffer size.
 
 * **Added the AF_XDP PMD.**
 
-  Added a Linux-specific PMD driver for AF_XDP. This PMD can create an AF_XDP socket
+  Added a Linux-specific PMD for AF_XDP. This PMD can create an AF_XDP socket
   and bind it to a specific netdev queue. It allows a DPDK application to send
   and receive raw packets through the socket which would bypass the kernel
   network stack to achieve high performance packet processing.
@@ -240,7 +240,7 @@ ABI Changes
 
   The ``rte_eth_dev_info`` structure has had two extra fields
   added: ``min_mtu`` and ``max_mtu``. Each of these are of type ``uint16_t``.
-  The values of these fields can be set specifically by the PMD drivers as
+  The values of these fields can be set specifically by the PMDs as
   supported values can vary from device to device.
 
 * cryptodev: in 18.08 a new structure ``rte_crypto_asym_op`` was introduced and
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index b509a6dd28..302b3e5f37 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -189,7 +189,7 @@ New Features
 
 * **Added Marvell OCTEON TX2 crypto PMD.**
 
-  Added a new PMD driver for hardware crypto offload block on ``OCTEON TX2``
+  Added a new PMD for hardware crypto offload block on ``OCTEON TX2``
   SoC.
 
   See :doc:`../cryptodevs/octeontx2` for more details
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 90cc3ed680..af7ce90ba3 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -192,7 +192,7 @@ New Features
 
 * **Added Wangxun txgbe PMD.**
 
-  Added a new PMD driver for Wangxun 10 Gigabit Ethernet NICs.
+  Added a new PMD for Wangxun 10 Gigabit Ethernet NICs.
 
   See the :doc:`../nics/txgbe` for more details.
 
@@ -288,7 +288,7 @@ New Features
 
 * **Added Marvell OCTEON TX2 regex PMD.**
 
-  Added a new PMD driver for the hardware regex offload block for OCTEON TX2 SoC.
+  Added a new PMD for the hardware regex offload block for OCTEON TX2 SoC.
 
   See the :doc:`../regexdevs/octeontx2` for more details.
 
diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
index 9d5e17758f..5fbf5b3d43 100644
--- a/doc/guides/rel_notes/release_21_02.rst
+++ b/doc/guides/rel_notes/release_21_02.rst
@@ -135,7 +135,7 @@ New Features
 
 * **Added mlx5 compress PMD.**
 
-  Added a new compress PMD driver for Bluefield 2 adapters.
+  Added a new compress PMD for Bluefield 2 adapters.
 
   See the :doc:`../compressdevs/mlx5` for more details.
 
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index 8adb225a4d..49044ed422 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -78,7 +78,7 @@ New Features
   * Updated ena_com (HAL) to the latest version.
   * Added indication of the RSS hash presence in the mbuf.
 
-* **Updated Arkville PMD driver.**
+* **Updated Arkville PMD.**
 
   Updated Arkville net driver with new features and improvements, including:
 
diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
index 6fb4e43346..ac1c081903 100644
--- a/doc/guides/rel_notes/release_21_08.rst
+++ b/doc/guides/rel_notes/release_21_08.rst
@@ -67,7 +67,7 @@ New Features
 
 * **Added Wangxun ngbe PMD.**
 
-  Added a new PMD driver for Wangxun 1Gb Ethernet NICs.
+  Added a new PMD for Wangxun 1Gb Ethernet NICs.
   See the :doc:`../nics/ngbe` for more details.
 
 * **Added inflight packets clear API in vhost library.**
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 4d8c59472a..1d6774afc1 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -354,7 +354,7 @@ New Features
 
 * **Added NXP LA12xx baseband PMD.**
 
-  * Added a new baseband PMD driver for NXP LA12xx Software defined radio.
+  * Added a new baseband PMD for NXP LA12xx Software defined radio.
   * See the :doc:`../bbdevs/la12xx` for more details.
 
 * **Updated Mellanox compress driver.**
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 8273473ff4..029b758e90 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -10,8 +10,8 @@ New Features
 * **Introduce ARMv7 and ARMv8 architectures.**
 
   * It is now possible to build DPDK for the ARMv7 and ARMv8 platforms.
-  * ARMv7 can be tested with virtual PMD drivers.
-  * ARMv8 can be tested with virtual and physical PMD drivers.
+  * ARMv7 can be tested with virtual PMDs.
+  * ARMv8 can be tested with virtual and physical PMDs.
 
 * **Enabled freeing of ring.**
 
diff --git a/doc/guides/sample_app_ug/bbdev_app.rst b/doc/guides/sample_app_ug/bbdev_app.rst
index 45e69e36e2..7f02f0ed90 100644
--- a/doc/guides/sample_app_ug/bbdev_app.rst
+++ b/doc/guides/sample_app_ug/bbdev_app.rst
@@ -31,7 +31,7 @@ Limitations
 Compiling the Application
 -------------------------
 
-DPDK needs to be built with ``baseband_turbo_sw`` PMD driver enabled along
+DPDK needs to be built with ``baseband_turbo_sw`` PMD enabled along
 with ``FLEXRAN SDK`` Libraries. Refer to *SW Turbo Poll Mode Driver*
 documentation for more details on this.
 
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index 486247ac2e..ecb1c857c4 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -220,7 +220,7 @@ Once the application starts, it transitions through three phases:
 
 *   **Final Phase** - Perform the following tasks:
 
-    Calls the EAL, PMD driver and ACL library to free resource, then quits.
+    Calls the EAL, PMD and ACL library to free resource, then quits.
 
 Compiling the Application
 -------------------------
diff --git a/doc/guides/tools/testeventdev.rst b/doc/guides/tools/testeventdev.rst
index 7b4cdeb43f..48efb9ea6e 100644
--- a/doc/guides/tools/testeventdev.rst
+++ b/doc/guides/tools/testeventdev.rst
@@ -239,7 +239,7 @@ to the ordered queue. The worker receives the events from ordered queue and
 forwards to atomic queue. Since the events from an ordered queue can be
 processed in parallel on the different workers, the ingress order of events
 might have changed on the downstream atomic queue enqueue. On enqueue to the
-atomic queue, the eventdev PMD driver reorders the event to the original
+atomic queue, the eventdev PMD reorders the event to the original
 ingress order(i.e producer ingress order).
 
 When the event is dequeued from the atomic queue by the worker, this test
diff --git a/drivers/common/sfc_efx/efsys.h b/drivers/common/sfc_efx/efsys.h
index b2109bf3c0..3860c2835a 100644
--- a/drivers/common/sfc_efx/efsys.h
+++ b/drivers/common/sfc_efx/efsys.h
@@ -609,7 +609,7 @@ typedef struct efsys_bar_s {
 /* DMA SYNC */
 
 /*
- * DPDK does not provide any DMA syncing API, and no PMD drivers
+ * DPDK does not provide any DMA syncing API, and no PMDs
  * have any traces of explicit DMA syncing.
  * DMA mapping is assumed to be coherent.
  */
diff --git a/drivers/compress/qat/qat_comp_pmd.h b/drivers/compress/qat/qat_comp_pmd.h
index 86317a513c..3c8682a768 100644
--- a/drivers/compress/qat/qat_comp_pmd.h
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -13,7 +13,7 @@
 #include "qat_device.h"
 #include "qat_comp.h"
 
-/**< Intel(R) QAT Compression PMD driver name */
+/**< Intel(R) QAT Compression PMD name */
 #define COMPRESSDEV_NAME_QAT_PMD	compress_qat
 
 /* Private data structure for a QAT compression device capability. */
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index fd6b406248..f988d646e5 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -10,7 +10,7 @@
 #include "qat_crypto.h"
 #include "qat_device.h"
 
-/** Intel(R) QAT Asymmetric Crypto PMD driver name */
+/** Intel(R) QAT Asymmetric Crypto PMD name */
 #define CRYPTODEV_NAME_QAT_ASYM_PMD	crypto_qat_asym
 
 
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index 0dc0c6f0d9..59fbdefa12 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -16,7 +16,7 @@
 #include "qat_crypto.h"
 #include "qat_device.h"
 
-/** Intel(R) QAT Symmetric Crypto PMD driver name */
+/** Intel(R) QAT Symmetric Crypto PMD name */
 #define CRYPTODEV_NAME_QAT_SYM_PMD	crypto_qat
 
 /* Internal capabilities */
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 7c85a05746..43e1d13431 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -255,7 +255,7 @@ rx_queue_clean(struct fm10k_rx_queue *q)
 	for (i = 0; i < q->nb_fake_desc; ++i)
 		q->hw_ring[q->nb_desc + i] = zero;
 
-	/* vPMD driver has a different way of releasing mbufs. */
+	/* vPMD has a different way of releasing mbufs. */
 	if (q->rx_using_sse) {
 		fm10k_rx_queue_release_mbufs_vec(q);
 		return;
diff --git a/drivers/net/hinic/base/hinic_pmd_cmdq.h b/drivers/net/hinic/base/hinic_pmd_cmdq.h
index 0d5e380123..58a1fbda71 100644
--- a/drivers/net/hinic/base/hinic_pmd_cmdq.h
+++ b/drivers/net/hinic/base/hinic_pmd_cmdq.h
@@ -9,7 +9,7 @@
 
 #define HINIC_SCMD_DATA_LEN		16
 
-/* pmd driver uses 64, kernel l2nic use 4096 */
+/* PMD uses 64, kernel l2nic use 4096 */
 #define	HINIC_CMDQ_DEPTH		64
 
 #define	HINIC_CMDQ_BUF_SIZE		2048U
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 847e660f44..0bd12907d8 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -1060,7 +1060,7 @@ hns3_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on)
 		return ret;
 	/*
 	 * Only in HNS3_SW_SHIFT_AND_MODE the PVID related operation in Tx/Rx
-	 * need be processed by PMD driver.
+	 * need be processed by PMD.
 	 */
 	if (pvid_en_state_change &&
 	    hw->vlan_mode == HNS3_SW_SHIFT_AND_DISCARD_MODE)
@@ -2592,7 +2592,7 @@ hns3_parse_cfg(struct hns3_cfg *cfg, struct hns3_cmd_desc *desc)
 	 * Field ext_rss_size_max obtained from firmware will be more flexible
 	 * for future changes and expansions, which is an exponent of 2, instead
 	 * of reading out directly. If this field is not zero, hns3 PF PMD
-	 * driver uses it as rss_size_max under one TC. Device, whose revision
+	 * uses it as rss_size_max under one TC. Device, whose revision
 	 * id is greater than or equal to PCI_REVISION_ID_HIP09_A, obtains the
 	 * maximum number of queues supported under a TC through this field.
 	 */
@@ -6311,7 +6311,7 @@ hns3_fec_set(struct rte_eth_dev *dev, uint32_t mode)
 	if (ret < 0)
 		return ret;
 
-	/* HNS3 PMD driver only support one bit set mode, e.g. 0x1, 0x4 */
+	/* HNS3 PMD only support one bit set mode, e.g. 0x1, 0x4 */
 	if (!is_fec_mode_one_bit_set(mode)) {
 		hns3_err(hw, "FEC mode(0x%x) not supported in HNS3 PMD, "
 			     "FEC mode should be only one bit set", mode);
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 6d30125dcc..488fe8dbbc 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -465,7 +465,7 @@ struct hns3_queue_intr {
 	 *     enable Rx interrupt.
 	 *
 	 *  - HNS3_INTR_MAPPING_VEC_ALL
-	 *     PMD driver can map/unmmap all interrupt vectors with queues When
+	 *     PMD can map/unmmap all interrupt vectors with queues When
 	 *     Rx interrupt in enabled.
 	 */
 	uint8_t mapping_mode;
@@ -575,14 +575,14 @@ struct hns3_hw {
 	 *
 	 *  - HNS3_SW_SHIFT_AND_DISCARD_MODE
 	 *     For some versions of hardware network engine, because of the
-	 *     hardware limitation, PMD driver needs to detect the PVID status
+	 *     hardware limitation, PMD needs to detect the PVID status
 	 *     to work with haredware to implement PVID-related functions.
 	 *     For example, driver need discard the stripped PVID tag to ensure
 	 *     the PVID will not report to mbuf and shift the inserted VLAN tag
 	 *     to avoid port based VLAN covering it.
 	 *
 	 *  - HNS3_HW_SHIT_AND_DISCARD_MODE
-	 *     PMD driver does not need to process PVID-related functions in
+	 *     PMD does not need to process PVID-related functions in
 	 *     I/O process, Hardware will adjust the sequence between port based
 	 *     VLAN tag and BD VLAN tag automatically and VLAN tag stripped by
 	 *     PVID will be invisible to driver. And in this mode, hns3 is able
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index d8a99693e0..7d6e251bbe 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -232,7 +232,7 @@ hns3vf_set_default_mac_addr(struct rte_eth_dev *dev,
 				HNS3_TWO_ETHER_ADDR_LEN, true, NULL, 0);
 	if (ret) {
 		/*
-		 * The hns3 VF PMD driver depends on the hns3 PF kernel ethdev
+		 * The hns3 VF PMD depends on the hns3 PF kernel ethdev
 		 * driver. When user has configured a MAC address for VF device
 		 * by "ip link set ..." command based on the PF device, the hns3
 		 * PF kernel ethdev driver does not allow VF driver to request
@@ -312,9 +312,9 @@ hns3vf_set_promisc_mode(struct hns3_hw *hw, bool en_bc_pmc,
 	req = (struct hns3_mbx_vf_to_pf_cmd *)desc.data;
 
 	/*
-	 * The hns3 VF PMD driver depends on the hns3 PF kernel ethdev driver,
+	 * The hns3 VF PMD depends on the hns3 PF kernel ethdev driver,
 	 * so there are some features for promiscuous/allmulticast mode in hns3
-	 * VF PMD driver as below:
+	 * VF PMD as below:
 	 * 1. The promiscuous/allmulticast mode can be configured successfully
 	 *    only based on the trusted VF device. If based on the non trusted
 	 *    VF device, configuring promiscuous/allmulticast mode will fail.
@@ -322,14 +322,14 @@ hns3vf_set_promisc_mode(struct hns3_hw *hw, bool en_bc_pmc,
 	 *    kernel ethdev driver on the host by the following command:
 	 *      "ip link set <eth num> vf <vf id> turst on"
 	 * 2. After the promiscuous mode is configured successfully, hns3 VF PMD
-	 *    driver can receive the ingress and outgoing traffic. In the words,
+	 *    can receive the ingress and outgoing traffic. In the words,
 	 *    all the ingress packets, all the packets sent from the PF and
 	 *    other VFs on the same physical port.
 	 * 3. Note: Because of the hardware constraints, By default vlan filter
 	 *    is enabled and couldn't be turned off based on VF device, so vlan
 	 *    filter is still effective even in promiscuous mode. If upper
 	 *    applications don't call rte_eth_dev_vlan_filter API function to
-	 *    set vlan based on VF device, hns3 VF PMD driver will can't receive
+	 *    set vlan based on VF device, hns3 VF PMD will can't receive
 	 *    the packets with vlan tag in promiscuoue mode.
 	 */
 	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false);
@@ -553,9 +553,9 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	/*
 	 * The hns3 PF/VF devices on the same port share the hardware MTU
 	 * configuration. Currently, we send mailbox to inform hns3 PF kernel
-	 * ethdev driver to finish hardware MTU configuration in hns3 VF PMD
-	 * driver, there is no need to stop the port for hns3 VF device, and the
-	 * MTU value issued by hns3 VF PMD driver must be less than or equal to
+	 * ethdev driver to finish hardware MTU configuration in hns3 VF PMD,
+	 * there is no need to stop the port for hns3 VF device, and the
+	 * MTU value issued by hns3 VF PMD must be less than or equal to
 	 * PF's MTU.
 	 */
 	if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED)) {
@@ -565,8 +565,8 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	/*
 	 * when Rx of scattered packets is off, we have some possibility of
-	 * using vector Rx process function or simple Rx functions in hns3 PMD
-	 * driver. If the input MTU is increased and the maximum length of
+	 * using vector Rx process function or simple Rx functions in hns3 PMD.
+	 * If the input MTU is increased and the maximum length of
 	 * received packets is greater than the length of a buffer for Rx
 	 * packet, the hardware network engine needs to use multiple BDs and
 	 * buffers to store these packets. This will cause problems when still
@@ -2075,7 +2075,7 @@ hns3vf_check_default_mac_change(struct hns3_hw *hw)
 	 * ethdev driver sets the MAC address for VF device after the
 	 * initialization of the related VF device, the PF driver will notify
 	 * VF driver to reset VF device to make the new MAC address effective
-	 * immediately. The hns3 VF PMD driver should check whether the MAC
+	 * immediately. The hns3 VF PMD should check whether the MAC
 	 * address has been changed by the PF kernel ethdev driver, if changed
 	 * VF driver should configure hardware using the new MAC address in the
 	 * recovering hardware configuration stage of the reset process.
@@ -2416,12 +2416,12 @@ hns3vf_dev_init(struct rte_eth_dev *eth_dev)
 	/*
 	 * The hns3 PF ethdev driver in kernel support setting VF MAC address
 	 * on the host by "ip link set ..." command. To avoid some incorrect
-	 * scenes, for example, hns3 VF PMD driver fails to receive and send
+	 * scenes, for example, hns3 VF PMD fails to receive and send
 	 * packets after user configure the MAC address by using the
-	 * "ip link set ..." command, hns3 VF PMD driver keep the same MAC
+	 * "ip link set ..." command, hns3 VF PMD keep the same MAC
 	 * address strategy as the hns3 kernel ethdev driver in the
 	 * initialization. If user configure a MAC address by the ip command
-	 * for VF device, then hns3 VF PMD driver will start with it, otherwise
+	 * for VF device, then hns3 VF PMD will start with it, otherwise
 	 * start with a random MAC address in the initialization.
 	 */
 	if (rte_is_zero_ether_addr((struct rte_ether_addr *)hw->mac.mac_addr))
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index 85495bbe89..3a4b699ae2 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -667,7 +667,7 @@ hns3_rss_set_default_args(struct hns3_hw *hw)
 }
 
 /*
- * RSS initialization for hns3 pmd driver.
+ * RSS initialization for hns3 PMD.
  */
 int
 hns3_config_rss(struct hns3_adapter *hns)
@@ -739,7 +739,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 }
 
 /*
- * RSS uninitialization for hns3 pmd driver.
+ * RSS uninitialization for hns3 PMD.
  */
 void
 hns3_rss_uninit(struct hns3_adapter *hns)
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 40cc4e9c1a..f365daadf8 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1899,8 +1899,8 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	/*
 	 * For hns3 PF device, if the VLAN mode is HW_SHIFT_AND_DISCARD_MODE,
 	 * the pvid_sw_discard_en in the queue struct should not be changed,
-	 * because PVID-related operations do not need to be processed by PMD
-	 * driver. For hns3 VF device, whether it needs to process PVID depends
+	 * because PVID-related operations do not need to be processed by PMD.
+	 * For hns3 VF device, whether it needs to process PVID depends
 	 * on the configuration of PF kernel mode netdevice driver. And the
 	 * related PF configuration is delivered through the mailbox and finally
 	 * reflectd in port_base_vlan_cfg.
@@ -3039,8 +3039,8 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	/*
 	 * For hns3 PF device, if the VLAN mode is HW_SHIFT_AND_DISCARD_MODE,
 	 * the pvid_sw_shift_en in the queue struct should not be changed,
-	 * because PVID-related operations do not need to be processed by PMD
-	 * driver. For hns3 VF device, whether it needs to process PVID depends
+	 * because PVID-related operations do not need to be processed by PMD.
+	 * For hns3 VF device, whether it needs to process PVID depends
 	 * on the configuration of PF kernel mode netdev driver. And the
 	 * related PF configuration is delivered through the mailbox and finally
 	 * reflectd in port_base_vlan_cfg.
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index df731856ef..5423568cd0 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -318,7 +318,7 @@ struct hns3_rx_queue {
 	 * should not be transitted to the upper-layer application. For hardware
 	 * network engine whose vlan mode is HNS3_HW_SHIFT_AND_DISCARD_MODE,
 	 * such as kunpeng 930, PVID will not be reported to the BDs. So, PMD
-	 * driver does not need to perform PVID-related operation in Rx. At this
+	 * does not need to perform PVID-related operation in Rx. At this
 	 * point, the pvid_sw_discard_en will be false.
 	 */
 	uint8_t pvid_sw_discard_en:1;
@@ -490,7 +490,7 @@ struct hns3_tx_queue {
 	 * PVID will overwrite the outer VLAN field of Tx BD. For the hardware
 	 * network engine whose vlan mode is HNS3_HW_SHIFT_AND_DISCARD_MODE,
 	 * such as kunpeng 930, if the PVID is set, the hardware will shift the
-	 * VLAN field automatically. So, PMD driver does not need to do
+	 * VLAN field automatically. So, PMD does not need to do
 	 * PVID-related operations in Tx. And pvid_sw_shift_en will be false at
 	 * this point.
 	 */
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 344cbd25d3..c0bfff43ee 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1922,7 +1922,7 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 		goto err;
 
 	/* VMDQ setup.
-	 *  General PMD driver call sequence are NIC init, configure,
+	 *  General PMD call sequence are NIC init, configure,
 	 *  rx/tx_queue_setup and dev_start. In rx/tx_queue_setup() function, it
 	 *  will try to lookup the VSI that specific queue belongs to if VMDQ
 	 *  applicable. So, VMDQ setting has to be done before
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 3556c9cd17..8b35fa119c 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -8,7 +8,7 @@
  *
  * @file dpdk/pmd/nfp_net_pmd.h
  *
- * Netronome NFP_NET PMD driver
+ * Netronome NFP_NET PMD
  */
 
 #ifndef _NFP_COMMON_H_
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 830863af28..8e81cc498f 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -342,7 +342,7 @@ nfp_net_close(struct rte_eth_dev *dev)
 				     (void *)dev);
 
 	/*
-	 * The ixgbe PMD driver disables the pcie master on the
+	 * The ixgbe PMD disables the pcie master on the
 	 * device. The i40e does not...
 	 */
 
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 5557a1e002..303ef72b1b 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -238,7 +238,7 @@ nfp_netvf_close(struct rte_eth_dev *dev)
 			     (void *)dev);
 
 	/*
-	 * The ixgbe PMD driver disables the pcie master on the
+	 * The ixgbe PMD disables the pcie master on the
 	 * device. The i40e does not...
 	 */
 
diff --git a/drivers/raw/ifpga/base/README b/drivers/raw/ifpga/base/README
index 6b2b171b01..55d92d590a 100644
--- a/drivers/raw/ifpga/base/README
+++ b/drivers/raw/ifpga/base/README
@@ -42,5 +42,5 @@ Some features added in this version:
 3. Add altera SPI master driver and Intel MAX10 device driver.
 4. Add Altera I2C master driver and AT24 eeprom driver.
 5. Add Device Tree support to get the configuration from card.
-6. Instruding and exposing APIs to DPDK PMD driver to access networking
+6. Instruding and exposing APIs to DPDK PMD to access networking
 functionality.
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index ff193f2d65..1dbcf73b0e 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -164,7 +164,7 @@ rte_bbdev_queue_configure(uint16_t dev_id, uint16_t queue_id,
  *
  * @return
  *   - 0 on success
- *   - negative value on failure - as returned from PMD driver
+ *   - negative value on failure - as returned from PMD
  */
 int
 rte_bbdev_start(uint16_t dev_id);
@@ -207,7 +207,7 @@ rte_bbdev_close(uint16_t dev_id);
  *
  * @return
  *   - 0 on success
- *   - negative value on failure - as returned from PMD driver
+ *   - negative value on failure - as returned from PMD
  */
 int
 rte_bbdev_queue_start(uint16_t dev_id, uint16_t queue_id);
@@ -222,7 +222,7 @@ rte_bbdev_queue_start(uint16_t dev_id, uint16_t queue_id);
  *
  * @return
  *   - 0 on success
- *   - negative value on failure - as returned from PMD driver
+ *   - negative value on failure - as returned from PMD
  */
 int
 rte_bbdev_queue_stop(uint16_t dev_id, uint16_t queue_id);
@@ -782,7 +782,7 @@ rte_bbdev_callback_unregister(uint16_t dev_id, enum rte_bbdev_event_type event,
  *
  * @return
  *   - 0 on success
- *   - negative value on failure - as returned from PMD driver
+ *   - negative value on failure - as returned from PMD
  */
 int
 rte_bbdev_queue_intr_enable(uint16_t dev_id, uint16_t queue_id);
@@ -798,7 +798,7 @@ rte_bbdev_queue_intr_enable(uint16_t dev_id, uint16_t queue_id);
  *
  * @return
  *   - 0 on success
- *   - negative value on failure - as returned from PMD driver
+ *   - negative value on failure - as returned from PMD
  */
 int
 rte_bbdev_queue_intr_disable(uint16_t dev_id, uint16_t queue_id);
@@ -825,7 +825,7 @@ rte_bbdev_queue_intr_disable(uint16_t dev_id, uint16_t queue_id);
  * @return
  *   - 0 on success
  *   - ENOTSUP if interrupts are not supported by the identified device
- *   - negative value on failure - as returned from PMD driver
+ *   - negative value on failure - as returned from PMD
  */
 int
 rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
diff --git a/lib/compressdev/rte_compressdev_pmd.h b/lib/compressdev/rte_compressdev_pmd.h
index 16b6bc6b35..945a991fd6 100644
--- a/lib/compressdev/rte_compressdev_pmd.h
+++ b/lib/compressdev/rte_compressdev_pmd.h
@@ -319,7 +319,7 @@ rte_compressdev_pmd_release_device(struct rte_compressdev *dev);
  * PMD assist function to parse initialisation arguments for comp driver
  * when creating a new comp PMD device instance.
  *
- * PMD driver should set default values for that PMD before calling function,
+ * PMD should set default values for that PMD before calling function,
  * these default values will be over-written with successfully parsed values
  * from args string.
  *
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index 89bf2af399..a6b25d297b 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -483,7 +483,7 @@ rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev);
  * PMD assist function to parse initialisation arguments for crypto driver
  * when creating a new crypto PMD device instance.
  *
- * PMD driver should set default values for that PMD before calling function,
+ * PMD should set default values for that PMD before calling function,
  * these default values will be over-written with successfully parsed values
  * from args string.
  *
diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
index e42d8739ab..064785686f 100644
--- a/lib/dmadev/rte_dmadev_core.h
+++ b/lib/dmadev/rte_dmadev_core.h
@@ -59,7 +59,7 @@ typedef uint16_t (*rte_dma_burst_capacity_t)(const void *dev_private, uint16_t v
  * functions.
  *
  * The 'dev_private' field was placed in the first cache line to optimize
- * performance because the PMD driver mainly depends on this field.
+ * performance because the PMD mainly depends on this field.
  */
 struct rte_dma_fp_object {
 	/** PMD-specific private data. The driver should copy
diff --git a/lib/eal/include/rte_dev.h b/lib/eal/include/rte_dev.h
index 6c3f774672..448a41cb0e 100644
--- a/lib/eal/include/rte_dev.h
+++ b/lib/eal/include/rte_dev.h
@@ -8,7 +8,7 @@
 /**
  * @file
  *
- * RTE PMD Driver Registration Interface
+ * RTE PMD Registration Interface
  *
  * This file manages the list of device drivers.
  */
diff --git a/lib/eal/include/rte_devargs.h b/lib/eal/include/rte_devargs.h
index 71c8af9df3..37a0f042ab 100644
--- a/lib/eal/include/rte_devargs.h
+++ b/lib/eal/include/rte_devargs.h
@@ -35,7 +35,7 @@ extern "C" {
 /**
  * Class type key in global devargs syntax.
  *
- * Legacy devargs parser doesn't parse class type. PMD driver is
+ * Legacy devargs parser doesn't parse class type. PMD is
  * encouraged to use this key to resolve class type.
  */
 #define RTE_DEVARGS_KEY_CLASS "class"
@@ -43,7 +43,7 @@ extern "C" {
 /**
  * Driver type key in global devargs syntax.
  *
- * Legacy devargs parser doesn't parse driver type. PMD driver is
+ * Legacy devargs parser doesn't parse driver type. PMD is
  * encouraged to use this key to resolve driver type.
  */
 #define RTE_DEVARGS_KEY_DRIVER "driver"
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 096b676fc1..fa299c8ad7 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -2610,7 +2610,7 @@ int rte_eth_tx_hairpin_queue_setup
  *   - (-EINVAL) if bad parameter.
  *   - (-ENODEV) if *port_id* invalid
  *   - (-ENOTSUP) if hardware doesn't support.
- *   - Others detailed errors from PMD drivers.
+ *   - Others detailed errors from PMDs.
  */
 __rte_experimental
 int rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports,
@@ -2636,7 +2636,7 @@ int rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports,
  *   - (-ENODEV) if Tx port ID is invalid.
  *   - (-EBUSY) if device is not in started state.
  *   - (-ENOTSUP) if hardware doesn't support.
- *   - Others detailed errors from PMD drivers.
+ *   - Others detailed errors from PMDs.
  */
 __rte_experimental
 int rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port);
@@ -2663,7 +2663,7 @@ int rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port);
  *   - (-ENODEV) if Tx port ID is invalid.
  *   - (-EBUSY) if device is in stopped state.
  *   - (-ENOTSUP) if hardware doesn't support.
- *   - Others detailed errors from PMD drivers.
+ *   - Others detailed errors from PMDs.
  */
 __rte_experimental
 int rte_eth_hairpin_unbind(uint16_t tx_port, uint16_t rx_port);
@@ -2706,7 +2706,7 @@ int rte_eth_dev_is_valid_port(uint16_t port_id);
  *   - -ENODEV: if *port_id* is invalid.
  *   - -EINVAL: The queue_id out of range or belong to hairpin.
  *   - -EIO: if device is removed.
- *   - -ENOTSUP: The function not supported in PMD driver.
+ *   - -ENOTSUP: The function not supported in PMD.
  */
 int rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id);
 
@@ -2724,7 +2724,7 @@ int rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id);
  *   - -ENODEV: if *port_id* is invalid.
  *   - -EINVAL: The queue_id out of range or belong to hairpin.
  *   - -EIO: if device is removed.
- *   - -ENOTSUP: The function not supported in PMD driver.
+ *   - -ENOTSUP: The function not supported in PMD.
  */
 int rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id);
 
@@ -2743,7 +2743,7 @@ int rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id);
  *   - -ENODEV: if *port_id* is invalid.
  *   - -EINVAL: The queue_id out of range or belong to hairpin.
  *   - -EIO: if device is removed.
- *   - -ENOTSUP: The function not supported in PMD driver.
+ *   - -ENOTSUP: The function not supported in PMD.
  */
 int rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id);
 
@@ -2761,7 +2761,7 @@ int rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id);
  *   - -ENODEV: if *port_id* is invalid.
  *   - -EINVAL: The queue_id out of range or belong to hairpin.
  *   - -EIO: if device is removed.
- *   - -ENOTSUP: The function not supported in PMD driver.
+ *   - -ENOTSUP: The function not supported in PMD.
  */
 int rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id);
 
@@ -2963,7 +2963,7 @@ int rte_eth_allmulticast_get(uint16_t port_id);
  *   Link information written back.
  * @return
  *   - (0) if successful.
- *   - (-ENOTSUP) if the function is not supported in PMD driver.
+ *   - (-ENOTSUP) if the function is not supported in PMD.
  *   - (-ENODEV) if *port_id* invalid.
  *   - (-EINVAL) if bad parameter.
  */
@@ -2979,7 +2979,7 @@ int rte_eth_link_get(uint16_t port_id, struct rte_eth_link *link);
  *   Link information written back.
  * @return
  *   - (0) if successful.
- *   - (-ENOTSUP) if the function is not supported in PMD driver.
+ *   - (-ENOTSUP) if the function is not supported in PMD.
  *   - (-ENODEV) if *port_id* invalid.
  *   - (-EINVAL) if bad parameter.
  */
-- 
2.25.1


^ permalink raw reply	[relevance 1%]

* Re: ethdev: hide internal structures
  2021-11-16 23:22  0%         ` Stephen Hemminger
@ 2021-11-17 22:05  0%           ` Tyler Retzlaff
  0 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2021-11-17 22:05 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Ananyev, Konstantin, dev

On Tue, Nov 16, 2021 at 03:22:01PM -0800, Stephen Hemminger wrote:
> On Tue, 16 Nov 2021 14:58:08 -0800
> Tyler Retzlaff <roretzla@linux.microsoft.com> wrote:
> 
> > > 
> > > Keep an array in the application?  Portid is universally
> > > available.
> > > 
> > > struct my_portdata *my_ports[RTE_MAX_ETHPORTS];
> > 
> > i guess by this you mean maintain the storage in the application and
> > then export that storage for proprietary use in the pmd. ordinarily i
> > wouldn't want to have this hard-coded into the modules abi but since
> > we are talking about vendor extensions it has to be managed somewhere.
> > 
> > anyway, i guess i have my answer.
> > 
> > thanks stephen, appreciate it.
> 
> Don't understand, how are the application and pmd exchanging extra data?
> Maybe a non-standard PMD API?

yes. consider the case of a "vendor extension" where, for a specific pmd,
it is possible that extra / non-standard operations are
supported.

in this instance we have a pmd that does some whiz-bang thing that isn't
something most hardware/pmds could do (or need to under ordinary
circumstances), so it doesn't make sense to adapt the generalized pmd api
for some one-off hardware/device.  however, the vendor ships an
application that is extended to understand this extra functionality and
needs a way to hook up with and inter-operate with the non-standard api.

one very common example is some kind of advanced statistics that
most hardware isn't capable of producing. as long as the application
knows it is working with this advanced hardware it can present those
statistics.

in the code i'm looking at, it isn't statistics but specialized control
operations that can't be expressed via the exported pmd api (and should
not be).
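
a rough sketch of the kind of non-standard api i mean (every name below
is hypothetical, invented for illustration; no real pmd exports these):

#include <stdint.h>

/* vendor header the application explicitly opts into */
struct acme_adv_stats {
	uint64_t wire_latency_ns;  /* counters generic ethdev has no slot for */
	uint64_t reorder_events;
};

/* non-standard control/query calls exported by one specific pmd */
int acme_eth_get_adv_stats(uint16_t port_id, struct acme_adv_stats *st);
int acme_eth_set_ctl_mode(uint16_t port_id, uint32_t mode);

the application gates calls like these on rte_eth_dev_info_get()
reporting the expected driver_name, rather than digging a private
context out of rte_eth_devices[].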

^ permalink raw reply	[relevance 0%]

* Re: ethdev: hide internal structures
  2021-11-16 22:58  3%       ` Tyler Retzlaff
@ 2021-11-16 23:22  0%         ` Stephen Hemminger
  2021-11-17 22:05  0%           ` Tyler Retzlaff
  0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-11-16 23:22 UTC (permalink / raw)
  To: Tyler Retzlaff; +Cc: Ananyev, Konstantin, dev

On Tue, 16 Nov 2021 14:58:08 -0800
Tyler Retzlaff <roretzla@linux.microsoft.com> wrote:

> On Tue, Nov 16, 2021 at 01:25:10PM -0800, Stephen Hemminger wrote:
> > On Tue, 16 Nov 2021 11:10:18 -0800
> > Tyler Retzlaff <roretzla@linux.microsoft.com> wrote:
> >   
> > > On Tue, Nov 16, 2021 at 10:32:55AM +0000, Ananyev, Konstantin wrote:  
> > > >  
> > > > rte_eth_dev,  rte_eth_dev_data, rte_eth_rxtx_callback are internal
> > > > data structures that were used by public inline ethdev functions. 
> > > > A well-behaving app should not access these data structures directly.
> > > > So, for a well-behaving app there should be no changes in the code required.
> > > > That's what I meant by 'transparent' above.
> > > > But it is still an ABI change, so yes, the app has to be re-compiled.     
> > > 
> > > so it appears the application was establishing a private context /
> > > vendor extension between the application and a pmd. the application
> > > was abusing access to the rte_eth_devices[] to get the private context
> > > from the rte_eth_dev.
> > > 
> > > is there a proper / supported way of providing this functionality
> > > through the public api?
> > >   
> > > > 
> > > > Konstantin    
> > 
> > Keep an array in the application?  Portid is universally
> > available.
> > 
> > struct my_portdata *my_ports[RTE_MAX_ETHPORTS];
> 
> i guess by this you mean maintain the storage in the application and
> then export that storage for proprietary use in the pmd. ordinarily i
> > wouldn't want to have this hard-coded into the module's abi, but since
> we are talking about vendor extensions it has to be managed somewhere.
> 
> anyway, i guess i have my answer.
> 
> thanks stephen, appreciate it.

Don't understand, how are the application and pmd exchanging extra data?
Maybe a non-standard PMD API?

^ permalink raw reply	[relevance 0%]

* Re: ethdev: hide internal structures
  2021-11-16 21:25  0%     ` Stephen Hemminger
@ 2021-11-16 22:58  3%       ` Tyler Retzlaff
  2021-11-16 23:22  0%         ` Stephen Hemminger
  0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2021-11-16 22:58 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Ananyev, Konstantin, dev

On Tue, Nov 16, 2021 at 01:25:10PM -0800, Stephen Hemminger wrote:
> On Tue, 16 Nov 2021 11:10:18 -0800
> Tyler Retzlaff <roretzla@linux.microsoft.com> wrote:
> 
> > On Tue, Nov 16, 2021 at 10:32:55AM +0000, Ananyev, Konstantin wrote:
> > >  
> > > rte_eth_dev,  rte_eth_dev_data, rte_eth_rxtx_callback are internal
> > > data structures that were used by public inline ethdev functions. 
> > > A well-behaving app should not access these data structures directly.
> > > So, for a well-behaving app there should be no changes in the code required.
> > > That's what I meant by 'transparent' above.
> > > But it is still an ABI change, so yes, the app has to be re-compiled.   
> > 
> > so it appears the application was establishing a private context /
> > vendor extension between the application and a pmd. the application
> > was abusing access to the rte_eth_devices[] to get the private context
> > from the rte_eth_dev.
> > 
> > is there a proper / supported way of providing this functionality
> > through the public api?
> > 
> > > 
> > > Konstantin  
> 
> Keep an array in the application?  Port id is universally
> available.
> 
> struct my_portdata *my_ports[RTE_ETH_MAXPORTS];

i guess by this you mean maintain the storage in the application and
then export that storage for proprietary use in the pmd. ordinarily i
wouldn't want to have this hard-coded into the module's abi, but since
we are talking about vendor extensions, it has to be managed somewhere.

anyway, i guess i have my answer.

thanks stephen, appreciate it.

^ permalink raw reply	[relevance 3%]

* Re: ethdev: hide internal structures
  2021-11-16 19:10  0%   ` Tyler Retzlaff
@ 2021-11-16 21:25  0%     ` Stephen Hemminger
  2021-11-16 22:58  3%       ` Tyler Retzlaff
  0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-11-16 21:25 UTC (permalink / raw)
  To: Tyler Retzlaff; +Cc: Ananyev, Konstantin, dev

On Tue, 16 Nov 2021 11:10:18 -0800
Tyler Retzlaff <roretzla@linux.microsoft.com> wrote:

> On Tue, Nov 16, 2021 at 10:32:55AM +0000, Ananyev, Konstantin wrote:
> >  
> > rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback are internal
> > data structures that were used by public inline ethdev functions.
> > A well-behaving app should not access these data structures directly.
> > So, for a well-behaving app, no changes in the code should be required.
> > That is what I meant by 'transparent' above.
> > But it is still an ABI change, so yes, the app has to be re-compiled.
> 
> so it appears the application was establishing a private context /
> vendor extension between the application and a pmd. the application
> was abusing access to the rte_eth_devices[] to get the private context
> from the rte_eth_dev.
> 
> is there a proper / supported way of providing this functionality
> through the public api?
> 
> > 
> > Konstantin  

Keep an array in the application?  Port id is universally
available.

struct my_portdata *my_ports[RTE_ETH_MAXPORTS];
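
For illustration, a minimal sketch of the approach Stephen describes, assuming
the application owns its per-port state (struct my_portdata is a hypothetical
application type; the bound macro in current DPDK headers is RTE_MAX_ETHPORTS):

#include <stdint.h>
#include <stdlib.h>
#include <rte_ethdev.h>

struct my_portdata {
	uint64_t rx_pkts; /* whatever private per-port state the app needs */
};

static struct my_portdata *my_ports[RTE_MAX_ETHPORTS];

static void
app_init_ports(void)
{
	uint16_t port_id;

	/* key the private context by port id instead of reaching into
	 * the (now private) rte_eth_devices[] array */
	RTE_ETH_FOREACH_DEV(port_id)
		my_ports[port_id] = calloc(1, sizeof(*my_ports[port_id]));
}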

^ permalink raw reply	[relevance 0%]

* Re: ethdev: hide internal structures
  2021-11-16 20:07  4%     ` Ferruh Yigit
@ 2021-11-16 20:44  0%       ` Tyler Retzlaff
  0 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2021-11-16 20:44 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Konstantin Ananyev, dev, Ray Kinsella, Thomas Monjalon, David Marchand

On Tue, Nov 16, 2021 at 08:07:49PM +0000, Ferruh Yigit wrote:
> On 11/16/2021 5:54 PM, Tyler Retzlaff wrote:
> >
> >i thought someone was responsible for reviewing abi/api related changes
> >on the board to understand the implications of changes like this?
> >
> 
> Sorry for the negative impact on your product, I can understand the
> frustration.
> 
> The 'rte_eth_devices[]' was marked as '@internal' in the header file
> since 2012 [1], so it is not new, but it was not marked programmatically,
> only as a comment in the header file.
> The expectation was that applications would not use it directly.

unfortunately there are a lot of these expectations in the project code;
rarely do consuming applications get written in the way we would expect,
and this is a lesson that if it is not mechanically enforced, it isn't
prevented.
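
a hedged illustration of such mechanical enforcement, using the __rte_internal
attribute from dpdk's rte_compat.h (the prototype below is hypothetical):

#include <stdint.h>
#include <rte_compat.h>

/* without ALLOW_INTERNAL_API defined, an application call to a symbol
 * tagged this way fails at compile time instead of by convention */
__rte_internal
int eth_dev_private_lookup(uint16_t port_id);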

> 
> 
> For long-term ABI stability, this is a good step forward. Although
> the impact was known, the best time for this kind of change is the 21.11
> release; otherwise the change needs to wait (at least) one more year.

agreed, we appreciate what will be accomplished with the change.

> 
> This change has been discussed and accepted in the technical board [2],
> and a deprecation notice has been sent to the mailing list [3] for notification.

the notes from [2] aren't that clear, but i think it is fair you point
out that if [3] were read carefully it was implied that it would impact
ethdev. anyway, it is moot now.

> 
> Agreed, the announcement was a little later than we normally do (although
> only a month later than what is defined in the process); this was accepted by
> the board to not miss the ABI break window (.11 release).
> As you will recognize, not only ethdev, but a few more device abstraction
> layer libraries had similar changes in this release.

yes, i understand. perhaps in the future it may be possible to introduce
some kind of __deprecation notice during compilation earlier than the
removal, and it may have been noticed sooner. perhaps a patch that did
this near the time of the original notification [2].

i've left the details of the functional gap in my other reply to the
thread, hopefully you have a suggestion.

thanks Ferruh, appreciate it.

> 
> 
> [1]
> f831c63cbe86 ("ethdev: minor changes")
> 
> [2]
> https://mails.dpdk.org/archives/dev/2021-July/214662.html
> 
> [3]
> https://patches.dpdk.org/project/dpdk/patch/20210826103500.2172550-1-ferruh.yigit@intel.com/

^ permalink raw reply	[relevance 0%]

* Re: ethdev: hide internal structures
  2021-11-16 17:54  4%   ` Tyler Retzlaff
@ 2021-11-16 20:07  4%     ` Ferruh Yigit
  2021-11-16 20:44  0%       ` Tyler Retzlaff
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-11-16 20:07 UTC (permalink / raw)
  To: Tyler Retzlaff
  Cc: Konstantin Ananyev, dev, Ray Kinsella, Thomas Monjalon, David Marchand

On 11/16/2021 5:54 PM, Tyler Retzlaff wrote:
> On Tue, Nov 16, 2021 at 09:32:15AM +0000, Ferruh Yigit wrote:
>>
>> Hi Tyler,
>>
>> It shouldn't be an API change; which API is changed?
> 
> exported declarations that were consumed by the application were removed
> from an installed header. anything making reference to rte_eth_devices[]
> will no longer compile.
> 
> any change that removes any identifier or macro visible to the application
> from an installed header is an api break.
> 
>> Existing binaries won't run and need a recompile, but there should be no need to change
>> the code.
>> Unless the application is accessing *internal* DPDK structs (which were exposed
>> to applications because of some technical issues that the above commit fixes).
> 
> the application was, but the access was to a symbol and identifier that
> had not been previously marked __rte_internal or __rte_experimental and thus
> assumed to be public.
> 
> just to be clear i agree with the change making these internal but there
> was virtually no warning.
> 
> https://doc.dpdk.org/guides-19.11/contributing/abi_policy.html
> 
> the exports and declarations need to be marked deprecated to give ample
> time before being removed in accordance with the abi policy.
> 
> i will ask that work be scheduled to identify the gap in the public api
> surface that access to these structures was providing, rather than
> backing the change out. fortunately it is only schedule impacting rather
> than service impacting, since the application hadn't been deployed yet.
> 
> i thought someone was responsible for reviewing abi/api related changes
> on the board to understand the implications of changes like this?
> 

Sorry for the negative impact on your product, I can understand the
frustration.

The 'rte_eth_devices[]' was marked as '@internal' in the header file
since 2012 [1], so it is not new, but it was not marked programmatically,
only as a comment in the header file.
The expectation was that applications would not use it directly.


For long-term ABI stability, this is a good step forward. Although
the impact was known, the best time for this kind of change is the 21.11
release; otherwise the change needs to wait (at least) one more year.

This change has been discussed and accepted in the technical board [2],
and a deprecation notice has been sent to the mailing list [3] for notification.

Agreed, the announcement was a little later than we normally do (although
only a month later than what is defined in the process); this was accepted by
the board to not miss the ABI break window (.11 release).
As you will recognize, not only ethdev, but a few more device abstraction
layer libraries had similar changes in this release.


[1]
f831c63cbe86 ("ethdev: minor changes")

[2]
https://mails.dpdk.org/archives/dev/2021-July/214662.html

[3]
https://patches.dpdk.org/project/dpdk/patch/20210826103500.2172550-1-ferruh.yigit@intel.com/

^ permalink raw reply	[relevance 4%]

* Re: ethdev: hide internal structures
  2021-11-16 10:32  3% ` Ananyev, Konstantin
@ 2021-11-16 19:10  0%   ` Tyler Retzlaff
  2021-11-16 21:25  0%     ` Stephen Hemminger
  0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2021-11-16 19:10 UTC (permalink / raw)
  To: Ananyev, Konstantin; +Cc: dev

On Tue, Nov 16, 2021 at 10:32:55AM +0000, Ananyev, Konstantin wrote:
>  
> rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback are internal
> data structures that were used by public inline ethdev functions.
> A well-behaving app should not access these data structures directly.
> So, for a well-behaving app, no changes in the code should be required.
> That is what I meant by 'transparent' above.
> But it is still an ABI change, so yes, the app has to be re-compiled.

so it appears the application was establishing a private context /
vendor extension between the application and a pmd. the application
was abusing access to the rte_eth_devices[] to get the private context
from the rte_eth_dev.

is there a proper / supported way of providing this functionality
through the public api?

> 
> Konstantin

^ permalink raw reply	[relevance 0%]

* Re: ethdev: hide internal structures
  2021-11-16  9:32  0% ` Ferruh Yigit
@ 2021-11-16 17:54  4%   ` Tyler Retzlaff
  2021-11-16 20:07  4%     ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2021-11-16 17:54 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Konstantin Ananyev, dev

On Tue, Nov 16, 2021 at 09:32:15AM +0000, Ferruh Yigit wrote:
> 
> Hi Tyler,
> 
> It shouldn't be an API change; which API is changed?

exported declarations that were consumed by the application were removed
from an installed header. anything making reference to rte_eth_devices[]
will no longer compile.

any change that removes any identifier or macro visible to the application
from an installed header is an api break.

> Existing binaries won't run and need a recompile, but there should be no need to change
> the code.
> Unless the application is accessing *internal* DPDK structs (which were exposed
> to applications because of some technical issues that the above commit fixes).

the application was, but the access was to a symbol and identifier that
had not been previously marked __rte_internal or __rte_experimental and thus
assumed to be public.

just to be clear i agree with the change making these internal but there
was virtually no warning.

https://doc.dpdk.org/guides-19.11/contributing/abi_policy.html

the exports and declarations need to be marked deprecated to give ample
time before being removed in accordance with the abi policy.

i will ask that work be scheduled to identify the gap in the public api
surface that access to these structures was providing, rather than
backing the change out. fortunately it is only schedule impacting rather
than service impacting, since the application hadn't been deployed yet.

i thought someone was responsible for reviewing abi/api related changes
on the board to understand the implications of changes like this?

thanks

^ permalink raw reply	[relevance 4%]

* RE: ethdev: hide internal structures
  2021-11-16  0:24  4% ethdev: hide internal structures Tyler Retzlaff
  2021-11-16  9:32  0% ` Ferruh Yigit
@ 2021-11-16 10:32  3% ` Ananyev, Konstantin
  2021-11-16 19:10  0%   ` Tyler Retzlaff
  1 sibling, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-11-16 10:32 UTC (permalink / raw)
  To: Tyler Retzlaff, dev

> hi folks,
> 
> I don't understand the text of this change.  would you mind explaining?
> 
>     commit f9bdee267ab84fd12dc288419aba341310b6ae08
>     Author: Konstantin Ananyev <konstantin.ananyev@intel.com>
>     Date:   Wed Oct 13 14:37:04 2021 +0100
>     ethdev: hide internal structures
> 
> +* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
> +  private data structures.  ``rte_eth_devices[]`` can't be accessed directly
> +  by user any more. While it is an ABI breakage, this change is intended
> +  to be transparent for both users (no changes in user app is required) and
> +  PMD developers (no changes in PMD is required).
> 
> 
> if it is an ABI break (and it is also an API break) how is it that
> this change could be "transparent" to the user application?
> 
> * existing binaries will not run. (they need to be recompiled)
> * existing code will not compile. (code changes are required)
> 
> in order to cope with this change an application will have to have the
> code modified and will need to be re-compiled. so i don't understand how
> that is transparent?
 
rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback are internal
data structures that were used by public inline ethdev functions.
A well-behaving app should not access these data structures directly.
So, for a well-behaving app, no changes in the code should be required.
That is what I meant by 'transparent' above.
But it is still an ABI change, so yes, the app has to be re-compiled.

Konstantin
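
A hedged sketch of the distinction, assuming a 21.11 tree: access through
the public API keeps compiling, only direct pokes at the internal array stop:

#include <rte_ethdev.h>

static int
query_port(uint16_t port_id)
{
	struct rte_eth_dev_info info;

	/* no longer compiles: rte_eth_devices[] is private now */
	/* void *priv = rte_eth_devices[port_id].data->dev_private; */

	/* still fine: goes through the public API */
	return rte_eth_dev_info_get(port_id, &info);
}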

^ permalink raw reply	[relevance 3%]

* Re: ethdev: hide internal structures
  2021-11-16  0:24  4% ethdev: hide internal structures Tyler Retzlaff
@ 2021-11-16  9:32  0% ` Ferruh Yigit
  2021-11-16 17:54  4%   ` Tyler Retzlaff
  2021-11-16 10:32  3% ` Ananyev, Konstantin
  1 sibling, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-11-16  9:32 UTC (permalink / raw)
  To: Tyler Retzlaff; +Cc: Konstantin Ananyev, dev

On 11/16/2021 12:24 AM, Tyler Retzlaff wrote:
> hi folks,
> 
> I don't understand the text of this change.  would you mind explaining?
> 
>      commit f9bdee267ab84fd12dc288419aba341310b6ae08
>      Author: Konstantin Ananyev <konstantin.ananyev@intel.com>
>      Date:   Wed Oct 13 14:37:04 2021 +0100
>      ethdev: hide internal structures
> 
> +* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
> +  private data structures.  ``rte_eth_devices[]`` can't be accessed directly
> +  by user any more. While it is an ABI breakage, this change is intended
> +  to be transparent for both users (no changes in user app is required) and
> +  PMD developers (no changes in PMD is required).
> 
> 
> if it is an ABI break (and it is also an API break) how is it that
> this change could be "transparent" to the user application?
> 
> * existing binaries will not run. (they need to be recompiled)
> * existing code will not compile. (code changes are required)
> 
> in order to cope with this change an application will have to have the
> code modified and will need to be re-compiled. so i don't understand how
> that is transparent?
> 


Hi Tyler,

It shouldn't be an API change; which API is changed?

Existing binaries won't run and need a recompile, but there should be no need to change
the code.
Unless the application is accessing *internal* DPDK structs (which were exposed
to applications because of some technical issues that the above commit fixes).

What code change do you require, in driver or application?

^ permalink raw reply	[relevance 0%]

* ethdev: hide internal structures
@ 2021-11-16  0:24  4% Tyler Retzlaff
  2021-11-16  9:32  0% ` Ferruh Yigit
  2021-11-16 10:32  3% ` Ananyev, Konstantin
  0 siblings, 2 replies; 200+ results
From: Tyler Retzlaff @ 2021-11-16  0:24 UTC (permalink / raw)
  To: dev

hi folks,

I don't understand the text of this change.  would you mind explaining?

    commit f9bdee267ab84fd12dc288419aba341310b6ae08
    Author: Konstantin Ananyev <konstantin.ananyev@intel.com>
    Date:   Wed Oct 13 14:37:04 2021 +0100
    ethdev: hide internal structures                        

+* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
+  private data structures.  ``rte_eth_devices[]`` can't be accessed directly
+  by user any more. While it is an ABI breakage, this change is intended
+  to be transparent for both users (no changes in user app is required) and
+  PMD developers (no changes in PMD is required).


if it is an ABI break (and it is also an API break) how is it that
this change could be "transparent" to the user application?

* existing binaries will not run. (they need to be recompiled)
* existing code will not compile. (code changes are required)

in order to cope with this change an application will have to have the
code modified and will need to be re-compiled. so i don't understand how
that is transparent?

thanks

^ permalink raw reply	[relevance 4%]

* [PATCH v6 0/5] cleanup more resources on eal_cleanup
    2021-11-13  0:28  3% ` [PATCH v4 0/5] cleanup more stuff " Stephen Hemminger
  2021-11-13  3:32  3% ` [PATCH v5 0/5] cleanup DPDK resources via eal_cleanup Stephen Hemminger
@ 2021-11-13 17:22  3% ` Stephen Hemminger
  2 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-11-13 17:22 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

When testing DPDK with ASAN or valgrind, there are lots of leftover
memory allocations and file descriptors. This makes it hard to find application
leaks versus internal DPDK leaks.

DPDK has a function that applications can use to tell it
to clean up resources on shutdown (rte_eal_cleanup). But the
current coverage of that API is spotty. Many internal parts of
DPDK leave files and allocated memory behind.
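
For context, a minimal sketch of the shutdown path these patches extend
(error handling trimmed):

#include <rte_eal.h>
#include <rte_debug.h>

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		rte_panic("cannot init EAL\n");

	/* ... application work ... */

	/* let EAL release its memory, files and threads on the way out */
	return rte_eal_cleanup();
}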

This patch set is a first step at getting the sub-parts of
DPDK to clean up after themselves. These are the easier ones,
the harder and more critical ones are in the drivers
and the memory subsystem.

There should be no new exposed API or ABI changes here.

v6 - fix windows stub 
v5 - add stub for windows build in rte_malloc cleanup
v4
 - rebase to 20.11-rc
 - drop one patch (alarm cleanup is implemented)
 - drop patch that ends worker threads on cleanup.
   the test is calling rte_exit/eal_cleanup in a forked process.
   (one could argue this is a test bug!)


Stephen Hemminger (5):
  eal: close log in eal_cleanup
  eal: mp: end the multiprocess thread during cleanup
  eal: vfio: cleanup the mp sync handle
  eal: hotplug: cleanup multiprocess resources
  eal: malloc: cleanup mp resources

 lib/eal/common/eal_common_log.c  | 13 +++++++++++++
 lib/eal/common/eal_common_proc.c | 20 +++++++++++++++++---
 lib/eal/common/eal_private.h     |  7 +++++++
 lib/eal/common/hotplug_mp.c      |  5 +++++
 lib/eal/common/hotplug_mp.h      |  6 ++++++
 lib/eal/common/malloc_heap.c     |  6 ++++++
 lib/eal/common/malloc_heap.h     |  3 +++
 lib/eal/common/malloc_mp.c       | 12 ++++++++++++
 lib/eal/common/malloc_mp.h       |  3 +++
 lib/eal/linux/eal.c              |  7 +++++++
 lib/eal/linux/eal_log.c          |  8 ++++++++
 lib/eal/linux/eal_vfio.h         |  1 +
 lib/eal/linux/eal_vfio_mp_sync.c |  8 ++++++++
 lib/eal/windows/eal_mp.c         |  7 +++++++
 14 files changed, 103 insertions(+), 3 deletions(-)

-- 
2.30.2


^ permalink raw reply	[relevance 3%]

* [PATCH v5 0/5] cleanup DPDK resources via eal_cleanup
    2021-11-13  0:28  3% ` [PATCH v4 0/5] cleanup more stuff " Stephen Hemminger
@ 2021-11-13  3:32  3% ` Stephen Hemminger
  2021-11-13 17:22  3% ` [PATCH v6 0/5] cleanup more resources on eal_cleanup Stephen Hemminger
  2 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-11-13  3:32 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

When testing DPDK with ASAN or valgrind, there are lots of leftover
memory allocations and file descriptors. This makes it hard to find application
leaks versus internal DPDK leaks.

DPDK has a function that applications can use to tell it
to clean up resources on shutdown (rte_eal_cleanup). But the
current coverage of that API is spotty. Many internal parts of
DPDK leave files and allocated memory behind.

This patch set is a first step at getting the sub-parts of
DPDK to clean up after themselves. These are the easier ones,
the harder and more critical ones are in the drivers
and the memory subsystem.

There should be no new exposed API or ABI changes here.

v5
 - add stub for windows build in rte_malloc cleanup

v4
 - rebase to 20.11-rc
 - drop one patch (alarm cleanup is implemented)
 - drop patch that ends worker threads on cleanup.
   the test is calling rte_exit/eal_cleanup in a forked process.
   (one could argue this is a test bug!)

v3
 - fix a couple of minor checkpatch complaints

v2
 - rebase after 20.05 file renames
 - incorporate review comment feedback
 - hold off some of the more involved patches for later

Stephen Hemminger (5):
  eal: close log in eal_cleanup
  eal: mp: end the multiprocess thread during cleanup
  eal: vfio: cleanup the mp sync handle
  eal: hotplug: cleanup multiprocess resources
  eal: malloc: cleanup mp resources

 lib/eal/common/eal_common_log.c  | 13 +++++++++++++
 lib/eal/common/eal_common_proc.c | 20 +++++++++++++++++---
 lib/eal/common/eal_private.h     |  7 +++++++
 lib/eal/common/hotplug_mp.c      |  5 +++++
 lib/eal/common/hotplug_mp.h      |  6 ++++++
 lib/eal/common/malloc_heap.c     |  6 ++++++
 lib/eal/common/malloc_heap.h     |  3 +++
 lib/eal/common/malloc_mp.c       | 12 ++++++++++++
 lib/eal/common/malloc_mp.h       |  3 +++
 lib/eal/linux/eal.c              |  7 +++++++
 lib/eal/linux/eal_log.c          |  8 ++++++++
 lib/eal/linux/eal_vfio.h         |  1 +
 lib/eal/linux/eal_vfio_mp_sync.c |  8 ++++++++
 lib/eal/windows/eal_mp.c         |  7 +++++++
 14 files changed, 103 insertions(+), 3 deletions(-)

-- 
2.30.2


^ permalink raw reply	[relevance 3%]

* [PATCH v4 0/5] cleanup more stuff on shutdown
  @ 2021-11-13  0:28  3% ` Stephen Hemminger
  2021-11-13  3:32  3% ` [PATCH v5 0/5] cleanup DPDK resources via eal_cleanup Stephen Hemminger
  2021-11-13 17:22  3% ` [PATCH v6 0/5] cleanup more resources on eal_cleanup Stephen Hemminger
  2 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-11-13  0:28 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

I started using valgrind with DPDK, and there are lots of leftover
memory allocations and file descriptors. This makes it hard to find application
leaks versus DPDK leaks.

DPDK has a function that applications can use to tell it
to clean up resources on shutdown (rte_eal_cleanup). But the
current coverage of that API is spotty. Many internal parts of
DPDK leave files and allocated memory behind.

This patch set is a first step at getting the sub-parts of
DPDK to clean up after themselves. These are the easier ones,
the harder and more critical ones are in the drivers
and the memory subsystem.

There should be no new exposed API or ABI changes here.

v4
 - rebase to 20.11-rc
 - drop one patch (alarm cleanup is implemented)
 - drop patch that ends worker threads on cleanup.
   the test is calling rte_exit/eal_cleanup in a forked process.
   (one could argue this is a test bug!)

v3
 - fix a couple of minor checkpatch complaints

v2
 - rebase after 20.05 file renames
 - incorporate review comment feedback
 - hold off some of the more involved patches for later

Stephen Hemminger (5):
  eal: close log in eal_cleanup
  eal: mp: end the multiprocess thread during cleanup
  eal: vfio: cleanup the mp sync handle
  eal: hotplug: cleanup multiprocess resources
  eal: malloc: cleanup mp resources

 lib/eal/common/eal_common_log.c  | 13 +++++++++++++
 lib/eal/common/eal_common_proc.c | 20 +++++++++++++++++---
 lib/eal/common/eal_private.h     |  7 +++++++
 lib/eal/common/hotplug_mp.c      |  5 +++++
 lib/eal/common/hotplug_mp.h      |  6 ++++++
 lib/eal/common/malloc_heap.c     |  6 ++++++
 lib/eal/common/malloc_heap.h     |  3 +++
 lib/eal/common/malloc_mp.c       | 12 ++++++++++++
 lib/eal/common/malloc_mp.h       |  3 +++
 lib/eal/linux/eal.c              |  7 +++++++
 lib/eal/linux/eal_log.c          |  8 ++++++++
 lib/eal/linux/eal_vfio.h         |  1 +
 lib/eal/linux/eal_vfio_mp_sync.c |  8 ++++++++
 13 files changed, 96 insertions(+), 3 deletions(-)

-- 
2.30.2


^ permalink raw reply	[relevance 3%]

* Re: [PATCH v4 08/18] eal: fix typos in comments
  2021-11-12  0:02  4%   ` [PATCH v4 08/18] eal: fix typos in comments Stephen Hemminger
@ 2021-11-12 15:22  0%     ` Kinsella, Ray
  0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-11-12 15:22 UTC (permalink / raw)
  To: Stephen Hemminger, dev
  Cc: Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam



On 12/11/2021 00:02, Stephen Hemminger wrote:
> Minor spelling errors.
> 
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
>   lib/eal/include/rte_function_versioning.h | 2 +-
>   lib/eal/windows/include/fnmatch.h         | 2 +-
>   2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/eal/include/rte_function_versioning.h b/lib/eal/include/rte_function_versioning.h
> index 746a1e19923e..eb6dd2bc1727 100644
> --- a/lib/eal/include/rte_function_versioning.h
> +++ b/lib/eal/include/rte_function_versioning.h
> @@ -15,7 +15,7 @@
>   
>   /*
>    * Provides backwards compatibility when updating exported functions.
> - * When a symol is exported from a library to provide an API, it also provides a
> + * When a symbol is exported from a library to provide an API, it also provides a
>    * calling convention (ABI) that is embodied in its name, return type,
>    * arguments, etc.  On occasion that function may need to change to accommodate
>    * new functionality, behavior, etc.  When that occurs, it is desirable to
> diff --git a/lib/eal/windows/include/fnmatch.h b/lib/eal/windows/include/fnmatch.h
> index 142753c3568d..c272f65ccdc3 100644
> --- a/lib/eal/windows/include/fnmatch.h
> +++ b/lib/eal/windows/include/fnmatch.h
> @@ -30,7 +30,7 @@ extern "C" {
>    * with the given regular expression pattern.
>    *
>    * @param pattern
> - *	regular expression notation decribing the pattern to match
> + *	regular expression notation describing the pattern to match
>    *
>    * @param string
>    *	source string to searcg for the pattern
> 

Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[relevance 0%]

* [PATCH v4 08/18] eal: fix typos in comments
  @ 2021-11-12  0:02  4%   ` Stephen Hemminger
  2021-11-12 15:22  0%     ` Kinsella, Ray
  0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-11-12  0:02 UTC (permalink / raw)
  To: dev
  Cc: Stephen Hemminger, Ray Kinsella, Dmitry Kozlyuk,
	Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam

Minor spelling errors.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/eal/include/rte_function_versioning.h | 2 +-
 lib/eal/windows/include/fnmatch.h         | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/eal/include/rte_function_versioning.h b/lib/eal/include/rte_function_versioning.h
index 746a1e19923e..eb6dd2bc1727 100644
--- a/lib/eal/include/rte_function_versioning.h
+++ b/lib/eal/include/rte_function_versioning.h
@@ -15,7 +15,7 @@
 
 /*
  * Provides backwards compatibility when updating exported functions.
- * When a symol is exported from a library to provide an API, it also provides a
+ * When a symbol is exported from a library to provide an API, it also provides a
  * calling convention (ABI) that is embodied in its name, return type,
  * arguments, etc.  On occasion that function may need to change to accommodate
  * new functionality, behavior, etc.  When that occurs, it is desirable to
diff --git a/lib/eal/windows/include/fnmatch.h b/lib/eal/windows/include/fnmatch.h
index 142753c3568d..c272f65ccdc3 100644
--- a/lib/eal/windows/include/fnmatch.h
+++ b/lib/eal/windows/include/fnmatch.h
@@ -30,7 +30,7 @@ extern "C" {
  * with the given regular expression pattern.
  *
  * @param pattern
- *	regular expression notation decribing the pattern to match
+ *	regular expression notation describing the pattern to match
  *
  * @param string
  *	source string to searcg for the pattern
-- 
2.30.2


^ permalink raw reply	[relevance 4%]

* RE: [dpdk-dev] [PATCH v2] doc: propose correction rte_{bsf, fls} inline functions type use
  2021-11-11 11:54  3%     ` Thomas Monjalon
@ 2021-11-11 12:41  0%       ` Morten Brørup
  0 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2021-11-11 12:41 UTC (permalink / raw)
  To: Thomas Monjalon, Tyler Retzlaff
  Cc: stephen, dev, anatoly.burakov, ranjit.menon, mdr, david.marchand,
	dmitry.kozliuk, bruce.richardson

> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Thursday, 11 November 2021 12.55
> 
> 11/11/2021 05:15, Tyler Retzlaff:
> > On Tue, Oct 26, 2021 at 09:45:20AM +0200, Morten Brørup wrote:
> > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Thomas Monjalon
> > > > Sent: Monday, 25 October 2021 21.14
> > > >
> > > > 15/03/2021 20:34, Tyler Retzlaff:
> > > > > The proposal has resulted from request to review [1] the
> following
> > > > > functions where there appeared to be inconsistency in return
> type
> > > > > or parameter type selections for the following inline
> functions.
> > > > >
> > > > > rte_bsf32()
> > > > > rte_bsf32_safe()
> > > > > rte_bsf64()
> > > > > rte_bsf64_safe()
> > > > > rte_fls_u32()
> > > > > rte_fls_u64()
> > > > > rte_log2_u32()
> > > > > rte_log2_u64()
> > > > >
> > > > > [1] http://mails.dpdk.org/archives/dev/2021-March/201590.html
> > > > >
> > > > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > > > > ---
> > > > > --- a/doc/guides/rel_notes/deprecation.rst
> > > > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > > +* eal: Fix inline function return and parameter types for rte_{bsf,fls}
> > > > +  inline functions to be consistent.
> > > > +  Change ``rte_bsf32_safe`` parameter ``v`` from ``uint64_t`` to ``uint32_t``.
> > > > +  Change ``rte_bsf64`` return type to ``uint32_t`` instead of ``int``.
> > > > +  Change ``rte_fls_u32`` return type to ``uint32_t`` instead of ``int``.
> > > > +  Change ``rte_fls_u64`` return type to ``uint32_t`` instead of ``int``.
> > > >
> > > > It seems we completely forgot this.
> > > > How critical is it?
> > >
> >
> > our organization as a matter of internal security policy requires these
> > sorts of things to be fixed. while i didn't see any bugs in the dpdk
> > code, there is an opportunity for users of these functions to
> > accidentally write code that is prone to integer and buffer overflow
> > class bugs.
> >
> > there is no urgency, but why leave things sloppy? though i do wish this
> > had been responded to in a more timely manner; 7 months for something
> > that should have almost been rubber-stamped.
> 
> It's difficult to stay on top of all topics.
> The best way to avoid such a miss is to ping when you see no progress.
> 
> So what's next?
> They are only inline functions, right? So no ABI breakage.
> Is it going to require any change on the application side? I guess no.
> Is it acceptable in 21.11-rc3? maybe too late?
> Is it acceptable in 22.02?

If Microsoft (represented by Tyler in this case) considers this a bug, I would prefer getting it into 21.11 - especially because it is an LTS release.

-Morten


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2] doc: propose correction rte_{bsf, fls} inline functions type use
  @ 2021-11-11 11:54  3%     ` Thomas Monjalon
  2021-11-11 12:41  0%       ` Morten Brørup
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-11-11 11:54 UTC (permalink / raw)
  To: Morten Brørup, Tyler Retzlaff
  Cc: stephen, dev, anatoly.burakov, ranjit.menon, mdr, david.marchand,
	dmitry.kozliuk, bruce.richardson

11/11/2021 05:15, Tyler Retzlaff:
> On Tue, Oct 26, 2021 at 09:45:20AM +0200, Morten Brørup wrote:
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Thomas Monjalon
> > > Sent: Monday, 25 October 2021 21.14
> > > 
> > > 15/03/2021 20:34, Tyler Retzlaff:
> > > > The proposal has resulted from request to review [1] the following
> > > > functions where there appeared to be inconsistency in return type
> > > > or parameter type selections for the following inline functions.
> > > >
> > > > rte_bsf32()
> > > > rte_bsf32_safe()
> > > > rte_bsf64()
> > > > rte_bsf64_safe()
> > > > rte_fls_u32()
> > > > rte_fls_u64()
> > > > rte_log2_u32()
> > > > rte_log2_u64()
> > > >
> > > > [1] http://mails.dpdk.org/archives/dev/2021-March/201590.html
> > > >
> > > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > > > ---
> > > > --- a/doc/guides/rel_notes/deprecation.rst
> > > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > > +* eal: Fix inline function return and parameter types for rte_{bsf,fls}
> > > > +  inline functions to be consistent.
> > > > +  Change ``rte_bsf32_safe`` parameter ``v`` from ``uint64_t`` to ``uint32_t``.
> > > > +  Change ``rte_bsf64`` return type to ``uint32_t`` instead of ``int``.
> > > > +  Change ``rte_fls_u32`` return type to ``uint32_t`` instead of ``int``.
> > > > +  Change ``rte_fls_u64`` return type to ``uint32_t`` instead of ``int``.
> > > 
> > > It seems we completely forgot this.
> > > How critical is it?
> > 
> 
> our organization as a matter of internal security policy requires these
> sorts of things to be fixed. while i didn't see any bugs in the dpdk
> code, there is an opportunity for users of these functions to
> accidentally write code that is prone to integer and buffer overflow
> class bugs.
> 
> there is no urgency, but why leave things sloppy? though i do wish this
> had been responded to in a more timely manner; 7 months for something
> that should have almost been rubber-stamped.

It's difficult to stay on top of all topics.
The best way to avoid such a miss is to ping when you see no progress.

So what's next?
They are only inline functions, right? So no ABI breakage.
Is it going to require any change on the application side? I guess no.
Is it acceptable in 21.11-rc3? maybe too late?
Is it acceptable in 22.02?
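
For context, a hedged sketch of the hazard class being closed, assuming the
pre-change ``rte_bsf32_safe(uint64_t v, uint32_t *pos)`` signature:

#include <stdint.h>
#include <rte_common.h>

void
example(void)
{
	uint32_t pos;
	uint64_t v = UINT64_C(1) << 40;

	/* the old uint64_t parameter accepts this silently, even though
	 * only the low 32 bits are meaningful for a "32" variant */
	rte_bsf32_safe(v, &pos);
}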



^ permalink raw reply	[relevance 3%]

* [PATCH v18 0/8]   eal: Add EAL API for threading
  2021-11-10  3:01  3%   ` [dpdk-dev] [PATCH v17 00/13] eal: Add EAL API for threading Narcisa Ana Maria Vasile
@ 2021-11-11  1:33  3%     ` Narcisa Ana Maria Vasile
  0 siblings, 0 replies; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-11-11  1:33 UTC (permalink / raw)
  To: dev, thomas, dmitry.kozliuk, khot, navasile, dmitrym, roretzla,
	talshn, ocardona
  Cc: bruce.richardson, david.marchand, pallavi.kadam

From: Narcisa Vasile <navasile@microsoft.com>

EAL thread API

**Problem Statement**
DPDK currently uses the pthread interface to create and manage threads.
Windows does not support the POSIX thread programming model,
so it currently
relies on a header file that hides the Windows calls under
pthread-matched interfaces. Given that EAL should isolate the environment
specifics from the applications and libraries and mediate
all the communication with the operating systems, a new EAL interface
is needed for thread management.

**Goals**
* Introduce a generic EAL API for threading support that will remove
  the current Windows pthread.h shim.
* Replace references to pthread_* across the DPDK codebase with the new
  RTE_THREAD_* API.
* Allow users to choose between using the RTE_THREAD_* API or a
  3rd party thread library through a configuration option.

**Design plan**
New API main files:
* rte_thread.h (librte_eal/include)
* rte_thread.c (librte_eal/windows)
* rte_thread.c (librte_eal/common)

**A schematic example of the design**
--------------------------------------------------
lib/librte_eal/include/rte_thread.h
int rte_thread_create();

lib/librte_eal/common/rte_thread.c
int rte_thread_create() 
{
	return pthread_create();
}

lib/librte_eal/windows/rte_thread.c
int rte_thread_create() 
{
	return CreateThread();
}
-----------------------------------------------------

**Thread attributes**

When or after a thread is created, specific characteristics of the thread
can be adjusted. Currently in DPDK most threads operate at the OS-default
priority level but there are cases when increasing the priority is useful.
For example, high-performance applications require elevated priority to
avoid being preempted by other threads on the system.
The following structure that represents thread attributes has been
defined:

typedef struct
{
	enum rte_thread_priority priority;
	rte_cpuset_t cpuset;
} rte_thread_attr_t;

The *rte_thread_create()* function can optionally receive
an rte_thread_attr_t
object that will cause the thread to be created with the
affinity and priority
described by the attributes object. If no rte_thread_attr_t is passed
(parameter is NULL), the default affinity and priority are used.
An rte_thread_attr_t object can also be set to the default values
by calling *rte_thread_attr_init()*.

*Priority* is represented through an enum that currently advertises
two values for priority:
	- RTE_THREAD_PRIORITY_NORMAL
	- RTE_THREAD_PRIORITY_REALTIME_CRITICAL
The enum can be extended to allow for multiple priority levels.
rte_thread_set_priority      - sets the priority of a thread
rte_thread_get_priority      - retrieves the priority of a thread
                               from the OS
rte_thread_attr_set_priority - updates an rte_thread_attr_t object
                               with a new value for priority

*Affinity* is described by the already known “rte_cpuset_t” type.
rte_thread_attr_set/get_affinity - sets/gets the affinity field in a
                                   rte_thread_attr_t object
rte_thread_set/get_affinity      – sets/gets the affinity of a thread
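
A hedged usage sketch of the API described above (names follow this series
and may differ from what is finally merged):

#include <rte_thread.h>

static uint32_t
worker(void *arg)
{
	(void)arg;
	/* ... thread body ... */
	return 0;
}

static int
launch_worker(rte_thread_t *tid)
{
	rte_thread_attr_t attr;

	rte_thread_attr_init(&attr);
	rte_thread_attr_set_priority(&attr, RTE_THREAD_PRIORITY_NORMAL);
	return rte_thread_create(tid, &attr, worker, NULL);
}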

**Errors**
As different platforms have different error codes, the approach here
is to translate Windows errors to POSIX-style ones to have
uniformity over the values returned. 
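
A possible shape for that translation on Windows (a sketch; the function
name is illustrative, only the Win32 constants and errno values are real):

#include <windows.h>
#include <errno.h>

static int
thread_translate_win32_error(DWORD error)
{
	switch (error) {
	case ERROR_SUCCESS:
		return 0;
	case ERROR_NOT_ENOUGH_MEMORY:
		return ENOMEM;
	case ERROR_ACCESS_DENIED:
		return EACCES;
	default:
		return EINVAL;
	}
}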

**Future work**
The long term plan is for EAL to provide full threading support:
* Add support for conditional variables
* Additional functionality offered by pthread_*
  (such as pthread_setname_np, etc.)

v18:
 - Squash unit tests into corresponding patches.
 - Prevent priority from being set to realtime on non-Windows systems.
 - Use already existing affinity function in rte_thread_create()

v17:
 - Move unrelated changes to the correct patch.
 - Rename RTE_STATIC_MUTEX to avoid confusion, since
   the mutex is still dynamically initialized behind the scenes.
 - Break down the unit tests into smaller patches and reorder them.
 - Remove duplicated code in header
 - Improve commit messages and cover letter.

v16:
- Fix warning on freebsd by adding cast
- Change affinity unit test to consider cases when the requested CPUs
  are not available on the system.
- Fix priority unit test to avoid termination of thread before the
  priority is checked.

v15:
- Add try_lock mutex functionality. If the mutex is already owned by a
  different thread, the function returns immediately. Otherwise, 
  the mutex will be acquired.
- Add function for getting the priority of a thread.
  An auxiliary function that translates the OS priority to the
  EAL accepted ones is added.
- Fix unit tests logging, add descriptive asserts that mark test failures.
  Verify mutex locking, verify barrier return values. Add test for
  statically initialized mutexes.
- Fix Alpine build by removing the use of pthread_attr_set_affinity() and
  using pthread_set_affinity() after the thread is created.

v14:
- Remove patch "eal: add EAL argument for setting thread priority"
  This will be added later when enabling the new threading API.
- Remove priority enum value "_UNDEFINED". NORMAL is used
  as the default.
- Fix issue with thread return value.

v13:
 - Fix syntax error in unit tests

v12:
 - Fix freebsd warning about initializer in unit tests

v11:
 - Add unit tests for thread API
 - Rebase

v10:
 - Remove patch no. 10. It will be broken down in subpatches 
   and sent as a different patchset that depends on this one.
   This is done due to the ABI breaks that would be caused by patch 10.
 - Replace unix/rte_thread.c with common/rte_thread.c
 - Remove initializations that may prevent compiler from issuing useful
   warnings.
 - Remove rte_thread_types.h and rte_windows_thread_types.h
 - Remove unneeded priority macros (EAL_THREAD_PRIORITY*)
 - Remove functions that retrieves thread handle from process handle
 - Remove rte_thread_cancel() until same behavior is obtained on
   all platforms.
 - Fix rte_thread_detach() function description,
   return value and remove empty line.
 - Reimplement mutex functions. Add compatible representation for mutex
   identifier. Add macro to replace static mutex initialization instances.
 - Fix commit messages (lines too long, remove unicode symbols)

v9:
- Sign patches

v8:
- Rebase
- Add rte_thread_detach() API
- Set default priority, when user did not specify a value

v7:
Based on DmitryK's review:
- Change thread id representation
- Change mutex id representation
- Implement static mutex initializer for Windows
- Change barrier identifier representation
- Improve commit messages
- Add missing doxygen comments
- Split error translation function
- Improve name for affinity function
- Remove cpuset_size parameter
- Fix eal_create_cpu_map function
- Map EAL priority values to OS specific values
- Add thread wrapper for start routine
- Do not export rte_thread_cancel() on Windows
- Cleanup, fix comments, fix typos.

v6:
- improve error-translation function
- call the error translation function in rte_thread_value_get()

v5:
- update cover letter with more details on the priority argument

v4:
- fix function description
- rebase

v3:
- rebase

v2:
- revert changes that break ABI 
- break up changes into smaller patches
- fix coding style issues
- fix issues with errors
- fix parameter type in examples/kni.c

Narcisa Vasile (8):
  eal: add basic threading functions
  eal: add thread attributes
  eal/windows: translate Windows errors to errno-style errors
  eal: implement functions for thread affinity management
  eal: implement thread priority management functions
  eal: add thread lifetime management
  eal: implement functions for thread barrier management
  eal: implement functions for mutex management

 app/test/meson.build            |   2 +
 app/test/test_threads.c         | 372 ++++++++++++++++++
 lib/eal/common/meson.build      |   1 +
 lib/eal/common/rte_thread.c     | 511 +++++++++++++++++++++++++
 lib/eal/include/rte_thread.h    | 412 +++++++++++++++++++-
 lib/eal/unix/meson.build        |   1 -
 lib/eal/unix/rte_thread.c       |  92 -----
 lib/eal/version.map             |  22 ++
 lib/eal/windows/eal_lcore.c     | 176 ++++++---
 lib/eal/windows/eal_windows.h   |  10 +
 lib/eal/windows/include/sched.h |   2 +-
 lib/eal/windows/rte_thread.c    | 656 ++++++++++++++++++++++++++++++--
 12 files changed, 2084 insertions(+), 173 deletions(-)
 create mode 100644 app/test/test_threads.c
 create mode 100644 lib/eal/common/rte_thread.c
 delete mode 100644 lib/eal/unix/rte_thread.c

-- 
2.31.0.vfs.0.1


^ permalink raw reply	[relevance 3%]

* [PATCH 1/5] ci: test build with minimum configuration
  @ 2021-11-10 16:48  4% ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-11-10 16:48 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, thomas, bluca, tredaelli, i.maximets,
	james.r.harris, mohammed, Aaron Conole, Michael Santana

Disabling optional libraries was not tested.
Add a new target in test-meson-builds.sh and GHA.

The Bluefield target is removed from test-meson-builds.sh to save space
and compilation time in exchange for the new target.
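
To exercise this configuration locally (a sketch; the option value mirrors
what the CI passes):

    meson setup build-min -Ddisable_libs=*
    ninja -C build-min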

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 .ci/linux-build.sh            | 3 +++
 .github/workflows/build.yml   | 5 +++++
 devtools/test-meson-builds.sh | 4 +++-
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index ef0bd099be..e7ed648099 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -87,6 +87,9 @@ OPTS="$OPTS -Dplatform=generic"
 OPTS="$OPTS --default-library=$DEF_LIB"
 OPTS="$OPTS --buildtype=debugoptimized"
 OPTS="$OPTS -Dcheck_includes=true"
+if [ "$NO_OPTIONAL_LIBS" = "true" ]; then
+    OPTS="$OPTS -Ddisable_libs=*"
+fi
 meson build --werror $OPTS
 ninja -C build
 
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 4151cafee7..346cc75c20 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -21,6 +21,7 @@ jobs:
       CC: ccache ${{ matrix.config.compiler }}
       DEF_LIB: ${{ matrix.config.library }}
       LIBABIGAIL_VERSION: libabigail-1.8
+      NO_OPTIONAL_LIBS: ${{ matrix.config.no_optional_libs != '' }}
       PPC64LE: ${{ matrix.config.cross == 'ppc64le' }}
       REF_GIT_TAG: none
       RUN_TESTS: ${{ contains(matrix.config.checks, 'tests') }}
@@ -32,6 +33,10 @@ jobs:
           - os: ubuntu-18.04
             compiler: gcc
             library: static
+          - os: ubuntu-18.04
+            compiler: gcc
+            library: shared
+            no_optional_libs: no-optional-libs
           - os: ubuntu-18.04
             compiler: gcc
             library: shared
diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 9ec8e2bc7e..36ecf63ec6 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -220,6 +220,8 @@ for c in gcc clang ; do
 	done
 done
 
+build build-x86-no-optional-libs cc skipABI $use_shared -Ddisable_libs=*
+
 # test compilation with minimal x86 instruction set
 # Set the install path for libraries to "lib" explicitly to prevent problems
 # with pkg-config prefixes if installed in "lib/x86_64-linux-gnu" later.
@@ -258,7 +260,7 @@ export CC="clang"
 build build-arm64-host-clang $f ABI $use_shared
 unset CC
 # some gcc/arm configurations
-for f in $srcdir/config/arm/arm64_[bdo]*gcc ; do
+for f in $srcdir/config/arm/arm64_[do]*gcc ; do
 	export CC="$CCACHE gcc"
 	targetdir=build-$(basename $f | tr '_' '-' | cut -d'-' -f-2)
 	build $targetdir $f skipABI $use_shared
-- 
2.23.0


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v16 8/9] eal: implement functions for thread barrier management
  2021-11-09  2:07  3%       ` Narcisa Ana Maria Vasile
@ 2021-11-10  3:13  0%         ` Narcisa Ana Maria Vasile
  0 siblings, 0 replies; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-11-10  3:13 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, dmitry.kozliuk, khot, dmitrym, roretzla, talshn, ocardona,
	bruce.richardson, david.marchand, pallavi.kadam

On Mon, Nov 08, 2021 at 06:07:34PM -0800, Narcisa Ana Maria Vasile wrote:
> On Tue, Oct 12, 2021 at 06:32:09PM +0200, Thomas Monjalon wrote:
> > 09/10/2021 09:41, Narcisa Ana Maria Vasile:
> > > From: Narcisa Vasile <navasile@microsoft.com>
> > > 
> > > Add functions for barrier init, destroy, wait.
> > > 
> > > A portable type is used to represent a barrier identifier.
> > > The rte_thread_barrier_wait() function returns the same value
> > > on all platforms.
> > > 
> > > Signed-off-by: Narcisa Vasile <navasile@microsoft.com>
> > > ---
> > >  lib/eal/common/rte_thread.c  | 61 ++++++++++++++++++++++++++++++++++++
> > >  lib/eal/include/rte_thread.h | 58 ++++++++++++++++++++++++++++++++++
> > >  lib/eal/version.map          |  3 ++
> > >  lib/eal/windows/rte_thread.c | 56 +++++++++++++++++++++++++++++++++
> > >  4 files changed, 178 insertions(+)
> > 
> > It doesn't need to be part of the API.
> > The pthread barrier is used only as part of the control thread implementation.
> > The need disappear if you implement control thread on Windows.
> > 
> Actually I think I have the implementation already. I worked on this some time ago;
> I have this patch:
> [v4,2/6] eal: add function for control thread creation
> 
> The issue is that it will break ABI, so I cannot merge it as part of this patchset.
> I'll see if I can remove this barrier patch though.

  I couldn't find a good way to test mutexes without barriers, so I kept this for now.
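
For the record, a hedged sketch of why the barrier helps there, using this
series' naming (details may differ from the final patches):

static rte_thread_barrier sync_barrier;
static rte_thread_mutex lock;
static int shared;

static uint32_t
locker(void *arg)
{
	(void)arg;
	/* line both threads up at the contention point first */
	rte_thread_barrier_wait(&sync_barrier);
	rte_thread_mutex_lock(&lock);
	shared++;
	rte_thread_mutex_unlock(&lock);
	return 0;
}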

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v17 00/13]  eal: Add EAL API for threading
    @ 2021-11-10  3:01  3%   ` Narcisa Ana Maria Vasile
  2021-11-11  1:33  3%     ` [PATCH v18 0/8] " Narcisa Ana Maria Vasile
  1 sibling, 1 reply; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-11-10  3:01 UTC (permalink / raw)
  To: dev, thomas, dmitry.kozliuk, khot, navasile, dmitrym, roretzla,
	talshn, ocardona
  Cc: bruce.richardson, david.marchand, pallavi.kadam

From: Narcisa Vasile <navasile@microsoft.com>

EAL thread API

**Problem Statement**
DPDK currently uses the pthread interface to create and manage threads.
Windows does not support the POSIX thread programming model,
so it currently
relies on a header file that hides the Windows calls under
pthread-matched interfaces. Given that EAL should isolate the environment
specifics from the applications and libraries and mediate
all the communication with the operating systems, a new EAL interface
is needed for thread management.

**Goals**
* Introduce a generic EAL API for threading support that will remove
  the current Windows pthread.h shim.
* Replace references to pthread_* across the DPDK codebase with the new
  RTE_THREAD_* API.
* Allow users to choose between using the RTE_THREAD_* API or a
  3rd party thread library through a configuration option.

**Design plan**
New API main files:
* rte_thread.h (librte_eal/include)
* rte_thread.c (librte_eal/windows)
* rte_thread.c (librte_eal/common)

**A schematic example of the design**
--------------------------------------------------
lib/librte_eal/include/rte_thread.h
int rte_thread_create();

lib/librte_eal/common/rte_thread.c
int rte_thread_create() 
{
	return pthread_create();
}

lib/librte_eal/windows/rte_thread.c
int rte_thread_create() 
{
	return CreateThread();
}
-----------------------------------------------------

**Thread attributes**

When or after a thread is created, specific characteristics of the thread
can be adjusted. Currently in DPDK most threads operate at the OS-default
priority level but there are cases when increasing the priority is useful.
For example, high-performance applications require elevated priority to
avoid being preempted by other threads on the system.
The following structure that represents thread attributes has been
defined:

typedef struct
{
	enum rte_thread_priority priority;
	rte_cpuset_t cpuset;
} rte_thread_attr_t;

The *rte_thread_create()* function can optionally receive
an rte_thread_attr_t
object that will cause the thread to be created with the
affinity and priority
described by the attributes object. If no rte_thread_attr_t is passed
(parameter is NULL), the default affinity and priority are used.
An rte_thread_attr_t object can also be set to the default values
by calling *rte_thread_attr_init()*.

*Priority* is represented through an enum that currently advertises
two values for priority:
	- RTE_THREAD_PRIORITY_NORMAL
	- RTE_THREAD_PRIORITY_REALTIME_CRITICAL
The enum can be extended to allow for multiple priority levels.
rte_thread_set_priority      - sets the priority of a thread
rte_thread_get_priority      - retrieves the priority of a thread
                               from the OS
rte_thread_attr_set_priority - updates an rte_thread_attr_t object
                               with a new value for priority

*Affinity* is described by the already known “rte_cpuset_t” type.
rte_thread_attr_set/get_affinity - sets/gets the affinity field in a
                                   rte_thread_attr_t object
rte_thread_set/get_affinity      – sets/gets the affinity of a thread

**Errors**
As different platforms have different error codes, the approach here
is to translate Windows errors to POSIX-style ones to have
uniformity over the values returned. 

**Future work**
The long term plan is for EAL to provide full threading support:
* Add support for conditional variables
* Additional functionality offered by pthread_*
  (such as pthread_setname_np, etc.)

v17:
 - Move unrelated changes to the correct patch.
 - Rename RTE_STATIC_MUTEX to avoid confusion, since
   the mutex is still dynamically initialized behind the scenes.
 - Break down the unit tests into smaller patches and reorder them.
 - Remove duplicated code in header.
 - Improve commit messages and cover letter.

v16:
- Fix warning on freebsd by adding cast
- Change affinity unit test to consider cases when the requested CPUs
  are not available on the system.
- Fix priority unit test to avoid termination of thread before the
  priority is checked.

v15:
- Add try_lock mutex functionality. If the mutex is already owned by a
  different thread, the function returns immediately. Otherwise, 
  the mutex will be acquired.
- Add function for getting the priority of a thread.
  An auxiliary function that translates the OS priority to the
  EAL accepted ones is added.
- Fix unit tests logging, add descriptive asserts that mark test failures.
  Verify mutex locking, verify barrier return values. Add test for
  statically initialized mutexes.
- Fix Alpine build by removing the use of pthread_attr_set_affinity() and
  using pthread_set_affinity() after the thread is created.

v14:
- Remove patch "eal: add EAL argument for setting thread priority"
  This will be added later when enabling the new threading API.
- Remove priority enum value "_UNDEFINED". NORMAL is used
  as the default.
- Fix issue with thread return value.

v13:
 - Fix syntax error in unit tests

v12:
 - Fix freebsd warning about initializer in unit tests

v11:
 - Add unit tests for thread API
 - Rebase

v10:
 - Remove patch no. 10. It will be broken down in subpatches 
   and sent as a different patchset that depends on this one.
   This is done due to the ABI breaks that would be caused by patch 10.
 - Replace unix/rte_thread.c with common/rte_thread.c
 - Remove initializations that may prevent compiler from issuing useful
   warnings.
 - Remove rte_thread_types.h and rte_windows_thread_types.h
 - Remove unneeded priority macros (EAL_THREAD_PRIORITY*)
 - Remove functions that retrieves thread handle from process handle
 - Remove rte_thread_cancel() until same behavior is obtained on
   all platforms.
 - Fix rte_thread_detach() function description,
   return value and remove empty line.
 - Reimplement mutex functions. Add compatible representation for mutex
   identifier. Add macro to replace static mutex initialization instances.
 - Fix commit messages (lines too long, remove unicode symbols)

v9:
- Sign patches

v8:
- Rebase
- Add rte_thread_detach() API
- Set default priority, when user did not specify a value

v7:
Based on DmitryK's review:
- Change thread id representation
- Change mutex id representation
- Implement static mutex initializer for Windows
- Change barrier identifier representation
- Improve commit messages
- Add missing doxygen comments
- Split error translation function
- Improve name for affinity function
- Remove cpuset_size parameter
- Fix eal_create_cpu_map function
- Map EAL priority values to OS specific values
- Add thread wrapper for start routine
- Do not export rte_thread_cancel() on Windows
- Cleanup, fix comments, fix typos.

v6:
- improve error-translation function
- call the error translation function in rte_thread_value_get()

v5:
- update cover letter with more details on the priority argument

v4:
- fix function description
- rebase

v3:
- rebase

v2:
- revert changes that break ABI 
- break up changes into smaller patches
- fix coding style issues
- fix issues with errors
- fix parameter type in examples/kni.c

Narcisa Vasile (13):
  eal: add basic threading functions
  eal: add thread attributes
  eal/windows: translate Windows errors to errno-style errors
  eal: implement functions for thread affinity management
  eal: implement thread priority management functions
  eal: add thread lifetime management
  app/test: add unit tests for rte_thread_self
  app/test: add unit tests for thread attributes
  app/test: add unit tests for thread lifetime management
  eal: implement functions for thread barrier management
  app/test: add unit tests for barrier
  eal: implement functions for mutex management
  app/test: add unit tests for mutex

 app/test/meson.build            |   2 +
 app/test/test_threads.c         | 372 ++++++++++++++++++
 lib/eal/common/meson.build      |   1 +
 lib/eal/common/rte_thread.c     | 497 ++++++++++++++++++++++++
 lib/eal/include/rte_thread.h    | 412 +++++++++++++++++++-
 lib/eal/unix/meson.build        |   1 -
 lib/eal/unix/rte_thread.c       |  92 -----
 lib/eal/version.map             |  22 ++
 lib/eal/windows/eal_lcore.c     | 176 ++++++---
 lib/eal/windows/eal_windows.h   |  10 +
 lib/eal/windows/include/sched.h |   2 +-
 lib/eal/windows/rte_thread.c    | 656 ++++++++++++++++++++++++++++++--
 12 files changed, 2070 insertions(+), 173 deletions(-)
 create mode 100644 app/test/test_threads.c
 create mode 100644 lib/eal/common/rte_thread.c
 delete mode 100644 lib/eal/unix/rte_thread.c

-- 
2.31.0.vfs.0.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v5] ip_frag: add namespace
  2021-11-08 13:55  3%   ` [dpdk-dev] [PATCH v4 2/2] ip_frag: add namespace Konstantin Ananyev
@ 2021-11-09 12:32  3%     ` Konstantin Ananyev
  0 siblings, 0 replies; 200+ results
From: Konstantin Ananyev @ 2021-11-09 12:32 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

Update public macros to have RTE_IP_FRAG_ prefix.
Update DPDK components to use new names.
Keep obsolete macros for compatibility reasons.
Rename the experimental function ``rte_frag_table_del_expired_entries`` to
``rte_ip_frag_table_del_expired_entries`` to comply with the public
API naming convention.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
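Notes (not part of the commit): migration for applications is a pure
rename; the function signature is unchanged, as the diff below shows.
A hedged usage sketch, with table setup omitted:

#include <rte_cycles.h>
#include <rte_ip_frag.h>

static void
expire_stale_fragments(struct rte_ip_frag_tbl *tbl)
{
	struct rte_ip_frag_death_row dr = { .cnt = 0 };

	/* was: rte_frag_table_del_expired_entries(tbl, &dr, rte_rdtsc()) */
	rte_ip_frag_table_del_expired_entries(tbl, &dr, rte_rdtsc());

	/* free the mbufs collected on the death row (2nd arg: prefetch) */
	rte_ip_frag_free_death_row(&dr, 0);
}
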
 doc/guides/rel_notes/release_21_11.rst |  6 ++++++
 examples/ip_reassembly/main.c          |  2 +-
 examples/ipsec-secgw/ipsec-secgw.c     |  2 +-
 lib/ip_frag/rte_ip_frag.h              | 29 ++++++++++++++++----------
 lib/ip_frag/rte_ip_frag_common.c       |  5 +++--
 lib/ip_frag/rte_ipv6_fragmentation.c   | 12 +++++------
 lib/ip_frag/rte_ipv6_reassembly.c      |  6 +++---
 lib/ip_frag/version.map                |  2 +-
 lib/port/rte_port_ras.c                |  2 +-
 9 files changed, 40 insertions(+), 26 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 01923e2deb..226dbb5bf0 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -565,6 +565,12 @@ API Changes
 * fib: Added the ``rib_ext_sz`` field to ``rte_fib_conf`` and ``rte_fib6_conf``
   so that user can specify the size of the RIB extension inside the FIB.
 
+* ip_frag: All macros updated to have ``RTE_IP_FRAG_`` prefix. Obsolete
+  macros are kept for compatibility. DPDK components updated to use new names.
+  Experimental function ``rte_frag_table_del_expired_entries`` was renamed to
+  ``rte_ip_frag_table_del_expired_entries`` to comply with other public
+  API naming convention.
+
 
 ABI Changes
 -----------
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 547b47276e..fb3cac3bd0 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -371,7 +371,7 @@ reassemble(struct rte_mbuf *m, uint16_t portid, uint32_t queue,
 		eth_hdr->ether_type = rte_be_to_cpu_16(RTE_ETHER_TYPE_IPV4);
 	} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
 		/* if packet is IPv6 */
-		struct ipv6_extension_fragment *frag_hdr;
+		struct rte_ipv6_fragment_ext *frag_hdr;
 		struct rte_ipv6_hdr *ip_hdr;
 
 		ip_hdr = (struct rte_ipv6_hdr *)(eth_hdr + 1);
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 0a1c5bcaaa..86bb7e9064 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -2647,7 +2647,7 @@ rx_callback(__rte_unused uint16_t port, __rte_unused uint16_t queue,
 				rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) {
 
 			struct rte_ipv6_hdr *iph;
-			struct ipv6_extension_fragment *fh;
+			struct rte_ipv6_fragment_ext *fh;
 
 			iph = (struct rte_ipv6_hdr *)(eth + 1);
 			fh = rte_ipv6_frag_get_ipv6_fragment_header(iph);
diff --git a/lib/ip_frag/rte_ip_frag.h b/lib/ip_frag/rte_ip_frag.h
index b469bb5f4e..9493021428 100644
--- a/lib/ip_frag/rte_ip_frag.h
+++ b/lib/ip_frag/rte_ip_frag.h
@@ -27,22 +27,19 @@ extern "C" {
 
 struct rte_mbuf;
 
-#define IP_FRAG_DEATH_ROW_LEN 32 /**< death row size (in packets) */
+#define RTE_IP_FRAG_DEATH_ROW_LEN 32 /**< death row size (in packets) */
 
 /* death row size in mbufs */
-#define IP_FRAG_DEATH_ROW_MBUF_LEN \
-	(IP_FRAG_DEATH_ROW_LEN * (RTE_LIBRTE_IP_FRAG_MAX_FRAG + 1))
+#define RTE_IP_FRAG_DEATH_ROW_MBUF_LEN \
+	(RTE_IP_FRAG_DEATH_ROW_LEN * (RTE_LIBRTE_IP_FRAG_MAX_FRAG + 1))
 
 /** mbuf death row (packets to be freed) */
 struct rte_ip_frag_death_row {
 	uint32_t cnt;          /**< number of mbufs currently on death row */
-	struct rte_mbuf *row[IP_FRAG_DEATH_ROW_MBUF_LEN];
+	struct rte_mbuf *row[RTE_IP_FRAG_DEATH_ROW_MBUF_LEN];
 	/**< mbufs to be freed */
 };
 
-/* struct ipv6_extension_fragment moved to librte_net/rte_ip.h and renamed. */
-#define ipv6_extension_fragment	rte_ipv6_fragment_ext
-
 /**
  * Create a new IP fragmentation table.
  *
@@ -128,7 +125,7 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,
 struct rte_mbuf *rte_ipv6_frag_reassemble_packet(struct rte_ip_frag_tbl *tbl,
 		struct rte_ip_frag_death_row *dr,
 		struct rte_mbuf *mb, uint64_t tms, struct rte_ipv6_hdr *ip_hdr,
-		struct ipv6_extension_fragment *frag_hdr);
+		struct rte_ipv6_fragment_ext *frag_hdr);
 
 /**
  * Return a pointer to the packet's fragment header, if found.
@@ -141,11 +138,11 @@ struct rte_mbuf *rte_ipv6_frag_reassemble_packet(struct rte_ip_frag_tbl *tbl,
  *   Pointer to the IPv6 fragment extension header, or NULL if it's not
  *   present.
  */
-static inline struct ipv6_extension_fragment *
+static inline struct rte_ipv6_fragment_ext *
 rte_ipv6_frag_get_ipv6_fragment_header(struct rte_ipv6_hdr *hdr)
 {
 	if (hdr->proto == IPPROTO_FRAGMENT) {
-		return (struct ipv6_extension_fragment *) ++hdr;
+		return (struct rte_ipv6_fragment_ext *) ++hdr;
 	}
 	else
 		return NULL;
@@ -258,9 +255,19 @@ rte_ip_frag_table_statistics_dump(FILE * f, const struct rte_ip_frag_tbl *tbl);
  */
 __rte_experimental
 void
-rte_frag_table_del_expired_entries(struct rte_ip_frag_tbl *tbl,
+rte_ip_frag_table_del_expired_entries(struct rte_ip_frag_tbl *tbl,
 	struct rte_ip_frag_death_row *dr, uint64_t tms);
 
+/**@{*/
+/**
+ * Obsolete macros, kept here for compatibility reasons.
+ * Will be deprecated/removed in future DPDK releases.
+ */
+#define IP_FRAG_DEATH_ROW_LEN		RTE_IP_FRAG_DEATH_ROW_LEN
+#define IP_FRAG_DEATH_ROW_MBUF_LEN	RTE_IP_FRAG_DEATH_ROW_MBUF_LEN
+#define ipv6_extension_fragment		rte_ipv6_fragment_ext
+/**@}*/
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ip_frag/rte_ip_frag_common.c b/lib/ip_frag/rte_ip_frag_common.c
index 6b29e9d7ed..2c781a6d33 100644
--- a/lib/ip_frag/rte_ip_frag_common.c
+++ b/lib/ip_frag/rte_ip_frag_common.c
@@ -124,7 +124,7 @@ rte_ip_frag_table_statistics_dump(FILE *f, const struct rte_ip_frag_tbl *tbl)
 
 /* Delete expired fragments */
 void
-rte_frag_table_del_expired_entries(struct rte_ip_frag_tbl *tbl,
+rte_ip_frag_table_del_expired_entries(struct rte_ip_frag_tbl *tbl,
 	struct rte_ip_frag_death_row *dr, uint64_t tms)
 {
 	uint64_t max_cycles;
@@ -135,7 +135,8 @@ rte_frag_table_del_expired_entries(struct rte_ip_frag_tbl *tbl,
 	TAILQ_FOREACH(fp, &tbl->lru, lru)
 		if (max_cycles + fp->start < tms) {
 			/* check that death row has enough space */
-			if (IP_FRAG_DEATH_ROW_MBUF_LEN - dr->cnt >= fp->last_idx)
+			if (RTE_IP_FRAG_DEATH_ROW_MBUF_LEN - dr->cnt >=
+					fp->last_idx)
 				ip_frag_tbl_del(tbl, dr, fp);
 			else
 				return;
diff --git a/lib/ip_frag/rte_ipv6_fragmentation.c b/lib/ip_frag/rte_ipv6_fragmentation.c
index 5d67336f2d..88f29c158c 100644
--- a/lib/ip_frag/rte_ipv6_fragmentation.c
+++ b/lib/ip_frag/rte_ipv6_fragmentation.c
@@ -22,13 +22,13 @@ __fill_ipv6hdr_frag(struct rte_ipv6_hdr *dst,
 		const struct rte_ipv6_hdr *src, uint16_t len, uint16_t fofs,
 		uint32_t mf)
 {
-	struct ipv6_extension_fragment *fh;
+	struct rte_ipv6_fragment_ext *fh;
 
 	rte_memcpy(dst, src, sizeof(*dst));
 	dst->payload_len = rte_cpu_to_be_16(len);
 	dst->proto = IPPROTO_FRAGMENT;
 
-	fh = (struct ipv6_extension_fragment *) ++dst;
+	fh = (struct rte_ipv6_fragment_ext *) ++dst;
 	fh->next_header = src->proto;
 	fh->reserved = 0;
 	fh->frag_data = rte_cpu_to_be_16(RTE_IPV6_SET_FRAG_DATA(fofs, mf));
@@ -94,7 +94,7 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,
 	 */
 
 	frag_size = mtu_size - sizeof(struct rte_ipv6_hdr) -
-		sizeof(struct ipv6_extension_fragment);
+		sizeof(struct rte_ipv6_fragment_ext);
 	frag_size = RTE_ALIGN_FLOOR(frag_size, RTE_IPV6_EHDR_FO_ALIGN);
 
 	/* Check that pkts_out is big enough to hold all fragments */
@@ -124,9 +124,9 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,
 
 		/* Reserve space for the IP header that will be built later */
 		out_pkt->data_len = sizeof(struct rte_ipv6_hdr) +
-			sizeof(struct ipv6_extension_fragment);
+			sizeof(struct rte_ipv6_fragment_ext);
 		out_pkt->pkt_len  = sizeof(struct rte_ipv6_hdr) +
-			sizeof(struct ipv6_extension_fragment);
+			sizeof(struct rte_ipv6_fragment_ext);
 		frag_bytes_remaining = frag_size;
 
 		out_seg_prev = out_pkt;
@@ -184,7 +184,7 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,
 
 		fragment_offset = (uint16_t)(fragment_offset +
 		    out_pkt->pkt_len - sizeof(struct rte_ipv6_hdr)
-			- sizeof(struct ipv6_extension_fragment));
+			- sizeof(struct rte_ipv6_fragment_ext));
 
 		/* Write the fragment to the output list */
 		pkts_out[out_pkt_pos] = out_pkt;
diff --git a/lib/ip_frag/rte_ipv6_reassembly.c b/lib/ip_frag/rte_ipv6_reassembly.c
index 6bc0bf792a..d4019e87e6 100644
--- a/lib/ip_frag/rte_ipv6_reassembly.c
+++ b/lib/ip_frag/rte_ipv6_reassembly.c
@@ -33,7 +33,7 @@ struct rte_mbuf *
 ipv6_frag_reassemble(struct ip_frag_pkt *fp)
 {
 	struct rte_ipv6_hdr *ip_hdr;
-	struct ipv6_extension_fragment *frag_hdr;
+	struct rte_ipv6_fragment_ext *frag_hdr;
 	struct rte_mbuf *m, *prev;
 	uint32_t i, n, ofs, first_len;
 	uint32_t last_len, move_len, payload_len;
@@ -102,7 +102,7 @@ ipv6_frag_reassemble(struct ip_frag_pkt *fp)
 	 * the main IPv6 header instead.
 	 */
 	move_len = m->l2_len + m->l3_len - sizeof(*frag_hdr);
-	frag_hdr = (struct ipv6_extension_fragment *) (ip_hdr + 1);
+	frag_hdr = (struct rte_ipv6_fragment_ext *) (ip_hdr + 1);
 	ip_hdr->proto = frag_hdr->next_header;
 
 	ip_frag_memmove(rte_pktmbuf_mtod_offset(m, char *, sizeof(*frag_hdr)),
@@ -136,7 +136,7 @@ ipv6_frag_reassemble(struct ip_frag_pkt *fp)
 struct rte_mbuf *
 rte_ipv6_frag_reassemble_packet(struct rte_ip_frag_tbl *tbl,
 	struct rte_ip_frag_death_row *dr, struct rte_mbuf *mb, uint64_t tms,
-	struct rte_ipv6_hdr *ip_hdr, struct ipv6_extension_fragment *frag_hdr)
+	struct rte_ipv6_hdr *ip_hdr, struct rte_ipv6_fragment_ext *frag_hdr)
 {
 	struct ip_frag_pkt *fp;
 	struct ip_frag_key key;
diff --git a/lib/ip_frag/version.map b/lib/ip_frag/version.map
index 33f231fb31..e537224293 100644
--- a/lib/ip_frag/version.map
+++ b/lib/ip_frag/version.map
@@ -16,5 +16,5 @@ DPDK_22 {
 EXPERIMENTAL {
 	global:
 
-	rte_frag_table_del_expired_entries;
+	rte_ip_frag_table_del_expired_entries;
 };
diff --git a/lib/port/rte_port_ras.c b/lib/port/rte_port_ras.c
index 403028f8d6..8508814bb2 100644
--- a/lib/port/rte_port_ras.c
+++ b/lib/port/rte_port_ras.c
@@ -186,7 +186,7 @@ process_ipv6(struct rte_port_ring_writer_ras *p, struct rte_mbuf *pkt)
 	struct rte_ipv6_hdr *pkt_hdr =
 		rte_pktmbuf_mtod(pkt, struct rte_ipv6_hdr *);
 
-	struct ipv6_extension_fragment *frag_hdr;
+	struct rte_ipv6_fragment_ext *frag_hdr;
 	uint16_t frag_data = 0;
 	frag_hdr = rte_ipv6_frag_get_ipv6_fragment_header(pkt_hdr);
 	if (frag_hdr != NULL)
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v16 8/9] eal: implement functions for thread barrier management
  @ 2021-11-09  2:07  3%       ` Narcisa Ana Maria Vasile
  2021-11-10  3:13  0%         ` Narcisa Ana Maria Vasile
  0 siblings, 1 reply; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-11-09  2:07 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, dmitry.kozliuk, khot, dmitrym, roretzla, talshn, ocardona,
	bruce.richardson, david.marchand, pallavi.kadam

On Tue, Oct 12, 2021 at 06:32:09PM +0200, Thomas Monjalon wrote:
> 09/10/2021 09:41, Narcisa Ana Maria Vasile:
> > From: Narcisa Vasile <navasile@microsoft.com>
> > 
> > Add functions for barrier init, destroy, wait.
> > 
> > A portable type is used to represent a barrier identifier.
> > The rte_thread_barrier_wait() function returns the same value
> > on all platforms.
> > 
> > Signed-off-by: Narcisa Vasile <navasile@microsoft.com>
> > ---
> >  lib/eal/common/rte_thread.c  | 61 ++++++++++++++++++++++++++++++++++++
> >  lib/eal/include/rte_thread.h | 58 ++++++++++++++++++++++++++++++++++
> >  lib/eal/version.map          |  3 ++
> >  lib/eal/windows/rte_thread.c | 56 +++++++++++++++++++++++++++++++++
> >  4 files changed, 178 insertions(+)
> 
> It doesn't need to be part of the API.
> The pthread barrier is used only as part of the control thread implementation.
> The need disappears if you implement control threads on Windows.
> 
Actually, I think I have the implementation already. I worked on this some
time ago; I have this patch:
[v4,2/6] eal: add function for control thread creation

The issue is that it would break the ABI, so I cannot merge it as part of this patchset.
I'll see if I can remove this barrier patch though.
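
For reference, a minimal sketch of how the barrier API from this series is
intended to be used; signatures are assumed from the description quoted
above, and error handling is trimmed:

#include <rte_thread.h>   /* as extended by this series */

static rte_thread_barrier barrier;

static uint32_t worker(void *arg)
{
	(void)arg;
	/* ... per-thread setup ... */
	rte_thread_barrier_wait(&barrier); /* all threads rendezvous here */
	/* ... work that needs every thread to have finished its setup ... */
	return 0;
}

static int start_workers(int nb_threads)
{
	int ret = rte_thread_barrier_init(&barrier, nb_threads);
	if (ret != 0)
		return ret;
	/* launch nb_threads instances of worker() ... */
	return 0;
}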

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] ip_frag: increase default value for config parameter
  2021-11-02 19:03 14% [dpdk-dev] [PATCH] ip_frag: increase default value for config parameter Konstantin Ananyev
@ 2021-11-08 22:08  0% ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-11-08 22:08 UTC (permalink / raw)
  To: Konstantin Ananyev; +Cc: dev, techboard, bruce.richardson, koncept1

02/11/2021 20:03, Konstantin Ananyev:
> Increase default value for config parameter RTE_LIBRTE_IP_FRAG_MAX_FRAG
> from 4 to 8. This parameter controls maximum number of fragments per
> packet in ip reassembly table. Increasing this value from 4 to 8 will
> allow users to cover common case with jumbo packet size of 9KB and
> fragments with default frame size (1500B).
> As RTE_LIBRTE_IP_FRAG_MAX_FRAG is used in definition of public
> structure (struct rte_ip_frag_death_row), this is an ABI change.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
> --- a/config/rte_config.h
> +++ b/config/rte_config.h
> -#define RTE_LIBRTE_IP_FRAG_MAX_FRAG 4
> +#define RTE_LIBRTE_IP_FRAG_MAX_FRAG 8

This unannounced change was approved by the techboard:
http://inbox.dpdk.org/dev/0fccb0b7-b2bb-7391-9c94-e87fbf64f007@redhat.com/

Applied with simplified release notes, thanks.



^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v4 2/2] ip_frag: add namespace
  @ 2021-11-08 13:55  3%   ` Konstantin Ananyev
  2021-11-09 12:32  3%     ` [dpdk-dev] [PATCH v5] " Konstantin Ananyev
  0 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2021-11-08 13:55 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

Update public macros to have RTE_IP_FRAG_ prefix.
Remove obsolete macro.
Update DPDK components to use new names.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst |  3 +++
 examples/ip_reassembly/main.c          |  2 +-
 examples/ipsec-secgw/ipsec-secgw.c     |  2 +-
 lib/ip_frag/rte_ip_frag.h              | 17 +++++++----------
 lib/ip_frag/rte_ip_frag_common.c       |  3 ++-
 lib/ip_frag/rte_ipv6_fragmentation.c   | 12 ++++++------
 lib/ip_frag/rte_ipv6_reassembly.c      |  6 +++---
 lib/port/rte_port_ras.c                |  2 +-
 8 files changed, 24 insertions(+), 23 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 8da19c613a..ce47250fbd 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -559,6 +559,9 @@ API Changes
 * fib: Added the ``rib_ext_sz`` field to ``rte_fib_conf`` and ``rte_fib6_conf``
   so that user can specify the size of the RIB extension inside the FIB.
 
+* ip_frag: All macros updated to have ``RTE_IP_FRAG_`` prefix. Obsolete
+  macros are removed. DPDK components updated to use new names.
+
 
 ABI Changes
 -----------
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 547b47276e..fb3cac3bd0 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -371,7 +371,7 @@ reassemble(struct rte_mbuf *m, uint16_t portid, uint32_t queue,
 		eth_hdr->ether_type = rte_be_to_cpu_16(RTE_ETHER_TYPE_IPV4);
 	} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
 		/* if packet is IPv6 */
-		struct ipv6_extension_fragment *frag_hdr;
+		struct rte_ipv6_fragment_ext *frag_hdr;
 		struct rte_ipv6_hdr *ip_hdr;
 
 		ip_hdr = (struct rte_ipv6_hdr *)(eth_hdr + 1);
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 0a1c5bcaaa..86bb7e9064 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -2647,7 +2647,7 @@ rx_callback(__rte_unused uint16_t port, __rte_unused uint16_t queue,
 				rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) {
 
 			struct rte_ipv6_hdr *iph;
-			struct ipv6_extension_fragment *fh;
+			struct rte_ipv6_fragment_ext *fh;
 
 			iph = (struct rte_ipv6_hdr *)(eth + 1);
 			fh = rte_ipv6_frag_get_ipv6_fragment_header(iph);
diff --git a/lib/ip_frag/rte_ip_frag.h b/lib/ip_frag/rte_ip_frag.h
index b469bb5f4e..0782ba45d6 100644
--- a/lib/ip_frag/rte_ip_frag.h
+++ b/lib/ip_frag/rte_ip_frag.h
@@ -27,22 +27,19 @@ extern "C" {
 
 struct rte_mbuf;
 
-#define IP_FRAG_DEATH_ROW_LEN 32 /**< death row size (in packets) */
+#define RTE_IP_FRAG_DEATH_ROW_LEN 32 /**< death row size (in packets) */
 
 /* death row size in mbufs */
-#define IP_FRAG_DEATH_ROW_MBUF_LEN \
-	(IP_FRAG_DEATH_ROW_LEN * (RTE_LIBRTE_IP_FRAG_MAX_FRAG + 1))
+#define RTE_IP_FRAG_DEATH_ROW_MBUF_LEN \
+	(RTE_IP_FRAG_DEATH_ROW_LEN * (RTE_LIBRTE_IP_FRAG_MAX_FRAG + 1))
 
 /** mbuf death row (packets to be freed) */
 struct rte_ip_frag_death_row {
 	uint32_t cnt;          /**< number of mbufs currently on death row */
-	struct rte_mbuf *row[IP_FRAG_DEATH_ROW_MBUF_LEN];
+	struct rte_mbuf *row[RTE_IP_FRAG_DEATH_ROW_MBUF_LEN];
 	/**< mbufs to be freed */
 };
 
-/* struct ipv6_extension_fragment moved to librte_net/rte_ip.h and renamed. */
-#define ipv6_extension_fragment	rte_ipv6_fragment_ext
-
 /**
  * Create a new IP fragmentation table.
  *
@@ -128,7 +125,7 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,
 struct rte_mbuf *rte_ipv6_frag_reassemble_packet(struct rte_ip_frag_tbl *tbl,
 		struct rte_ip_frag_death_row *dr,
 		struct rte_mbuf *mb, uint64_t tms, struct rte_ipv6_hdr *ip_hdr,
-		struct ipv6_extension_fragment *frag_hdr);
+		struct rte_ipv6_fragment_ext *frag_hdr);
 
 /**
  * Return a pointer to the packet's fragment header, if found.
@@ -141,11 +138,11 @@ struct rte_mbuf *rte_ipv6_frag_reassemble_packet(struct rte_ip_frag_tbl *tbl,
  *   Pointer to the IPv6 fragment extension header, or NULL if it's not
  *   present.
  */
-static inline struct ipv6_extension_fragment *
+static inline struct rte_ipv6_fragment_ext *
 rte_ipv6_frag_get_ipv6_fragment_header(struct rte_ipv6_hdr *hdr)
 {
 	if (hdr->proto == IPPROTO_FRAGMENT) {
-		return (struct ipv6_extension_fragment *) ++hdr;
+		return (struct rte_ipv6_fragment_ext *) ++hdr;
 	}
 	else
 		return NULL;
diff --git a/lib/ip_frag/rte_ip_frag_common.c b/lib/ip_frag/rte_ip_frag_common.c
index 6b29e9d7ed..8580ffca5e 100644
--- a/lib/ip_frag/rte_ip_frag_common.c
+++ b/lib/ip_frag/rte_ip_frag_common.c
@@ -135,7 +135,8 @@ rte_frag_table_del_expired_entries(struct rte_ip_frag_tbl *tbl,
 	TAILQ_FOREACH(fp, &tbl->lru, lru)
 		if (max_cycles + fp->start < tms) {
 			/* check that death row has enough space */
-			if (IP_FRAG_DEATH_ROW_MBUF_LEN - dr->cnt >= fp->last_idx)
+			if (RTE_IP_FRAG_DEATH_ROW_MBUF_LEN - dr->cnt >=
+					fp->last_idx)
 				ip_frag_tbl_del(tbl, dr, fp);
 			else
 				return;
diff --git a/lib/ip_frag/rte_ipv6_fragmentation.c b/lib/ip_frag/rte_ipv6_fragmentation.c
index 5d67336f2d..88f29c158c 100644
--- a/lib/ip_frag/rte_ipv6_fragmentation.c
+++ b/lib/ip_frag/rte_ipv6_fragmentation.c
@@ -22,13 +22,13 @@ __fill_ipv6hdr_frag(struct rte_ipv6_hdr *dst,
 		const struct rte_ipv6_hdr *src, uint16_t len, uint16_t fofs,
 		uint32_t mf)
 {
-	struct ipv6_extension_fragment *fh;
+	struct rte_ipv6_fragment_ext *fh;
 
 	rte_memcpy(dst, src, sizeof(*dst));
 	dst->payload_len = rte_cpu_to_be_16(len);
 	dst->proto = IPPROTO_FRAGMENT;
 
-	fh = (struct ipv6_extension_fragment *) ++dst;
+	fh = (struct rte_ipv6_fragment_ext *) ++dst;
 	fh->next_header = src->proto;
 	fh->reserved = 0;
 	fh->frag_data = rte_cpu_to_be_16(RTE_IPV6_SET_FRAG_DATA(fofs, mf));
@@ -94,7 +94,7 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,
 	 */
 
 	frag_size = mtu_size - sizeof(struct rte_ipv6_hdr) -
-		sizeof(struct ipv6_extension_fragment);
+		sizeof(struct rte_ipv6_fragment_ext);
 	frag_size = RTE_ALIGN_FLOOR(frag_size, RTE_IPV6_EHDR_FO_ALIGN);
 
 	/* Check that pkts_out is big enough to hold all fragments */
@@ -124,9 +124,9 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,
 
 		/* Reserve space for the IP header that will be built later */
 		out_pkt->data_len = sizeof(struct rte_ipv6_hdr) +
-			sizeof(struct ipv6_extension_fragment);
+			sizeof(struct rte_ipv6_fragment_ext);
 		out_pkt->pkt_len  = sizeof(struct rte_ipv6_hdr) +
-			sizeof(struct ipv6_extension_fragment);
+			sizeof(struct rte_ipv6_fragment_ext);
 		frag_bytes_remaining = frag_size;
 
 		out_seg_prev = out_pkt;
@@ -184,7 +184,7 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,
 
 		fragment_offset = (uint16_t)(fragment_offset +
 		    out_pkt->pkt_len - sizeof(struct rte_ipv6_hdr)
-			- sizeof(struct ipv6_extension_fragment));
+			- sizeof(struct rte_ipv6_fragment_ext));
 
 		/* Write the fragment to the output list */
 		pkts_out[out_pkt_pos] = out_pkt;
diff --git a/lib/ip_frag/rte_ipv6_reassembly.c b/lib/ip_frag/rte_ipv6_reassembly.c
index 6bc0bf792a..d4019e87e6 100644
--- a/lib/ip_frag/rte_ipv6_reassembly.c
+++ b/lib/ip_frag/rte_ipv6_reassembly.c
@@ -33,7 +33,7 @@ struct rte_mbuf *
 ipv6_frag_reassemble(struct ip_frag_pkt *fp)
 {
 	struct rte_ipv6_hdr *ip_hdr;
-	struct ipv6_extension_fragment *frag_hdr;
+	struct rte_ipv6_fragment_ext *frag_hdr;
 	struct rte_mbuf *m, *prev;
 	uint32_t i, n, ofs, first_len;
 	uint32_t last_len, move_len, payload_len;
@@ -102,7 +102,7 @@ ipv6_frag_reassemble(struct ip_frag_pkt *fp)
 	 * the main IPv6 header instead.
 	 */
 	move_len = m->l2_len + m->l3_len - sizeof(*frag_hdr);
-	frag_hdr = (struct ipv6_extension_fragment *) (ip_hdr + 1);
+	frag_hdr = (struct rte_ipv6_fragment_ext *) (ip_hdr + 1);
 	ip_hdr->proto = frag_hdr->next_header;
 
 	ip_frag_memmove(rte_pktmbuf_mtod_offset(m, char *, sizeof(*frag_hdr)),
@@ -136,7 +136,7 @@ ipv6_frag_reassemble(struct ip_frag_pkt *fp)
 struct rte_mbuf *
 rte_ipv6_frag_reassemble_packet(struct rte_ip_frag_tbl *tbl,
 	struct rte_ip_frag_death_row *dr, struct rte_mbuf *mb, uint64_t tms,
-	struct rte_ipv6_hdr *ip_hdr, struct ipv6_extension_fragment *frag_hdr)
+	struct rte_ipv6_hdr *ip_hdr, struct rte_ipv6_fragment_ext *frag_hdr)
 {
 	struct ip_frag_pkt *fp;
 	struct ip_frag_key key;
diff --git a/lib/port/rte_port_ras.c b/lib/port/rte_port_ras.c
index 403028f8d6..8508814bb2 100644
--- a/lib/port/rte_port_ras.c
+++ b/lib/port/rte_port_ras.c
@@ -186,7 +186,7 @@ process_ipv6(struct rte_port_ring_writer_ras *p, struct rte_mbuf *pkt)
 	struct rte_ipv6_hdr *pkt_hdr =
 		rte_pktmbuf_mtod(pkt, struct rte_ipv6_hdr *);
 
-	struct ipv6_extension_fragment *frag_hdr;
+	struct rte_ipv6_fragment_ext *frag_hdr;
 	uint16_t frag_data = 0;
 	frag_hdr = rte_ipv6_frag_get_ipv6_fragment_header(pkt_hdr);
 	if (frag_hdr != NULL)
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1
  2021-10-25 21:40  4% [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1 Thomas Monjalon
  2021-10-28  7:10  0% ` Jiang, YuX
  2021-11-05 21:51  0% ` Thinh Tran
@ 2021-11-08 10:50  0% ` Pei Zhang
  2 siblings, 0 replies; 200+ results
From: Pei Zhang @ 2021-11-08 10:50 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: David Marchand, Maxime Coquelin, Kevin Traynor, dev, Chao Yang

Hello Thomas,

The testing of dpdk 21.11-rc1 from Red Hat looks good. We tested the 18
scenarios below and all got PASS on RHEL8:

(1)Guest with device assignment(PF) throughput testing(1G hugepage size):
PASS
(2)Guest with device assignment(PF) throughput testing(2M hugepage size) :
PASS
(3)Guest with device assignment(VF) throughput testing: PASS
(4)PVP (host dpdk testpmd as vswitch) 1Q: throughput testing: PASS
(5)PVP vhost-user 2Q throughput testing: PASS
(6)PVP vhost-user 1Q - cross numa node throughput testing: PASS
(7)Guest with vhost-user 2 queues throughput testing: PASS
(8)vhost-user reconnect with dpdk-client, qemu-server: qemu reconnect: PASS
(9)vhost-user reconnect with dpdk-client, qemu-server: ovs reconnect: PASS
(10)PVP 1Q live migration testing: PASS
(11)PVP 1Q post copy live migration testing: PASS
(12)PVP 1Q cross numa node live migration testing: PASS
(13)Guest with ovs+dpdk+vhost-user 1Q live migration testing: PASS
(14)Guest with ovs+dpdk+vhost-user 1Q live migration testing (2M): PASS
(15)Guest with ovs+dpdk+vhost-user 2Q live migration testing: PASS
(16)Guest with ovs+dpdk+vhost-user 4Q live migration testing: PASS
(17)Host PF + DPDK testing: PASS
(18)Host VF + DPDK testing: PASS

Versions:
kernel 4.18
qemu 6.1

dpdk: git://dpdk.org/dpdk
# git log -1
commit 6c390cee976e33b1e9d8562d32c9d3ebe5d9ce94 (HEAD -> main, tag:
v21.11-rc1)
Author: Thomas Monjalon <thomas@monjalon.net>
Date:   Mon Oct 25 22:42:47 2021 +0200

    version: 21.11-rc1

    Signed-off-by: Thomas Monjalon <thomas@monjalon.net>


NICs: X540-AT2 NIC(ixgbe, 10G)

Best regards,

Pei

On Tue, Oct 26, 2021 at 5:41 AM Thomas Monjalon <thomas@monjalon.net> wrote:

> A new DPDK release candidate is ready for testing:
>         https://git.dpdk.org/dpdk/tag/?id=v21.11-rc1
>
> There are 1171 new patches in this snapshot, big as expected.
>
> Release notes:
>         https://doc.dpdk.org/guides/rel_notes/release_21_11.html
>
> Highlights of 21.11-rc1:
> * General
>         - more than 512 MSI-X interrupts
>         - hugetlbfs subdirectories
>         - mempool flag for non-IO usages
>         - device class for DMA accelerators
>         - DMA drivers for Intel DSA and IOAT
> * Networking
>         - MTU handling rework
>         - get all MAC addresses of a port
>         - RSS based on L3/L4 checksum fields
>         - flow match on L2TPv2 and PPP
>         - flow flex parser for custom header
>         - control delivery of HW Rx metadata
>         - transfer flows API rework
>         - shared Rx queue
>         - Windows support of Intel e1000, ixgbe and iavf
>         - testpmd multi-process
>         - pcapng library and dumpcap tool
> * API/ABI
>         - API namespace improvements (mempool, mbuf, ethdev)
>         - API internals hidden (intr, ethdev, security, cryptodev,
> eventdev, cmdline)
>         - flags check for future ABI compatibility (memzone, mbuf, mempool)
>
> Please test and report issues on bugs.dpdk.org.
> DPDK 21.11-rc2 is expected in two weeks or less.
>
> Thank you everyone
>
>
>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2] eal/rwlock: add note about writer starvation
  @ 2021-11-08 10:18  0%     ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-11-08 10:18 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: dev, Joyce Kong, konstantin.ananyev, Honnappa Nagarahalli

Ping again. Stephen?


12/05/2021 21:10, Thomas Monjalon:
> Ping for v3
> 
> 12/02/2021 01:21, Honnappa Nagarahalli:
> > <snip>
> > 
> > > 
> > > 14/01/2021 17:55, Stephen Hemminger:
> > > > The implementation of reader/writer locks in DPDK (from first release)
> > > > is simple and fast. But it can lead to writer starvation issues.
> > > >
> > > > It is not easy to fix this without changing ABI and potentially
> > > > breaking customer applications that are expect the unfair behavior.
> > > 
> > > typo: "are expect"
> > > 
> > > > The wikipedia page on reader-writer problem has a similar example
> > > > which summarizes the problem pretty well.
> > > 
> > > Maybe add the URL in the commit message?
> > > 
> > > >
> > > > Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> > > > ---
> > > > --- a/lib/librte_eal/include/generic/rte_rwlock.h
> > > > +++ b/lib/librte_eal/include/generic/rte_rwlock.h
> > > > + * Note: This version of reader/writer locks is not fair because
> >                                 ^^^^^^ may be "implementation" would be better?
> > 
> > > > + * readers do not block for pending writers. A stream of readers can
> > > > + * subsequently lock out all potential writers and starve them.
> > > > + * This is because after the first reader locks the resource,
> > > > + * no writer can lock it. The writer will only be able to get the
> > > > + lock
> > > > + * when it will only be released by the last reader.
> > This looks good. Though the writer starvation is prominent, the reader starvation is possible if there is a stream of writers when a writer holds the lock. Should we call this out too?
> > 
> > > 
> > > You did not get review, probably because nobody was Cc'ed.
> > > +Cc Honnappa, Joyce and Konstantin
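
For anyone skimming the thread, a minimal sketch of why this design starves
writers. The counter protocol below is a simplification of rte_rwlock
(cnt > 0: readers hold the lock, cnt == -1: a writer holds it), not the
exact implementation:

#include <stdatomic.h>

static atomic_int cnt;  /* 0: free, >0: number of readers, -1: writer */

static void read_lock(void)
{
	int x = atomic_load(&cnt);
	/* A reader only waits for an *active* writer (x < 0); it never
	 * waits for a pending one, so overlapping readers keep cnt > 0. */
	while (x < 0 || !atomic_compare_exchange_weak(&cnt, &x, x + 1))
		x = atomic_load(&cnt);
}

static void write_lock(void)
{
	int expected = 0;
	/* Succeeds only when cnt drops to 0; under a steady stream of
	 * readers cnt never reaches 0 and the writer spins forever. */
	while (!atomic_compare_exchange_weak(&cnt, &expected, -1))
		expected = 0;
}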





^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1
  2021-10-25 21:40  4% [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1 Thomas Monjalon
  2021-10-28  7:10  0% ` Jiang, YuX
@ 2021-11-05 21:51  0% ` Thinh Tran
  2021-11-08 10:50  0% ` Pei Zhang
  2 siblings, 0 replies; 200+ results
From: Thinh Tran @ 2021-11-05 21:51 UTC (permalink / raw)
  To: dpdk-dev

Hi
IBM - Power Systems
DPDK v21.11-rc1-63-gbb0bd346d5

* Basic PF on Mellanox: No new issues or regressions were seen.
* Performance: not tested.

Systems tested:
  - IBM Power9 PowerNV 9006-22P
     OS: RHEL 8.4
     GCC:  version 8.3.1 20191121 (Red Hat 8.3.1-5)
     NICs:
      - Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
      - firmware version: 16.29.1017
      - MLNX_OFED_LINUX-5.2-1.0.4.1 (OFED-5.2-1.0.4)



Regards,
Thinh Tran
On 10/25/2021 4:40 PM, Thomas Monjalon wrote:
> A new DPDK release candidate is ready for testing:
> 	https://git.dpdk.org/dpdk/tag/?id=v21.11-rc1
> 
> There are 1171 new patches in this snapshot, big as expected.
> 
> Release notes:
> 	https://doc.dpdk.org/guides/rel_notes/release_21_11.html
> 
> Highlights of 21.11-rc1:
> * General
> 	- more than 512 MSI-X interrupts
> 	- hugetlbfs subdirectories
> 	- mempool flag for non-IO usages
> 	- device class for DMA accelerators
> 	- DMA drivers for Intel DSA and IOAT
> * Networking
> 	- MTU handling rework
> 	- get all MAC addresses of a port
> 	- RSS based on L3/L4 checksum fields
> 	- flow match on L2TPv2 and PPP
> 	- flow flex parser for custom header
> 	- control delivery of HW Rx metadata
> 	- transfer flows API rework
> 	- shared Rx queue
> 	- Windows support of Intel e1000, ixgbe and iavf
> 	- testpmd multi-process
> 	- pcapng library and dumpcap tool
> * API/ABI
> 	- API namespace improvements (mempool, mbuf, ethdev)
> 	- API internals hidden (intr, ethdev, security, cryptodev, eventdev, cmdline)
> 	- flags check for future ABI compatibility (memzone, mbuf, mempool)
> 
> Please test and report issues on bugs.dpdk.org.
> DPDK 21.11-rc2 is expected in two weeks or less.
> 
> Thank you everyone
> 
> 

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] Minutes of Technical Board Meeting, 2021-Nov-03
@ 2021-11-04 19:54  4% Maxime Coquelin
  0 siblings, 0 replies; 200+ results
From: Maxime Coquelin @ 2021-11-04 19:54 UTC (permalink / raw)
  To: dev

Minutes of Technical Board Meeting, 2021-Nov-03

Members Attending
-----------------
-Aaron
-Ferruh
-Hemant
-Honnappa
-Jerin
-Kevin
-Konstantin
-Maxime (Chair)
-Olivier
-Stephen
-Thomas


NOTE: The technical board meets every second Wednesday at
https://meet.jit.si/DPDK at 3 pm UTC.
Meetings are public, and DPDK community members are welcome to attend.

NOTE: Next meeting will be on Wednesday 2021-Nov-17 @3pm UTC, and will 
be chaired by Olivier.

# ENETFEC driver
- TB discussed whether depending on an out-of-tree kernel module is
acceptable
-- TB voted to accept that the ENETFEC PMD relies on an out-of-tree kernel
module
-- TB recommends avoiding out-of-tree kernel modules, but the kernel
module required by the ENETFEC PMD is in the process of being upstreamed
- TB discussed whether having this driver as a VDEV is acceptable or if
a bus driver is required, knowing that only this device would use it
-- TB voted to accept this driver as a VDEV

# IP frag ABI change in v21.11 [0]
- This ABI change was not announced, so TB approval was required
-- TB voted to accept this ABI change

# Communication plan around v21.11 release
- Thomas highlighted that a lot of changes are being introduced in
the v21.11 release.
- In addition to the usual release blog post, blog posts about specific
new features would be welcomed
-- TB calls on maintainers and contributors for ideas

# Feedback from Governing Board on proposal for technical board process 
updates
- Honnappa proposes a new spreadsheet to improve the communication 
between the technical and governing boards

# L3 forward mode in testpmd [1]
- Honnappa presented the reasons for this new forwarding mode
-- L3FWD is a standard benchmark for DPDK
-- The L3FWD example lacks debugging features present in testpmd
- Concerns were raised about code duplication and bloating of testpmd
- Suggestions that adding more statistics and an interactive mode to L3FWD
would be preferable
-- But concerns that it would make this application too complex,
defeating the initial purpose of this example
- As no consensus was reached, Honnappa proposed to reject/defer it
for now

# DMARC configuration
- Ali monitored the DMARC configuration changes done on the user and web 
mailing lists
-- Better results have been observed
- TB voted to apply the new policy to the other mailing lists
- Ali will apply the new policy by the end of next week

# Patch from AMD to raise the maximum number of lcores
- Ran out of time, adding this item to the next meeting

[0]: 
https://patches.dpdk.org/project/dpdk/patch/20211102190309.5795-1-konstantin.ananyev@intel.com/
[1]: 
https://patchwork.dpdk.org/project/dpdk/patch/20210430213747.41530-2-kathleen.capella@arm.com/


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v24 0/3] Add PIE support for HQoS library
  2021-11-04 10:49  3%                 ` [dpdk-dev] [PATCH v22 " Liguzinski, WojciechX
  2021-11-04 11:03  3%                   ` [dpdk-dev] [PATCH v23 " Liguzinski, WojciechX
@ 2021-11-04 14:55  3%                   ` Thomas Monjalon
  1 sibling, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-11-04 14:55 UTC (permalink / raw)
  To: dev; +Cc: megha.ajmera

last changes to make this series "more acceptable":
- RTE_SCHED_CMAN in rte_config.h, replacing RTE_SCHED_RED
- test file listed in MAINTAINERS
- a few whitespace issues fixed


From: Wojciech Liguzinski <wojciechx.liguzinski@intel.com>

The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat
problem, a situation in which excess buffers in the network cause high latency and latency
variation. Currently, it supports RED for active queue management. However, more
advanced queue management is required to address this problem and provide a desirable
quality of service to users.

This solution (RFC) proposes the use of a new algorithm called "PIE" (Proportional Integral
controller Enhanced) that can effectively and directly control queuing latency to address
the bufferbloat problem.
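
For readers new to the algorithm, a simplified sketch of the PIE control
law (RFC 8033): the drop probability is updated periodically from the
measured queuing delay. Constants here are illustrative, and the real
algorithm additionally auto-scales alpha/beta with the magnitude of p:

/* Called once per update interval with the current queue delay. */
struct pie_state {
	double p;           /* drop probability, clamped to [0, 1] */
	double qdelay_old;  /* delay measured at the previous update */
};

#define QDELAY_TARGET 0.015 /* target delay in seconds (illustrative) */
#define ALPHA 0.125         /* weight of the deviation from target */
#define BETA  1.25          /* weight of the delay trend */

static void pie_update(struct pie_state *s, double qdelay)
{
	s->p += ALPHA * (qdelay - QDELAY_TARGET)
		+ BETA * (qdelay - s->qdelay_old);
	if (s->p < 0.0)
		s->p = 0.0;
	else if (s->p > 1.0)
		s->p = 1.0;
	s->qdelay_old = qdelay;
	/* On enqueue, each packet is then dropped with probability s->p. */
}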

The implementation of the mentioned functionality includes modifying existing data
structures and adding a new set of data structures to the library, as well as adding
PIE-related APIs. This affects structures in the public API/ABI. That is why a
deprecation notice is going to be prepared and sent.

Wojciech Liguzinski (3):
  sched: add PIE based congestion management
  examples/qos_sched: support PIE congestion management
  examples/ip_pipeline: support PIE congestion management

 MAINTAINERS                                  |    1 +
 app/test/meson.build                         |    4 +
 app/test/test_pie.c                          | 1065 ++++++++++++++++++
 config/rte_config.h                          |    2 +-
 doc/guides/prog_guide/glossary.rst           |    3 +
 doc/guides/prog_guide/qos_framework.rst      |   64 +-
 doc/guides/prog_guide/traffic_management.rst |   13 +-
 drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
 examples/ip_pipeline/tmgr.c                  |  142 +--
 examples/qos_sched/cfg_file.c                |  127 ++-
 examples/qos_sched/cfg_file.h                |    5 +
 examples/qos_sched/init.c                    |   27 +-
 examples/qos_sched/main.h                    |    3 +
 examples/qos_sched/profile.cfg               |  196 ++--
 lib/sched/meson.build                        |    3 +-
 lib/sched/rte_pie.c                          |   86 ++
 lib/sched/rte_pie.h                          |  396 +++++++
 lib/sched/rte_sched.c                        |  256 +++--
 lib/sched/rte_sched.h                        |   64 +-
 lib/sched/version.map                        |    4 +
 20 files changed, 2185 insertions(+), 282 deletions(-)
 create mode 100644 app/test/test_pie.c
 create mode 100644 lib/sched/rte_pie.c
 create mode 100644 lib/sched/rte_pie.h

-- 
2.33.0


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v23 0/3] Add PIE support for HQoS library
  2021-11-04 10:49  3%                 ` [dpdk-dev] [PATCH v22 " Liguzinski, WojciechX
@ 2021-11-04 11:03  3%                   ` Liguzinski, WojciechX
  2021-11-04 14:55  3%                   ` [dpdk-dev] [PATCH v24 " Thomas Monjalon
  1 sibling, 0 replies; 200+ results
From: Liguzinski, WojciechX @ 2021-11-04 11:03 UTC (permalink / raw)
  To: dev, jasvinder.singh, cristian.dumitrescu
  Cc: megha.ajmera, Wojciech Liguzinski

From: Wojciech Liguzinski <wojciechx.liguzinski@intel.com>

The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat
problem, a situation in which excess buffers in the network cause high latency and latency
variation. Currently, it supports RED for active queue management. However, more
advanced queue management is required to address this problem and provide a desirable
quality of service to users.

This solution (RFC) proposes the use of a new algorithm called "PIE" (Proportional Integral
controller Enhanced) that can effectively and directly control queuing latency to address
the bufferbloat problem.

The implementation of the mentioned functionality includes modifying existing data
structures and adding a new set of data structures to the library, as well as adding
PIE-related APIs. This affects structures in the public API/ABI. That is why a
deprecation notice is going to be prepared and sent.

Wojciech Liguzinski (3):
  sched: add PIE based congestion management
  examples/qos_sched: add PIE support
  examples/ip_pipeline: add PIE support

 app/test/meson.build                         |    4 +
 app/test/test_pie.c                          | 1065 ++++++++++++++++++
 config/rte_config.h                          |    1 -
 doc/guides/prog_guide/glossary.rst           |    3 +
 doc/guides/prog_guide/qos_framework.rst      |   64 +-
 doc/guides/prog_guide/traffic_management.rst |   13 +-
 drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
 examples/ip_pipeline/tmgr.c                  |  142 +--
 examples/qos_sched/cfg_file.c                |  127 ++-
 examples/qos_sched/cfg_file.h                |    5 +
 examples/qos_sched/init.c                    |   27 +-
 examples/qos_sched/main.h                    |    3 +
 examples/qos_sched/profile.cfg               |  196 ++--
 lib/sched/meson.build                        |    3 +-
 lib/sched/rte_pie.c                          |   86 ++
 lib/sched/rte_pie.h                          |  398 +++++++
 lib/sched/rte_sched.c                        |  254 +++--
 lib/sched/rte_sched.h                        |   64 +-
 lib/sched/version.map                        |    4 +
 19 files changed, 2184 insertions(+), 281 deletions(-)
 create mode 100644 app/test/test_pie.c
 create mode 100644 lib/sched/rte_pie.c
 create mode 100644 lib/sched/rte_pie.h

-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] Overriding rte_config.h
  2021-11-03 14:38  0%             ` Ben Magistro
@ 2021-11-04 11:03  0%               ` Ananyev, Konstantin
  0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-11-04 11:03 UTC (permalink / raw)
  To: Ben Magistro; +Cc: Richardson, Bruce, dev, ben.magistro, Stefan Baranoff

Hi Ben,

I also don’t think 64 is a common case here.
For such cases we should probably think of a different approach for the reassembly table.
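
For illustration only — one direction such a rework could take is making
the per-packet fragment capacity a runtime parameter of the table, e.g.
with a flexible array member instead of the compile-time
RTE_LIBRTE_IP_FRAG_MAX_FRAG bound (field set below is abridged; the member
types come from the lib/ip_frag internals):

struct ip_frag_pkt_rt {
	struct ip_frag_key key;   /* reassembly key */
	uint64_t start;           /* creation timestamp */
	uint32_t total_size;      /* expected reassembled size */
	uint32_t frag_size;       /* size of fragments received so far */
	uint32_t last_idx;        /* next free entry in frags[] */
	struct ip_frag frags[];   /* sized at table-creation time */
};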

From: Ben Magistro <koncept1@gmail.com>
Sent: Wednesday, November 3, 2021 2:38 PM
To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
Cc: Richardson, Bruce <bruce.richardson@intel.com>; dev@dpdk.org; ben.magistro@trinitycyber.com; Stefan Baranoff <stefan.baranoff@trinitycyber.com>
Subject: Re: [dpdk-dev] Overriding rte_config.h

Thanks for the clarification.

I agree bumping RTE_LIBRTE_IP_FRAG_MAX_FRAG to 8 probably makes sense to easily support jumbo frames.

The other use case we have is supporting highly fragmented UDP.  To support this we were increasing to 64 (next power of 2)  based on a 64K UDP max and a link MTU of 1200 (VPN/tunneling).  I am not sure this is a value that makes sense for the majority of use cases.

On Tue, Nov 2, 2021 at 11:09 AM Ananyev, Konstantin <konstantin.ananyev@intel.com> wrote:

> > > > > On Fri, Oct 29, 2021 at 09:48:30AM -0400, Ben Magistro wrote:
> > > > > > With the transition to meson, what is the best way to provide custom values
> > > > > > to parameters in rte_config.h?  When using makefiles, (from memory, I
> > > > > > think) we used common_base as a template that was copied in as a
> > > > > > replacement for defconfig_x86....  Our current thinking is to apply a
> > > > > > locally maintained patch so that we can track custom values easier to the
> > > > > > rte_config.h file unless there is another way to pass in an overridden
> > > > > > value.  As an example, one of the values we are customizing is
> > > > > > IP_FRAG_MAX_FRAG.
> > > > > >
> > > > > > Cheers,
> > > > > >
> > > > > There is no one defined way for overriding values in rte_config with the
> > > > > meson build system, as values there are ones that should rarely need to be
> > > > > overridden. If it's the case that one does need tuning, we generally want
> > > > > to look to either change the default so it works for everyone, or
> > > > > alternatively look to replace it with a runtime option.
> > > > >
> > > > > > In the absence of that, a locally maintained patch may be reasonable. To
> > > > > what value do you want to change MAX_FRAG? Would it be worth considering as
> > > > > a newer default value in DPDK itself, since the current default is fairly
> > > > > low?
> > > >
> > > > That might be an option, with IP_FRAG_MAX_FRAG==8 it should be able
> > > > to cover common jumbo frame size (9K) pretty easily.
> > > > As a drawback default reassembly table size will double.
> > >
> > > Maybe not. I'm not an expert in the library, but it seems the basic struct
> > > used for tracking the packets and fragments is "struct ip_frag_pkt". Due to
> > > the other data in the struct and the linked-list overheads, the actual size
> > > increase when doubling MAX_FRAG from 4 to 8 is only 25%. According to gdb
> > > on my debug build it goes from 192B to 256B.
> >
> > > Ah yes, you're right, struct ip_frag should fit into 16B, key seems the biggest one.
> >
> > >
> > > > Even better would be to go a step further and rework lib/ip_frag
> > > > to make it configurable runtime parameter.
> > > >
> > > Agree. However, that's not as quick a fix as just increasing the default
> > > max segs value which could be done immediately if there is consensus on it.
> >
> > You mean for 21.11?
> > > I don't mind in principle, but would like to know other people's thoughts here.
> > Another thing -  we didn't announce it in advance, and it is definitely an ABI change.
>
> I notice from this patch you submitted that the main structure in question
> is being hidden[1]. Will it still be an ABI change if that patch is merged
> in?

Yes, it would unfortunately:
struct rte_ip_frag_death_row still remains public.

> Alternatively, should a fragment count increase be considered as part of
> that change?

I don't think they are really related.
This patch just hides some structs that are already marked as 'internal'
and not used by the public API. It doesn't make any changes to the layout
of the public structs. But I suppose we can bring that question (the
increase of RTE_LIBRTE_IP_FRAG_MAX_FRAG) to tomorrow's TB meeting and ask for approval.

> /Bruce
>
> [1] http://patches.dpdk.org/project/dpdk/patch/20211101124915.9640-1-konstantin.ananyev@intel.com/

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v22 0/3] Add PIE support for HQoS library
  2021-11-04 10:40  3%               ` [dpdk-dev] [PATCH v21 0/3] " Liguzinski, WojciechX
@ 2021-11-04 10:49  3%                 ` Liguzinski, WojciechX
  2021-11-04 11:03  3%                   ` [dpdk-dev] [PATCH v23 " Liguzinski, WojciechX
  2021-11-04 14:55  3%                   ` [dpdk-dev] [PATCH v24 " Thomas Monjalon
  0 siblings, 2 replies; 200+ results
From: Liguzinski, WojciechX @ 2021-11-04 10:49 UTC (permalink / raw)
  To: dev, jasvinder.singh, cristian.dumitrescu
  Cc: megha.ajmera, Wojciech Liguzinski

From: Wojciech Liguzinski <wojciechx.liguzinski@intel.com>

The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat
problem, a situation in which excess buffers in the network cause high latency and latency
variation. Currently, it supports RED for active queue management. However, more
advanced queue management is required to address this problem and provide a desirable
quality of service to users.

This solution (RFC) proposes the use of a new algorithm called "PIE" (Proportional Integral
controller Enhanced) that can effectively and directly control queuing latency to address
the bufferbloat problem.

The implementation of the mentioned functionality includes modifying existing data
structures and adding a new set of data structures to the library, as well as adding
PIE-related APIs. This affects structures in the public API/ABI. That is why a
deprecation notice is going to be prepared and sent.

Wojciech Liguzinski (3):
  sched: add PIE based congestion management
  examples/qos_sched: add PIE support
  examples/ip_pipeline: add PIE support

 app/test/meson.build                         |    4 +
 app/test/test_pie.c                          | 1065 ++++++++++++++++++
 config/rte_config.h                          |    1 -
 doc/guides/prog_guide/glossary.rst           |    3 +
 doc/guides/prog_guide/qos_framework.rst      |   64 +-
 doc/guides/prog_guide/traffic_management.rst |   13 +-
 drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
 examples/ip_pipeline/tmgr.c                  |  142 +--
 examples/qos_sched/cfg_file.c                |  127 ++-
 examples/qos_sched/cfg_file.h                |    5 +
 examples/qos_sched/init.c                    |   27 +-
 examples/qos_sched/main.h                    |    3 +
 examples/qos_sched/profile.cfg               |  196 ++--
 lib/sched/meson.build                        |    3 +-
 lib/sched/rte_pie.c                          |   86 ++
 lib/sched/rte_pie.h                          |  398 +++++++
 lib/sched/rte_sched.c                        |  255 +++--
 lib/sched/rte_sched.h                        |   64 +-
 lib/sched/version.map                        |    4 +
 19 files changed, 2185 insertions(+), 281 deletions(-)
 create mode 100644 app/test/test_pie.c
 create mode 100644 lib/sched/rte_pie.c
 create mode 100644 lib/sched/rte_pie.h

-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] ethdev: promote device removal check function as stable
  2021-10-28  8:56  0%   ` Andrew Rybchenko
@ 2021-11-04 10:45  0%     ` Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-11-04 10:45 UTC (permalink / raw)
  To: Andrew Rybchenko, Kinsella, Ray, Thomas Monjalon, dev; +Cc: matan

On 10/28/2021 9:56 AM, Andrew Rybchenko wrote:
> On 10/28/21 11:38 AM, Kinsella, Ray wrote:
>>
>>
>> On 28/10/2021 09:35, Thomas Monjalon wrote:
>>> The function rte_eth_dev_is_removed() was introduced in DPDK 18.02,
>>> and is integrated in error checks of ethdev library.
>>>
>>> It is promoted as stable ABI.
>>>
>>> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
>>> ---
>>>   lib/ethdev/rte_ethdev.h | 4 ----
>>>   lib/ethdev/version.map  | 2 +-
>>>   2 files changed, 1 insertion(+), 5 deletions(-)
>>>
>> Acked-by: Ray Kinsella <mdr@ashroe.eu>
> 
> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> 

Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>

Applied to dpdk-next-net/main, thanks.

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v21 0/3] Add PIE support for HQoS library
  2021-11-02 23:57  3%             ` [dpdk-dev] [PATCH v20 " Liguzinski, WojciechX
  2021-11-03 17:52  0%               ` Thomas Monjalon
@ 2021-11-04 10:40  3%               ` Liguzinski, WojciechX
  2021-11-04 10:49  3%                 ` [dpdk-dev] [PATCH v22 " Liguzinski, WojciechX
  1 sibling, 1 reply; 200+ results
From: Liguzinski, WojciechX @ 2021-11-04 10:40 UTC (permalink / raw)
  To: dev, jasvinder.singh, cristian.dumitrescu
  Cc: megha.ajmera, Wojciech Liguzinski

From: Wojciech Liguzinski <wojciechx.liguzinski@intel.com>

The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat
problem, a situation in which excess buffers in the network cause high latency and latency
variation. Currently, it supports RED for active queue management. However, more
advanced queue management is required to address this problem and provide a desirable
quality of service to users.

This solution (RFC) proposes the use of a new algorithm called "PIE" (Proportional Integral
controller Enhanced) that can effectively and directly control queuing latency to address
the bufferbloat problem.

The implementation of the mentioned functionality includes modifying existing data
structures and adding a new set of data structures to the library, as well as adding
PIE-related APIs. This affects structures in the public API/ABI. That is why a
deprecation notice is going to be prepared and sent.

Wojciech Liguzinski (3):
  sched: add PIE based congestion management
  examples/qos_sched: add PIE support
  examples/ip_pipeline: add PIE support

 app/test/meson.build                         |    4 +
 app/test/test_pie.c                          | 1065 ++++++++++++++++++
 config/rte_config.h                          |    1 -
 doc/guides/prog_guide/glossary.rst           |    3 +
 doc/guides/prog_guide/qos_framework.rst      |   64 +-
 doc/guides/prog_guide/traffic_management.rst |   13 +-
 drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
 examples/ip_pipeline/tmgr.c                  |  142 +--
 examples/qos_sched/cfg_file.c                |  127 ++-
 examples/qos_sched/cfg_file.h                |    5 +
 examples/qos_sched/init.c                    |   27 +-
 examples/qos_sched/main.h                    |    3 +
 examples/qos_sched/profile.cfg               |  196 ++--
 lib/sched/meson.build                        |    3 +-
 lib/sched/rte_pie.c                          |   86 ++
 lib/sched/rte_pie.h                          |  398 +++++++
 lib/sched/rte_sched.c                        |  255 +++--
 lib/sched/rte_sched.h                        |   64 +-
 lib/sched/version.map                        |    4 +
 19 files changed, 2185 insertions(+), 281 deletions(-)
 create mode 100644 app/test/test_pie.c
 create mode 100644 lib/sched/rte_pie.c
 create mode 100644 lib/sched/rte_pie.h

-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v20 0/5] Add PIE support for HQoS library
  2021-11-03 17:52  0%               ` Thomas Monjalon
@ 2021-11-04  8:29  0%                 ` Liguzinski, WojciechX
  0 siblings, 0 replies; 200+ results
From: Liguzinski, WojciechX @ 2021-11-04  8:29 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, Singh, Jasvinder, Dumitrescu, Cristian, Ajmera, Megha,
	Mcnamara, John

Hi Thomas,

Thanks, I will apply your suggestions asap.

Wojtek

-----Original Message-----
From: Thomas Monjalon <thomas@monjalon.net> 
Sent: Wednesday, November 3, 2021 6:53 PM
To: Liguzinski, WojciechX <wojciechx.liguzinski@intel.com>
Cc: dev@dpdk.org; Singh, Jasvinder <jasvinder.singh@intel.com>; Dumitrescu, Cristian <cristian.dumitrescu@intel.com>; Ajmera, Megha <megha.ajmera@intel.com>; Mcnamara, John <john.mcnamara@intel.com>
Subject: Re: [dpdk-dev] [PATCH v20 0/5] Add PIE support for HQoS library

03/11/2021 00:57, Liguzinski, WojciechX:
> From: Wojciech Liguzinski <wojciechx.liguzinski@intel.com>
> 
> The DPDK sched library is equipped with a mechanism that protects it
> from the bufferbloat problem, a situation in which excess buffers in
> the network cause high latency and latency variation. Currently, it
> supports RED for active queue management. However, more advanced queue
> management is required to address this problem and provide a desirable quality of service to users.
> 
> This solution (RFC) proposes the use of a new algorithm called "PIE"
> (Proportional Integral controller Enhanced) that can effectively and
> directly control queuing latency to address the bufferbloat problem.
> 
> The implementation of the mentioned functionality includes modifying
> existing data structures and adding a new set of data structures to the library, as well as adding PIE-related APIs.
> This affects structures in the public API/ABI. That is why a deprecation
> notice is going to be prepared and sent.
> 
> Wojciech Liguzinski (5):
>   sched: add PIE based congestion management

Did you see the checkpatch issues on this patch?
http://mails.dpdk.org/archives/test-report/2021-November/238253.html

>   example/qos_sched: add PIE support

The strict minimum is to explain why you add PIE and what the acronym means, inside the commit log.

>   example/ip_pipeline: add PIE support

Titles should follow the same convention as the git history.
For examples, titles start with "examples/" as the directory name.

>   doc/guides/prog_guide: added PIE

doc should be squashed with the code patches. Is there any doc update related to the examples?
If not, it should be fully squashed with the lib changes.

>   app/test: add tests for PIE

If there is nothing special, it can be squashed with the lib patch.




^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v20 0/5] Add PIE support for HQoS library
  2021-11-02 23:57  3%             ` [dpdk-dev] [PATCH v20 " Liguzinski, WojciechX
@ 2021-11-03 17:52  0%               ` Thomas Monjalon
  2021-11-04  8:29  0%                 ` Liguzinski, WojciechX
  2021-11-04 10:40  3%               ` [dpdk-dev] [PATCH v21 0/3] " Liguzinski, WojciechX
  1 sibling, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-11-03 17:52 UTC (permalink / raw)
  To: Wojciech Liguzinski
  Cc: dev, jasvinder.singh, cristian.dumitrescu, megha.ajmera, john.mcnamara

03/11/2021 00:57, Liguzinski, WojciechX:
> From: Wojciech Liguzinski <wojciechx.liguzinski@intel.com>
> 
> The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat
> problem, a situation in which excess buffers in the network cause high latency and latency
> variation. Currently, it supports RED for active queue management. However, more
> advanced queue management is required to address this problem and provide a desirable
> quality of service to users.
> 
> This solution (RFC) proposes the use of a new algorithm called "PIE" (Proportional Integral
> controller Enhanced) that can effectively and directly control queuing latency to address
> the bufferbloat problem.
> 
> The implementation of the mentioned functionality includes modifying existing data
> structures and adding a new set of data structures to the library, as well as adding
> PIE-related APIs. This affects structures in the public API/ABI. That is why a
> deprecation notice is going to be prepared and sent.
> 
> Wojciech Liguzinski (5):
>   sched: add PIE based congestion management

Did you see the checkpatch issues on this patch?
http://mails.dpdk.org/archives/test-report/2021-November/238253.html

>   example/qos_sched: add PIE support

The strict minimum is to explain why you add PIE and what the acronym means,
inside the commit log.

>   example/ip_pipeline: add PIE support

Titles should follow the same convention as the git history.
For examples, titles start with "examples/" as the directory name.

>   doc/guides/prog_guide: added PIE

doc should be squashed with the code patches
Is there any doc update related to the examples?
If not, it should be fully squashed with the lib changes.

>   app/test: add tests for PIE

If there is nothing special, it can be squashed with the lib patch.




^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH] doc: remove deprecation notice for interrupt
@ 2021-11-03 17:50  5% Harman Kalra
  0 siblings, 0 replies; 200+ results
From: Harman Kalra @ 2021-11-03 17:50 UTC (permalink / raw)
  To: dev, Ray Kinsella; +Cc: Harman Kalra

Deprecation notice targeted for 21.11 has been committed with
following as the first commit of the series.

Fixes: b7c984291611 ("interrupts: add allocator and accessors")

Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
 doc/guides/rel_notes/deprecation.rst | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4366015b01..0545245222 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -17,9 +17,6 @@ Deprecation Notices
 * eal: The function ``rte_eal_remote_launch`` will return new error codes
   after read or write error on the pipe, instead of calling ``rte_panic``.
 
-* eal: Making ``struct rte_intr_handle`` internal to avoid any ABI breakages
-  in future.
-
 * rte_atomicNN_xxx: These APIs do not take memory order parameter. This does
   not allow for writing optimized code for all the CPU architectures supported
   in DPDK. DPDK has adopted the atomic operations from
-- 
2.18.0


^ permalink raw reply	[relevance 5%]

* Re: [dpdk-dev] Overriding rte_config.h
  2021-11-02 15:00  0%           ` Ananyev, Konstantin
@ 2021-11-03 14:38  0%             ` Ben Magistro
  2021-11-04 11:03  0%               ` Ananyev, Konstantin
  0 siblings, 1 reply; 200+ results
From: Ben Magistro @ 2021-11-03 14:38 UTC (permalink / raw)
  To: Ananyev, Konstantin; +Cc: Richardson, Bruce, dev, ben.magistro, Stefan Baranoff

Thanks for the clarification.

I agree bumping RTE_LIBRTE_IP_FRAG_MAX_FRAG to 8 probably makes sense to
easily support jumbo frames.

The other use case we have is supporting highly fragmented UDP. To support
this, we increased the value to 64 (the next power of 2), based on a 64K UDP
maximum and a link MTU of 1200 (VPN/tunneling). I am not sure this is a value
that makes sense for the majority of use cases.
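
As a back-of-the-envelope sketch (illustrative C, assuming a 20B IPv4 header
and the 8-byte fragment-offset alignment; not a DPDK API), the worst case
works out as:

#include <stdio.h>

int main(void)
{
	/* Each IPv4 fragment over a 1200B tunnel MTU carries at most
	 * (1200 - 20) bytes of payload, rounded down to a multiple of 8. */
	unsigned int payload = (1200 - 20) & ~7u;	/* 1176B */
	/* Worst case for a maximum-size (64K) UDP datagram: */
	unsigned int frags = (65535 + payload - 1) / payload;
	printf("%u fragments\n", frags);	/* 56 -> next power of 2 is 64 */
	return 0;
}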

On Tue, Nov 2, 2021 at 11:09 AM Ananyev, Konstantin <
konstantin.ananyev@intel.com> wrote:

>
> > > > > > On Fri, Oct 29, 2021 at 09:48:30AM -0400, Ben Magistro wrote:
> > > > > > > With the transition to meson, what is the best way to provide
> custom values
> > > > > > > to parameters in rte_config.h?  When using makefiles, (from
> memory, I
> > > > > > > think) we used common_base as a template that was copied in as
> a
> > > > > > > replacement for defconfig_x86....  Our current thinking is to
> apply a
> > > > > > > locally maintained patch so that we can track custom values
> easier to the
> > > > > > > rte_config.h file unless there is another way to pass in an
> overridden
> > > > > > > value.  As an example, one of the values we are customizing is
> > > > > > > IP_FRAG_MAX_FRAG.
> > > > > > >
> > > > > > > Cheers,
> > > > > > >
> > > > > > There is no one defined way for overriding values in rte_config
> with the
> > > > > > meson build system, as values there are ones that should rarely
> need to be
> > > > > > overridden. If it's the case that one does need tuning, we
> generally want
> > > > > > to look to either change the default so it works for everyone, or
> > > > > > alternatively look to replace it with a runtime option.
> > > > > >
> > > > > > In the absense of that, a locally maintained patch may be
> reasonable. To
> > > > > > what value do you want to change MAX_FRAG? Would it be worth
> considering as
> > > > > > a newer default value in DPDK itself, since the current default
> is fairly
> > > > > > low?
> > > > >
> > > > > That might be an option, with IP_FRAG_MAX_FRAG==8 it should be able
> > > > > to cover common jumbo frame size (9K) pretty easily.
> > > > > As a drawback default reassembly table size will double.
> > > >
> > > > Maybe not. I'm not an expert in the library, but it seems the basic
> struct
> > > > used for tracking the packets and fragments is "struct ip_frag_pkt".
> Due to
> > > > the other data in the struct and the linked-list overheads, the
> actual size
> > > > increase when doubling MAX_FRAG from 4 to 8 is only 25%. According
> to gdb
> > > > on my debug build it goes from 192B to 256B.
> > >
> > > Ah yes, you right, struct ip_frag should fit into 16B, key seems the
> biggest one.
> > >
> > > >
> > > > > Even better would be to go a step further and rework lib/ip_frag
> > > > > to make it configurable runtime parameter.
> > > > >
> > > > Agree. However, that's not as quick a fix as just increasing the
> default
> > > > max segs value which could be done immediately if there is consensus
> on it.
> > >
> > > You mean for 21.11?
> > > I don't mind in principle, but would like to know other people
> thoughts here.
> > > Another thing -  we didn't announce it in advance, and it is
> definitely an ABI change.
> >
> > I notice from this patch you submitted that the main structure in
> question
> > is being hidden[1]. Will it still be an ABI change if that patch is
> merged
> > in?
>
> Yes, it would unfortunately:
> struct rte_ip_frag_death_row still remains public.
>
> > Alternatively, should a fragment count increase be considered as part of
> > that change?
>
> I don't think they are really related.
> This patch just hides some structs that are already marked as 'internal'
> and not used by public API. It doesn't make any changes in the public
> structs layout.
> But I suppose we can bring that question (increase of
> RTE_LIBRTE_IP_FRAG_MAX_FRAG) to
> tomorrow's TB meeting, and ask for approval.
>
> > /Bruce
> >
> > [1]
> http://patches.dpdk.org/project/dpdk/patch/20211101124915.9640-1-konstantin.ananyev@intel.com/
>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] vhost: rename driver callbacks struct
  2021-11-02 10:47  4% [dpdk-dev] [PATCH] vhost: rename driver callbacks struct Maxime Coquelin
@ 2021-11-03  8:16  0% ` Xia, Chenbo
  0 siblings, 0 replies; 200+ results
From: Xia, Chenbo @ 2021-11-03  8:16 UTC (permalink / raw)
  To: Maxime Coquelin, dev, david.marchand; +Cc: Liu, Changpeng

Hi Maxime,

> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Tuesday, November 2, 2021 6:48 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH] vhost: rename driver callbacks struct
> 
> As previously announced, this patch renames struct
> vhost_device_ops to struct rte_vhost_device_ops.
> 
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  doc/guides/rel_notes/deprecation.rst   | 3 ---
>  doc/guides/rel_notes/release_21_11.rst | 2 ++
>  drivers/net/vhost/rte_eth_vhost.c      | 2 +-
>  examples/vdpa/main.c                   | 2 +-
>  examples/vhost/main.c                  | 2 +-
>  examples/vhost_blk/vhost_blk.c         | 2 +-
>  examples/vhost_blk/vhost_blk.h         | 2 +-
>  examples/vhost_crypto/main.c           | 2 +-
>  lib/vhost/rte_vhost.h                  | 4 ++--
>  lib/vhost/socket.c                     | 6 +++---
>  lib/vhost/vhost.h                      | 4 ++--

You missed two occurrences in vhost_lib.rst :)

The testing issues reported in patchwork are expected, as SPDK uses
this struct, so we can ignore them; SPDK will rename it when it
adapts to DPDK 21.11.
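
As an illustration, the change needed in consumers boils down to the struct
rename; a minimal sketch against the renamed 21.11 API (the callback and
function names here are placeholders, not SPDK code):

#include <rte_vhost.h>

static int new_device(int vid) { (void)vid; return 0; }	/* placeholder */

/* was "struct vhost_device_ops" before this patch */
static const struct rte_vhost_device_ops ops = {
	.new_device = new_device,
};

int register_cbs(const char *path)
{
	return rte_vhost_driver_callback_register(path, &ops);
}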

With above fixed:

Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>


>  11 files changed, 15 insertions(+), 16 deletions(-)
> 
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index 4366015b01..a9e2433988 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -111,9 +111,6 @@ Deprecation Notices
>    ``rte_vhost_host_notifier_ctrl`` and ``rte_vdpa_relay_vring_used`` vDPA
>    driver interface will be marked as internal in DPDK v21.11.
> 
> -* vhost: rename ``struct vhost_device_ops`` to ``struct
> rte_vhost_device_ops``
> -  in DPDK v21.11.
> -
>  * vhost: The experimental tags of ``rte_vhost_driver_get_protocol_features``,
>    ``rte_vhost_driver_get_queue_num``, ``rte_vhost_crypto_create``,
>    ``rte_vhost_crypto_free``, ``rte_vhost_crypto_fetch_requests``,
> diff --git a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> index 98d50a160b..dea038e3ac 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -564,6 +564,8 @@ ABI Changes
> 
>  * eventdev: Re-arranged fields in ``rte_event_timer`` to remove holes.
> 
> +* vhost: rename ``struct vhost_device_ops`` to ``struct
> rte_vhost_device_ops``.
> +
> 
>  Known Issues
>  ------------
> diff --git a/drivers/net/vhost/rte_eth_vhost.c
> b/drivers/net/vhost/rte_eth_vhost.c
> index 8bb3b27d01..070f0e6dfd 100644
> --- a/drivers/net/vhost/rte_eth_vhost.c
> +++ b/drivers/net/vhost/rte_eth_vhost.c
> @@ -975,7 +975,7 @@ vring_state_changed(int vid, uint16_t vring, int enable)
>  	return 0;
>  }
> 
> -static struct vhost_device_ops vhost_ops = {
> +static struct rte_vhost_device_ops vhost_ops = {
>  	.new_device          = new_device,
>  	.destroy_device      = destroy_device,
>  	.vring_state_changed = vring_state_changed,
> diff --git a/examples/vdpa/main.c b/examples/vdpa/main.c
> index 097a267b8c..5ab07655ae 100644
> --- a/examples/vdpa/main.c
> +++ b/examples/vdpa/main.c
> @@ -153,7 +153,7 @@ destroy_device(int vid)
>  	}
>  }
> 
> -static const struct vhost_device_ops vdpa_sample_devops = {
> +static const struct rte_vhost_device_ops vdpa_sample_devops = {
>  	.new_device = new_device,
>  	.destroy_device = destroy_device,
>  };
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index 58e12aa710..8685dfd81b 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -1519,7 +1519,7 @@ vring_state_changed(int vid, uint16_t queue_id, int
> enable)
>   * These callback allow devices to be added to the data core when
> configuration
>   * has been fully complete.
>   */
> -static const struct vhost_device_ops virtio_net_device_ops =
> +static const struct rte_vhost_device_ops virtio_net_device_ops =
>  {
>  	.new_device =  new_device,
>  	.destroy_device = destroy_device,
> diff --git a/examples/vhost_blk/vhost_blk.c b/examples/vhost_blk/vhost_blk.c
> index fe2b4e4803..feadacc62e 100644
> --- a/examples/vhost_blk/vhost_blk.c
> +++ b/examples/vhost_blk/vhost_blk.c
> @@ -753,7 +753,7 @@ new_connection(int vid)
>  	return 0;
>  }
> 
> -struct vhost_device_ops vhost_blk_device_ops = {
> +struct rte_vhost_device_ops vhost_blk_device_ops = {
>  	.new_device =  new_device,
>  	.destroy_device = destroy_device,
>  	.new_connection = new_connection,
> diff --git a/examples/vhost_blk/vhost_blk.h b/examples/vhost_blk/vhost_blk.h
> index 540998eb1b..975f0b4065 100644
> --- a/examples/vhost_blk/vhost_blk.h
> +++ b/examples/vhost_blk/vhost_blk.h
> @@ -104,7 +104,7 @@ struct vhost_blk_task {
>  };
> 
>  extern struct vhost_blk_ctrlr *g_vhost_ctrlr;
> -extern struct vhost_device_ops vhost_blk_device_ops;
> +extern struct rte_vhost_device_ops vhost_blk_device_ops;
> 
>  int vhost_bdev_process_blk_commands(struct vhost_block_dev *bdev,
>  				     struct vhost_blk_task *task);
> diff --git a/examples/vhost_crypto/main.c b/examples/vhost_crypto/main.c
> index dea7dcbd07..7d75623a5e 100644
> --- a/examples/vhost_crypto/main.c
> +++ b/examples/vhost_crypto/main.c
> @@ -363,7 +363,7 @@ destroy_device(int vid)
>  	RTE_LOG(INFO, USER1, "Vhost Crypto Device %i Removed\n", vid);
>  }
> 
> -static const struct vhost_device_ops virtio_crypto_device_ops = {
> +static const struct rte_vhost_device_ops virtio_crypto_device_ops = {
>  	.new_device =  new_device,
>  	.destroy_device = destroy_device,
>  };
> diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
> index 6f0915b98f..af0afbcf60 100644
> --- a/lib/vhost/rte_vhost.h
> +++ b/lib/vhost/rte_vhost.h
> @@ -264,7 +264,7 @@ struct rte_vhost_user_extern_ops {
>  /**
>   * Device and vring operations.
>   */
> -struct vhost_device_ops {
> +struct rte_vhost_device_ops {
>  	int (*new_device)(int vid);		/**< Add device. */
>  	void (*destroy_device)(int vid);	/**< Remove device. */
> 
> @@ -606,7 +606,7 @@ rte_vhost_get_negotiated_protocol_features(int vid,
> 
>  /* Register callbacks. */
>  int rte_vhost_driver_callback_register(const char *path,
> -	struct vhost_device_ops const * const ops);
> +	struct rte_vhost_device_ops const * const ops);
> 
>  /**
>   *
> diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
> index c6548608a3..82963c1e6d 100644
> --- a/lib/vhost/socket.c
> +++ b/lib/vhost/socket.c
> @@ -58,7 +58,7 @@ struct vhost_user_socket {
> 
>  	struct rte_vdpa_device *vdpa_dev;
> 
> -	struct vhost_device_ops const *notify_ops;
> +	struct rte_vhost_device_ops const *notify_ops;
>  };
> 
>  struct vhost_user_connection {
> @@ -1093,7 +1093,7 @@ rte_vhost_driver_unregister(const char *path)
>   */
>  int
>  rte_vhost_driver_callback_register(const char *path,
> -	struct vhost_device_ops const * const ops)
> +	struct rte_vhost_device_ops const * const ops)
>  {
>  	struct vhost_user_socket *vsocket;
> 
> @@ -1106,7 +1106,7 @@ rte_vhost_driver_callback_register(const char *path,
>  	return vsocket ? 0 : -1;
>  }
> 
> -struct vhost_device_ops const *
> +struct rte_vhost_device_ops const *
>  vhost_driver_callback_get(const char *path)
>  {
>  	struct vhost_user_socket *vsocket;
> diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> index 05ccc35f37..080c67ef99 100644
> --- a/lib/vhost/vhost.h
> +++ b/lib/vhost/vhost.h
> @@ -394,7 +394,7 @@ struct virtio_net {
>  	uint16_t		mtu;
>  	uint8_t			status;
> 
> -	struct vhost_device_ops const *notify_ops;
> +	struct rte_vhost_device_ops const *notify_ops;
> 
>  	uint32_t		nr_guest_pages;
>  	uint32_t		max_guest_pages;
> @@ -702,7 +702,7 @@ void vhost_enable_linearbuf(int vid);
>  int vhost_enable_guest_notification(struct virtio_net *dev,
>  		struct vhost_virtqueue *vq, int enable);
> 
> -struct vhost_device_ops const *vhost_driver_callback_get(const char *path);
> +struct rte_vhost_device_ops const *vhost_driver_callback_get(const char
> *path);
> 
>  /*
>   * Backend-specific cleanup.
> --
> 2.31.1


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: remove deprecation notice for vhost
  2021-11-03  5:25  3% ` Xia, Chenbo
@ 2021-11-03  7:03  0%   ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-11-03  7:03 UTC (permalink / raw)
  To: Xia, Chenbo; +Cc: dev, Kevin Traynor, Maxime Coquelin, Ray Kinsella

On Wed, Nov 3, 2021 at 6:25 AM Xia, Chenbo <chenbo.xia@intel.com> wrote:
>
> Hi,
>
> I notice that, from the start, I should not have sent the notice, as the ABI policy says:
>
> For removing the experimental tag associated with an API, deprecation notice is not required.
>
> Sorry for the mistake.

It is not required, but announcing does not hurt.
A real issue would be the opposite :-).

Your patch lgtm, thanks Chenbo.

-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: remove deprecation notice for vhost
  @ 2021-11-03  5:25  3% ` Xia, Chenbo
  2021-11-03  7:03  0%   ` David Marchand
  0 siblings, 1 reply; 200+ results
From: Xia, Chenbo @ 2021-11-03  5:25 UTC (permalink / raw)
  To: dev; +Cc: Kevin Traynor, Maxime Coquelin, Ray Kinsella

Hi,

I notice that, from the start, I should not have sent the notice, as the ABI policy says:

For removing the experimental tag associated with an API, deprecation notice is not required.

Sorry for the mistake.
/Chenbo

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Chenbo Xia
> Sent: Wednesday, November 3, 2021 1:00 PM
> To: dev@dpdk.org
> Cc: Ray Kinsella <mdr@ashroe.eu>; Kevin Traynor <ktraynor@redhat.com>; Maxime
> Coquelin <maxime.coquelin@redhat.com>
> Subject: [dpdk-dev] [PATCH] doc: remove deprecation notice for vhost
> 
> Ten vhost APIs were announced to be stable and promoted in below
> commit, so remove the related deprecation notice.
> 
> Fixes: 945ef8a04098 ("vhost: promote some APIs to stable")
> 
> Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
> Reported-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  doc/guides/rel_notes/deprecation.rst | 8 --------
>  1 file changed, 8 deletions(-)
> 
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index 4366015b01..4f7e95f05f 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -114,14 +114,6 @@ Deprecation Notices
>  * vhost: rename ``struct vhost_device_ops`` to ``struct
> rte_vhost_device_ops``
>    in DPDK v21.11.
> 
> -* vhost: The experimental tags of ``rte_vhost_driver_get_protocol_features``,
> -  ``rte_vhost_driver_get_queue_num``, ``rte_vhost_crypto_create``,
> -  ``rte_vhost_crypto_free``, ``rte_vhost_crypto_fetch_requests``,
> -  ``rte_vhost_crypto_finalize_requests``, ``rte_vhost_crypto_set_zero_copy``,
> -  ``rte_vhost_va_from_guest_pa``, ``rte_vhost_extern_callback_register``,
> -  and ``rte_vhost_driver_set_protocol_features`` functions will be removed
> -  and the API functions will be made stable in DPDK 21.11.
> -
>  * cryptodev: Hide structures ``rte_cryptodev_sym_session`` and
>    ``rte_cryptodev_asym_session`` to remove unnecessary indirection between
>    session and the private data of session. An opaque pointer can be exposed
> --
> 2.17.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v20 0/5] Add PIE support for HQoS library
  2021-10-28 10:17  3%           ` [dpdk-dev] [PATCH v19 " Liguzinski, WojciechX
@ 2021-11-02 23:57  3%             ` Liguzinski, WojciechX
  2021-11-03 17:52  0%               ` Thomas Monjalon
  2021-11-04 10:40  3%               ` [dpdk-dev] [PATCH v21 0/3] " Liguzinski, WojciechX
  0 siblings, 2 replies; 200+ results
From: Liguzinski, WojciechX @ 2021-11-02 23:57 UTC (permalink / raw)
  To: dev, jasvinder.singh, cristian.dumitrescu
  Cc: megha.ajmera, Wojciech Liguzinski

From: Wojciech Liguzinski <wojciechx.liguzinski@intel.com>

The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat
problem, a situation in which excess buffering in the network causes high latency and
latency variation. Currently, it supports RED for active queue management. However, more
advanced queue management is required to address this problem and provide a desirable
quality of service to users.

This solution (RFC) proposes the use of a new algorithm called "PIE" (Proportional Integral
controller Enhanced), which can effectively and directly control queuing latency to address
the bufferbloat problem.
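
(Side note: the core of PIE per RFC 8033 is a periodic proportional-integral
update of a drop probability; a rough sketch of that control law, with
illustrative names and example gains rather than the exact API added by this
series, is below.)

/* Run every t_update interval; qdelay is the current queuing-delay
 * estimate, qdelay_old the estimate from the previous interval. */
static double pie_drop_prob_update(double drop_prob, double qdelay,
		double qdelay_old, double qdelay_target)
{
	const double alpha = 0.125, beta = 1.25;	/* example gains */

	drop_prob += alpha * (qdelay - qdelay_target) +
			beta * (qdelay - qdelay_old);
	if (drop_prob < 0.0)
		drop_prob = 0.0;
	else if (drop_prob > 1.0)
		drop_prob = 1.0;
	/* On each enqueue, drop the packet if a random draw < drop_prob. */
	return drop_prob;
}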

The implementation of the mentioned functionality includes modifying existing data
structures, adding a new set of data structures to the library, and adding PIE-related APIs.
This affects structures in the public API/ABI, which is why a deprecation notice is going
to be prepared and sent.

Wojciech Liguzinski (5):
  sched: add PIE based congestion management
  example/qos_sched: add PIE support
  example/ip_pipeline: add PIE support
  doc/guides/prog_guide: added PIE
  app/test: add tests for PIE

 app/test/meson.build                         |    4 +
 app/test/test_pie.c                          | 1065 ++++++++++++++++++
 config/rte_config.h                          |    1 -
 doc/guides/prog_guide/glossary.rst           |    3 +
 doc/guides/prog_guide/qos_framework.rst      |   64 +-
 doc/guides/prog_guide/traffic_management.rst |   13 +-
 drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
 examples/ip_pipeline/tmgr.c                  |  142 +--
 examples/qos_sched/cfg_file.c                |  127 ++-
 examples/qos_sched/cfg_file.h                |    5 +
 examples/qos_sched/init.c                    |   27 +-
 examples/qos_sched/main.h                    |    3 +
 examples/qos_sched/profile.cfg               |  196 ++--
 lib/sched/meson.build                        |    3 +-
 lib/sched/rte_pie.c                          |   86 ++
 lib/sched/rte_pie.h                          |  398 +++++++
 lib/sched/rte_sched.c                        |  259 +++--
 lib/sched/rte_sched.h                        |   64 +-
 lib/sched/version.map                        |    4 +
 19 files changed, 2189 insertions(+), 281 deletions(-)
 create mode 100644 app/test/test_pie.c
 create mode 100644 lib/sched/rte_pie.c
 create mode 100644 lib/sched/rte_pie.h

-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH] ip_frag: increase default value for config parameter
@ 2021-11-02 19:03 14% Konstantin Ananyev
  2021-11-08 22:08  0% ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2021-11-02 19:03 UTC (permalink / raw)
  To: dev; +Cc: techboard, bruce.richardson, koncept1, Konstantin Ananyev

Increase the default value of the config parameter RTE_LIBRTE_IP_FRAG_MAX_FRAG
from 4 to 8. This parameter controls the maximum number of fragments per
packet in the IP reassembly table. Increasing this value from 4 to 8 will
allow users to cover the common case of a jumbo packet size of 9KB with
fragments of the default frame size (1500B).
As RTE_LIBRTE_IP_FRAG_MAX_FRAG is used in the definition of a public
structure (struct rte_ip_frag_death_row), this is an ABI change.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 config/rte_config.h                    | 2 +-
 doc/guides/rel_notes/release_21_11.rst | 8 ++++++++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/config/rte_config.h b/config/rte_config.h
index 1a66b42fcc..08e70af497 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -82,7 +82,7 @@
 #define RTE_RAWDEV_MAX_DEVS 64
 
 /* ip_fragmentation defines */
-#define RTE_LIBRTE_IP_FRAG_MAX_FRAG 4
+#define RTE_LIBRTE_IP_FRAG_MAX_FRAG 8
 #undef RTE_LIBRTE_IP_FRAG_TBL_STAT
 
 /* rte_power defines */
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 502cc5ceb2..4d0f112b00 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -543,6 +543,14 @@ ABI Changes
 
 * eventdev: Re-arranged fields in ``rte_event_timer`` to remove holes.
 
+* Increase default value for config parameter ``RTE_LIBRTE_IP_FRAG_MAX_FRAG``
+  from ``4`` to ``8``. This parameter controls maximum number of fragments
+  per packet in ip reassembly table. Increasing this value from ``4`` to ``8``
+  will allow users to cover common case with jumbo packet size of ``9KB``
+  and fragments with default frame size ``(1500B)``.
+  As ``RTE_LIBRTE_IP_FRAG_MAX_FRAG`` is used in definition of
+  public structure ``rte_ip_frag_death_row``, this is an ABI change.
+
 
 Known Issues
 ------------
-- 
2.25.1


^ permalink raw reply	[relevance 14%]

* Re: [dpdk-dev] Overriding rte_config.h
  2021-11-02 14:19  3%         ` Bruce Richardson
@ 2021-11-02 15:00  0%           ` Ananyev, Konstantin
  2021-11-03 14:38  0%             ` Ben Magistro
  0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-11-02 15:00 UTC (permalink / raw)
  To: Richardson, Bruce; +Cc: Ben Magistro, dev


> > > > > On Fri, Oct 29, 2021 at 09:48:30AM -0400, Ben Magistro wrote:
> > > > > > With the transition to meson, what is the best way to provide custom values
> > > > > > to parameters in rte_config.h?  When using makefiles, (from memory, I
> > > > > > think) we used common_base as a template that was copied in as a
> > > > > > replacement for defconfig_x86....  Our current thinking is to apply a
> > > > > > locally maintained patch so that we can track custom values easier to the
> > > > > > rte_config.h file unless there is another way to pass in an overridden
> > > > > > value.  As an example, one of the values we are customizing is
> > > > > > IP_FRAG_MAX_FRAG.
> > > > > >
> > > > > > Cheers,
> > > > > >
> > > > > There is no one defined way for overriding values in rte_config with the
> > > > > meson build system, as values there are ones that should rarely need to be
> > > > > overridden. If it's the case that one does need tuning, we generally want
> > > > > to look to either change the default so it works for everyone, or
> > > > > alternatively look to replace it with a runtime option.
> > > > >
> > > > > In the absense of that, a locally maintained patch may be reasonable. To
> > > > > what value do you want to change MAX_FRAG? Would it be worth considering as
> > > > > a newer default value in DPDK itself, since the current default is fairly
> > > > > low?
> > > >
> > > > That might be an option, with IP_FRAG_MAX_FRAG==8 it should be able
> > > > to cover common jumbo frame size (9K) pretty easily.
> > > > As a drawback default reassembly table size will double.
> > >
> > > Maybe not. I'm not an expert in the library, but it seems the basic struct
> > > used for tracking the packets and fragments is "struct ip_frag_pkt". Due to
> > > the other data in the struct and the linked-list overheads, the actual size
> > > increase when doubling MAX_FRAG from 4 to 8 is only 25%. According to gdb
> > > on my debug build it goes from 192B to 256B.
> >
> > Ah yes, you're right, struct ip_frag should fit into 16B; the key seems the biggest one.
> >
> > >
> > > > Even better would be to go a step further and rework lib/ip_frag
> > > > to make it configurable runtime parameter.
> > > >
> > > Agree. However, that's not as quick a fix as just increasing the default
> > > max segs value which could be done immediately if there is consensus on it.
> >
> > You mean for 21.11?
> > I don't mind in principle, but would like to know other people's thoughts here.
> > Another thing - we didn't announce it in advance, and it is definitely an ABI change.
> 
> I notice from this patch you submitted that the main structure in question
> is being hidden[1]. Will it still be an ABI change if that patch is merged
> in?

Yes, it would unfortunately:
struct rte_ip_frag_death_row still remains public.

> Alternatively, should a fragment count increase be considered as part of
> that change?

I don't think they are really related.
This patch just hides some structs that are already marked as 'internal'
and not used by public API. It doesn't make any changes in the public structs layout.
But I suppose we can bring that question (increase of RTE_LIBRTE_IP_FRAG_MAX_FRAG) to
tomorrow's TB meeting, and ask for approval.
 
> /Bruce
> 
> [1] http://patches.dpdk.org/project/dpdk/patch/20211101124915.9640-1-konstantin.ananyev@intel.com/

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] Overriding rte_config.h
  2021-11-02 12:24  3%       ` Ananyev, Konstantin
@ 2021-11-02 14:19  3%         ` Bruce Richardson
  2021-11-02 15:00  0%           ` Ananyev, Konstantin
  0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2021-11-02 14:19 UTC (permalink / raw)
  To: Ananyev, Konstantin; +Cc: Ben Magistro, dev

On Tue, Nov 02, 2021 at 12:24:43PM +0000, Ananyev, Konstantin wrote:
> 
> > > > On Fri, Oct 29, 2021 at 09:48:30AM -0400, Ben Magistro wrote:
> > > > > With the transition to meson, what is the best way to provide custom values
> > > > > to parameters in rte_config.h?  When using makefiles, (from memory, I
> > > > > think) we used common_base as a template that was copied in as a
> > > > > replacement for defconfig_x86....  Our current thinking is to apply a
> > > > > locally maintained patch so that we can track custom values easier to the
> > > > > rte_config.h file unless there is another way to pass in an overridden
> > > > > value.  As an example, one of the values we are customizing is
> > > > > IP_FRAG_MAX_FRAG.
> > > > >
> > > > > Cheers,
> > > > >
> > > > There is no one defined way for overriding values in rte_config with the
> > > > meson build system, as values there are ones that should rarely need to be
> > > > overridden. If it's the case that one does need tuning, we generally want
> > > > to look to either change the default so it works for everyone, or
> > > > alternatively look to replace it with a runtime option.
> > > >
> > > > In the absense of that, a locally maintained patch may be reasonable. To
> > > > what value do you want to change MAX_FRAG? Would it be worth considering as
> > > > a newer default value in DPDK itself, since the current default is fairly
> > > > low?
> > >
> > > That might be an option, with IP_FRAG_MAX_FRAG==8 it should be able
> > > to cover common jumbo frame size (9K) pretty easily.
> > > As a drawback default reassembly table size will double.
> >
> > Maybe not. I'm not an expert in the library, but it seems the basic struct
> > used for tracking the packets and fragments is "struct ip_frag_pkt". Due to
> > the other data in the struct and the linked-list overheads, the actual size
> > increase when doubling MAX_FRAG from 4 to 8 is only 25%. According to gdb
> > on my debug build it goes from 192B to 256B.
> 
> Ah yes, you're right, struct ip_frag should fit into 16B; the key seems the biggest one.
> 
> >
> > > Even better would be to go a step further and rework lib/ip_frag
> > > to make it configurable runtime parameter.
> > >
> > Agree. However, that's not as quick a fix as just increasing the default
> > max segs value which could be done immediately if there is consensus on it.
> 
> You mean for 21.11?
> I don't mind in principle, but would like to know other people's thoughts here.
> Another thing - we didn't announce it in advance, and it is definitely an ABI change.

I notice from this patch you submitted that the main structure in question
is being hidden[1]. Will it still be an ABI change if that patch is merged
in? Alternatively, should a fragment count increase be considered as part of
that change?

/Bruce

[1] http://patches.dpdk.org/project/dpdk/patch/20211101124915.9640-1-konstantin.ananyev@intel.com/

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] Overriding rte_config.h
  @ 2021-11-02 12:24  3%       ` Ananyev, Konstantin
  2021-11-02 14:19  3%         ` Bruce Richardson
  0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-11-02 12:24 UTC (permalink / raw)
  To: Richardson, Bruce; +Cc: Ben Magistro, dev


> > > On Fri, Oct 29, 2021 at 09:48:30AM -0400, Ben Magistro wrote:
> > > > With the transition to meson, what is the best way to provide custom values
> > > > to parameters in rte_config.h?  When using makefiles, (from memory, I
> > > > think) we used common_base as a template that was copied in as a
> > > > replacement for defconfig_x86....  Our current thinking is to apply a
> > > > locally maintained patch so that we can track custom values easier to the
> > > > rte_config.h file unless there is another way to pass in an overridden
> > > > value.  As an example, one of the values we are customizing is
> > > > IP_FRAG_MAX_FRAG.
> > > >
> > > > Cheers,
> > > >
> > > There is no one defined way for overriding values in rte_config with the
> > > meson build system, as values there are ones that should rarely need to be
> > > overridden. If it's the case that one does need tuning, we generally want
> > > to look to either change the default so it works for everyone, or
> > > alternatively look to replace it with a runtime option.
> > >
> > > In the absense of that, a locally maintained patch may be reasonable. To
> > > what value do you want to change MAX_FRAG? Would it be worth considering as
> > > a newer default value in DPDK itself, since the current default is fairly
> > > low?
> >
> > That might be an option, with IP_FRAG_MAX_FRAG==8 it should be able
> > to cover common jumbo frame size (9K) pretty easily.
> > As a drawback default reassembly table size will double.
> 
> Maybe not. I'm not an expert in the library, but it seems the basic struct
> used for tracking the packets and fragments is "struct ip_frag_pkt". Due to
> the other data in the struct and the linked-list overheads, the actual size
> increase when doubling MAX_FRAG from 4 to 8 is only 25%. According to gdb
> on my debug build it goes from 192B to 256B.

Ah yes, you're right, struct ip_frag should fit into 16B; the key seems the biggest one.

> 
> > Even better would be to go a step further and rework lib/ip_frag
> > to make it configurable runtime parameter.
> >
> Agree. However, that's not as quick a fix as just increasing the default
> max segs value which could be done immediately if there is consensus on it.

You mean for 21.11?
I don't mind in principle, but would like to know other people's thoughts here.
Another thing - we didn't announce it in advance, and it is definitely an ABI change.

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH] vhost: rename driver callbacks struct
@ 2021-11-02 10:47  4% Maxime Coquelin
  2021-11-03  8:16  0% ` Xia, Chenbo
  0 siblings, 1 reply; 200+ results
From: Maxime Coquelin @ 2021-11-02 10:47 UTC (permalink / raw)
  To: dev, chenbo.xia, david.marchand; +Cc: Maxime Coquelin

As previously announced, this patch renames struct
vhost_device_ops to struct rte_vhost_device_ops.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 doc/guides/rel_notes/deprecation.rst   | 3 ---
 doc/guides/rel_notes/release_21_11.rst | 2 ++
 drivers/net/vhost/rte_eth_vhost.c      | 2 +-
 examples/vdpa/main.c                   | 2 +-
 examples/vhost/main.c                  | 2 +-
 examples/vhost_blk/vhost_blk.c         | 2 +-
 examples/vhost_blk/vhost_blk.h         | 2 +-
 examples/vhost_crypto/main.c           | 2 +-
 lib/vhost/rte_vhost.h                  | 4 ++--
 lib/vhost/socket.c                     | 6 +++---
 lib/vhost/vhost.h                      | 4 ++--
 11 files changed, 15 insertions(+), 16 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4366015b01..a9e2433988 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -111,9 +111,6 @@ Deprecation Notices
   ``rte_vhost_host_notifier_ctrl`` and ``rte_vdpa_relay_vring_used`` vDPA
   driver interface will be marked as internal in DPDK v21.11.
 
-* vhost: rename ``struct vhost_device_ops`` to ``struct rte_vhost_device_ops``
-  in DPDK v21.11.
-
 * vhost: The experimental tags of ``rte_vhost_driver_get_protocol_features``,
   ``rte_vhost_driver_get_queue_num``, ``rte_vhost_crypto_create``,
   ``rte_vhost_crypto_free``, ``rte_vhost_crypto_fetch_requests``,
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 98d50a160b..dea038e3ac 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -564,6 +564,8 @@ ABI Changes
 
 * eventdev: Re-arranged fields in ``rte_event_timer`` to remove holes.
 
+* vhost: rename ``struct vhost_device_ops`` to ``struct rte_vhost_device_ops``.
+
 
 Known Issues
 ------------
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 8bb3b27d01..070f0e6dfd 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -975,7 +975,7 @@ vring_state_changed(int vid, uint16_t vring, int enable)
 	return 0;
 }
 
-static struct vhost_device_ops vhost_ops = {
+static struct rte_vhost_device_ops vhost_ops = {
 	.new_device          = new_device,
 	.destroy_device      = destroy_device,
 	.vring_state_changed = vring_state_changed,
diff --git a/examples/vdpa/main.c b/examples/vdpa/main.c
index 097a267b8c..5ab07655ae 100644
--- a/examples/vdpa/main.c
+++ b/examples/vdpa/main.c
@@ -153,7 +153,7 @@ destroy_device(int vid)
 	}
 }
 
-static const struct vhost_device_ops vdpa_sample_devops = {
+static const struct rte_vhost_device_ops vdpa_sample_devops = {
 	.new_device = new_device,
 	.destroy_device = destroy_device,
 };
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 58e12aa710..8685dfd81b 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1519,7 +1519,7 @@ vring_state_changed(int vid, uint16_t queue_id, int enable)
  * These callback allow devices to be added to the data core when configuration
  * has been fully complete.
  */
-static const struct vhost_device_ops virtio_net_device_ops =
+static const struct rte_vhost_device_ops virtio_net_device_ops =
 {
 	.new_device =  new_device,
 	.destroy_device = destroy_device,
diff --git a/examples/vhost_blk/vhost_blk.c b/examples/vhost_blk/vhost_blk.c
index fe2b4e4803..feadacc62e 100644
--- a/examples/vhost_blk/vhost_blk.c
+++ b/examples/vhost_blk/vhost_blk.c
@@ -753,7 +753,7 @@ new_connection(int vid)
 	return 0;
 }
 
-struct vhost_device_ops vhost_blk_device_ops = {
+struct rte_vhost_device_ops vhost_blk_device_ops = {
 	.new_device =  new_device,
 	.destroy_device = destroy_device,
 	.new_connection = new_connection,
diff --git a/examples/vhost_blk/vhost_blk.h b/examples/vhost_blk/vhost_blk.h
index 540998eb1b..975f0b4065 100644
--- a/examples/vhost_blk/vhost_blk.h
+++ b/examples/vhost_blk/vhost_blk.h
@@ -104,7 +104,7 @@ struct vhost_blk_task {
 };
 
 extern struct vhost_blk_ctrlr *g_vhost_ctrlr;
-extern struct vhost_device_ops vhost_blk_device_ops;
+extern struct rte_vhost_device_ops vhost_blk_device_ops;
 
 int vhost_bdev_process_blk_commands(struct vhost_block_dev *bdev,
 				     struct vhost_blk_task *task);
diff --git a/examples/vhost_crypto/main.c b/examples/vhost_crypto/main.c
index dea7dcbd07..7d75623a5e 100644
--- a/examples/vhost_crypto/main.c
+++ b/examples/vhost_crypto/main.c
@@ -363,7 +363,7 @@ destroy_device(int vid)
 	RTE_LOG(INFO, USER1, "Vhost Crypto Device %i Removed\n", vid);
 }
 
-static const struct vhost_device_ops virtio_crypto_device_ops = {
+static const struct rte_vhost_device_ops virtio_crypto_device_ops = {
 	.new_device =  new_device,
 	.destroy_device = destroy_device,
 };
diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
index 6f0915b98f..af0afbcf60 100644
--- a/lib/vhost/rte_vhost.h
+++ b/lib/vhost/rte_vhost.h
@@ -264,7 +264,7 @@ struct rte_vhost_user_extern_ops {
 /**
  * Device and vring operations.
  */
-struct vhost_device_ops {
+struct rte_vhost_device_ops {
 	int (*new_device)(int vid);		/**< Add device. */
 	void (*destroy_device)(int vid);	/**< Remove device. */
 
@@ -606,7 +606,7 @@ rte_vhost_get_negotiated_protocol_features(int vid,
 
 /* Register callbacks. */
 int rte_vhost_driver_callback_register(const char *path,
-	struct vhost_device_ops const * const ops);
+	struct rte_vhost_device_ops const * const ops);
 
 /**
  *
diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
index c6548608a3..82963c1e6d 100644
--- a/lib/vhost/socket.c
+++ b/lib/vhost/socket.c
@@ -58,7 +58,7 @@ struct vhost_user_socket {
 
 	struct rte_vdpa_device *vdpa_dev;
 
-	struct vhost_device_ops const *notify_ops;
+	struct rte_vhost_device_ops const *notify_ops;
 };
 
 struct vhost_user_connection {
@@ -1093,7 +1093,7 @@ rte_vhost_driver_unregister(const char *path)
  */
 int
 rte_vhost_driver_callback_register(const char *path,
-	struct vhost_device_ops const * const ops)
+	struct rte_vhost_device_ops const * const ops)
 {
 	struct vhost_user_socket *vsocket;
 
@@ -1106,7 +1106,7 @@ rte_vhost_driver_callback_register(const char *path,
 	return vsocket ? 0 : -1;
 }
 
-struct vhost_device_ops const *
+struct rte_vhost_device_ops const *
 vhost_driver_callback_get(const char *path)
 {
 	struct vhost_user_socket *vsocket;
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 05ccc35f37..080c67ef99 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -394,7 +394,7 @@ struct virtio_net {
 	uint16_t		mtu;
 	uint8_t			status;
 
-	struct vhost_device_ops const *notify_ops;
+	struct rte_vhost_device_ops const *notify_ops;
 
 	uint32_t		nr_guest_pages;
 	uint32_t		max_guest_pages;
@@ -702,7 +702,7 @@ void vhost_enable_linearbuf(int vid);
 int vhost_enable_guest_notification(struct virtio_net *dev,
 		struct vhost_virtqueue *vq, int enable);
 
-struct vhost_device_ops const *vhost_driver_callback_get(const char *path);
+struct rte_vhost_device_ops const *vhost_driver_callback_get(const char *path);
 
 /*
  * Backend-specific cleanup.
-- 
2.31.1


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v3] vhost: mark vDPA driver API as internal
@ 2021-11-02  9:56  4% Maxime Coquelin
  0 siblings, 0 replies; 200+ results
From: Maxime Coquelin @ 2021-11-02  9:56 UTC (permalink / raw)
  To: dev, chenbo.xia, xuemingl, xiao.w.wang, david.marchand
  Cc: Maxime Coquelin, Thomas Monjalon

This patch marks the vDPA driver APIs as internal and
renames the corresponding header file to vdpa_driver.h.
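
As an illustration, a minimal driver-side sketch of the now-internal
interface, based on the registration prototype visible in the diff below
(the ops instance and probe function are placeholders, not a real driver):

#include <vdpa_driver.h>	/* was <rte_vdpa_dev.h> before this patch */

/* A real driver fills in dev_conf, dev_close, etc. */
static struct rte_vdpa_dev_ops my_ops;

static int my_probe(struct rte_device *rte_dev)
{
	struct rte_vdpa_device *vdev;

	vdev = rte_vdpa_register_device(rte_dev, &my_ops);
	return vdev == NULL ? -1 : 0;
}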

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
---
Changes in v3:
==============
- Update deprecation notice and release note

Changes in v2:
=============
- Alphabetical ordering in version.map (David)
- Rename header to vdpa_driver.h (David)
- Add Techboard in Cc to vote for API breakage exception

 doc/guides/rel_notes/deprecation.rst        |  4 ----
 doc/guides/rel_notes/release_21_11.rst      |  4 ++++
 drivers/vdpa/ifc/ifcvf_vdpa.c               |  2 +-
 drivers/vdpa/mlx5/mlx5_vdpa.h               |  2 +-
 lib/vhost/meson.build                       |  4 +++-
 lib/vhost/vdpa.c                            |  2 +-
 lib/vhost/{rte_vdpa_dev.h => vdpa_driver.h} | 12 +++++++++---
 lib/vhost/version.map                       | 13 +++++++++----
 lib/vhost/vhost.h                           |  2 +-
 9 files changed, 29 insertions(+), 16 deletions(-)
 rename lib/vhost/{rte_vdpa_dev.h => vdpa_driver.h} (95%)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4366015b01..ce1b727e77 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -107,10 +107,6 @@ Deprecation Notices
   is deprecated as ambiguous with respect to the embedded switch. The use of
   these attributes will become invalid starting from DPDK 22.11.
 
-* vhost: ``rte_vdpa_register_device``, ``rte_vdpa_unregister_device``,
-  ``rte_vhost_host_notifier_ctrl`` and ``rte_vdpa_relay_vring_used`` vDPA
-  driver interface will be marked as internal in DPDK v21.11.
-
 * vhost: rename ``struct vhost_device_ops`` to ``struct rte_vhost_device_ops``
   in DPDK v21.11.
 
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 98d50a160b..7c2c976d47 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -475,6 +475,10 @@ API Changes
 * eventdev: Moved memory used by timer adapters to hugepage. This will prevent
   TLB misses if any and aligns to memory structure of other subsystems.
 
+* vhost: ``rte_vdpa_register_device``, ``rte_vdpa_unregister_device``,
+  ``rte_vhost_host_notifier_ctrl`` and ``rte_vdpa_relay_vring_used`` vDPA
+  driver interface are marked as internal.
+
 
 ABI Changes
 -----------
diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
index dd5251d382..3853c4cf7e 100644
--- a/drivers/vdpa/ifc/ifcvf_vdpa.c
+++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
@@ -17,7 +17,7 @@
 #include <rte_bus_pci.h>
 #include <rte_vhost.h>
 #include <rte_vdpa.h>
-#include <rte_vdpa_dev.h>
+#include <vdpa_driver.h>
 #include <rte_vfio.h>
 #include <rte_spinlock.h>
 #include <rte_log.h>
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index cf4f384fa4..a6c9404cb0 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -12,7 +12,7 @@
 #pragma GCC diagnostic ignored "-Wpedantic"
 #endif
 #include <rte_vdpa.h>
-#include <rte_vdpa_dev.h>
+#include <vdpa_driver.h>
 #include <rte_vhost.h>
 #ifdef PEDANTIC
 #pragma GCC diagnostic error "-Wpedantic"
diff --git a/lib/vhost/meson.build b/lib/vhost/meson.build
index 2d8fe0239f..cdb37a4814 100644
--- a/lib/vhost/meson.build
+++ b/lib/vhost/meson.build
@@ -29,9 +29,11 @@ sources = files(
 )
 headers = files(
         'rte_vdpa.h',
-        'rte_vdpa_dev.h',
         'rte_vhost.h',
         'rte_vhost_async.h',
         'rte_vhost_crypto.h',
 )
+driver_sdk_headers = files(
+        'vdpa_driver.h',
+)
 deps += ['ethdev', 'cryptodev', 'hash', 'pci']
diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c
index 6dd91859ac..09ad5d866e 100644
--- a/lib/vhost/vdpa.c
+++ b/lib/vhost/vdpa.c
@@ -17,7 +17,7 @@
 #include <rte_tailq.h>
 
 #include "rte_vdpa.h"
-#include "rte_vdpa_dev.h"
+#include "vdpa_driver.h"
 #include "vhost.h"
 
 /** Double linked list of vDPA devices. */
diff --git a/lib/vhost/rte_vdpa_dev.h b/lib/vhost/vdpa_driver.h
similarity index 95%
rename from lib/vhost/rte_vdpa_dev.h
rename to lib/vhost/vdpa_driver.h
index b0f494815f..fc2d6acedd 100644
--- a/lib/vhost/rte_vdpa_dev.h
+++ b/lib/vhost/vdpa_driver.h
@@ -2,11 +2,13 @@
  * Copyright(c) 2018 Intel Corporation
  */
 
-#ifndef _RTE_VDPA_H_DEV_
-#define _RTE_VDPA_H_DEV_
+#ifndef _VDPA_DRIVER_H_
+#define _VDPA_DRIVER_H_
 
 #include <stdbool.h>
 
+#include <rte_compat.h>
+
 #include "rte_vhost.h"
 #include "rte_vdpa.h"
 
@@ -88,6 +90,7 @@ struct rte_vdpa_device {
  * @return
  *  vDPA device pointer on success, NULL on failure
  */
+__rte_internal
 struct rte_vdpa_device *
 rte_vdpa_register_device(struct rte_device *rte_dev,
 		struct rte_vdpa_dev_ops *ops);
@@ -100,6 +103,7 @@ rte_vdpa_register_device(struct rte_device *rte_dev,
  * @return
  *  device id on success, -1 on failure
  */
+__rte_internal
 int
 rte_vdpa_unregister_device(struct rte_vdpa_device *dev);
 
@@ -115,6 +119,7 @@ rte_vdpa_unregister_device(struct rte_vdpa_device *dev);
  * @return
  *  0 on success, -1 on failure
  */
+__rte_internal
 int
 rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable);
 
@@ -132,7 +137,8 @@ rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable);
  * @return
  *  number of synced used entries on success, -1 on failure
  */
+__rte_internal
 int
 rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m);
 
-#endif /* _RTE_VDPA_DEV_H_ */
+#endif /* _VDPA_DRIVER_H_ */
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index c8599ddb97..a7ef7f1976 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -8,10 +8,7 @@ DPDK_22 {
 	rte_vdpa_get_rte_device;
 	rte_vdpa_get_stats;
 	rte_vdpa_get_stats_names;
-	rte_vdpa_register_device;
-	rte_vdpa_relay_vring_used;
 	rte_vdpa_reset_stats;
-	rte_vdpa_unregister_device;
 	rte_vhost_avail_entries;
 	rte_vhost_clr_inflight_desc_packed;
 	rte_vhost_clr_inflight_desc_split;
@@ -52,7 +49,6 @@ DPDK_22 {
 	rte_vhost_get_vring_base_from_inflight;
 	rte_vhost_get_vring_num;
 	rte_vhost_gpa_to_vva;
-	rte_vhost_host_notifier_ctrl;
 	rte_vhost_log_used_vring;
 	rte_vhost_log_write;
 	rte_vhost_rx_queue_count;
@@ -89,3 +85,12 @@ EXPERIMENTAL {
 	# added in 21.11
 	rte_vhost_get_monitor_addr;
 };
+
+INTERNAL {
+	global;
+
+	rte_vdpa_register_device;
+	rte_vdpa_relay_vring_used;
+	rte_vdpa_unregister_device;
+	rte_vhost_host_notifier_ctrl;
+};
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 05ccc35f37..c07219296d 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -22,7 +22,7 @@
 
 #include "rte_vhost.h"
 #include "rte_vdpa.h"
-#include "rte_vdpa_dev.h"
+#include "vdpa_driver.h"
 
 #include "rte_vhost_async.h"
 
-- 
2.31.1


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1
  2021-10-28  7:10  0% ` Jiang, YuX
@ 2021-11-01 11:53  0%   ` Jiang, YuX
  0 siblings, 0 replies; 200+ results
From: Jiang, YuX @ 2021-11-01 11:53 UTC (permalink / raw)
  To: Thomas Monjalon, dev (dev@dpdk.org)
  Cc: Devlin, Michelle, Mcnamara, John, Yigit, Ferruh

> -----Original Message-----
> From: Jiang, YuX
> Sent: Thursday, October 28, 2021 3:11 PM
> To: Thomas Monjalon <thomas@monjalon.net>; dev (dev@dpdk.org)
> <dev@dpdk.org>
> Cc: Devlin, Michelle <michelle.devlin@intel.com>; Mcnamara, John
> <john.mcnamara@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>
> Subject: RE: [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1
>
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Thomas Monjalon
> > Sent: Tuesday, October 26, 2021 5:41 AM
> > To: announce@dpdk.org
> > Subject: [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1
> >
> > A new DPDK release candidate is ready for testing:
> >     https://git.dpdk.org/dpdk/tag/?id=v21.11-rc1
> >
> > There are 1171 new patches in this snapshot, big as expected.
> >
> > Release notes:
> >     https://doc.dpdk.org/guides/rel_notes/release_21_11.html
> >
> > Highlights of 21.11-rc1:
> > * General
> >     - more than 512 MSI-X interrupts
> >     - hugetlbfs subdirectories
> >     - mempool flag for non-IO usages
> >     - device class for DMA accelerators
> >     - DMA drivers for Intel DSA and IOAT
> > * Networking
> >     - MTU handling rework
> >     - get all MAC addresses of a port
> >     - RSS based on L3/L4 checksum fields
> >     - flow match on L2TPv2 and PPP
> >     - flow flex parser for custom header
> >     - control delivery of HW Rx metadata
> >     - transfer flows API rework
> >     - shared Rx queue
> >     - Windows support of Intel e1000, ixgbe and iavf
> >     - testpmd multi-process
> >     - pcapng library and dumpcap tool
> > * API/ABI
> >     - API namespace improvements (mempool, mbuf, ethdev)
> >     - API internals hidden (intr, ethdev, security, cryptodev, eventdev,
> > cmdline)
> >     - flags check for future ABI compatibility (memzone, mbuf, mempool)
> >
> > Please test and report issues on bugs.dpdk.org.
> > DPDK 21.11-rc2 is expected in two weeks or less.
> >
> > Thank you everyone
> >
> Update the test status for Intel part. Till now dpdk21.11-rc1 test execution
> rate is 50%. No critical issue is found.
> But one little high issue https://bugs.dpdk.org/show_bug.cgi?id=843 impacts
> cryptodev function and performance test.
> Bad commit id is 8cb5d08db940a6b26f5c5ac03b49bac25e9a7022/Author:
> Harman Kalra <hkalra@marvell.com>. Please help to handle it.
> # Basic Intel(R) NIC testing
> * Build or compile:
>       *Build: cover the build test combination with latest GCC/Clang/ICC
> version and the popular OS revision such as Ubuntu20.04, Fedora34, RHEL8.4,
> etc.
>               - All test done.
>       *Compile: cover the CFLAGES(O0/O1/O2/O3) with popular OS such
> as Ubuntu20.04 and Fedora34.
>               - All test done.
>               - Find one bug: https://bugs.dpdk.org/show_bug.cgi?id=841
> Marvell Dev has provided patch and Intel validation team verify passed.
>                 Patch link:
> http://patchwork.dpdk.org/project/dpdk/patch/20211027131259.11775-1-
> ktejasree@marvell.com/
>       * PF(i40e, ixgbe): test scenarios including
> RTE_FLOW/TSO/Jumboframe/checksum offload/VLAN/VXLAN, etc.
>               - Execution rate is 60%. No new issue is found yet.
>       * VF(i40e, ixgbe): test scenarios including VF-
> RTE_FLOW/TSO/Jumboframe/checksum offload/VLAN/VXLAN, etc.
>
>               - Execution rate is 60%.
>               - One bug https://bugs.dpdk.org/show_bug.cgi?id=845
> about "vm_hotplug: vf testpmd core dumped after executing "device_del
> dev1" in qemu" is found.
>                       Bad commit id is commit
> c2bd9367e18f5b00c1a3c5eb281a512ef52c5dfd Author: Harman Kalra
> <hkalra@marvell.com>
>       * PF/VF(ice): test scenarios including Switch features/Package
> Management/Flow Director/Advanced Tx/Advanced RSS/ACL/DCF/Share
> code update/Flexible Descriptor, etc.
>               - Execution rate is 60%.
>               - One bug about kni_autotest failed on Suse15.3. Trying to
> find bad commit id. Known issues, Intel dev is under investigating.
>
>       * Intel NIC single core/NIC performance: test scenarios including
> PF/VF single core performance test, RFC2544 Zero packet loss performance
> test, etc.
>               - Execution rate is 60%.
>               - One bug about nic single core performance drop 2% is
> found. Bad commit id is commit:  efc6f9104c80d39ec168/Author: Olivier Matz
> <olivier.matz@6wind.com>
>       * Power and IPsec:
>               * Power: test scenarios including bi-
> direction/Telemetry/Empty Poll Lib/Priority Base Frequency, etc.
>                       - All passed.
>               * IPsec: test scenarios including ipsec/ipsec-gw/ipsec library
> basic test - QAT&SW/FIB library, etc.
>                       - Not Start.
> # Basic cryptodev and virtio testing
>       * Virtio: both function and performance test are covered. Such as
> PVP/Virtio_loopback/virtio-user loopback/virtio-net VM2VM perf
> testing/VMAWARE ESXI 7.0u3, etc.
>               - Execution rate is 80%.
>               - Two new bugs are found.
>                       - One about VMware ESXI 7.0U3: failed to start port.
> Intel Dev is under investigating.
>                       - One https://bugs.dpdk.org/show_bug.cgi?id=840
> about "dpdk-pdump capture the pcap file content are wrong" is found.
>                       Bad commit id: commit
> 10f726efe26c55805cf0bf6ca1b80e97b98eb724    //bad commit id Author:
> Stephen Hemminger <stephen@networkplumber.org>
>       * Cryptodev:
>               *Function test: test scenarios including Cryptodev API
> testing/CompressDev ISA-L/QAT/ZLIB PMD Testing/FIPS, etc.
>                       - Execution rate is 60%
>                       - Two new bugs are found.
>                               - One
> https://bugs.dpdk.org/show_bug.cgi?id=843 about crypto performance tests
> for QAT are failing. Bad commit id is
> 8cb5d08db940a6b26f5c5ac03b49bac25e9a7022/Author: Harman Kalra
> <hkalra@marvell.com>
>                               - One
> https://bugs.dpdk.org/show_bug.cgi?id=842 about FIP tests are failing. Bad
> commit id is commit f6849cdcc6ada2a8bc9b82e691eaab1aecf4952f Author:
> Akhil Goyal gakhil@marvell.com
>               *Performance test: test scenarios including Thoughput
> Performance /Cryptodev Latency, etc.
>                       - Execution rate is 10%. Most of performance test are
> blocked by Bug843.

Updating the test status for the Intel part. As of now, dpdk21.11-rc1 testing is almost finished. No critical issue has been found.
One fairly high-impact issue, https://bugs.dpdk.org/show_bug.cgi?id=843, impacts cryptodev function and performance tests.
It has a fix, https://git.dpdk.org/dpdk/commit/?id=eb89595d45ca268ebe6c0cb88f0ae17dba08d8f6, which has been merged into dpdk main.

# Basic Intel(R) NIC testing
* Build or compile:
        *Build: covers the build test combinations with the latest GCC/Clang/ICC versions and popular OS revisions such as Ubuntu20.04, Fedora34, RHEL8.4, etc.
                - All tests done. All passed.
        *Compile: covers the CFLAGS (O0/O1/O2/O3) with popular OSes such as Ubuntu20.04 and RHEL8.4.
                - All tests done.
                - Found one bug: https://bugs.dpdk.org/show_bug.cgi?id=841. Marvell dev has provided a patch and verification passed.
                  Patch link: http://patchwork.dpdk.org/project/dpdk/patch/20211027131259.11775-1-ktejasree@marvell.com/; the patch has been applied to dpdk-next-net-mrvl/for-next-net.
        * PF(i40e, ixgbe): test scenarios including RTE_FLOW/TSO/Jumboframe/checksum offload/VLAN/VXLAN, etc.
                - All tests done.
                - Found 5 new bugs.
                a, https://bugs.dpdk.org/show_bug.cgi?id=863 external_mempool_handler: executing the mempool_autotest command failed on FreeBSD; verification of the patch passed.
                b, https://bugs.dpdk.org/show_bug.cgi?id=864 pmd_stacked_bonded/test_mode_backup_rx: after setting up stacked bonded ports, starting the top-level bond port fails.
                        - Has a patch, but verification failed.
                c, https://bugs.dpdk.org/show_bug.cgi?id=865 launching testpmd with "--vfio-intr=legacy" results in a core dump.
                        - Has a patch from Red Hat; the Intel validation team will verify it later.
                d, when rx_offload rss_hash is set to off, port start automatically re-enables rss_hash: Intel dev is investigating.
                e, checksum_offload/hardware_checksum_check_l4_tx: sctp checksum value incorrect: Intel dev is investigating.
        * VF(i40e, ixgbe): test scenarios including VF-RTE_FLOW/TSO/Jumboframe/checksum offload/VLAN/VXLAN, etc.
                - Execution rate is 60%.
                - Found 3 new bugs; Intel dev is investigating.
                a, https://bugs.dpdk.org/show_bug.cgi?id=845 about "vm_hotplug: vf testpmd core dumped after executing "device_del dev1" in qemu".
                        - Bad commit id is commit c2bd9367e18f5b00c1a3c5eb281a512ef52c5dfd Author: Harman Kalra <hkalra@marvell.com>
                        - Marvell dev has no similar env; the Intel validation team has not found an available app to reproduce this bug yet.
                b, when sending packets with vlan id (1~4095), the vf port can still receive them on i40e-2.17.1, which is not as expected: Intel dev is investigating.
                c, ixgbe_vf_get_extra_queue_information/test_enable_dcb: executing "port config 0 dcb vt on 4 pfc off" failed under testpmd.
        * PF/VF(ice): test scenarios including Switch features/Package Management/Flow Director/Advanced Tx/Advanced RSS/ACL/DCF/Share code update/Flexible Descriptor, etc.
                - All tests done. No new issue was found during 21.11-rc1. Some known issues are being investigated by Intel dev.
        * Intel NIC single core/NIC performance: test scenarios including PF/VF single core performance test, RFC2544 Zero packet loss performance test, etc.
                - All tests done. No big performance drop.
        * Power and IPsec:
                * Power: test scenarios including bi-direction/Telemetry/Empty Poll Lib/Priority Base Frequency, etc.
                        - All passed.
                * IPsec: test scenarios including ipsec/ipsec-gw/ipsec library basic test - QAT&SW/FIB library, etc.
                        - All passed.
# Basic cryptodev and virtio testing
        * Virtio: both function and performance tests are covered, such as PVP/Virtio_loopback/virtio-user loopback/virtio-net VM2VM perf testing/VMware ESXi 7.0u3, etc.
                - All test done.
                - 3 new bugs were found.
                        - One about VMware ESXi 7.0U3: failed to start port. Intel dev is investigating.
                        - One, https://bugs.dpdk.org/show_bug.cgi?id=840, about "dpdk-pdump capture the pcap file content are wrong" was found.
                                Bad commit id: commit 10f726efe26c55805cf0bf6ca1b80e97b98eb724 Author: Stephen Hemminger <stephen@networkplumber.org>
                        - vhost_event_idx_interrupt/wake_up_packed_ring_vhost_user_cores_by_multi_virtio_net_in_vms_with_event_idx_interrupt: after starting 2 packed ring VMs, the lcore can't be woken up.
                                - Intel dev is investigating.
        * Cryptodev:
                *Function test: test scenarios including Cryptodev API testing/CompressDev ISA-L/QAT/ZLIB PMD Testing/FIPS, etc.
                        - All test done.
                        - 2 new bugs are found.
                                - One, https://bugs.dpdk.org/show_bug.cgi?id=843, about crypto performance tests for QAT failing. The patch has been merged into the dpdk main branch.
                                - One, https://bugs.dpdk.org/show_bug.cgi?id=842, about FIPS tests failing. A fix patch exists and verification passed.
                *Performance test: test scenarios including Throughput Performance/Cryptodev Latency, etc.
                        - All test done. No big performance drop. Most of the performance tests were blocked by Bug 843.

BRs
Yu Jiang

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy support for AMD platform
  2021-10-27 14:31  0%                   ` Thomas Monjalon
@ 2021-10-29 16:01  0%                     ` Song, Keesang
  0 siblings, 0 replies; 200+ results
From: Song, Keesang @ 2021-10-29 16:01 UTC (permalink / raw)
  To: Thomas Monjalon, Aman Kumar, Ananyev, Konstantin, Van Haaren, Harry
  Cc: mattias.ronnblom, dev, viacheslavo, Burakov, Anatoly,
	jerinjacobk, Richardson, Bruce, honnappa.nagarahalli,
	Ruifeng Wang, David Christensen, david.marchand, stephen

[AMD Official Use Only]

Hi Thomas,

There are some gaps among us, so I think we really need another quick meeting call to discuss. I will set up a call like the last time on Monday.
Please join in the call if possible.

Thanks,
Keesang

-----Original Message-----
From: Thomas Monjalon <thomas@monjalon.net>
Sent: Wednesday, October 27, 2021 7:31 AM
To: Aman Kumar <aman.kumar@vvdntech.in>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Van Haaren, Harry <harry.van.haaren@intel.com>
Cc: mattias. ronnblom <mattias.ronnblom@ericsson.com>; dev@dpdk.org; viacheslavo@nvidia.com; Burakov, Anatoly <anatoly.burakov@intel.com>; Song, Keesang <Keesang.Song@amd.com>; jerinjacobk@gmail.com; Richardson, Bruce <bruce.richardson@intel.com>; honnappa.nagarahalli@arm.com; Ruifeng Wang <ruifeng.wang@arm.com>; David Christensen <drc@linux.vnet.ibm.com>; david.marchand@redhat.com; stephen@networkplumber.org
Subject: Re: [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy support for AMD platform


27/10/2021 16:10, Van Haaren, Harry:
> From: Aman Kumar <aman.kumar@vvdntech.in> On Wed, Oct 27, 2021 at 5:53
> PM Ananyev, Konstantin <konstantin.ananyev@intel.com> wrote
> >
> > Hi Mattias,
> >
> > > > 6) What is the use-case for this? When would a user *want* to
> > > > use this instead
> > > of rte_memcpy()?
> > > > If the data being loaded is relevant to datapath/packets,
> > > > presumably other
> > > packets might require the
> > > > loaded data, so temporal (normal) loads should be used to cache
> > > > the source
> > > data?
> > >
> > >
> > > I'm not sure if your first question is rhetorical or not, but a
> > > memcpy() in a NT variant is certainly useful. One use case for a
> > > memcpy() with temporal loads and non-temporal stores is if you
> > > need to archive packet payload for (distant, potential) future
> > > use, and want to avoid causing unnecessary LLC evictions while doing so.
> >
> > Yes I agree that there are certainly benefits in using cache-locality hints.
> > There is an open question around if the src or dst or both are non-temporal.
> >
> > In the implementation of this patch, the NT/T type of store is reversed from your use-case:
> > 1) Loads are NT (so loaded data is not cached for future packets)
> > 2) Stores are T (so copied/dst data is now resident in L1/L2)
> >
> > In theory there might even be valid uses for this type of memcpy
> > where loaded data is not needed again soon and stored data is
> > referenced again soon, although I cannot think of any here while typing this mail..
> >
> > I think some use-case examples, and clear documentation on when/how
> > to choose between rte_memcpy() or any (potential future)
> > rte_memcpy_nt() variants is required to progress this patch.
> >
> > Assuming a strong use-case exists, and it can be clearly indicators
> > to users of DPDK APIs which
> > rte_memcpy() to use, we can look at technical details around enabling the implementation.
> >
>
> [Konstantin wrote]:
> +1 here.
> Function behaviour and restrictions (src parameter needs to be 16/32 B
> aligned, etc.), along with expected usage scenarios have to be documented properly.
> Again, as Harry pointed out, I don't see any AMD specific instructions
> in this function, so presumably such function can go into __AVX2__
> code block and no new defines will be required.
>
>
> [Aman wrote]:
> Agreed that the APIs are generic, but we've kept them under an AMD flag for the simple reason that they are NOT tested on any other platform.
> A use-case showing how to use this was planned earlier for the mlx5 pmd, but was dropped in this version of the patch as the data path of mlx5 is going to be refactored soon and it may not be useful for future versions of mlx5 (>22.02).
> Ref link:
> https://patchwork.dpdk.org/project/dpdk/patch/20211019104724.19416-2-aman.kumar@vvdntech.in/ (we plan to adapt this in a future version). The patch in the link basically enhances the mlx5 mprq implementation for our specific use-case, and with 128B packet size we achieve ~60% better perf. We understand that the use of this copy function should be documented, which we shall plan along with a few other platform-specific optimizations in future versions of DPDK. As this does not conflict with other platforms, can we still keep it under the AMD flag for now, as suggested by Thomas?

I said I could merge if there was no objection.
I had overlooked that it's adding completely new functions to the API.
And the comments go in the direction of what I asked on the previous version:
what is specific to AMD here?
Now, seeing the valid objections, I agree it should be reworked.
We must provide applications with an API which is generic, stable and well documented.
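
To make the semantics concrete, the archival use-case described above
(temporal loads, non-temporal stores) could look like the sketch below.
This is an illustration only, not the patch's implementation; it
assumes a 16-byte aligned destination and a length that is a multiple
of 16, and omits tail handling:

#include <immintrin.h>
#include <stddef.h>

/* Copy with regular loads and non-temporal (streaming) stores, so
 * the destination data does not evict hotter lines from the cache. */
static void
copy_nt_store(void *dst, const void *src, size_t len)
{
	size_t i;

	for (i = 0; i < len; i += 16) {
		__m128i v = _mm_loadu_si128(
			(const __m128i *)((const char *)src + i));
		_mm_stream_si128((__m128i *)((char *)dst + i), v);
	}
	_mm_sfence(); /* order NT stores before subsequent accesses */
}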


> [HvH wrote]:
> As an open-source community, any contributions should aim to improve the whole.
> In the past, numerous improvements have been merged to DPDK that improve performance.
> Sometimes these are architecture specific (x86/arm/ppc) sometimes the are ISA specific (SSE, AVX512, NEON).
>
> I am not familiar with any cases in DPDK, where there is a #ifdef based on a *specific platform*.
> A quick "grep" through the "dpdk/lib" directory does not show any
> place where PMD or generic code has been explicitly optimized for a *specific platform*.
>
> Obviously, in cases where ISA either exists or does not exist, yes there is an optimization to enable it.
> But this is not exposed as a top-level compile-time option, it uses runtime CPU ISA detection.
>
> Please take a step back from the code, and look at what this patch asks of DPDK:
> "Please accept & maintain these changes upstream, which benefit only platform X, even though these ISA features are also available on other platforms".
>
> Other patches that enhance performance of DPDK ask this:
> "Please accept & maintain these changes upstream, which benefit all platforms which have ISA capability X".
>
>
> === Question "As this does not conflict with other platforms, can we still keep under AMD flag for now"?
> I feel the contribution is too specific to a platform. Make it generic by enabling it at an ISA capability level.
>
> Please yes, contribute to the DPDK community by improving performance of a PMD by enabling/leveraging ISA.
> But do so in a way that does not benefit only a specific platform - do
> so in a way that enhances all of DPDK, as other patches have done for the DPDK that this patch is built on.
>
> If you have concerns that the PMD maintainers will not accept the
> changes due to potential regressions on other platforms, then discuss those, make a plan on how to performance validate, and work to a solution.
>
>
> === Regarding specifically the request for "can we still keep under AMD flag for now"?
> I do not believe we should introduce APIs for specific platforms. DPDK's EAL is an abstraction layer.
> The value of EAL is to provide a common abstraction. This
> platform-specific flag breaks the abstraction, and results in packaging issues, as well as API/ABI instability based on -Dcpu_instruction_set choice.
> So, no, we should not introduce APIs based on any compile-time flag.

I agree



^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 1/3] eventdev: allow for event devices requiring maintenance
  @ 2021-11-01  9:26  3%         ` Mattias Rönnblom
  0 siblings, 0 replies; 200+ results
From: Mattias Rönnblom @ 2021-11-01  9:26 UTC (permalink / raw)
  To: Jerin Jacob, Van Haaren, Harry, McDaniel, Timothy,
	Pavan Nikhilesh, Hemant Agrawal, Liang Ma
  Cc: Richardson, Bruce, Jerin Jacob, Gujjar, Abhinandan S,
	Erik Gabriel Carrillo, Jayatheerthan, Jay, dpdk-dev

On 2021-10-29 17:17, Jerin Jacob wrote:
> On Fri, Oct 29, 2021 at 8:33 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
>> On 2021-10-29 16:38, Jerin Jacob wrote:
>>> On Tue, Oct 26, 2021 at 11:02 PM Mattias Rönnblom
>>> <mattias.ronnblom@ericsson.com> wrote:
>>>> Extend Eventdev API to allow for event devices which require various
>>>> forms of internal processing to happen, even when events are not
>>>> enqueued to or dequeued from a port.
>>>>
>>>> PATCH v1:
>>>>     - Adapt to the move of fastpath function pointers out of
>>>>       rte_eventdev struct
>>>>     - Attempt to clarify how often the application is expected to
>>>>       call rte_event_maintain()
>>>>     - Add trace point
>>>> RFC v2:
>>>>     - Change rte_event_maintain() return type to be consistent
>>>>       with the documentation.
>>>>     - Remove unused typedef from eventdev_pmd.h.
>>>>
>>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>>> Tested-by: Richard Eklycke <richard.eklycke@ericsson.com>
>>>> Tested-by: Liron Himi <lironh@marvell.com>
>>>> ---
>>>>
>>>> +/**
>>>> + * Maintain an event device.
>>>> + *
>>>> + * This function is only relevant for event devices which has the
>>>> + * RTE_EVENT_DEV_CAP_REQUIRES_MAINT flag set. Such devices require the
>>>> + * application to call rte_event_maintain() on a port during periods
>>>> + * which it is neither enqueuing nor dequeuing events from that
>>>> + * port.
>>> # We need to add  "by the same core". Right? As other core such as
>>> service core can not call rte_event_maintain()
>>
>> Do you mean by the same lcore thread that "owns" (dequeues and enqueues
>> to) the port? Yes. I thought that was implicit, since eventdev port are
>> not MT safe. I'll try to figure out some wording that makes that more clear.
> OK.
>
>>
>>> # Also, Incase of Adapters enqueue() happens, right? If so, either
>>> above text is not correct.
>>> # @Erik Gabriel Carrillo  @Jayatheerthan, Jay @Gujjar, Abhinandan S
>>> Please review 3/3 patch on adapter change.
>>> Let me know you folks are OK with change or not or need more time to analyze.
>>>
>>> If it need only for the adapter subsystem then can we make it an
>>> internal API between DSW and adapters?
>>
>> No, it's needed for any producer-only eventdev ports, including any such
>> ports used by the application.
>
> In that case, the code path in testeventdev, eventdev_pipeline, etc needs
> to be updated. I am worried about the performance impact for the drivers that
> don't have such limitations.
>
> Why not have an additional config option in port_config which says
> it is a producer-only port by an application and takes care of the driver.
>
> In the current adapters code, you are calling maintain() when enqueue
> returns zero.
> In such a case, if the port is configured as producer and then
> internally it can call maintain.
>
> Thoughts from other eventdev maintainers?
> Cc+ @Van Haaren, Harry  @Richardson, Bruce @Gujjar, Abhinandan S
> @Jayatheerthan, Jay @Erik Gabriel Carrillo @McDaniel, Timothy @Pavan
> Nikhilesh  @Hemant Agrawal @Liang Ma
>

One more thing to consider: should we add an "int op" parameter to
rte_event_maintain()? It would also solve hack #2 in DSW eventdev API
integration: forcing an output buffer flush. This is today done with a 
zero-sized rte_event_enqueue() call.


You could have something like:

#define RTE_EVENT_DEV_MAINT_FLUSH (1)

int
rte_event_maintain(uint8_t dev_id, uint8_t port_id, int op);


It would also allow future extensions of "maintain", without ABI breakage.
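
For illustration, the flush could then be requested explicitly from a
producer-only port's poll loop (a minimal sketch; the final signature
and flag name here are assumptions, not settled API):

/* Producer-only port: nothing is ever dequeued here, so the port
 * must be maintained explicitly; a flush op would push out any
 * events the PMD has buffered internally. */
uint16_t sent = rte_event_enqueue_burst(dev_id, port_id, evs, nb);
if (sent == 0)
	rte_event_maintain(dev_id, port_id, RTE_EVENT_DEV_MAINT_FLUSH);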


Explicit flush is rare in real applications, in my experience, but 
useful for test cases. I suspect for DSW to work with the DPDK eventdev 
test suite, flushing buffered events (either zero-sized enqueue, 
repeated rte_event_maintain() calls, or a single
rte_event_maintain(RTE_EVENT_DEV_MAINT_FLUSH) call [assuming the above 
API]) is required in the test code.


>>
>> Should rte_event_maintain() be marked experimental? I don't know how
>> that works for inline functions.
>>
>>
>>> +  rte_event_maintain() is a low-overhead function and should be
>>>> + * called at a high rate (e.g., in the applications poll loop).
>>>> + *
>>>> + * No port may be left unmaintained.
>>>> + *
>>>> + * rte_event_maintain() may be called on event devices which haven't
>>>> + * set RTE_EVENT_DEV_CAP_REQUIRES_MAINT flag, in which case it is a
>>>> + * no-operation.
>>>> + *
>>>> + * @param dev_id
>>>> + *   The identifier of the device.
>>>> + * @param port_id
>>>> + *   The identifier of the event port.
>>>> + * @return
>>>> + *  - 0 on success.
>>>> + *  - -EINVAL if *dev_id* or *port_id* is invalid
>>>> + *
>>>> + * @see RTE_EVENT_DEV_CAP_REQUIRES_MAINT
>>>> + */
>>>> +static inline int
>>>> +rte_event_maintain(uint8_t dev_id, uint8_t port_id)
>>>> +{
>>>> +       const struct rte_event_fp_ops *fp_ops;
>>>> +       void *port;
>>>> +
>>>> +       fp_ops = &rte_event_fp_ops[dev_id];
>>>> +       port = fp_ops->data[port_id];
>>>> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>>>> +       if (dev_id >= RTE_EVENT_MAX_DEVS ||
>>>> +           port_id >= RTE_EVENT_MAX_PORTS_PER_DEV) {
>>>> +               rte_errno = EINVAL;
>>>> +               return 0;
>>>> +       }
>>>> +
>>>> +       if (port == NULL) {
>>>> +               rte_errno = EINVAL;
>>>> +               return 0;
>>>> +       }
>>>> +#endif
>>>> +       rte_eventdev_trace_maintain(dev_id, port_id);
>>>> +
>>>> +       if (fp_ops->maintain != NULL)
>>>> +               fp_ops->maintain(port);
>>>> +
>>>> +       return 0;
>>>> +}
>>>> +
>>>>    #ifdef __cplusplus
>>>>    }
>>>>    #endif
>>>> diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
>>>> index 61d5ebdc44..61fa65cab3 100644
>>>> --- a/lib/eventdev/rte_eventdev_core.h
>>>> +++ b/lib/eventdev/rte_eventdev_core.h
>>>> @@ -29,6 +29,9 @@ typedef uint16_t (*event_dequeue_burst_t)(void *port, struct rte_event ev[],
>>>>                                             uint64_t timeout_ticks);
>>>>    /**< @internal Dequeue burst of events from port of a device */
>>>>
>>>> +typedef void (*event_maintain_t)(void *port);
>>>> +/**< @internal Maintains a port */
>>>> +
>>>>    typedef uint16_t (*event_tx_adapter_enqueue_t)(void *port,
>>>>                                                  struct rte_event ev[],
>>>>                                                  uint16_t nb_events);
>>>> @@ -54,6 +57,8 @@ struct rte_event_fp_ops {
>>>>           /**< PMD dequeue function. */
>>>>           event_dequeue_burst_t dequeue_burst;
>>>>           /**< PMD dequeue burst function. */
>>>> +       event_maintain_t maintain;
>>>> +       /**< PMD port maintenance function. */
>>>>           event_tx_adapter_enqueue_t txa_enqueue;
>>>>           /**< PMD Tx adapter enqueue function. */
>>>>           event_tx_adapter_enqueue_t txa_enqueue_same_dest;
>>>> diff --git a/lib/eventdev/rte_eventdev_trace_fp.h b/lib/eventdev/rte_eventdev_trace_fp.h
>>>> index 5639e0b83a..c5a79a14d8 100644
>>>> --- a/lib/eventdev/rte_eventdev_trace_fp.h
>>>> +++ b/lib/eventdev/rte_eventdev_trace_fp.h
>>>> @@ -38,6 +38,13 @@ RTE_TRACE_POINT_FP(
>>>>           rte_trace_point_emit_ptr(enq_mode_cb);
>>>>    )
>>>>
>>>> +RTE_TRACE_POINT_FP(
>>>> +       rte_eventdev_trace_maintain,
>>>> +       RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id),
>>>> +       rte_trace_point_emit_u8(dev_id);
>>>> +       rte_trace_point_emit_u8(port_id);
>>>> +)
>>>> +
>>>>    RTE_TRACE_POINT_FP(
>>>>           rte_eventdev_trace_eth_tx_adapter_enqueue,
>>>>           RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table,
>>>> --
>>>> 2.25.1
>>>>


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [RFC PATCH 0/1] Dataplane Workload Accelerator library
  2021-10-31 21:55  0%                 ` Thomas Monjalon
@ 2021-10-31 22:19  0%                   ` Jerin Jacob
  0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2021-10-31 22:19 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Mattias Rönnblom, jerinj, dev, ferruh.yigit, ajit.khaparde,
	aboyer, andrew.rybchenko, beilei.xing, bruce.richardson, chas3,
	chenbo.xia, ciara.loftus, dsinghrawat, ed.czeck, evgenys, grive,
	g.singh, zhouguoyang, haiyue.wang, hkalra, heinrich.kuhn,
	hemant.agrawal, hyonkim, igorch, irusskikh, jgrajcia,
	jasvinder.singh, jianwang, jiawenwu, jingjing.wu, johndale,
	john.miller, linville, keith.wiles, kirankumark, oulijun, lironh,
	longli, mw, spinler, matan, matt.peters, maxime.coquelin, mk,
	humin29, pnalla, ndabilpuram, qiming.yang, qi.z.zhang, radhac,
	rahul.lakkireddy, rmody, rosen.xu, sachin.saxena, skoteshwar,
	shshaikh, shaibran, shepard.siegel, asomalap, somnath.kotur,
	sthemmin, steven.webster, skori, mtetsuyah, vburru, viacheslavo,
	xiao.w.wang, cloud.wangxiaoyun, yisen.zhuang, yongwang,
	xuanziyang2, pkapoor, nadavh, sburla, pathreya, gakhil, mdr,
	dmitry.kozliuk, anatoly.burakov, cristian.dumitrescu,
	honnappa.nagarahalli, ruifeng.wang, drc, konstantin.ananyev,
	olivier.matz, jay.jayatheerthan, asekhar, pbhagavatula,
	Elana Agostini

On Mon, Nov 1, 2021 at 3:25 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 31/10/2021 22:13, Jerin Jacob:
> > On Mon, Nov 1, 2021 at 1:04 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> > >
> > > 31/10/2021 15:01, Jerin Jacob:
> > > > Since rte_flow already has the TLV concept it may not be new to DPDK.
> > >
> > > Where is there TLV in rte_flow?
> >
> > struct rte_flow_item {
> >         enum rte_flow_item_type type; /**< Item type. */
> >         const void *spec; /**< Pointer to item specification structure. */
> >
> > Type is the tag here and the spec is the value here. Length is the
> > size of the specification structure.
> > rte_flows spec does not support/need zero length variable at the end
> > of spec structure,
> > that reason for not embedding explicit length value as it is can be
> > derived from sizeof(specification structure).
>
> Ah OK I see what you mean.
> But rte_flow_item is quite limited,
> it is not the kind of TLV with multiple levels of nesting.
> Do you need nesting of objects in DWA?

No. Currently, the ethernet-based host port has the following
prototype [1], and it takes an array of TLVs (not in contiguous
memory). For simplicity, we could remove the length value from
rte_dwa_tlv and just keep it like rte_flow, letting the payload
contain the length of the message if the message has a variable
length. See rte_dwa_profile_l3fwd_d2h_exception_pkts::nb_pkts below.


[1]
+/**
+ * Receive a burst of TLVs of type `TYPE_USER_PLANE` from the Rx queue
+ * designated by its *queue_id* of DWA object *obj*.
+ *
+ * @param obj
+ *   DWA object.
+ * @param queue_id
+ *   The identifier of the Rx queue. The queue id should be in the range of
+ *   [0 to rte_dwa_port_host_ethernet_config::nb_rx_queues].
+ * @param[out] tlvs
+ *   Points to an array of *nb_tlvs* tlvs of type *rte_dwa_tlv* structure
+ *   to be received.
+ * @param nb_tlvs
+ *   The maximum number of TLVs to receive.
+ *
+ * @return
+ * The number of TLVs actually received on the Rx queue. The return
+ * value can be less than the value of the *nb_tlvs* parameter when the
+ * Rx queue is not full.
+ */
+uint16_t rte_dwa_port_host_ethernet_rx(rte_dwa_obj_t obj, uint16_t queue_id,
+ struct rte_dwa_tlv **tlvs, uint16_t nb_tlvs);


[2]
example TLV for TYPE_USER_PLANE traffic.


+ /**
+ * Attribute |  Value
+ * ----------|--------
+ * Tag       | RTE_DWA_TAG_PROFILE_L3FWD
+ * Stag      | RTE_DWA_STAG_PROFILE_L3FWD_D2H_EXCEPTION_PACKETS
+ * Direction | D2H
+ * Type      | TYPE_USER_PLANE
+ * Payload   | struct rte_dwa_profile_l3fwd_d2h_exception_pkts
+ * Pair TLV  | NA
+ *
+ * Response from DWA of exception packets.
+ */

+/**
+ * Payload of RTE_DWA_STAG_PROFILE_L3FWD_D2H_EXCEPTION_PACKETS message.
+ */
+struct rte_dwa_profile_l3fwd_d2h_exception_pkts {
+ uint16_t nb_pkts;
+ /**< Number of packets in the variable size array.*/
+ uint16_t rsvd16;
+ /**< Reserved field to make pkts[0] to be 64bit aligned.*/
+ uint32_t rsvd32;
+ /**< Reserved field to make pkts[0] to be 64bit aligned.*/
+ struct rte_mbuf *pkts[0];
+ /**< Array of rte_mbufs of size nb_pkts. */
+} __rte_packed;
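
As a rough host-side sketch of consuming this TLV (illustrative only:
rte_dwa_tlv_payload() and handle_exception_pkt() are hypothetical
names, not part of the RFC):

/* Drain exception-path TLVs from the DWA and hand the embedded
 * mbufs to the host slow path. */
struct rte_dwa_tlv *tlvs[32];
uint16_t nb = rte_dwa_port_host_ethernet_rx(obj, queue_id, tlvs, 32);
for (uint16_t i = 0; i < nb; i++) {
	struct rte_dwa_profile_l3fwd_d2h_exception_pkts *ex =
		rte_dwa_tlv_payload(tlvs[i]); /* hypothetical accessor */
	for (uint16_t j = 0; j < ex->nb_pkts; j++)
		handle_exception_pkt(ex->pkts[j]); /* app-defined */
}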


>
> > > > I really liked rte_flow enablement of ABI combability and its ease of adding
> > > > new stuff. Try to follow similar stuff which is proven in DPDK.
> > > > Ie. New profile creation will very easy, it will be a matter of identifying
> > > > the TLVs and their type and payload, rather than everyone comes with
> > > > new APIs in every profile.
> > > >
> > > > > Why not use protobuf and its IDL to specify the interface?
> > >
> > > Yes I think it is important to discuss alternatives,
> > > and at least get justifications of why TLV is chosen among others.
> >
> > Yes. Current list is
> >
> > 1) Very easy to enable ABI compatibility.
> > 2) If it needs to be transported over network etc it needs to be
> > packed so that way it is easy for implementation to do that
> > with TLV also gives better performance in such
> > cases by avoiding reformatting or possibly avoiding memcpy etc.
> > 3) It is easy to plugin with another high-level programing language as
> > just one API.
> > 4) Easy to decouple DWA core library functionalities from profile.
> > 5) Easy to enable asynchronous scheme using request and response TLVs.
> > 6) Most importantly, We could introduce type notion with TLV
> > (connected with the type of message  See TYPE_ATTACHED, TYPE_STOPPED,
> > TYPE_USER_PLANE etc ),
> > That way, we can have a uniform outlook of profiles instead of each profile
> > coming with a setup of its own APIs and __rules__ on the state machine.
> > I think, for a framework to leverage communication mechanisms and other
> > aspects between profiles, it's important to have some synergy between profiles.
> > 7) No Additional library dependencies like gRPC, protobuf
> > 8) Provide driver to implement the optimized means of supporting different
> > transport such as Ethernet, Shared memory, PCIe DMA style HW etc.
> > 9) Avoid creating endless APIs and their associated driver function
> > calls for each
> > profile APIs.
>
>
>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [RFC PATCH 0/1] Dataplane Workload Accelerator library
  2021-10-31 21:13  2%               ` Jerin Jacob
@ 2021-10-31 21:55  0%                 ` Thomas Monjalon
  2021-10-31 22:19  0%                   ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-31 21:55 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Mattias Rönnblom, jerinj, dev, ferruh.yigit, ajit.khaparde,
	aboyer, andrew.rybchenko, beilei.xing, bruce.richardson, chas3,
	chenbo.xia, ciara.loftus, dsinghrawat, ed.czeck, evgenys, grive,
	g.singh, zhouguoyang, haiyue.wang, hkalra, heinrich.kuhn,
	hemant.agrawal, hyonkim, igorch, irusskikh, jgrajcia,
	jasvinder.singh, jianwang, jiawenwu, jingjing.wu, johndale,
	john.miller, linville, keith.wiles, kirankumark, oulijun, lironh,
	longli, mw, spinler, matan, matt.peters, maxime.coquelin, mk,
	humin29, pnalla, ndabilpuram, qiming.yang, qi.z.zhang, radhac,
	rahul.lakkireddy, rmody, rosen.xu, sachin.saxena, skoteshwar,
	shshaikh, shaibran, shepard.siegel, asomalap, somnath.kotur,
	sthemmin, steven.webster, skori, mtetsuyah, vburru, viacheslavo,
	xiao.w.wang, cloud.wangxiaoyun, yisen.zhuang, yongwang,
	xuanziyang2, pkapoor, nadavh, sburla, pathreya, gakhil, mdr,
	dmitry.kozliuk, anatoly.burakov, cristian.dumitrescu,
	honnappa.nagarahalli, ruifeng.wang, drc, konstantin.ananyev,
	olivier.matz, jay.jayatheerthan, asekhar, pbhagavatula,
	Elana Agostini

31/10/2021 22:13, Jerin Jacob:
> On Mon, Nov 1, 2021 at 1:04 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> >
> > 31/10/2021 15:01, Jerin Jacob:
> > > Since rte_flow already has the TLV concept it may not be new to DPDK.
> >
> > Where is there TLV in rte_flow?
> 
> struct rte_flow_item {
>         enum rte_flow_item_type type; /**< Item type. */
>         const void *spec; /**< Pointer to item specification structure. */
> 
> Type is the tag here and the spec is the value here. Length is the
> size of the specification structure.
> rte_flows spec does not support/need zero length variable at the end
> of spec structure,
> that reason for not embedding explicit length value as it is can be
> derived from sizeof(specification structure).

Ah OK I see what you mean.
But rte_flow_item is quite limited,
it is not the kind of TLV with multiple levels of nesting.
Do you need nesting of objects in DWA?

> > > I really liked rte_flow enablement of ABI combability and its ease of adding
> > > new stuff. Try to follow similar stuff which is proven in DPDK.
> > > Ie. New profile creation will very easy, it will be a matter of identifying
> > > the TLVs and their type and payload, rather than everyone comes with
> > > new APIs in every profile.
> > >
> > > > Why not use protobuf and its IDL to specify the interface?
> >
> > Yes I think it is important to discuss alternatives,
> > and at least get justifications of why TLV is chosen among others.
> 
> Yes. Current list is
> 
> 1) Very easy to enable ABI compatibility.
> 2) If it needs to be transported over network etc it needs to be
> packed so that way it is easy for implementation to do that
> with TLV also gives better performance in such
> cases by avoiding reformatting or possibly avoiding memcpy etc.
> 3) It is easy to plugin with another high-level programing language as
> just one API.
> 4) Easy to decouple DWA core library functionalities from profile.
> 5) Easy to enable asynchronous scheme using request and response TLVs.
> 6) Most importantly, We could introduce type notion with TLV
> (connected with the type of message  See TYPE_ATTACHED, TYPE_STOPPED,
> TYPE_USER_PLANE etc ),
> That way, we can have a uniform outlook of profiles instead of each profile
> coming with a setup of its own APIs and __rules__ on the state machine.
> I think, for a framework to leverage communication mechanisms and other
> aspects between profiles, it's important to have some synergy between profiles.
> 7) No Additional library dependencies like gRPC, protobuf
> 8) Provide driver to implement the optimized means of supporting different
> transport such as Ethernet, Shared memory, PCIe DMA style HW etc.
> 9) Avoid creating endless APIs and their associated driver function
> calls for each
> profile APIs.




^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [RFC PATCH 0/1] Dataplane Workload Accelerator library
  2021-10-31 19:34  0%             ` Thomas Monjalon
@ 2021-10-31 21:13  2%               ` Jerin Jacob
  2021-10-31 21:55  0%                 ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-10-31 21:13 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Mattias Rönnblom, jerinj, dev, ferruh.yigit, ajit.khaparde,
	aboyer, andrew.rybchenko, beilei.xing, bruce.richardson, chas3,
	chenbo.xia, ciara.loftus, dsinghrawat, ed.czeck, evgenys, grive,
	g.singh, zhouguoyang, haiyue.wang, hkalra, heinrich.kuhn,
	hemant.agrawal, hyonkim, igorch, irusskikh, jgrajcia,
	jasvinder.singh, jianwang, jiawenwu, jingjing.wu, johndale,
	john.miller, linville, keith.wiles, kirankumark, oulijun, lironh,
	longli, mw, spinler, matan, matt.peters, maxime.coquelin, mk,
	humin29, pnalla, ndabilpuram, qiming.yang, qi.z.zhang, radhac,
	rahul.lakkireddy, rmody, rosen.xu, sachin.saxena, skoteshwar,
	shshaikh, shaibran, shepard.siegel, asomalap, somnath.kotur,
	sthemmin, steven.webster, skori, mtetsuyah, vburru, viacheslavo,
	xiao.w.wang, cloud.wangxiaoyun, yisen.zhuang, yongwang,
	xuanziyang2, pkapoor, nadavh, sburla, pathreya, gakhil, mdr,
	dmitry.kozliuk, anatoly.burakov, cristian.dumitrescu,
	honnappa.nagarahalli, ruifeng.wang, drc, konstantin.ananyev,
	olivier.matz, jay.jayatheerthan, asekhar, pbhagavatula,
	Elana Agostini

On Mon, Nov 1, 2021 at 1:04 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 31/10/2021 15:01, Jerin Jacob:
> > Since rte_flow already has the TLV concept it may not be new to DPDK.
>
> Where is there TLV in rte_flow?

struct rte_flow_item {
        enum rte_flow_item_type type; /**< Item type. */
        const void *spec; /**< Pointer to item specification structure. */

Type is the tag here and spec is the value. Length is the size of the
specification structure. The rte_flow spec does not support/need a
zero-length variable array at the end of the spec structure; that is
the reason for not embedding an explicit length value, as it can be
derived from sizeof(specification structure).
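
A minimal illustration of that mapping (the item and value chosen here
are arbitrary examples, not taken from the DWA RFC):

#include <rte_flow.h>
#include <rte_ether.h>

/* The item type is the tag, the spec pointer is the value, and the
 * length is implied by the size of the type-specific spec struct. */
struct rte_flow_item_eth eth_spec = {
	.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_IPV4),
};
struct rte_flow_item item = {
	.type = RTE_FLOW_ITEM_TYPE_ETH, /* tag */
	.spec = &eth_spec, /* value; length = sizeof(eth_spec) */
};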


>
> > I really liked rte_flow enablement of ABI combability and its ease of adding
> > new stuff. Try to follow similar stuff which is proven in DPDK.
> > Ie. New profile creation will very easy, it will be a matter of identifying
> > the TLVs and their type and payload, rather than everyone comes with
> > new APIs in every profile.
> >
> > > Why not use protobuf and its IDL to specify the interface?
>
> Yes I think it is important to discuss alternatives,
> and at least get justifications of why TLV is chosen among others.

Yes. The current list is:

1) Very easy to enable ABI compatibility.
2) If it needs to be transported over a network etc., it needs to be
packed; with TLVs that is easy for an implementation to do, and it
also gives better performance in such cases by avoiding reformatting
or possibly avoiding memcpy etc.
3) It is easy to plug in another high-level programming language, as
there is just one API.
4) Easy to decouple the DWA core library functionalities from profiles.
5) Easy to enable an asynchronous scheme using request and response TLVs.
6) Most importantly, we could introduce a type notion with TLVs
(connected with the type of message; see TYPE_ATTACHED, TYPE_STOPPED,
TYPE_USER_PLANE, etc.).
That way, we can have a uniform outlook of profiles instead of each
profile coming with a set of its own APIs and __rules__ on the state
machine. I think, for a framework to leverage communication mechanisms
and other aspects between profiles, it's important to have some
synergy between profiles.
7) No additional library dependencies like gRPC, protobuf.
8) Provide a driver layer to implement optimized means of supporting
different transports such as Ethernet, shared memory, PCIe DMA style
HW, etc.
9) Avoid creating endless APIs and their associated driver function
calls for each profile's APIs.
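
To make points 1-2 concrete, the kind of packed header this implies is
sketched below. The RFC's actual rte_dwa_tlv definition is not shown
in this thread, so the field names and widths here are illustrative
assumptions only:

/* Illustrative packed TLV header; as noted above, the explicit
 * length could be dropped and derived from the payload instead. */
struct dwa_tlv_sketch {
	uint16_t tag;       /* e.g. RTE_DWA_TAG_PORT_HOST_ETHERNET */
	uint16_t stag;      /* sub-tag selecting the exact message */
	uint32_t len;       /* payload length in bytes */
	uint8_t payload[];  /* packed, message-specific body */
} __rte_packed;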


>
>

^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [RFC PATCH 0/1] Dataplane Workload Accelerator library
  2021-10-31 14:01  4%           ` Jerin Jacob
@ 2021-10-31 19:34  0%             ` Thomas Monjalon
  2021-10-31 21:13  2%               ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-31 19:34 UTC (permalink / raw)
  To: Mattias Rönnblom, Jerin Jacob
  Cc: jerinj, dev, ferruh.yigit, ajit.khaparde, aboyer,
	andrew.rybchenko, beilei.xing, bruce.richardson, chas3,
	chenbo.xia, ciara.loftus, dsinghrawat, ed.czeck, evgenys, grive,
	g.singh, zhouguoyang, haiyue.wang, hkalra, heinrich.kuhn,
	hemant.agrawal, hyonkim, igorch, irusskikh, jgrajcia,
	jasvinder.singh, jianwang, jiawenwu, jingjing.wu, johndale,
	john.miller, linville, keith.wiles, kirankumark, oulijun, lironh,
	longli, mw, spinler, matan, matt.peters, maxime.coquelin, mk,
	humin29, pnalla, ndabilpuram, qiming.yang, qi.z.zhang, radhac,
	rahul.lakkireddy, rmody, rosen.xu, sachin.saxena, skoteshwar,
	shshaikh, shaibran, shepard.siegel, asomalap, somnath.kotur,
	sthemmin, steven.webster, skori, mtetsuyah, vburru, viacheslavo,
	xiao.w.wang, cloud.wangxiaoyun, yisen.zhuang, yongwang,
	xuanziyang2, pkapoor, nadavh, sburla, pathreya, gakhil, mdr,
	dmitry.kozliuk, anatoly.burakov, cristian.dumitrescu,
	honnappa.nagarahalli, ruifeng.wang, drc, konstantin.ananyev,
	olivier.matz, jay.jayatheerthan, asekhar, pbhagavatula,
	Elana Agostini

31/10/2021 15:01, Jerin Jacob:
> Since rte_flow already has the TLV concept it may not be new to DPDK.

Where is there TLV in rte_flow?

> I really liked rte_flow enablement of ABI combability and its ease of adding
> new stuff. Try to follow similar stuff which is proven in DPDK.
> Ie. New profile creation will very easy, it will be a matter of identifying
> the TLVs and their type and payload, rather than everyone comes with
> new APIs in every profile.
> 
> > Why not use protobuf and its IDL to specify the interface?

Yes I think it is important to discuss alternatives,
and at least get justifications of why TLV is chosen among others.



^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [RFC PATCH 0/1] Dataplane Workload Accelerator library
  2021-10-31  9:18  4%         ` Mattias Rönnblom
@ 2021-10-31 14:01  4%           ` Jerin Jacob
  2021-10-31 19:34  0%             ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-10-31 14:01 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: jerinj, dev, thomas, ferruh.yigit, ajit.khaparde, aboyer,
	andrew.rybchenko, beilei.xing, bruce.richardson, chas3,
	chenbo.xia, ciara.loftus, dsinghrawat, ed.czeck, evgenys, grive,
	g.singh, zhouguoyang, haiyue.wang, hkalra, heinrich.kuhn,
	hemant.agrawal, hyonkim, igorch, irusskikh, jgrajcia,
	jasvinder.singh, jianwang, jiawenwu, jingjing.wu, johndale,
	john.miller, linville, keith.wiles, kirankumark, oulijun, lironh,
	longli, mw, spinler, matan, matt.peters, maxime.coquelin, mk,
	humin29, pnalla, ndabilpuram, qiming.yang, qi.z.zhang, radhac,
	rahul.lakkireddy, rmody, rosen.xu, sachin.saxena, skoteshwar,
	shshaikh, shaibran, shepard.siegel, asomalap, somnath.kotur,
	sthemmin, steven.webster, skori, mtetsuyah, vburru, viacheslavo,
	xiao.w.wang, cloud.wangxiaoyun, yisen.zhuang, yongwang,
	xuanziyang2, pkapoor, nadavh, sburla, pathreya, gakhil, mdr,
	dmitry.kozliuk, anatoly.burakov, cristian.dumitrescu,
	honnappa.nagarahalli, ruifeng.wang, drc, konstantin.ananyev,
	olivier.matz, jay.jayatheerthan, asekhar, pbhagavatula,
	Elana Agostini

On Sun, Oct 31, 2021 at 2:48 PM Mattias Rönnblom
<mattias.ronnblom@ericsson.com> wrote:
>
> On 2021-10-29 17:51, Jerin Jacob wrote:
> > On Fri, Oct 29, 2021 at 5:27 PM Mattias Rönnblom
> > <mattias.ronnblom@ericsson.com> wrote:
> >> On 2021-10-25 11:03, Jerin Jacob wrote:
> >>> On Mon, Oct 25, 2021 at 1:05 PM Mattias Rönnblom
> >>> <mattias.ronnblom@ericsson.com> wrote:
> >>>> On 2021-10-19 20:14, jerinj@marvell.com wrote:
> >>>>> From: Jerin Jacob <jerinj@marvell.com>
> >>>>>
> >>>>>
> >>>>> Dataplane Workload Accelerator library
> >>>>> ======================================
> >>>>>
> >>>>> Definition of Dataplane Workload Accelerator
> >>>>> --------------------------------------------
> >>>>> Dataplane Workload Accelerator(DWA) typically contains a set of CPUs,
> >>>>> Network controllers and programmable data acceleration engines for
> >>>>> packet processing, cryptography, regex engines, baseband processing, etc.
> >>>>> This allows DWA to offload  compute/packet processing/baseband/
> >>>>> cryptography-related workload from the host CPU to save the cost and power.
> >>>>> Also to enable scaling the workload by adding DWAs to the Host CPU as needed.
> >>>>>
> >>>>> Unlike other devices in DPDK, the DWA device is not fixed-function
> >>>>> due to the fact that it has CPUs and programmable HW accelerators.
> >>>> There are already several instances of DPDK devices with pure-software
> >>>> implementation. In this regard, a DPU/SmartNIC represents nothing new.
> >>>> What's new, it seems to me, is a much-increased need to
> >>>> configure/arrange the processing in complex manners, to avoid bouncing
> >>>> everything to the host CPU.
> >>> Yes and No. It will be based on the profile. The TLV type TYPE_USER_PLANE will
> >>> have user plane traffic from/to host. For example, offloading ORAN split 7.2
> >>> baseband profile. Transport blocks sent to/from host as TYPE_USER_PLANE.
> >>>
> >>>> Something like P4 or rte_flow-based hooks or
> >>>> some other kind of extension. The eventdev adapters solve the same
> >>>> problem (where on some systems packets go through the host CPU on their
> >>>> way to the event device, and others do not) - although on a *much*
> >>>> smaller scale.
> >>> Yes. Eventdev Adapters only for event device plumbing.
> >>>
> >>>
> >>>> "Not-fixed function" seems to call for more hot plug support in the
> >>>> device APIs. Such functionality could then be reused by anything that
> >>>> can be reconfigured dynamically (FPGAs, firmware-programmed
> >>>> accelerators, etc.),
> >>> Yes.
> >>>
> >>>> but which may not be able to serve as a RPC
> >>>> endpoint, like a SmartNIC.
> >>> It can. That's the reason for choosing TLVs. So that
> > any higher level language can use TLVs like https://github.com/ustropo/uttlv
> >>> to communicate with the accelerator.  TLVs follow the request and
> >>> response scheme like RPC. So it can warp it under application if needed.
> >>>
> >>>> DWA could be some kind of DPDK-internal framework for managing certain
> >>>> type of DPUs, but should it be exposed to the user application?
> >>> Could you clarify a bit more.
> >>> The offload is represented as a set of TLVs in generic fashion. There
> >>> is no DPU specific bit in offload representation. See
> >>> rte_dwa_profiile_l3fwd.h header file.
> >>
> >> It seems a bit cumbersome to work with TLVs on the user application
> >> side. Would it be an alternative to have the profile API as a set of C
> >> APIs instead of TLV-based messaging interface? The underlying
> >> implementation could still be - in many or all cases - be TLVs sent over
> >> some appropriate transport.
> > The reason to pick TLVs is as follows
> >
> > 1) Very easy to enable ABI compatibility. (Learned from rte_flow)
>
>
> Do you include the TLV-defined profile interface in "ABI"? Or do you
> with ABI only mean the C ABI to send/receive TLVs? To me, the former
> makes the most sense, since changing the profile will break binary
> compatibility with then-existing applications.

The TLV payload will be part of the ABI, just like rte_flow.
If there is an ABI breakage on any TLV, we can add a new tag and its
associated payload to enable backward compatibility, i.e. the old TLV
will keep working without any change.

>
>
> > 2) If it needs to be transported over network etc it needs to be
> > packed so that way
> > it is easy for implementation to do that with TLV also it gives better
> > performance in such
> > cases by avoiding reformatting or possibly avoiding memcpy etc.
>
> My question was not "why TLVs", but the more specific "why are TLVs
> exposed to the user application." I find it likely the user applications
> are going to wrap the TLV serialization and de-serialization into their
> own functions.

We can stack up TLVs, unlike traditional function calls. That is
really needed if the device supports N profiles, so that multiple TLVs
can be submitted in a single shot in the fast path.
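
As a sketch of what stacking buys (rte_dwa_port_host_ethernet_tx is a
hypothetical tx counterpart to the rx prototype quoted earlier in the
thread, and the two TLV pointers are assumed prepared beforehand):

/* One burst may mix TLVs of different tags/profiles, so N offload
 * messages are submitted with a single fast-path call. */
struct rte_dwa_tlv *burst[2] = { l3fwd_tlv, user_plane_tlv };
uint16_t nb = rte_dwa_port_host_ethernet_tx(obj, queue_id, burst, 2);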


>
>
> > 3) It is easy to plugin with another high-level programing language as
> > just one API
>
>
> Make sense. One note though: the transport is just one API, but then
> each profile makes up an API as well, although it's not C, but TLV-based.

Yes,

>
>
> > 4) Easy to decouple DWA core library functionalities from profile.
> > 5) Easy to enable asynchronous scheme using request and response TLVs.
> > 6) Most importantly, We could introduce type notion with TLV
> > (connected with the type of message  See TYPE_ATTACHED, TYPE_STOPPED,
> > TYPE_USER_PLANE etc ),
> > That way, we can have a uniform outlook of profiles instead of each profile
> > coming with a setup of its own APIs and __rules__ on the state machine.
> > I think, for a framework to leverage communication mechanisms and other
> > aspects between profiles, it's important to have some synergy between profiles.
> >
> >
> > Yes. I agree that a bit more logic is required on the application side
> > to use TLV,
> > But I think we can have a wrapper function getting req and response structures.
>
>
> Do you think ethdev, eventdev, cryptodev and the other DPDK APIs had
> been better off as TLV-based messaging interfaces as well? From a user
> point of view, I'm not sure I see what's so special about talking to a
> SmartNIC compared to functions implemented in a GPU, FPGA, an
> fix-function ASIC, a large array of garden gnomes or some other manner.
> More functionality and more need for asynchronicity (if that's a word)
> maybe.

No. I am trying to avoid creating 1000s of APIs and their driver hooks
for all profiles, and to enable symmetry between all the profiles by
attaching state and type attributes to TLVs so that we can get a
unified view. Nothing specific to SmartNIC/GPU/FPGA.
Also, TLVs are very common in interoperable solutions like
https://scf.io/en/documents/222_5G_FAPI_PHY_API_Specification.php


>
> >> Such a C API could still be asynchronous, and still be a profile API
> >> (rather than a set of new DPDK device types).
> >>
> >>
> >> What I tried to ask during the meeting but where I didn't get an answer
> >> (or at least one that I could understand) was how the profiles was to be
> >> specified and/or documented. Maybe the above is what you had in mind
> >> already.
> > Yes. Documentation is easy, please check the RFC header file for Doxygen
> > meta to express all the attributes of a TLV.
> >
> >
> > +enum rte_dwa_port_host_ethernet {
> > + /**
> > + * Attribute |  Value
> > + * ----------|--------
> > + * Tag       | RTE_DWA_TAG_PORT_HOST_ETHERNET
> > + * Stag      | RTE_DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO
> > + * Direction | H2D
> > + * Type      | TYPE_ATTACHED
> > + * Payload   | NA
> > + * Pair TLV  | RTE_DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO
> > + *
> > + * Request DWA host ethernet port information.
> > + */
> > + RTE_DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO,
> > + /**
> > + * Attribute |  Value
> > + * ----------|---------
> > + * Tag       | RTE_DWA_TAG_PORT_HOST_ETHERNET
> > + * Stag      | RTE_DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO
> > + * Direction | H2D
> > + * Type      | TYPE_ATTACHED
> > + * Payload   | struct rte_dwa_port_host_ethernet_d2h_info
> > + * Pair TLV  | RTE_DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO
> > + *
> > + * Response for DWA host ethernet port information.
> > + */
> > + RTE_DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO,
>
>
> Thanks for the pointer.
>
>
> It would make sense to have a machine-readable schema, so you can
> generate the (in my view) inevitable wrapper code. Much like what gRPC
> is to protobuf, or Sun RPC to XDR.

I thought of doing that, but I think it may not be good due to:
1) Additional library dependencies.
2) Performance overhead of such solutions.
Also, not all transports are supported by all such libraries, and we
want to allow drivers to enable any sort of transport.
3) Keep it simple.
4) Better asynchronous support.
5) If someone needs a gRPC kind of thing, it can be wrapped over TLVs.

Since rte_flow already has the TLV concept, it may not be new to DPDK.
I really liked rte_flow's enablement of ABI compatibility and its ease
of adding new stuff. We try to follow similar stuff which is proven in
DPDK, i.e. new profile creation will be very easy: it will be a matter
of identifying the TLVs and their types and payloads, rather than
everyone coming up with new APIs in every profile.


>
>
> Why not use protobuf and its IDL to specify the interface?
>
>

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [RFC PATCH 0/1] Dataplane Workload Accelerator library
  2021-10-29 15:51  2%       ` Jerin Jacob
@ 2021-10-31  9:18  4%         ` Mattias Rönnblom
  2021-10-31 14:01  4%           ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Mattias Rönnblom @ 2021-10-31  9:18 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: jerinj, dev, thomas, ferruh.yigit, ajit.khaparde, aboyer,
	andrew.rybchenko, beilei.xing, bruce.richardson, chas3,
	chenbo.xia, ciara.loftus, dsinghrawat, ed.czeck, evgenys, grive,
	g.singh, zhouguoyang, haiyue.wang, hkalra, heinrich.kuhn,
	hemant.agrawal, hyonkim, igorch, irusskikh, jgrajcia,
	jasvinder.singh, jianwang, jiawenwu, jingjing.wu, johndale,
	john.miller, linville, keith.wiles, kirankumark, oulijun, lironh,
	longli, mw, spinler, matan, matt.peters, maxime.coquelin, mk,
	humin29, pnalla, ndabilpuram, qiming.yang, qi.z.zhang, radhac,
	rahul.lakkireddy, rmody, rosen.xu, sachin.saxena, skoteshwar,
	shshaikh, shaibran, shepard.siegel, asomalap, somnath.kotur,
	sthemmin, steven.webster, skori, mtetsuyah, vburru, viacheslavo,
	xiao.w.wang, cloud.wangxiaoyun, yisen.zhuang, yongwang,
	xuanziyang2, pkapoor, nadavh, sburla, pathreya, gakhil, mdr,
	dmitry.kozliuk, anatoly.burakov, cristian.dumitrescu,
	honnappa.nagarahalli, ruifeng.wang, drc, konstantin.ananyev,
	olivier.matz, jay.jayatheerthan, asekhar, pbhagavatula,
	Elana Agostini

On 2021-10-29 17:51, Jerin Jacob wrote:
> On Fri, Oct 29, 2021 at 5:27 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
>> On 2021-10-25 11:03, Jerin Jacob wrote:
>>> On Mon, Oct 25, 2021 at 1:05 PM Mattias Rönnblom
>>> <mattias.ronnblom@ericsson.com> wrote:
>>>> On 2021-10-19 20:14, jerinj@marvell.com wrote:
>>>>> From: Jerin Jacob <jerinj@marvell.com>
>>>>>
>>>>>
>>>>> Dataplane Workload Accelerator library
>>>>> ======================================
>>>>>
>>>>> Definition of Dataplane Workload Accelerator
>>>>> --------------------------------------------
>>>>> Dataplane Workload Accelerator(DWA) typically contains a set of CPUs,
>>>>> Network controllers and programmable data acceleration engines for
>>>>> packet processing, cryptography, regex engines, baseband processing, etc.
>>>>> This allows DWA to offload  compute/packet processing/baseband/
>>>>> cryptography-related workload from the host CPU to save the cost and power.
>>>>> Also to enable scaling the workload by adding DWAs to the Host CPU as needed.
>>>>>
>>>>> Unlike other devices in DPDK, the DWA device is not fixed-function
>>>>> due to the fact that it has CPUs and programmable HW accelerators.
>>>> There are already several instances of DPDK devices with pure-software
>>>> implementation. In this regard, a DPU/SmartNIC represents nothing new.
>>>> What's new, it seems to me, is a much-increased need to
>>>> configure/arrange the processing in complex manners, to avoid bouncing
>>>> everything to the host CPU.
>>> Yes and No. It will be based on the profile. The TLV type TYPE_USER_PLANE will
>>> have user plane traffic from/to host. For example, offloading ORAN split 7.2
>>> baseband profile. Transport blocks sent to/from host as TYPE_USER_PLANE.
>>>
>>>> Something like P4 or rte_flow-based hooks or
>>>> some other kind of extension. The eventdev adapters solve the same
>>>> problem (where on some systems packets go through the host CPU on their
>>>> way to the event device, and others do not) - although on a *much*
>>>> smaller scale.
>>> Yes. Eventdev Adapters only for event device plumbing.
>>>
>>>
>>>> "Not-fixed function" seems to call for more hot plug support in the
>>>> device APIs. Such functionality could then be reused by anything that
>>>> can be reconfigured dynamically (FPGAs, firmware-programmed
>>>> accelerators, etc.),
>>> Yes.
>>>
>>>> but which may not be able to serve as a RPC
>>>> endpoint, like a SmartNIC.
>>> It can. That's the reason for choosing TLVs. So that
>>> any higher level language can use TLVs like https://github.com/ustropo/uttlv
>>> to communicate with the accelerator.  TLVs follow the request and
>>> response scheme like RPC. So it can warp it under application if needed.
>>>
>>>> DWA could be some kind of DPDK-internal framework for managing certain
>>>> type of DPUs, but should it be exposed to the user application?
>>> Could you clarify a bit more.
>>> The offload is represented as a set of TLVs in generic fashion. There
>>> is no DPU specific bit in offload representation. See
>>> rte_dwa_profiile_l3fwd.h header file.
>>
>> It seems a bit cumbersome to work with TLVs on the user application
>> side. Would it be an alternative to have the profile API as a set of C
>> APIs instead of TLV-based messaging interface? The underlying
>> implementation could still be - in many or all cases - be TLVs sent over
>> some appropriate transport.
> The reason to pick TLVs is as follows
>
> 1) Very easy to enable ABI compatibility. (Learned from rte_flow)


Do you include the TLV-defined profile interface in "ABI"? Or do you 
with ABI only mean the C ABI to send/receive TLVs? To me, the former 
makes the most sense, since changing the profile will break binary 
compatibility with then-existing applications.


> 2) If it needs to be transported over network etc it needs to be
> packed so that way
> it is easy for implementation to do that with TLV also it gives better
> performance in such
> cases by avoiding reformatting or possibly avoiding memcpy etc.

My question was not "why TLVs", but the more specific "why are TLVs 
exposed to the user application." I find it likely the user applications 
are going to wrap the TLV serialization and de-serialization into their 
own functions.


> 3) It is easy to plugin with another high-level programing language as
> just one API


Make sense. One note though: the transport is just one API, but then 
each profile makes up an API as well, although it's not C, but TLV-based.


> 4) Easy to decouple DWA core library functionalities from profile.
> 5) Easy to enable asynchronous scheme using request and response TLVs.
> 6) Most importantly, We could introduce type notion with TLV
> (connected with the type of message  See TYPE_ATTACHED, TYPE_STOPPED,
> TYPE_USER_PLANE etc ),
> That way, we can have a uniform outlook of profiles instead of each profile
> coming with a setup of its own APIs and __rules__ on the state machine.
> I think, for a framework to leverage communication mechanisms and other
> aspects between profiles, it's important to have some synergy between profiles.
>
>
> Yes. I agree that a bit more logic is required on the application side
> to use TLV,
> But I think we can have a wrapper function getting req and response structures.


Do you think ethdev, eventdev, cryptodev and the other DPDK APIs would
have been better off as TLV-based messaging interfaces as well? From a
user point of view, I'm not sure I see what's so special about talking
to a SmartNIC compared to functions implemented in a GPU, an FPGA, a
fixed-function ASIC, a large array of garden gnomes, or in some other
manner. More functionality and more need for asynchronicity (if that's
a word), maybe.


>> Such a C API could still be asynchronous, and still be a profile API
>> (rather than a set of new DPDK device types).
>>
>>
>> What I tried to ask during the meeting but where I didn't get an answer
>> (or at least one that I could understand) was how the profiles was to be
>> specified and/or documented. Maybe the above is what you had in mind
>> already.
> Yes. Documentation is easy, please check the RFC header file for Doxygen
> meta to express all the attributes of a TLV.
>
>
> +enum rte_dwa_port_host_ethernet {
> + /**
> + * Attribute |  Value
> + * ----------|--------
> + * Tag       | RTE_DWA_TAG_PORT_HOST_ETHERNET
> + * Stag      | RTE_DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO
> + * Direction | H2D
> + * Type      | TYPE_ATTACHED
> + * Payload   | NA
> + * Pair TLV  | RTE_DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO
> + *
> + * Request DWA host ethernet port information.
> + */
> + RTE_DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO,
> + /**
> + * Attribute |  Value
> + * ----------|---------
> + * Tag       | RTE_DWA_TAG_PORT_HOST_ETHERNET
> + * Stag      | RTE_DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO
> + * Direction | H2D
> + * Type      | TYPE_ATTACHED
> + * Payload   | struct rte_dwa_port_host_ethernet_d2h_info
> + * Pair TLV  | RTE_DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO
> + *
> + * Response for DWA host ethernet port information.
> + */
> + RTE_DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO,


Thanks for the pointer.


It would make sense to have a machine-readable schema, so you can 
generate the (in my view) inevitable wrapper code. Much like what gRPC 
is to protobuf, or Sun RPC to XDR.


Why not use protobuf and its IDL to specify the interface?



^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [dpdk-techboard] [PATCH v2] vhost: mark vDPA driver API as internal
  @ 2021-10-29 16:15  3% ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-10-29 16:15 UTC (permalink / raw)
  To: Maxime Coquelin
  Cc: dev, techboard, chenbo.xia, xuemingl, xiao.w.wang, david.marchand

28/10/2021 16:15, Maxime Coquelin:
> This patch marks the vDPA driver APIs as internal and
> rename the corresponding header file to vdpa_driver.h.
> 
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> 
> Hi Techboard,
> 
> Please vote for an exception for this unannounced API
> breakage.
[...]
>  lib/vhost/{rte_vdpa_dev.h => vdpa_driver.h} | 12 +++++++++---

Hiding more internal structs is a good breakage.

[...]
> --- a/lib/vhost/rte_vdpa_dev.h
> +++ b/lib/vhost/vdpa_driver.h
> +__rte_internal
>  struct rte_vdpa_device *
>  rte_vdpa_register_device(struct rte_device *rte_dev,
>  		struct rte_vdpa_dev_ops *ops);
[...]
> +__rte_internal
>  int
>  rte_vdpa_unregister_device(struct rte_vdpa_device *dev);
[...]
> +__rte_internal
>  int
>  rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable);
[...]
> +__rte_internal
>  int
>  rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m);
[...]
> --- a/lib/vhost/version.map
> +++ b/lib/vhost/version.map
> -	rte_vdpa_register_device;
> -	rte_vdpa_relay_vring_used;
> -	rte_vdpa_unregister_device;
> -	rte_vhost_host_notifier_ctrl;

OK to remove these functions from the ABI
and mark them internal.
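
For reference, with __rte_internal the symbols are expected to move to
the INTERNAL section of the version map instead, roughly (a sketch
matching the symbols removed above; exact layout per the patch):

INTERNAL {
	global:

	rte_vdpa_register_device;
	rte_vdpa_relay_vring_used;
	rte_vdpa_unregister_device;
	rte_vhost_host_notifier_ctrl;
};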

I suppose this breakage should not hurt too much,
as I don't see the need for out-of-tree vDPA drivers.
Of course it is always better to announce such change,
but it would be a pity to wait one more year for hiding this.

Acked-by: Thomas Monjalon <thomas@monjalon.net>



^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [RFC PATCH 0/1] Dataplane Workload Accelerator library
  @ 2021-10-29 15:51  2%       ` Jerin Jacob
  2021-10-31  9:18  4%         ` Mattias Rönnblom
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-10-29 15:51 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: jerinj, dev, thomas, ferruh.yigit, ajit.khaparde, aboyer,
	andrew.rybchenko, beilei.xing, bruce.richardson, chas3,
	chenbo.xia, ciara.loftus, dsinghrawat, ed.czeck, evgenys, grive,
	g.singh, zhouguoyang, haiyue.wang, hkalra, heinrich.kuhn,
	hemant.agrawal, hyonkim, igorch, irusskikh, jgrajcia,
	jasvinder.singh, jianwang, jiawenwu, jingjing.wu, johndale,
	john.miller, linville, keith.wiles, kirankumark, oulijun, lironh,
	longli, mw, spinler, matan, matt.peters, maxime.coquelin, mk,
	humin29, pnalla, ndabilpuram, qiming.yang, qi.z.zhang, radhac,
	rahul.lakkireddy, rmody, rosen.xu, sachin.saxena, skoteshwar,
	shshaikh, shaibran, shepard.siegel, asomalap, somnath.kotur,
	sthemmin, steven.webster, skori, mtetsuyah, vburru, viacheslavo,
	xiao.w.wang, cloud.wangxiaoyun, yisen.zhuang, yongwang,
	xuanziyang2, pkapoor, nadavh, sburla, pathreya, gakhil, mdr,
	dmitry.kozliuk, anatoly.burakov, cristian.dumitrescu,
	honnappa.nagarahalli, ruifeng.wang, drc, konstantin.ananyev,
	olivier.matz, jay.jayatheerthan, asekhar, pbhagavatula,
	Elana Agostini

On Fri, Oct 29, 2021 at 5:27 PM Mattias Rönnblom
<mattias.ronnblom@ericsson.com> wrote:
>
> On 2021-10-25 11:03, Jerin Jacob wrote:
> > On Mon, Oct 25, 2021 at 1:05 PM Mattias Rönnblom
> > <mattias.ronnblom@ericsson.com> wrote:
> >> On 2021-10-19 20:14, jerinj@marvell.com wrote:
> >>> From: Jerin Jacob <jerinj@marvell.com>
> >>>
> >>>
> >>> Dataplane Workload Accelerator library
> >>> ======================================
> >>>
> >>> Definition of Dataplane Workload Accelerator
> >>> --------------------------------------------
> >>> Dataplane Workload Accelerator(DWA) typically contains a set of CPUs,
> >>> Network controllers and programmable data acceleration engines for
> >>> packet processing, cryptography, regex engines, baseband processing, etc.
> >>> This allows DWA to offload  compute/packet processing/baseband/
> >>> cryptography-related workload from the host CPU to save the cost and power.
> >>> Also to enable scaling the workload by adding DWAs to the Host CPU as needed.
> >>>
> >>> Unlike other devices in DPDK, the DWA device is not fixed-function
> >>> due to the fact that it has CPUs and programmable HW accelerators.
> >>
> >> There are already several instances of DPDK devices with pure-software
> >> implementation. In this regard, a DPU/SmartNIC represents nothing new.
> >> What's new, it seems to me, is a much-increased need to
> >> configure/arrange the processing in complex manners, to avoid bouncing
> >> everything to the host CPU.
> > Yes and No. It will be based on the profile. The TLV type TYPE_USER_PLANE will
> > have user plane traffic from/to host. For example, offloading ORAN split 7.2
> > baseband profile. Transport blocks sent to/from host as TYPE_USER_PLANE.
> >
> >> Something like P4 or rte_flow-based hooks or
> >> some other kind of extension. The eventdev adapters solve the same
> >> problem (where on some systems packets go through the host CPU on their
> >> way to the event device, and others do not) - although on a *much*
> >> smaller scale.
> > Yes. Eventdev Adapters only for event device plumbing.
> >
> >
> >>
> >> "Not-fixed function" seems to call for more hot plug support in the
> >> device APIs. Such functionality could then be reused by anything that
> >> can be reconfigured dynamically (FPGAs, firmware-programmed
> >> accelerators, etc.),
> > Yes.
> >
> >> but which may not be able to serve as a RPC
> >> endpoint, like a SmartNIC.
> > It can. That's the reason for choosing TLVs. So that
> > any higher-level language can use TLVs like https://github.com/ustropo/uttlv
> > to communicate with the accelerator.  TLVs follow the request and
> > response scheme like RPC. So it can be wrapped under the application if needed.
> >
> >>
> >> DWA could be some kind of DPDK-internal framework for managing certain
> >> type of DPUs, but should it be exposed to the user application?
> >
> > Could you clarify a bit more.
> > The offload is represented as a set of TLVs in generic fashion. There
> > is no DPU specific bit in offload representation. See
> > rte_dwa_profiile_l3fwd.h header file.
>
>
> It seems a bit cumbersome to work with TLVs on the user application
> side. Would it be an alternative to have the profile API as a set of C
> APIs instead of TLV-based messaging interface? The underlying
> implementation could still be - in many or all cases - be TLVs sent over
> some appropriate transport.

The reason to pick TLVs is as follows

1) Very easy to enable ABI compatibility. (Learned from rte_flow)
2) If it needs to be transported over a network etc., it needs to be
packed; that is easy for an implementation to do with TLVs, and it
also gives better performance in such cases by avoiding reformatting
and possibly avoiding memcpy etc.
3) It is easy to plug in another high-level programming language, as
there is just one API
4) Easy to decouple DWA core library functionalities from profile.
5) Easy to enable asynchronous scheme using request and response TLVs.
6) Most importantly, we could introduce a type notion with TLVs
(connected with the type of message; see TYPE_ATTACHED, TYPE_STOPPED,
TYPE_USER_PLANE, etc.).
That way, we can have a uniform outlook across profiles instead of each
profile coming with a set of its own APIs and __rules__ on the state machine.
I think, for a framework to leverage communication mechanisms and other
aspects across profiles, it's important to have some synergy between profiles.


Yes, I agree that a bit more logic is required on the application side
to use TLVs,
but I think we can have a wrapper function taking request and response
structures, along the lines of the sketch below.
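
A hedged sketch of such a wrapper (struct rte_dwa_tlv and rte_dwa_ctrl_op()
are stand-ins for the RFC's actual TLV layout and send/receive primitive;
only the tag/stag names and the D2H payload struct come from the RFC header):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct rte_dwa_tlv { /* stand-in layout, see the RFC for the real one */
	uint32_t tag;
	uint16_t stag;
	uint16_t len;
	uint8_t msg[];
};

/* hypothetical synchronous request/response helper; returns a
 * malloc'd response TLV that the caller must free */
struct rte_dwa_tlv *rte_dwa_ctrl_op(void *obj, struct rte_dwa_tlv *req);

static int
dwa_port_host_ethernet_info_get(void *obj,
		struct rte_dwa_port_host_ethernet_d2h_info *info)
{
	struct rte_dwa_tlv req = {
		.tag = RTE_DWA_TAG_PORT_HOST_ETHERNET,
		.stag = RTE_DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO,
		.len = 0, /* H2D_INFO carries no payload */
	};
	struct rte_dwa_tlv *rsp;

	rsp = rte_dwa_ctrl_op(obj, &req);
	if (rsp == NULL)
		return -1;

	/* the paired D2H_INFO TLV carries the info struct as payload */
	memcpy(info, rsp->msg, sizeof(*info));
	free(rsp);
	return 0;
}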

>
> Such a C API could still be asynchronous, and still be a profile API
> (rather than a set of new DPDK device types).
>
>
> What I tried to ask during the meeting but where I didn't get an answer
> (or at least one that I could understand) was how the profiles was to be
> specified and/or documented. Maybe the above is what you had in mind
> already.

Yes. Documentation is easy, please check the RFC header file for Doxygen
meta to express all the attributes of a TLV.


+enum rte_dwa_port_host_ethernet {
+ /**
+ * Attribute |  Value
+ * ----------|--------
+ * Tag       | RTE_DWA_TAG_PORT_HOST_ETHERNET
+ * Stag      | RTE_DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO
+ * Direction | H2D
+ * Type      | TYPE_ATTACHED
+ * Payload   | NA
+ * Pair TLV  | RTE_DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO
+ *
+ * Request DWA host ethernet port information.
+ */
+ RTE_DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO,
+ /**
+ * Attribute |  Value
+ * ----------|---------
+ * Tag       | RTE_DWA_TAG_PORT_HOST_ETHERNET
+ * Stag      | RTE_DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO
+ * Direction | H2D
+ * Type      | TYPE_ATTACHED
+ * Payload   | struct rte_dwa_port_host_ethernet_d2h_info
+ * Pair TLV  | RTE_DWA_STAG_PORT_HOST_ETHERNET_H2D_INFO
+ *
+ * Response for DWA host ethernet port information.
+ */
+ RTE_DWA_STAG_PORT_HOST_ETHERNET_D2H_INFO,

^ permalink raw reply	[relevance 2%]

* [dpdk-dev] Windows community call: MoM 2021-10-27
@ 2021-10-28 21:01  4% Dmitry Kozlyuk
  0 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-28 21:01 UTC (permalink / raw)
  To: dev

# About

The meeting takes place in MS Teams every two weeks on Wednesday 15:00 UTC.
Note: it is going to be rescheduled.
Ask Harini Ramakrishnan <Harini.Ramakrishnan@microsoft.com> for invitation.


# Agenda

* Patch review
* Opens


# 1. Patch review

1.1. [kmods,v2] windows/netuio: add Intel device ID (William Tu)
     http://patchwork.dpdk.org/project/dpdk/patch/20211019190102.1903-1-u9012063@gmail.com/

     Ready to be merged.

1.2. [v3] eal/windows: ensure all enabled CPUs are counted (Naty)
     http://patchwork.dpdk.org/project/dpdk/patch/1629294360-5737-1-git-send-email-navasile@linux.microsoft.com/

     Merged.

1.3. Support MLX5 crypto driver on Windows (Tal)
     http://patchwork.dpdk.org/project/dpdk/list/?series=19951

     * Limited to crypto/mlx5 PMD, doesn't require Windows maintainers review.
     * Issues cross-compiling with MinGW.

1.4. app/test: enable subset of tests on Windows (Jie)
     http://patchwork.dpdk.org/project/dpdk/list/?series=19970

     * v8 sent, needs review.
     * Thomas recommends enabling tests on a library-by-library basis.

1.5. eal: Add EAL API for threading (Naty)
     http://patchwork.dpdk.org/project/dpdk/list/?series=19478

     * Failed to integrate in 21.11:

       - Comments came late and require major rework.
       - DmitryK is going to send more comments, although small ones.
       - This blocks the plan to make DPDK 21.11 static build shippable,
         because we still need pthread shim.

     * Can be integrated before the next LTS, because it only introduces new API
       and a unit test for it, and doesn't break ABI for non-Windows parts.

1.6. Enable the internal EAL thread API
     http://patchwork.dpdk.org/project/dpdk/list/?series=18338

     * Depends on 1.5, not integrated.
     * Cannot be merged fully until the next LTS,
       because it breaks the ABI of sync primitives.
     * Needs to be revised:

       - Parts that don't break ABI can be integrated early.
       - This course of action is approved. More time for review and testing.
       - Patches need to be rearranged.

1.7. Intel patches merged.


# 2. Opens

2.1. William Tu:

There is no solution for a Windows guest running on a Windows host
to get a performant paravirtual device, like NetVSC on Linux.
The only option is VF passthrough.
William wonders if QEMU on Windows allows that.
Also, some customers don't want to enable the Hyper-V role on the Windows host.
Resolution: no one has relevant experience, William is going to experiment.

2.2. Dmitry Kozlyuk:

The interrupt support draft is ready, but there are fundamental issues
that may require reworking NetUIO and the userspace part.
An email thread is started on the topic
explaining the issue and possible solutions
(if someone is interested but not mentioned, tell DmitryK).

Mark Cheatham (Boulder Imaging) is willing to share info about interrupt
support in their app. However, their case is quite specialized
and the logic is implemented in the kernel.

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [RFC PATCH v2] raw/ptdma: introduce ptdma driver
  2021-10-27 14:59  0%   ` Thomas Monjalon
@ 2021-10-28 14:54  0%     ` Sebastian, Selwin
  0 siblings, 0 replies; 200+ results
From: Sebastian, Selwin @ 2021-10-28 14:54 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, David Marchand

Hi,
I am working on making the ptdma driver a dmadev. I will submit a new patch for review.

Thanks and Regards
Selwin Sebastian

-----Original Message-----
From: Thomas Monjalon <thomas@monjalon.net> 
Sent: Wednesday, October 27, 2021 8:29 PM
To: Sebastian, Selwin <Selwin.Sebastian@amd.com>
Cc: dev@dpdk.org; David Marchand <david.marchand@redhat.com>
Subject: Re: [dpdk-dev] [RFC PATCH v2] raw/ptdma: introduce ptdma driver

Any update please?


06/09/2021 19:17, David Marchand:
> On Mon, Sep 6, 2021 at 6:56 PM Selwin Sebastian 
> <selwin.sebastian@amd.com> wrote:
> >
> > Add support for PTDMA driver
>
> - This description is rather short.
>
> Can this new driver be implemented as a dmadev?
> See (current revision):
> https://patchwork.dpdk.org/project/dpdk/list/?series=18677&state=%2A&archive=both
>
>
> - In any case, quick comments on this patch:
> Please update release notes.
> vfio-pci should be preferred over igb_uio.
> Please check indent in meson.
> ABI version is incorrect in version.map.
> RTE_LOG_REGISTER_DEFAULT should be preferred.
> The patch is monolithic, could it be split per functionality to ease review?
>
> Copy relevant maintainers and/or (sub-)tree maintainers to make them 
> aware of this work, and get those patches reviewed.
> Please submit new revisions of patchsets with increased revision 
> number in title + changelog that helps track what changed between 
> revisions.
>
> Some of those points are described in:
> https://doc.dpdk.org/guides/contributing/patches.html
>
>
> Thanks.



^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v19 0/5] Add PIE support for HQoS library
  2021-10-25 11:32  3%         ` [dpdk-dev] [PATCH v18 " Liguzinski, WojciechX
  2021-10-26  8:24  3%           ` Liu, Yu Y
@ 2021-10-28 10:17  3%           ` Liguzinski, WojciechX
  2021-11-02 23:57  3%             ` [dpdk-dev] [PATCH v20 " Liguzinski, WojciechX
  1 sibling, 1 reply; 200+ results
From: Liguzinski, WojciechX @ 2021-10-28 10:17 UTC (permalink / raw)
  To: dev, jasvinder.singh, cristian.dumitrescu; +Cc: megha.ajmera

The DPDK sched library is equipped with a mechanism that protects it from the
bufferbloat problem, a situation in which excess buffering in the network causes
high latency and high latency variation. Currently, it supports RED for active
queue management. However, more advanced queue management is required to address
this problem and provide a desirable quality of service to users.

This solution (RFC) proposes the use of a new algorithm called "PIE" (Proportional
Integral controller Enhanced) that can effectively and directly control queuing
latency to address the bufferbloat problem.

The implementation of the mentioned functionality includes modifying existing
data structures, adding a new set of data structures to the library, and adding
PIE-related APIs.
This affects structures in the public API/ABI. That is why a deprecation notice
is going to be prepared and sent.
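
For background, the core of the PIE control law (as specified in RFC 8033) is a
proportional-integral update of the drop probability; the following is an
illustrative sketch only, not the code in this series (rte_pie.h in patch 1
defines the actual API):

struct pie_status { /* illustrative, not the library's struct */
	double alpha;       /* proportional gain */
	double beta;        /* integral-trend gain */
	double qdelay_ref;  /* target queue delay [s] */
	double qdelay_old;  /* delay at the previous update */
	double drop_prob;   /* current drop probability */
};

static void
pie_drop_prob_update(struct pie_status *pie, double cur_qdelay)
{
	/* Nudge the probability by how far the delay is from the target
	 * and by the delay trend since the last update. */
	pie->drop_prob += pie->alpha * (cur_qdelay - pie->qdelay_ref) +
			  pie->beta * (cur_qdelay - pie->qdelay_old);

	if (pie->drop_prob < 0.0)
		pie->drop_prob = 0.0;
	else if (pie->drop_prob > 1.0)
		pie->drop_prob = 1.0;

	pie->qdelay_old = cur_qdelay;
}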

Liguzinski, WojciechX (5):
  sched: add PIE based congestion management
  example/qos_sched: add PIE support
  example/ip_pipeline: add PIE support
  doc/guides/prog_guide: added PIE
  app/test: add tests for PIE

 app/test/meson.build                         |    4 +
 app/test/test_pie.c                          | 1065 ++++++++++++++++++
 config/rte_config.h                          |    1 -
 doc/guides/prog_guide/glossary.rst           |    3 +
 doc/guides/prog_guide/qos_framework.rst      |   64 +-
 doc/guides/prog_guide/traffic_management.rst |   13 +-
 drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
 examples/ip_pipeline/tmgr.c                  |  142 +--
 examples/qos_sched/cfg_file.c                |  127 ++-
 examples/qos_sched/cfg_file.h                |    5 +
 examples/qos_sched/init.c                    |   27 +-
 examples/qos_sched/main.h                    |    3 +
 examples/qos_sched/profile.cfg               |  196 ++--
 lib/sched/meson.build                        |    3 +-
 lib/sched/rte_pie.c                          |   86 ++
 lib/sched/rte_pie.h                          |  398 +++++++
 lib/sched/rte_sched.c                        |  241 ++--
 lib/sched/rte_sched.h                        |   63 +-
 lib/sched/version.map                        |    4 +
 19 files changed, 2172 insertions(+), 279 deletions(-)
 create mode 100644 app/test/test_pie.c
 create mode 100644 lib/sched/rte_pie.c
 create mode 100644 lib/sched/rte_pie.h

-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] ethdev: promote device removal check function as stable
  2021-10-28  8:38  0% ` Kinsella, Ray
@ 2021-10-28  8:56  0%   ` Andrew Rybchenko
  2021-11-04 10:45  0%     ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-10-28  8:56 UTC (permalink / raw)
  To: Kinsella, Ray, Thomas Monjalon, dev; +Cc: matan, Ferruh Yigit

On 10/28/21 11:38 AM, Kinsella, Ray wrote:
> 
> 
> On 28/10/2021 09:35, Thomas Monjalon wrote:
>> The function rte_eth_dev_is_removed() was introduced in DPDK 18.02,
>> and is integrated in error checks of ethdev library.
>>
>> It is promoted as stable ABI.
>>
>> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
>> ---
>>   lib/ethdev/rte_ethdev.h | 4 ----
>>   lib/ethdev/version.map  | 2 +-
>>   2 files changed, 1 insertion(+), 5 deletions(-)
>>
> Acked-by: Ray Kinsella <mdr@ashroe.eu>

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] ethdev: promote device removal check function as stable
  2021-10-28  8:35  3% [dpdk-dev] [PATCH] ethdev: promote device removal check function as stable Thomas Monjalon
@ 2021-10-28  8:38  0% ` Kinsella, Ray
  2021-10-28  8:56  0%   ` Andrew Rybchenko
  0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-10-28  8:38 UTC (permalink / raw)
  To: Thomas Monjalon, dev; +Cc: matan, Ferruh Yigit, Andrew Rybchenko



On 28/10/2021 09:35, Thomas Monjalon wrote:
> The function rte_eth_dev_is_removed() was introduced in DPDK 18.02,
> and is integrated in error checks of ethdev library.
> 
> It is promoted as stable ABI.
> 
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
>   lib/ethdev/rte_ethdev.h | 4 ----
>   lib/ethdev/version.map  | 2 +-
>   2 files changed, 1 insertion(+), 5 deletions(-)
> 
Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH] ethdev: promote device removal check function as stable
@ 2021-10-28  8:35  3% Thomas Monjalon
  2021-10-28  8:38  0% ` Kinsella, Ray
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-28  8:35 UTC (permalink / raw)
  To: dev; +Cc: matan, Ferruh Yigit, Andrew Rybchenko, Ray Kinsella

The function rte_eth_dev_is_removed() was introduced in DPDK 18.02,
and is integrated in error checks of ethdev library.

It is promoted as stable ABI.
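
For context, a hypothetical sketch of how an application can use this
now-stable function in its error handling, to tell a hot-unplug apart from a
configuration error (the queue parameters are illustrative):

#include <errno.h>
#include <rte_ethdev.h>

static int
setup_rxq_checked(uint16_t port_id, struct rte_mempool *mb_pool)
{
	int ret = rte_eth_rx_queue_setup(port_id, 0, 512, SOCKET_ID_ANY,
					 NULL, mb_pool);

	if (ret < 0 && rte_eth_dev_is_removed(port_id))
		return -ENODEV; /* device was physically removed */
	return ret;
}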

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 lib/ethdev/rte_ethdev.h | 4 ----
 lib/ethdev/version.map  | 2 +-
 2 files changed, 1 insertion(+), 5 deletions(-)

diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 24f30b4b28..09d60351a3 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -2385,9 +2385,6 @@ int rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_queue,
 		uint16_t nb_tx_queue, const struct rte_eth_conf *eth_conf);
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice.
- *
  * Check if an Ethernet device was physically removed.
  *
  * @param port_id
@@ -2395,7 +2392,6 @@ int rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_queue,
  * @return
  *   1 when the Ethernet device is removed, otherwise 0.
  */
-__rte_experimental
 int
 rte_eth_dev_is_removed(uint16_t port_id);
 
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index e1abe99729..c2fb0669a4 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -31,6 +31,7 @@ DPDK_22 {
 	rte_eth_dev_get_supported_ptypes;
 	rte_eth_dev_get_vlan_offload;
 	rte_eth_dev_info_get;
+	rte_eth_dev_is_removed;
 	rte_eth_dev_is_valid_port;
 	rte_eth_dev_logtype;
 	rte_eth_dev_mac_addr_add;
@@ -148,7 +149,6 @@ EXPERIMENTAL {
 	rte_mtr_stats_update;
 
 	# added in 18.02
-	rte_eth_dev_is_removed;
 	rte_eth_dev_owner_delete;
 	rte_eth_dev_owner_get;
 	rte_eth_dev_owner_new;
-- 
2.33.0


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1
  2021-10-25 21:40  4% [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1 Thomas Monjalon
@ 2021-10-28  7:10  0% ` Jiang, YuX
  2021-11-01 11:53  0%   ` Jiang, YuX
  2021-11-05 21:51  0% ` Thinh Tran
  2021-11-08 10:50  0% ` Pei Zhang
  2 siblings, 1 reply; 200+ results
From: Jiang, YuX @ 2021-10-28  7:10 UTC (permalink / raw)
  To: Thomas Monjalon, dev (dev@dpdk.org)
  Cc: Devlin, Michelle, Mcnamara, John, Yigit, Ferruh

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Thomas Monjalon
> Sent: Tuesday, October 26, 2021 5:41 AM
> To: announce@dpdk.org
> Subject: [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1
>
> A new DPDK release candidate is ready for testing:
>       https://git.dpdk.org/dpdk/tag/?id=v21.11-rc1
>
> There are 1171 new patches in this snapshot, big as expected.
>
> Release notes:
>       https://doc.dpdk.org/guides/rel_notes/release_21_11.html
>
> Highlights of 21.11-rc1:
> * General
>       - more than 512 MSI-X interrupts
>       - hugetlbfs subdirectories
>       - mempool flag for non-IO usages
>       - device class for DMA accelerators
>       - DMA drivers for Intel DSA and IOAT
> * Networking
>       - MTU handling rework
>       - get all MAC addresses of a port
>       - RSS based on L3/L4 checksum fields
>       - flow match on L2TPv2 and PPP
>       - flow flex parser for custom header
>       - control delivery of HW Rx metadata
>       - transfer flows API rework
>       - shared Rx queue
>       - Windows support of Intel e1000, ixgbe and iavf
>       - testpmd multi-process
>       - pcapng library and dumpcap tool
> * API/ABI
>       - API namespace improvements (mempool, mbuf, ethdev)
>       - API internals hidden (intr, ethdev, security, cryptodev, eventdev,
> cmdline)
>       - flags check for future ABI compatibility (memzone, mbuf, mempool)
>
> Please test and report issues on bugs.dpdk.org.
> DPDK 21.11-rc2 is expected in two weeks or less.
>
> Thank you everyone
>
Update on the test status for the Intel part. So far, the DPDK 21.11-rc1 test execution rate is 50%. No critical issue has been found.
However, one high-severity issue, https://bugs.dpdk.org/show_bug.cgi?id=843, impacts cryptodev functional and performance tests.
The bad commit is 8cb5d08db940a6b26f5c5ac03b49bac25e9a7022 / Author: Harman Kalra <hkalra@marvell.com>. Please help to handle it.
# Basic Intel(R) NIC testing
* Build or compile:
        *Build: covers build test combinations with the latest GCC/Clang/ICC versions and popular OS revisions such as Ubuntu 20.04, Fedora 34, RHEL 8.4, etc.
                - All tests done.
        *Compile: covers the CFLAGS (O0/O1/O2/O3) with popular OSes such as Ubuntu 20.04 and Fedora 34.
                - All tests done.
                - Found one bug: https://bugs.dpdk.org/show_bug.cgi?id=841. A Marvell dev has provided a patch and the Intel validation team verified it passes.
                  Patch link: http://patchwork.dpdk.org/project/dpdk/patch/20211027131259.11775-1-ktejasree@marvell.com/
        * PF(i40e, ixgbe): test scenarios including RTE_FLOW/TSO/Jumboframe/checksum offload/VLAN/VXLAN, etc.
                - Execution rate is 60%. No new issues found yet.
        * VF(i40e, ixgbe): test scenarios including VF-RTE_FLOW/TSO/Jumboframe/checksum offload/VLAN/VXLAN, etc.
                - Execution rate is 60%.
                - One bug, https://bugs.dpdk.org/show_bug.cgi?id=845, about "vm_hotplug: vf testpmd core dumped after executing "device_del dev1" in qemu" was found.
                        The bad commit is c2bd9367e18f5b00c1a3c5eb281a512ef52c5dfd / Author: Harman Kalra <hkalra@marvell.com>
        * PF/VF(ice): test scenarios including Switch features/Package Management/Flow Director/Advanced Tx/Advanced RSS/ACL/DCF/Share code update/Flexible Descriptor, etc.
                - Execution rate is 60%.
                - One bug: kni_autotest fails on SUSE 15.3. Trying to find the bad commit. Known issue; Intel dev is investigating.
        * Intel NIC single core/NIC performance: test scenarios including PF/VF single core performance test, RFC2544 Zero packet loss performance test, etc.
                - Execution rate is 60%.
                - One bug about NIC single-core performance dropping 2% was found. The bad commit is efc6f9104c80d39ec168 / Author: Olivier Matz <olivier.matz@6wind.com>
        * Power and IPsec:
                * Power: test scenarios including bi-direction/Telemetry/Empty Poll Lib/Priority Base Frequency, etc.
                        - All passed.
                * IPsec: test scenarios including ipsec/ipsec-gw/ipsec library basic test - QAT&SW/FIB library, etc.
                        - Not Start.
# Basic cryptodev and virtio testing
        * Virtio: both function and performance test are covered. Such as PVP/Virtio_loopback/virtio-user loopback/virtio-net VM2VM perf testing/VMAWARE ESXI 7.0u3, etc.
                - Execution rate is 80%.
                - Two new bugs were found.
                        - One about VMware ESXi 7.0U3: failed to start a port. Intel dev is investigating.
                        - One, https://bugs.dpdk.org/show_bug.cgi?id=840, about "dpdk-pdump captures wrong pcap file content".
                        The bad commit is 10f726efe26c55805cf0bf6ca1b80e97b98eb724 / Author: Stephen Hemminger <stephen@networkplumber.org>
        * Cryptodev:
                *Function test: test scenarios including Cryptodev API testing/CompressDev ISA-L/QAT/ZLIB PMD Testing/FIPS, etc.
                        - Execution rate is 60%
                        - Two new bugs were found.
                                - One, https://bugs.dpdk.org/show_bug.cgi?id=843, about crypto performance tests for QAT failing. The bad commit is 8cb5d08db940a6b26f5c5ac03b49bac25e9a7022 / Author: Harman Kalra <hkalra@marvell.com>
                                - One, https://bugs.dpdk.org/show_bug.cgi?id=842, about FIPS tests failing. The bad commit is f6849cdcc6ada2a8bc9b82e691eaab1aecf4952f / Author: Akhil Goyal <gakhil@marvell.com>
                *Performance test: test scenarios including Throughput Performance/Cryptodev Latency, etc.
                        - Execution rate is 10%. Most performance tests are blocked by Bug 843.

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [Bug 842] [dpdk-21.11 rc1] FIPS tests are failing
@ 2021-10-27 17:43  2% bugzilla
  0 siblings, 0 replies; 200+ results
From: bugzilla @ 2021-10-27 17:43 UTC (permalink / raw)
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=842

            Bug ID: 842
           Summary: [dpdk-21.11 rc1] FIPS tests are failing
           Product: DPDK
           Version: 21.11
          Hardware: All
                OS: Linux
            Status: UNCONFIRMED
          Severity: minor
          Priority: Normal
         Component: cryptodev
          Assignee: dev@dpdk.org
          Reporter: varalakshmi.s@intel.com
  Target Milestone: ---

Environment

DPDK Version:  6c390cee976e33b1e9d8562d32c9d3ebe5d9ce94

OS: 5.4.0-89-generic #100~18.04.1-Ubuntu SMP Wed Sep 29 10:59:42 UTC 2021
x86_64 x86_64 x86_64 GNU/Linux 
Compiler: 7.5.0
Hardware platform: Purley

Steps to reproduce
root@dpdk-yaobing-purely147:~/dpdk#
x86_64-native-linuxapp-gcc/examples/dpdk-fips_validation  -l 9,10,66 -a
0000:af:00.0 --vdev crypto_aesni_gcm_pmd_1 --socket-mem 2048,2048 --legacy-mem
-n 6  -- --req-file /root/FIPS/GCM/req --rsp-file /root/FIPS/GCM/resp
--cryptodev crypto_aesni_gcm_pmd_1 --path-is-folder  --cryptodev-id 0
--self-test
EAL: Detected CPU lcores: 112
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 1024 hugepages of size 2097152 reserved, but no mounted hugetlbfs found
for that size
EAL: VFIO support initialized
CRYPTODEV: Creating cryptodev crypto_aesni_gcm_pmd_1CRYPTODEV: Initialisation
parameters - name: crypto_aesni_gcm_pmd_1,socket id: 0, max queue pairs: 8
ipsec_mb_create() line 140: IPSec Multi-buffer library version used:
1.0.0CRYPTODEV: elt_size 0 is expanded to 384PMD: Testing (ID 0)
SELF_TEST_AES128_CBC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 0) SELF_TEST_AES128_CBC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 1) SELF_TEST_AES192_CBC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 1) SELF_TEST_AES192_CBC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 2) SELF_TEST_AES256_CBC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 2) SELF_TEST_AES256_CBC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 3) SELF_TEST_3DES_2KEY_CBC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 3) SELF_TEST_3DES_2KEY_CBC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 4) SELF_TEST_3DES_3KEY_CBC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 4) SELF_TEST_3DES_3KEY_CBC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 5) SELF_TEST_AES128_CCM_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 5) SELF_TEST_AES128_CCM_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 6) SELF_TEST_SHA1_HMAC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 6) SELF_TEST_SHA1_HMAC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 7) SELF_TEST_SHA224_HMAC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 7) SELF_TEST_SHA224_HMAC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 8) SELF_TEST_SHA256_HMAC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 8) SELF_TEST_SHA256_HMAC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 9) SELF_TEST_SHA384_HMAC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 9) SELF_TEST_SHA384_HMAC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 10) SELF_TEST_SHA512_HMAC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 10) SELF_TEST_SHA512_HMAC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 11) SELF_TEST_AES_CMAC_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 11) SELF_TEST_AES_CMAC_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 12) SELF_TEST_AES128_GCM_encrypt_test_vector Encrypt...
PMD: Testing (ID 12) SELF_TEST_AES128_GCM_encrypt_test_vector Decrypt...
PMD: Testing (ID 13) SELF_TEST_AES192_GCM_encrypt_test_vector Encrypt...
PMD: Testing (ID 13) SELF_TEST_AES192_GCM_encrypt_test_vector Decrypt...
PMD: Testing (ID 14) SELF_TEST_AES256_GCM_encrypt_test_vector Encrypt...
PMD: Testing (ID 14) SELF_TEST_AES256_GCM_encrypt_test_vector Decrypt...
PMD: Testing (ID 15) SELF_TEST_AES128_CTR_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 15) SELF_TEST_AES128_CTR_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 16) SELF_TEST_AES192_CTR_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 16) SELF_TEST_AES192_CTR_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 17) SELF_TEST_AES256_CTR_test_vector Encrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: Testing (ID 17) SELF_TEST_AES256_CTR_test_vector Decrypt...
PMD: Failed to get capability for cdev 0
PMD: Error -13: Prepare Xform
PMD: Not supported by crypto_aesni_gcm_pmd_1. Skip
PMD: PMD 0 finished self-test successfully
CRYPTODEV: elt_size 0 is expanded to 384

Segmentation fault (core dumped)

Expected Result
Test is expected to Pass with no errors.

Stack Trace or Log
-----------------------------------------------------------------
f6849cdcc6ada2a8bc9b82e691eaab1aecf4952f is the first bad commit
commit f6849cdcc6ada2a8bc9b82e691eaab1aecf4952f
Author: Akhil Goyal <gakhil@marvell.com>
Date:   Wed Oct 20 16:57:53 2021 +0530

    cryptodev: use new flat array in fast path API

    Rework fast-path cryptodev functions to use rte_crypto_fp_ops[].
    While it is an API/ABI breakage, this change is intended to be
    transparent for both users (no changes in user app is required) and
    PMD developers (no changes in PMD is required).

    Signed-off-by: Akhil Goyal <gakhil@marvell.com>
    Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
    Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

-- 
You are receiving this mail because:
You are the assignee for the bug.

^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [RFC PATCH v2] raw/ptdma: introduce ptdma driver
  @ 2021-10-27 14:59  0%   ` Thomas Monjalon
  2021-10-28 14:54  0%     ` Sebastian, Selwin
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-27 14:59 UTC (permalink / raw)
  To: Selwin Sebastian; +Cc: dev, David Marchand

Any update please?


06/09/2021 19:17, David Marchand:
> On Mon, Sep 6, 2021 at 6:56 PM Selwin Sebastian
> <selwin.sebastian@amd.com> wrote:
> >
> > Add support for PTDMA driver
> 
> - This description is rather short.
> 
> Can this new driver be implemented as a dmadev?
> See (current revision):
> https://patchwork.dpdk.org/project/dpdk/list/?series=18677&state=%2A&archive=both
> 
> 
> - In any case, quick comments on this patch:
> Please update release notes.
> vfio-pci should be preferred over igb_uio.
> Please check indent in meson.
> ABI version is incorrect in version.map.
> RTE_LOG_REGISTER_DEFAULT should be preferred.
> The patch is monolithic, could it be split per functionality to ease review?
> 
> Copy relevant maintainers and/or (sub-)tree maintainers to make them
> aware of this work, and get those patches reviewed.
> Please submit new revisions of patchsets with increased revision
> number in title + changelog that helps track what changed between
> revisions.
> 
> Some of those points are described in:
> https://doc.dpdk.org/guides/contributing/patches.html
> 
> 
> Thanks.




^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy support for AMD platform
  2021-10-27 14:10  2%                 ` Van Haaren, Harry
@ 2021-10-27 14:31  0%                   ` Thomas Monjalon
  2021-10-29 16:01  0%                     ` Song, Keesang
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-27 14:31 UTC (permalink / raw)
  To: Aman Kumar, Ananyev, Konstantin, Van Haaren, Harry
  Cc: mattias. ronnblom, dev, viacheslavo, Burakov, Anatoly, Song,
	Keesang, jerinjacobk, Richardson, Bruce, honnappa.nagarahalli,
	Ruifeng Wang, David Christensen, david.marchand, stephen

27/10/2021 16:10, Van Haaren, Harry:
> From: Aman Kumar <aman.kumar@vvdntech.in> 
> On Wed, Oct 27, 2021 at 5:53 PM Ananyev, Konstantin <mailto:konstantin.ananyev@intel.com> wrote
> > 
> > Hi Mattias,
> > 
> > > > 6) What is the use-case for this? When would a user *want* to use this instead
> > > of rte_memcpy()?
> > > > If the data being loaded is relevant to datapath/packets, presumably other
> > > packets might require the
> > > > loaded data, so temporal (normal) loads should be used to cache the source
> > > data?
> > >
> > >
> > > I'm not sure if your first question is rhetorical or not, but a memcpy()
> > > in a NT variant is certainly useful. One use case for a memcpy() with
> > > temporal loads and non-temporal stores is if you need to archive packet
> > > payload for (distant, potential) future use, and want to avoid causing
> > > unnecessary LLC evictions while doing so.
> > 
> > Yes I agree that there are certainly benefits in using cache-locality hints.
> > There is an open question around if the src or dst or both are non-temporal.
> > 
> > In the implementation of this patch, the NT/T type of store is reversed from your use-case:
> > 1) Loads are NT (so loaded data is not cached for future packets)
> > 2) Stores are T (so copied/dst data is now resident in L1/L2)
> > 
> > In theory there might even be valid uses for this type of memcpy where loaded
> > data is not needed again soon and stored data is referenced again soon,
> > although I cannot think of any here while typing this mail..
> > 
> > I think some use-case examples, and clear documentation on when/how to choose
> > between rte_memcpy() or any (potential future) rte_memcpy_nt() variants is required
> > to progress this patch.
> > 
> > Assuming a strong use-case exists, and it can be clearly indicated to users of DPDK APIs which
> > rte_memcpy() to use, we can look at technical details around enabling the implementation.
> > 
> 
> [Konstantin wrote]:
> +1 here.
> Function behaviour and restrictions (src parameter needs to be 16/32 B aligned, etc.),
> along with expected usage scenarios have to be documented properly.
> Again, as Harry pointed out, I don't see any AMD specific instructions in this function,
> so presumably such function can go into __AVX2__ code block and no new defines will
> be required. 
> 
> 
> [Aman wrote]:
> Agreed that the APIs are generic, but we've kept them under an AMD flag for a simple reason: they are NOT tested on any other platform.
> A use-case showing how to use this was planned earlier for the mlx5 PMD, but was dropped in this version of the patch as the mlx5 data path is going to be refactored soon and it may not be useful for future versions of mlx5 (>22.02).
> Ref link: https://patchwork.dpdk.org/project/dpdk/patch/20211019104724.19416-2-aman.kumar@vvdntech.in/ (we plan to adapt this in a future version)
> The patch in the link basically enhances the mlx5 mprq implementation for our specific use-case, and with 128B packet size we achieve ~60% better perf. We understand the use of this copy function should be documented, which we plan to do along with a few other platform-specific optimizations in future versions of DPDK. As this does not conflict with other platforms, can we still keep it under the AMD flag for now, as suggested by Thomas?

I said I could merge if there is no objection.
I've overlooked that it's adding completely new functions in the API.
And the comments go in the direction of what I asked in the previous version:
what is specific to AMD here?
Now, seeing the valid objections, I agree it should be reworked.
We must provide an API to applications which is generic, stable and well documented.


> [HvH wrote]:
> As an open-source community, any contributions should aim to improve the whole.
> In the past, numerous improvements have been merged to DPDK that improve performance.
> Sometimes these are architecture specific (x86/arm/ppc) sometimes the are ISA specific (SSE, AVX512, NEON).
> 
> I am not familiar with any cases in DPDK, where there is a #ifdef based on a *specific platform*.
> A quick "grep" through the "dpdk/lib" directory does not show any place where PMD or generic code
> has been explicitly optimized for a *specific platform*.
> 
> Obviously, in cases where ISA either exists or does not exist, yes there is an optimization to enable it.
> But this is not exposed as a top-level compile-time option, it uses runtime CPU ISA detection.
> 
> Please take a step back from the code, and look at what this patch asks of DPDK:
> "Please accept & maintain these changes upstream, which benefit only platform X, even though these ISA features are also available on other platforms".
> 
> Other patches that enhance performance of DPDK ask this:
> "Please accept & maintain these changes upstream, which benefit all platforms which have ISA capability X".
> 
> 
> === Question "As this does not conflict with other platforms, can we still keep under AMD flag for now"?
> I feel the contribution is too specific to a platform. Make it generic by enabling it at an ISA capability level.
> 
> Please yes, contribute to the DPDK community by improving performance of a PMD by enabling/leveraging ISA.
> But do so in a way that does not benefit only a specific platform - do so in a way that enhances all of DPDK, as
> other patches have done for the DPDK that this patch is built on.
> 
> If you have concerns that the PMD maintainers will not accept the changes due to potential regressions on
> other platforms, then discuss those, make a plan on how to performance validate, and work to a solution.
> 
> 
> === Regarding specifically the request for "can we still keep under AMD flag for now"?
> I do not believe we should introduce APIs for specific platforms. DPDK's EAL is an abstraction layer.
> The value of EAL is to provide a common abstraction. This platform-specific flag breaks the abstraction,
> and results in packaging issues, as well as API/ABI instability based on -Dcpu_instruction_set choice.
> So, no, we should not introduce APIs based on any compile-time flag.

I agree



^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy support for AMD platform
  @ 2021-10-27 14:10  2%                 ` Van Haaren, Harry
  2021-10-27 14:31  0%                   ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Van Haaren, Harry @ 2021-10-27 14:10 UTC (permalink / raw)
  To: Aman Kumar, Ananyev, Konstantin
  Cc: mattias.ronnblom, Thomas Monjalon, dev, viacheslavo, Burakov,
	Anatoly, Song, Keesang, jerinjacobk, Richardson, Bruce,
	honnappa.nagarahalli, Ruifeng Wang, David Christensen,
	david.marchand, stephen

From: Aman Kumar <aman.kumar@vvdntech.in> 
Sent: Wednesday, October 27, 2021 2:35 PM
To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
Cc: Van Haaren, Harry <harry.van.haaren@intel.com>; mattias.ronnblom <mattias.ronnblom@ericsson.com>; Thomas Monjalon <thomas@monjalon.net>; dev@dpdk.org; viacheslavo@nvidia.com; Burakov, Anatoly <anatoly.burakov@intel.com>; Song, Keesang <Keesang.Song@amd.com>; jerinjacobk@gmail.com; Richardson, Bruce <bruce.richardson@intel.com>; honnappa.nagarahalli@arm.com; Ruifeng Wang <ruifeng.wang@arm.com>; David Christensen <drc@linux.vnet.ibm.com>; david.marchand@redhat.com; stephen@networkplumber.org
Subject: Re: [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy support for AMD platform

Hi Aman,

Please send plain-text email; converting to other formats makes writing inline replies difficult.
I've converted this reply email back to plain-text, and will annotate email below with [<author> wrote]:

On Wed, Oct 27, 2021 at 5:53 PM Ananyev, Konstantin <mailto:konstantin.ananyev@intel.com> wrote
> 
> Hi Mattias,
> 
> > > 6) What is the use-case for this? When would a user *want* to use this instead
> > of rte_memcpy()?
> > > If the data being loaded is relevant to datapath/packets, presumably other
> > packets might require the
> > > loaded data, so temporal (normal) loads should be used to cache the source
> > data?
> >
> >
> > I'm not sure if your first question is rhetorical or not, but a memcpy()
> > in a NT variant is certainly useful. One use case for a memcpy() with
> > temporal loads and non-temporal stores is if you need to archive packet
> > payload for (distant, potential) future use, and want to avoid causing
> > unnecessary LLC evictions while doing so.
> 
> Yes I agree that there are certainly benefits in using cache-locality hints.
> There is an open question around if the src or dst or both are non-temporal.
> 
> In the implementation of this patch, the NT/T type of store is reversed from your use-case:
> 1) Loads are NT (so loaded data is not cached for future packets)
> 2) Stores are T (so copied/dst data is now resident in L1/L2)
> 
> In theory there might even be valid uses for this type of memcpy where loaded
> data is not needed again soon and stored data is referenced again soon,
> although I cannot think of any here while typing this mail..
> 
> I think some use-case examples, and clear documentation on when/how to choose
> between rte_memcpy() or any (potential future) rte_memcpy_nt() variants is required
> to progress this patch.
> 
> Assuming a strong use-case exists, and it can be clearly indicated to users of DPDK APIs which
> rte_memcpy() to use, we can look at technical details around enabling the implementation.
> 

[Konstantin wrote]:
+1 here.
Function behaviour and restrictions (src parameter needs to be 16/32 B aligned, etc.),
along with expected usage scenarios have to be documented properly.
Again, as Harry pointed out, I don't see any AMD specific instructions in this function,
so presumably such function can go into __AVX2__ code block and no new defines will
be required. 


[Aman wrote]:
Agreed that the APIs are generic, but we've kept them under an AMD flag for a simple reason: they are NOT tested on any other platform.
A use-case showing how to use this was planned earlier for the mlx5 PMD, but was dropped in this version of the patch as the mlx5 data path is going to be refactored soon and it may not be useful for future versions of mlx5 (>22.02).
Ref link: https://patchwork.dpdk.org/project/dpdk/patch/20211019104724.19416-2-aman.kumar@vvdntech.in/ (we plan to adapt this in a future version)
The patch in the link basically enhances the mlx5 mprq implementation for our specific use-case, and with 128B packet size we achieve ~60% better perf. We understand the use of this copy function should be documented, which we plan to do along with a few other platform-specific optimizations in future versions of DPDK. As this does not conflict with other platforms, can we still keep it under the AMD flag for now, as suggested by Thomas?


[HvH wrote]:
As an open-source community, any contributions should aim to improve the whole.
In the past, numerous improvements have been merged to DPDK that improve performance.
Sometimes these are architecture-specific (x86/arm/ppc), sometimes they are ISA-specific (SSE, AVX512, NEON).

I am not familiar with any cases in DPDK, where there is a #ifdef based on a *specific platform*.
A quick "grep" through the "dpdk/lib" directory does not show any place where PMD or generic code
has been explicitly optimized for a *specific platform*.

Obviously, in cases where ISA either exists or does not exist, yes there is an optimization to enable it.
But this is not exposed as a top-level compile-time option, it uses runtime CPU ISA detection.
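
As an illustration, the runtime-dispatch pattern looks roughly like this, using
DPDK's existing CPU-flag API (memcpy_nt_sse41 is a hypothetical implementation;
rte_cpu_get_flag_enabled, RTE_CPUFLAG_SSE4_1 and RTE_INIT are the real API):

#include <stddef.h>
#include <string.h>
#include <rte_common.h>
#include <rte_cpuflags.h>

/* hypothetical SSE4.1 implementation, defined elsewhere */
void *memcpy_nt_sse41(void *dst, const void *src, size_t n);

/* generic fallback by default */
static void *(*memcpy_nt_fn)(void *dst, const void *src, size_t n) = memcpy;

RTE_INIT(memcpy_nt_init)
{
	/* Pick the best implementation the running CPU supports. */
	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_1))
		memcpy_nt_fn = memcpy_nt_sse41;
}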

Please take a step back from the code, and look at what this patch asks of DPDK:
"Please accept & maintain these changes upstream, which benefit only platform X, even though these ISA features are also available on other platforms".

Other patches that enhance performance of DPDK ask this:
"Please accept & maintain these changes upstream, which benefit all platforms which have ISA capability X".


=== Question "As this does not conflict with other platforms, can we still keep under AMD flag for now"?
I feel the contribution is too specific to a platform. Make it generic by enabling it at an ISA capability level.

Please yes, contribute to the DPDK community by improving performance of a PMD by enabling/leveraging ISA.
But do so in a way that does not benefit only a specific platform - do so in a way that enhances all of DPDK, as
other patches have done for the DPDK that this patch is built on.

If you have concerns that the PMD maintainers will not accept the changes due to potential regressions on
other platforms, then discuss those, make a plan on how to performance validate, and work to a solution.


=== Regarding specifically the request for "can we still keep under AMD flag for now"?
I do not believe we should introduce APIs for specific platforms. DPDK's EAL is an abstraction layer.
The value of EAL is to provide a common abstraction. This platform-specific flag breaks the abstraction,
and results in packaging issues, as well as API/ABI instability based on -Dcpu_instruction_set choice.
So, no, we should not introduce APIs based on any compile-time flag.

^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
  @ 2021-10-27 12:03  4%       ` Xia, Chenbo
  0 siblings, 0 replies; 200+ results
From: Xia, Chenbo @ 2021-10-27 12:03 UTC (permalink / raw)
  To: Thomas Monjalon, Harris, James R, Walker, Benjamin
  Cc: Liu, Changpeng, David Marchand, dev, Aaron Conole, Zawadzki, Tomasz

> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Thursday, October 14, 2021 4:26 PM
> To: Harris, James R <james.r.harris@intel.com>; Walker, Benjamin
> <benjamin.walker@intel.com>; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: Liu, Changpeng <changpeng.liu@intel.com>; David Marchand
> <david.marchand@redhat.com>; dev@dpdk.org; Aaron Conole <aconole@redhat.com>;
> Zawadzki, Tomasz <tomasz.zawadzki@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
> 
> 14/10/2021 10:07, Xia, Chenbo:
> > From: Thomas Monjalon <thomas@monjalon.net>
> > > 14/10/2021 09:00, Xia, Chenbo:
> > > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > > 14/10/2021 04:21, Xia, Chenbo:
> > > > > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > > > > Yes I think we need to agree on functions to keep as-is for
> > > compatibility.
> > > > > > > Waiting for your input please.
> > > > > >
> > > > > > So, do you mean currently DPDK doesn't guarantee ABI for drivers
> > > > >
> > > > > Yes
> > > > >
> > > > > > but could have driver ABI in the future?
> > > > >
> > > > > I don't think so, not general compatibility,
> > > > > but we can think about a way to avoid breaking SPDK specifically,
> > > > > which has less requirements.
> > > >
> > > > So the problem here is exposing some APIs to SPDK directly? Without the
> > > 'enable_driver_sdk'
> > > > option, I don't see a solution of both exposed and not-ABI. Any idea in
> your
> > > mind?
> > >
> > > No the idea is to keep using enable_driver_sdk.
> > > But so far, there is no compatibility guarantee for driver SDK.
> > > The discussion is about which basic compatibility requirement is needed
> for
> > > SPDK.
> >
> > Sorry for not understanding your point quickly, but what's the difference of
> > 'general compatibility' and 'basic compatibility'? Because in my mind, one
> > struct or function should either be ABI-compatible or not. Could you help
> explain
> > it a bit?
> 
> I wonder whether we could have a guarantee for a subset of structs and
> functions.
> Anyway, this is just opening the discussion to collect some inputs first.
> Then we'll have to check what is possible and get a techboard approval.
> 

After going through related code in SPDK, I think we can add some new functions and keep
some macros in the exposed header (i.e., rte_bus_pci.h) for SPDK to register pci driver
and get needed info.

Most structs/marocs will be hided and SPDK can use the new proposed APIs and small set
of macros/structs to build. In this way, the problem of SPDK building with DPDK distros
and ABI issue can both be solved. APIs like struct rte_pci_device and struct rte_pci_driver
can be hided to minimize pci bus ABI.

Thomas & SPDK folks, please share your opinions of above.

Thanks,
Chenbo



^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy support for AMD platform
  2021-10-27 11:03  3%       ` Van Haaren, Harry
@ 2021-10-27 11:41  0%         ` Mattias Rönnblom
    0 siblings, 1 reply; 200+ results
From: Mattias Rönnblom @ 2021-10-27 11:41 UTC (permalink / raw)
  To: Van Haaren, Harry, Thomas Monjalon, Aman Kumar
  Cc: dev, viacheslavo, Burakov, Anatoly, Song, Keesang, jerinjacobk,
	Ananyev, Konstantin, Richardson, Bruce, honnappa.nagarahalli,
	Ruifeng Wang, David Christensen, david.marchand, stephen

On 2021-10-27 13:03, Van Haaren, Harry wrote:
>> -----Original Message-----
>> From: dev <dev-bounces@dpdk.org> On Behalf Of Thomas Monjalon
>> Sent: Wednesday, October 27, 2021 9:13 AM
>> To: Aman Kumar <aman.kumar@vvdntech.in>
>> Cc: dev@dpdk.org; viacheslavo@nvidia.com; Burakov, Anatoly
>> <anatoly.burakov@intel.com>; keesang.song@amd.com;
>> aman.kumar@vvdntech.in; jerinjacobk@gmail.com; Ananyev, Konstantin
>> <konstantin.ananyev@intel.com>; Richardson, Bruce
>> <bruce.richardson@intel.com>; honnappa.nagarahalli@arm.com; Ruifeng Wang
>> <ruifeng.wang@arm.com>; David Christensen <drc@linux.vnet.ibm.com>;
>> david.marchand@redhat.com; stephen@networkplumber.org
>> Subject: Re: [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy
>> support for AMD platform
>>
>> 27/10/2021 09:28, Aman Kumar:
>>> This patch provides a rte_memcpy* call with temporal stores.
>>> Use -Dcpu_instruction_set=znverX with build to enable this API.
>>>
>>> Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
>> For the series, Acked-by: Thomas Monjalon <thomas@monjalon.net>
>> With the hope that such optimization will go in libc in a near future.
>>
>> If there is no objection, I will merge this AMD-specific series in 21.11-rc2.
>> It should not affect other platforms.
> Hi Folks,
>
> This patchset was brought to my attention, and I have a few concerns.
> I'll add short snippets of context from the patch here so I can refer to it below;
>
> +/**
> + * Copy 16 bytes from one location to another,
> + * with temporal stores
> + */
> +static __rte_always_inline void
> +rte_copy16_ts(uint8_t *dst, uint8_t *src)
> +{
> +	__m128i var128;
> +
> +	var128 = _mm_stream_load_si128((__m128i *)src);
> +	_mm_storeu_si128((__m128i *)dst, var128);
> +}
>
> 1) What is fundamentally specific to the znverX CPU? Is there any reason this can not just be enabled for x86-64 generic with SSE4.1 ISA requirements?
> _mm_stream_load_si128() is part of SSE4.1
> _mm_storeu_si128() is SSE2.
> Using the intrinsics guide for lookup of intrinsics to ISA level: https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html?wapkw=intrinsics%20guide#text=_mm_stream_load&ig_expand=6884
>
> 2) Are -D options allowed to change/break API/ABI?
> By allowing -Dcpu_instruction_set= to change available functions, any application using it is no longer source-code (API) compatible with "DPDK" proper.
> This patch essentially splits a "DPDK" app to depend on "DPDK + CPU version -D flag", in an incompatible way (no fallback?).
>
> 3) The stream load instruction used here *requires* 16-byte alignment for its operand.
> This is not documented, and worse, a uint8_t* is accepted, which is cast to (__m128i *).
> This cast hides the compiler warning for expanding type-alignments.
> And the code itself is broken - passing a "src" parameter that is not 16-byte aligned will segfault.
>
> 4) Temporal and Non-temporal are not logically presented here.
> Temporal loads/stores are normal loads/stores. They use the L1/L2 caches.
> Non-temporal loads/stores indicate that the data will *not* be used again in a short space of time.
> Non-temporal means "having no relation to time" according to my internet search.
>
> 5) The *store* here uses a normal store (temporal, targets cache). The *load* however is a streaming (non-temporal, no cache) load.
> It is not clearly documented that a stream load will be used.
> The inverse is documented: "copy with ts", aka copy with temporal store.
> Is documenting the store as temporal meant to imply that the load is non-temporal?
>
> 6) What is the use-case for this? When would a user *want* to use this instead of rte_memcpy()?
> If the data being loaded is relevant to datapath/packets, presumably other packets might require the
> loaded data, so temporal (normal) loads should be used to cache the source data?


I'm not sure if your first question is rhetorical or not, but a memcpy() 
in an NT variant is certainly useful. One use case for a memcpy() with 
temporal loads and non-temporal stores is if you need to archive packet 
payload for (distant, potential) future use, and want to avoid causing 
unnecessary LLC evictions while doing so.
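
To make that concrete, here is a minimal sketch (assuming SSE2 only;
this is not the code under review) of a 16-byte copy that reads the
source through the cache but writes the archive copy around it:

#include <emmintrin.h> /* SSE2: _mm_loadu_si128(), _mm_stream_si128() */
#include <stdint.h>

/*
 * Illustrative sketch only: temporal load, non-temporal store.
 * MOVNTDQ requires a 16-byte aligned destination, so the alignment
 * contract is made explicit in the prototype instead of being hidden
 * behind a cast.
 */
static inline void
copy16_nt_store(__m128i *dst /* 16-byte aligned */, const uint8_t *src)
{
	__m128i v = _mm_loadu_si128((const __m128i *)(const void *)src);
	_mm_stream_si128(dst, v); /* NT store: write-combining, no cache allocation */
}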


> 7) Why are streaming (non-temporal) loads & stores not used? I guess maybe this is related to the use-case,
> but it's not clear to me right now why loads are NT and stores are T.
>
> All in all, I do not think merging this patch is a good idea. I would like to understand the motivation for adding
> this type of function, and then see it being done in a way that is clearly documented regarding temporal loads/stores,
> and not changing/adding APIs for specific CPUs.
>
> So apologies for late feedback, but this is not of high enough quality to be merged to DPDK right now, NACK.



^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy support for AMD platform
  @ 2021-10-27 11:03  3%       ` Van Haaren, Harry
  2021-10-27 11:41  0%         ` Mattias Rönnblom
  0 siblings, 1 reply; 200+ results
From: Van Haaren, Harry @ 2021-10-27 11:03 UTC (permalink / raw)
  To: Thomas Monjalon, Aman Kumar
  Cc: dev, viacheslavo, Burakov, Anatoly, keesang.song, aman.kumar,
	jerinjacobk, Ananyev, Konstantin, Richardson, Bruce,
	honnappa.nagarahalli, Ruifeng Wang, David Christensen,
	david.marchand, stephen

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Thomas Monjalon
> Sent: Wednesday, October 27, 2021 9:13 AM
> To: Aman Kumar <aman.kumar@vvdntech.in>
> Cc: dev@dpdk.org; viacheslavo@nvidia.com; Burakov, Anatoly
> <anatoly.burakov@intel.com>; keesang.song@amd.com;
> aman.kumar@vvdntech.in; jerinjacobk@gmail.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; honnappa.nagarahalli@arm.com; Ruifeng Wang
> <ruifeng.wang@arm.com>; David Christensen <drc@linux.vnet.ibm.com>;
> david.marchand@redhat.com; stephen@networkplumber.org
> Subject: Re: [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy
> support for AMD platform
> 
> 27/10/2021 09:28, Aman Kumar:
> > This patch provides a rte_memcpy* call with temporal stores.
> > Use -Dcpu_instruction_set=znverX with build to enable this API.
> >
> > Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
> 
> For the series, Acked-by: Thomas Monjalon <thomas@monjalon.net>
> With the hope that such optimization will go in libc in a near future.
> 
> If there is no objection, I will merge this AMD-specific series in 21.11-rc2.
> It should not affect other platforms.

Hi Folks,

This patchset was brought to my attention, and I have a few concerns.
I'll add short snippets of context from the patch here so I can refer to it below;

+/**
+ * Copy 16 bytes from one location to another,
+ * with temporal stores
+ */
+static __rte_always_inline void
+rte_copy16_ts(uint8_t *dst, uint8_t *src)
+{
+	__m128i var128;
+
+	var128 = _mm_stream_load_si128((__m128i *)src);
+	_mm_storeu_si128((__m128i *)dst, var128);
+}

1) What is fundamentally specific to the znverX CPU? Is there any reason this can not just be enabled for x86-64 generic with SSE4.1 ISA requirements?
_mm_stream_load_si128() is part of SSE4.1
_mm_storeu_si128() is SSE2. 
Using the intrinsics guide for lookup of intrinsics to ISA level: https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html?wapkw=intrinsics%20guide#text=_mm_stream_load&ig_expand=6884
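
For illustration (a sketch, not part of the patch), gating on the ISA
feature instead of a CPU-model build flag could be as simple as:

#include <stdbool.h>

/*
 * Illustrative sketch only: MOVNTDQA (_mm_stream_load_si128) needs
 * SSE4.1, so detect that feature at runtime with the GCC/clang
 * builtin rather than keying the build on znverX.
 */
static bool
stream_load_copy_usable(void)
{
	return __builtin_cpu_supports("sse4.1");
}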

2) Are -D options allowed to change/break API/ABI?
By allowing -Dcpu_instruction_set= to change available functions, any application using it is no longer source-code (API) compatible with "DPDK" proper.
This patch essentially splits a "DPDK" app to depend on "DPDK + CPU version -D flag", in an incompatible way (no fallback?).

3) The stream load instruction used here *requires* 16-byte alignment for its operand.
This is not documented, and worse, a uint8_t* is accepted, which is cast to (__m128i *).
This cast hides the compiler warning for expanding type-alignments.
And the code itself is broken - passing a "src" parameter that is not 16-byte aligned will segfault.
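
As an illustration (not part of the patch), making the contract
explicit could look like:

#include <assert.h>
#include <stdint.h>
#include <smmintrin.h> /* SSE4.1: _mm_stream_load_si128() */

/*
 * Illustrative sketch only: assert the MOVNTDQA 16-byte alignment
 * precondition instead of hiding it behind the (__m128i *) cast.
 */
static inline __m128i
stream_load16(const uint8_t *src)
{
	assert(((uintptr_t)src & 15) == 0); /* must be 16-byte aligned */
	return _mm_stream_load_si128((__m128i *)(uintptr_t)src);
}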

4) Temporal and Non-temporal are not logically presented here.
Temporal loads/stores are normal loads/stores. They use the L1/L2 caches.
Non-temporal loads/stores indicate that the data will *not* be used again in a short space of time.
Non-temporal means "having no relation to time" according to my internet search.
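
In x86 intrinsic terms the contrast is (illustrative sketch only; both
destinations must be 16-byte aligned):

#include <emmintrin.h> /* SSE2 */

/* Illustrative only: the same value stored both ways. */
static inline void
store_contrast(__m128i *dst_t, __m128i *dst_nt, __m128i v)
{
	_mm_store_si128(dst_t, v);   /* temporal: allocates in L1/L2 */
	_mm_stream_si128(dst_nt, v); /* non-temporal: bypasses the caches */
}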

5) The *store* here uses a normal store (temporal, targets cache). The *load* however is a streaming (non-temporal, no cache) load.
It is not clearly documented that a stream load will be used.
The inverse is documented: "copy with ts", aka copy with temporal store.
Is documenting the store as temporal meant to imply that the load is non-temporal?

6) What is the use-case for this? When would a user *want* to use this instead of rte_memcpy()?
If the data being loaded is relevant to datapath/packets, presumably other packets might require the
loaded data, so temporal (normal) loads should be used to cache the source data?

7) Why are streaming (non-temporal) loads & stores not used? I guess maybe this is related to the use-case,
but it's not clear to me right now why loads are NT and stores are T.

All in all, I do not think merging this patch is a good idea. I would like to understand the motivation for adding
this type of function, and then see it being done in a way that is clearly documented regarding temporal loads/stores,
and not changing/adding APIs for specific CPUs.

So apologies for late feedback, but this is not of high enough quality to be merged to DPDK right now, NACK.


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v15 06/12] pdump: support pcapng and filtering
  2021-10-20 21:42  1%   ` [dpdk-dev] [PATCH v15 06/12] pdump: support pcapng and filtering Stephen Hemminger
  2021-10-21 14:16  0%     ` Kinsella, Ray
@ 2021-10-27  6:34  0%     ` Wang, Yinan
  1 sibling, 0 replies; 200+ results
From: Wang, Yinan @ 2021-10-27  6:34 UTC (permalink / raw)
  To: Stephen Hemminger, dev
  Cc: Pattan, Reshma, Ray Kinsella, Burakov, Anatoly, Ling, WeiX, He,
	Xingguang

Hi Hemminger,

I hit an issue when using dpdk-pdump with your patch: when we try to capture packets from a virtio port, all captured packets show as malformed, and there is no issue once your patch is removed. Bug link: https://bugs.dpdk.org/show_bug.cgi?id=840
Could you help take a look at this issue?

BR,
Yinan

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Stephen Hemminger
> Sent: 2021?10?21? 5:43
> To: dev@dpdk.org
> Cc: Stephen Hemminger <stephen@networkplumber.org>; Pattan, Reshma
> <reshma.pattan@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Burakov,
> Anatoly <anatoly.burakov@intel.com>
> Subject: [dpdk-dev] [PATCH v15 06/12] pdump: support pcapng and filtering
> 
> This enhances the DPDK pdump library to support new
> pcapng format and filtering via BPF.
> 
> The internal client/server protocol is changed to support
> two versions: the original pdump basic version and a
> new pcapng version.
> 
> The internal version number (not part of exposed API or ABI)
> is intentionally increased to cause any attempt to try
> mismatched primary/secondary process to fail.
> 
> Add new API to do allow filtering of captured packets with
> DPDK BPF (eBPF) filter program. It keeps statistics
> on packets captured, filtered, and missed (because ring was full).
> 
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> Acked-by: Reshma Pattan <reshma.pattan@intel.com>


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v18 0/5] Add PIE support for HQoS library
  2021-10-26  8:33  0%             ` Thomas Monjalon
@ 2021-10-26 10:02  0%               ` Dumitrescu, Cristian
  0 siblings, 0 replies; 200+ results
From: Dumitrescu, Cristian @ 2021-10-26 10:02 UTC (permalink / raw)
  To: Thomas Monjalon, Liguzinski, WojciechX, Singh, Jasvinder, Liu,
	Yu Y, Singh, Jasvinder
  Cc: dev, Ajmera, Megha, Liu, Yu Y, david.marchand



> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Tuesday, October 26, 2021 9:33 AM
> To: Liguzinski, WojciechX <wojciechx.liguzinski@intel.com>; Singh, Jasvinder
> <jasvinder.singh@intel.com>; Dumitrescu, Cristian
> <cristian.dumitrescu@intel.com>; Liu, Yu Y <yu.y.liu@intel.com>
> Cc: dev@dpdk.org; Ajmera, Megha <megha.ajmera@intel.com>; Liu, Yu Y
> <yu.y.liu@intel.com>; david.marchand@redhat.com
> Subject: Re: [dpdk-dev] [PATCH v18 0/5] Add PIE support for HQoS library
>
> 26/10/2021 10:24, Liu, Yu Y:
> > Hi Thomas,
> >
> > Would you merge this patch as the series is acked by Cristian as below?
> >
> https://patchwork.dpdk.org/project/dpdk/cover/20211019081902.3514841-1-wojciechx.liguzinski@intel.com/
>
> I didn't see any email from Cristian.
> It seems you just added this ack silently at the bottom of the cover letter.
>
> 1/ an email from Cristian is far better
> 2/ when integrating an ack, it must be done in the patches, not in the cover letter
>

Hi Thomas,

I did ack this set in a previous version (V15) by replying with "Series-acked-by" on the cover letter email, which does not show up in patchwork. Is there a better way to do this?

It would be good to have Jasvinder's ack as well on this series, as he is looking into some other aspects of the sched library.

Regards,
Cristian
>
> >
> > Thanks & Regards,
> > Yu Liu
> >
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Liguzinski, WojciechX
> > Sent: Monday, October 25, 2021 7:32 PM
> > To: dev@dpdk.org; Singh, Jasvinder <jasvinder.singh@intel.com>;
> Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
> > Cc: Ajmera, Megha <megha.ajmera@intel.com>
> > Subject: [dpdk-dev] [PATCH v18 0/5] Add PIE support for HQoS library
> >
> > The DPDK sched library is equipped with a mechanism that protects it
> > from the bufferbloat problem, a situation in which excess buffering in
> > the network causes high latency and latency variation. Currently, it
> > supports RED for active queue management. However, more advanced queue
> > management is required to address this problem and provide a desirable
> > quality of service to users.
> >
> > This solution (RFC) proposes the use of a new algorithm called "PIE"
> > (Proportional Integral controller Enhanced) that can effectively and
> > directly control queuing latency to address the bufferbloat problem.
> >
> > The implementation of the mentioned functionality includes modifying
> > existing data structures, adding a new set of data structures to the
> > library, and adding PIE-related APIs.
> > This affects structures in the public API/ABI. That is why a
> > deprecation notice is going to be prepared and sent.
> >
> > Liguzinski, WojciechX (5):
> >   sched: add PIE based congestion management
> >   example/qos_sched: add PIE support
> >   example/ip_pipeline: add PIE support
> >   doc/guides/prog_guide: added PIE
> >   app/test: add tests for PIE
> >
> >  app/test/meson.build                         |    4 +
> >  app/test/test_pie.c                          | 1065 ++++++++++++++++++
> >  config/rte_config.h                          |    1 -
> >  doc/guides/prog_guide/glossary.rst           |    3 +
> >  doc/guides/prog_guide/qos_framework.rst      |   64 +-
> >  doc/guides/prog_guide/traffic_management.rst |   13 +-
> >  drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
> >  examples/ip_pipeline/tmgr.c                  |  142 +--
> >  examples/qos_sched/cfg_file.c                |  127 ++-
> >  examples/qos_sched/cfg_file.h                |    5 +
> >  examples/qos_sched/init.c                    |   27 +-
> >  examples/qos_sched/main.h                    |    3 +
> >  examples/qos_sched/profile.cfg               |  196 ++--
> >  lib/sched/meson.build                        |    3 +-
> >  lib/sched/rte_pie.c                          |   86 ++
> >  lib/sched/rte_pie.h                          |  398 +++++++
> >  lib/sched/rte_sched.c                        |  241 ++--
> >  lib/sched/rte_sched.h                        |   63 +-
> >  lib/sched/version.map                        |    4 +
> >  19 files changed, 2172 insertions(+), 279 deletions(-)  create mode 100644
> app/test/test_pie.c  create mode 100644 lib/sched/rte_pie.c  create mode
> 100644 lib/sched/rte_pie.h
> >
> > --
> > 2.25.1
> >
> > Series-acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
> >
>
>
>
>


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v18 0/5] Add PIE support for HQoS library
  2021-10-26  8:24  3%           ` Liu, Yu Y
@ 2021-10-26  8:33  0%             ` Thomas Monjalon
  2021-10-26 10:02  0%               ` Dumitrescu, Cristian
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-26  8:33 UTC (permalink / raw)
  To: Liguzinski, WojciechX, Singh, Jasvinder, Dumitrescu, Cristian, Liu, Yu Y
  Cc: dev, Ajmera, Megha, Liu, Yu Y, david.marchand

26/10/2021 10:24, Liu, Yu Y:
> Hi Thomas,
> 
> Would you merge this patch, as the series is acked by Cristian below?
> https://patchwork.dpdk.org/project/dpdk/cover/20211019081902.3514841-1-wojciechx.liguzinski@intel.com/

I didn't see any email from Cristian.
It seems you just added this ack silently at the bottom of the cover letter.

1/ an email from Cristian is far better
2/ when integrating an ack, it must be done in the patches, not in the cover letter


> 
> Thanks & Regards,
> Yu Liu
> 
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Liguzinski, WojciechX
> Sent: Monday, October 25, 2021 7:32 PM
> To: dev@dpdk.org; Singh, Jasvinder <jasvinder.singh@intel.com>; Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
> Cc: Ajmera, Megha <megha.ajmera@intel.com>
> Subject: [dpdk-dev] [PATCH v18 0/5] Add PIE support for HQoS library
> 
> The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat problem, a situation in which excess buffering in the network causes high latency and latency variation. Currently, it supports RED for active queue management. However, more advanced queue management is required to address this problem and provide a desirable quality of service to users.
> 
> This solution (RFC) proposes the use of a new algorithm called "PIE" (Proportional Integral controller Enhanced) that can effectively and directly control queuing latency to address the bufferbloat problem.
> 
> The implementation of the mentioned functionality includes modifying existing data structures, adding a new set of data structures to the library, and adding PIE-related APIs.
> This affects structures in the public API/ABI. That is why a deprecation notice is going to be prepared and sent.
> 
> Liguzinski, WojciechX (5):
>   sched: add PIE based congestion management
>   example/qos_sched: add PIE support
>   example/ip_pipeline: add PIE support
>   doc/guides/prog_guide: added PIE
>   app/test: add tests for PIE
> 
>  app/test/meson.build                         |    4 +
>  app/test/test_pie.c                          | 1065 ++++++++++++++++++
>  config/rte_config.h                          |    1 -
>  doc/guides/prog_guide/glossary.rst           |    3 +
>  doc/guides/prog_guide/qos_framework.rst      |   64 +-
>  doc/guides/prog_guide/traffic_management.rst |   13 +-
>  drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
>  examples/ip_pipeline/tmgr.c                  |  142 +--
>  examples/qos_sched/cfg_file.c                |  127 ++-
>  examples/qos_sched/cfg_file.h                |    5 +
>  examples/qos_sched/init.c                    |   27 +-
>  examples/qos_sched/main.h                    |    3 +
>  examples/qos_sched/profile.cfg               |  196 ++--
>  lib/sched/meson.build                        |    3 +-
>  lib/sched/rte_pie.c                          |   86 ++
>  lib/sched/rte_pie.h                          |  398 +++++++
>  lib/sched/rte_sched.c                        |  241 ++--
>  lib/sched/rte_sched.h                        |   63 +-
>  lib/sched/version.map                        |    4 +
>  19 files changed, 2172 insertions(+), 279 deletions(-)  create mode 100644 app/test/test_pie.c  create mode 100644 lib/sched/rte_pie.c  create mode 100644 lib/sched/rte_pie.h
> 
> --
> 2.25.1
> 
> Series-acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
> 






^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v18 0/5] Add PIE support for HQoS library
  2021-10-25 11:32  3%         ` [dpdk-dev] [PATCH v18 " Liguzinski, WojciechX
@ 2021-10-26  8:24  3%           ` Liu, Yu Y
  2021-10-26  8:33  0%             ` Thomas Monjalon
  2021-10-28 10:17  3%           ` [dpdk-dev] [PATCH v19 " Liguzinski, WojciechX
  1 sibling, 1 reply; 200+ results
From: Liu, Yu Y @ 2021-10-26  8:24 UTC (permalink / raw)
  To: Thomas Monjalon, dev, Liguzinski, WojciechX, Singh, Jasvinder,
	Dumitrescu, Cristian
  Cc: Ajmera, Megha, Liu, Yu Y

Hi Thomas,

Would you merge this patch, as the series is acked by Cristian below?
https://patchwork.dpdk.org/project/dpdk/cover/20211019081902.3514841-1-wojciechx.liguzinski@intel.com/ 

Thanks & Regards,
Yu Liu

-----Original Message-----
From: dev <dev-bounces@dpdk.org> On Behalf Of Liguzinski, WojciechX
Sent: Monday, October 25, 2021 7:32 PM
To: dev@dpdk.org; Singh, Jasvinder <jasvinder.singh@intel.com>; Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
Cc: Ajmera, Megha <megha.ajmera@intel.com>
Subject: [dpdk-dev] [PATCH v18 0/5] Add PIE support for HQoS library

The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat problem, a situation in which excess buffering in the network causes high latency and latency variation. Currently, it supports RED for active queue management. However, more advanced queue management is required to address this problem and provide a desirable quality of service to users.

This solution (RFC) proposes the use of a new algorithm called "PIE" (Proportional Integral controller Enhanced) that can effectively and directly control queuing latency to address the bufferbloat problem.

The implementation of the mentioned functionality includes modifying existing data structures, adding a new set of data structures to the library, and adding PIE-related APIs.
This affects structures in the public API/ABI. That is why a deprecation notice is going to be prepared and sent.
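
For reference, a minimal sketch of the periodic PIE drop-probability
update as described in RFC 8033 (illustrative only, not the rte_pie
implementation; alpha, beta and the target delay are tunables):

/* Illustrative sketch of the RFC 8033 update, not library code. */
static double pie_prob;   /* current drop probability */
static double qdelay_old; /* queue delay at the previous update */

static void
pie_update(double qdelay, double target, double alpha, double beta)
{
	pie_prob += alpha * (qdelay - target) + beta * (qdelay - qdelay_old);
	if (pie_prob < 0.0)
		pie_prob = 0.0;
	else if (pie_prob > 1.0)
		pie_prob = 1.0;
	qdelay_old = qdelay;
}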

Liguzinski, WojciechX (5):
  sched: add PIE based congestion management
  example/qos_sched: add PIE support
  example/ip_pipeline: add PIE support
  doc/guides/prog_guide: added PIE
  app/test: add tests for PIE

 app/test/meson.build                         |    4 +
 app/test/test_pie.c                          | 1065 ++++++++++++++++++
 config/rte_config.h                          |    1 -
 doc/guides/prog_guide/glossary.rst           |    3 +
 doc/guides/prog_guide/qos_framework.rst      |   64 +-
 doc/guides/prog_guide/traffic_management.rst |   13 +-
 drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
 examples/ip_pipeline/tmgr.c                  |  142 +--
 examples/qos_sched/cfg_file.c                |  127 ++-
 examples/qos_sched/cfg_file.h                |    5 +
 examples/qos_sched/init.c                    |   27 +-
 examples/qos_sched/main.h                    |    3 +
 examples/qos_sched/profile.cfg               |  196 ++--
 lib/sched/meson.build                        |    3 +-
 lib/sched/rte_pie.c                          |   86 ++
 lib/sched/rte_pie.h                          |  398 +++++++
 lib/sched/rte_sched.c                        |  241 ++--
 lib/sched/rte_sched.h                        |   63 +-
 lib/sched/version.map                        |    4 +
 19 files changed, 2172 insertions(+), 279 deletions(-)  create mode 100644 app/test/test_pie.c  create mode 100644 lib/sched/rte_pie.c  create mode 100644 lib/sched/rte_pie.h

--
2.25.1

Series-acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1
@ 2021-10-25 21:40  4% Thomas Monjalon
  2021-10-28  7:10  0% ` Jiang, YuX
                   ` (2 more replies)
  0 siblings, 3 replies; 200+ results
From: Thomas Monjalon @ 2021-10-25 21:40 UTC (permalink / raw)
  To: announce

A new DPDK release candidate is ready for testing:
	https://git.dpdk.org/dpdk/tag/?id=v21.11-rc1

There are 1171 new patches in this snapshot, which is as big as expected.

Release notes:
	https://doc.dpdk.org/guides/rel_notes/release_21_11.html

Highlights of 21.11-rc1:
* General
	- more than 512 MSI-X interrupts
	- hugetlbfs subdirectories
	- mempool flag for non-IO usages
	- device class for DMA accelerators
	- DMA drivers for Intel DSA and IOAT
* Networking
	- MTU handling rework
	- get all MAC addresses of a port
	- RSS based on L3/L4 checksum fields
	- flow match on L2TPv2 and PPP
	- flow flex parser for custom header
	- control delivery of HW Rx metadata
	- transfer flows API rework
	- shared Rx queue
	- Windows support of Intel e1000, ixgbe and iavf
	- testpmd multi-process
	- pcapng library and dumpcap tool
* API/ABI
	- API namespace improvements (mempool, mbuf, ethdev)
	- API internals hidden (intr, ethdev, security, cryptodev, eventdev, cmdline)
	- flags check for future ABI compatibility (memzone, mbuf, mempool)

Please test and report issues on bugs.dpdk.org.
DPDK 21.11-rc2 is expected in two weeks or less.

Thank you everyone



^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal
  2021-10-25 14:27  4%   ` [dpdk-dev] [PATCH v8 " David Marchand
  2021-10-25 14:32  0%     ` Raslan Darawsheh
@ 2021-10-25 19:24  0%     ` David Marchand
  1 sibling, 0 replies; 200+ results
From: David Marchand @ 2021-10-25 19:24 UTC (permalink / raw)
  To: Harman Kalra, dev; +Cc: Dmitry Kozlyuk, Raslan Darawsheh, Thomas Monjalon

On Mon, Oct 25, 2021 at 4:27 PM David Marchand
<david.marchand@redhat.com> wrote:
>
> Move struct rte_intr_handle to be an internal structure to
> avoid any ABI breakage in the future, since this structure defines
> some static arrays and changing the respective macros breaks the ABI.
> E.g.: currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of at most 512
> MSI-X interrupts that can be defined for a PCI device, while the PCI
> specification allows up to 2048 MSI-X interrupts to be used.
> If some PCI device requires more than 512 vectors, either change the
> RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on the
> PCI device MSI-X size at probe time. Either way it's an ABI breakage.
>
> Change already included in 21.11 ABI improvement spreadsheet (item 42):
> https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
>
> This series makes struct rte_intr_handle totally opaque to the outside
> world by wrapping it inside a .c file and providing get/set wrapper APIs
> to read or manipulate its fields. Any changes to be made to any of the
> fields should be done via these get/set APIs.
> A new eal_common_interrupts.c is introduced where all these APIs are
> defined and the struct rte_intr_handle definition is hidden.
>
> v1:
> * Fixed freebsd compilation failure
> * Fixed seg fault in case of memif
>
> v2:
> * Merged the prototype and implementation patch to 1.
> * Restricting allocation of single interrupt instance.
> * Removed base APIs, as they were exposing internally
> allocated memory information.
> * Fixed some memory leak issues.
> * Marked some library specific APIs as internal.
>
> v3:
> * Removed flag from instance alloc API, rather auto detect
> if memory should be allocated using glibc malloc APIs or
> rte_malloc*
> * Added APIs for get/set windows handle.
> * Defined macros for repeated checks.
>
> v4:
> * Rectified some typo in the APIs documentation.
> * Better names for some internal variables.
>
> v5:
> * Reverted back to passing flag to instance alloc API, as
> with auto detect some multiprocess issues existing in the
> library were causing tests failure.
> * Rebased to top of tree.
>
> v6:
> * renamed RTE_INTR_INSTANCE_F_UNSHARED as RTE_INTR_INSTANCE_F_PRIVATE,
> * changed API and removed need for alloc_flag content exposure
>   (see rte_intr_instance_dup() in patch 1 and 2),
> * exported all symbols for Windows,
> * fixed leak in unit tests in case of alloc failure,
> * split (previously) patch 4 into three patches
>   * (now) patch 4 only concerns alarm and (previously) patch 6 cleanup bits
>     are squashed in it,
>   * (now) patch 5 concerns other libraries updates,
>   * (now) patch 6 concerns drivers updates:
>     * instance allocation is moved to probing for auxiliary,
>     * there might be a bug for PCI drivers non requesting
>       RTE_PCI_DRV_NEED_MAPPING, but code is left as v5,
> * split (previously) patch 5 into three patches
>   * (now) patch 7 only hides structure, but keep it in a EAL private
>     header, this makes it possible to keep info in tracepoints,
>   * (now) patch 8 deals with VFIO/UIO internal fds merge,
>   * (now) patch 9 extends event list,
>
> v7:
> * fixed compilation on FreeBSD,
> * removed unused interrupt handle in FreeBSD alarm code,
> * fixed interrupt handle allocation for PCI drivers without
>   RTE_PCI_DRV_NEED_MAPPING,
>
> v8:
> * lowered logs level to DEBUG in sanity checks,
> * fixed corner case with vector list access,
>
> --
> David Marchand
>
> Harman Kalra (9):
>   interrupts: add allocator and accessors
>   interrupts: remove direct access to interrupt handle
>   test/interrupts: remove direct access to interrupt handle
>   alarm: remove direct access to interrupt handle
>   lib: remove direct access to interrupt handle
>   drivers: remove direct access to interrupt handle
>   interrupts: make interrupt handle structure opaque
>   interrupts: rename device specific file descriptor
>   interrupts: extend event list

Series applied, thanks.


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3] ci: update machine meson option to platform
  @ 2021-10-25 15:42  0%     ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-10-25 15:42 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: dev, david.marchand, maicolgabriel, ohilyard, ci, Aaron Conole

14/10/2021 14:26, Aaron Conole:
> Juraj Linkeš <juraj.linkes@pantheon.tech> writes:
> 
> > The way we're building DPDK in CI, with -Dmachine=default, has not been
> > updated when the option got replaced to preserve a backwards-complatible
> > build call to facilitate ABI verification between DPDK versions. Update
> > the call to use -Dplatform=generic, which is the most up to date way to
> > execute the same build which is now present in all DPDK versions the ABI
> > check verifies.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> 
> Acked-by: Aaron Conole <aconole@redhat.com>

Applied, thanks.




^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal
  2021-10-25 14:27  4%   ` [dpdk-dev] [PATCH v8 " David Marchand
@ 2021-10-25 14:32  0%     ` Raslan Darawsheh
  2021-10-25 19:24  0%     ` David Marchand
  1 sibling, 0 replies; 200+ results
From: Raslan Darawsheh @ 2021-10-25 14:32 UTC (permalink / raw)
  To: David Marchand, hkalra, dev; +Cc: dmitry.kozliuk, NBU-Contact-Thomas Monjalon

Hi,
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Monday, October 25, 2021 5:27 PM
> To: hkalra@marvell.com; dev@dpdk.org
> Cc: dmitry.kozliuk@gmail.com; Raslan Darawsheh <rasland@nvidia.com>;
> NBU-Contact-Thomas Monjalon <thomas@monjalon.net>
> Subject: [PATCH v8 0/9] make rte_intr_handle internal
> 
> Move struct rte_intr_handle to be an internal structure to avoid any ABI
> breakage in the future, since this structure defines some static arrays
> and changing the respective macros breaks the ABI.
> E.g.: currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of at most 512
> MSI-X interrupts that can be defined for a PCI device, while the PCI
> specification allows up to 2048 MSI-X interrupts to be used.
> If some PCI device requires more than 512 vectors, either change the
> RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on the PCI
> device MSI-X size at probe time. Either way it's an ABI breakage.
> 
> Change already included in 21.11 ABI improvement spreadsheet (item 42):
> https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
> 
> This series makes struct rte_intr_handle totally opaque to the outside world
> by wrapping it inside a .c file and providing get/set wrapper APIs to read or
> manipulate its fields. Any changes to be made to any of the fields should be
> done via these get/set APIs.
> A new eal_common_interrupts.c is introduced where all these APIs are
> defined and the struct rte_intr_handle definition is hidden.
> 
> v1:
> * Fixed freebsd compilation failure
> * Fixed seg fault in case of memif
> 
> v2:
> * Merged the prototype and implementation patch to 1.
> * Restricting allocation of single interrupt instance.
> * Removed base APIs, as they were exposing internally allocated memory
> information.
> * Fixed some memory leak issues.
> * Marked some library specific APIs as internal.
> 
> v3:
> * Removed flag from instance alloc API, rather auto detect if memory should
> be allocated using glibc malloc APIs or
> rte_malloc*
> * Added APIs for get/set windows handle.
> * Defined macros for repeated checks.
> 
> v4:
> * Rectified some typo in the APIs documentation.
> * Better names for some internal variables.
> 
> v5:
> * Reverted back to passing flag to instance alloc API, as with auto detect
> some multiprocess issues existing in the library were causing tests failure.
> * Rebased to top of tree.
> 
> v6:
> * renamed RTE_INTR_INSTANCE_F_UNSHARED as
> RTE_INTR_INSTANCE_F_PRIVATE,
> * changed API and removed need for alloc_flag content exposure
>   (see rte_intr_instance_dup() in patch 1 and 2),
> * exported all symbols for Windows,
> * fixed leak in unit tests in case of alloc failure,
> * split (previously) patch 4 into three patches
>   * (now) patch 4 only concerns alarm and (previously) patch 6 cleanup bits
>     are squashed in it,
>   * (now) patch 5 concerns other libraries updates,
>   * (now) patch 6 concerns drivers updates:
>     * instance allocation is moved to probing for auxiliary,
>     * there might be a bug for PCI drivers non requesting
>       RTE_PCI_DRV_NEED_MAPPING, but code is left as v5,
> * split (previously) patch 5 into three patches
>   * (now) patch 7 only hides structure, but keep it in a EAL private
>     header, this makes it possible to keep info in tracepoints,
>   * (now) patch 8 deals with VFIO/UIO internal fds merge,
>   * (now) patch 9 extends event list,
> 
> v7:
> * fixed compilation on FreeBSD,
> * removed unused interrupt handle in FreeBSD alarm code,
> * fixed interrupt handle allocation for PCI drivers without
>   RTE_PCI_DRV_NEED_MAPPING,
> 
> v8:
> * lowered logs level to DEBUG in sanity checks,
> * fixed corner case with vector list access,
> 
> --
> David Marchand
> 
> Harman Kalra (9):
>   interrupts: add allocator and accessors
>   interrupts: remove direct access to interrupt handle
>   test/interrupts: remove direct access to interrupt handle
>   alarm: remove direct access to interrupt handle
>   lib: remove direct access to interrupt handle
>   drivers: remove direct access to interrupt handle
>   interrupts: make interrupt handle structure opaque
>   interrupts: rename device specific file descriptor
>   interrupts: extend event list
> 
>  MAINTAINERS                                   |   1 +
>  app/test/test_interrupts.c                    | 164 +++--
>  drivers/baseband/acc100/rte_acc100_pmd.c      |  14 +-
>  .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |  24 +-
>  drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |  24 +-
>  drivers/bus/auxiliary/auxiliary_common.c      |  17 +-
>  drivers/bus/auxiliary/rte_bus_auxiliary.h     |   2 +-
>  drivers/bus/dpaa/dpaa_bus.c                   |  28 +-
>  drivers/bus/dpaa/rte_dpaa_bus.h               |   2 +-
>  drivers/bus/fslmc/fslmc_bus.c                 |  14 +-
>  drivers/bus/fslmc/fslmc_vfio.c                |  30 +-
>  drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |  18 +-
>  drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |   2 +-
>  drivers/bus/fslmc/rte_fslmc.h                 |   2 +-
>  drivers/bus/ifpga/ifpga_bus.c                 |  13 +-
>  drivers/bus/ifpga/rte_bus_ifpga.h             |   2 +-
>  drivers/bus/pci/bsd/pci.c                     |  20 +-
>  drivers/bus/pci/linux/pci.c                   |   4 +-
>  drivers/bus/pci/linux/pci_uio.c               |  69 +-
>  drivers/bus/pci/linux/pci_vfio.c              | 108 ++-
>  drivers/bus/pci/pci_common.c                  |  47 +-
>  drivers/bus/pci/pci_common_uio.c              |  21 +-
>  drivers/bus/pci/rte_bus_pci.h                 |   4 +-
>  drivers/bus/vmbus/linux/vmbus_bus.c           |   6 +
>  drivers/bus/vmbus/linux/vmbus_uio.c           |  35 +-
>  drivers/bus/vmbus/rte_bus_vmbus.h             |   2 +-
>  drivers/bus/vmbus/vmbus_common_uio.c          |  23 +-
>  drivers/common/cnxk/roc_cpt.c                 |   8 +-
>  drivers/common/cnxk/roc_dev.c                 |  14 +-
>  drivers/common/cnxk/roc_irq.c                 | 107 +--
>  drivers/common/cnxk/roc_nix_inl_dev_irq.c     |   8 +-
>  drivers/common/cnxk/roc_nix_irq.c             |  36 +-
>  drivers/common/cnxk/roc_npa.c                 |   2 +-
>  drivers/common/cnxk/roc_platform.h            |  49 +-
>  drivers/common/cnxk/roc_sso.c                 |   4 +-
>  drivers/common/cnxk/roc_tim.c                 |   4 +-
>  drivers/common/octeontx2/otx2_dev.c           |  14 +-
>  drivers/common/octeontx2/otx2_irq.c           | 117 ++--
>  .../octeontx2/otx2_cryptodev_hw_access.c      |   4 +-
>  drivers/event/octeontx2/otx2_evdev_irq.c      |  12 +-
>  drivers/mempool/octeontx2/otx2_mempool.c      |   2 +-
>  drivers/net/atlantic/atl_ethdev.c             |  20 +-
>  drivers/net/avp/avp_ethdev.c                  |   8 +-
>  drivers/net/axgbe/axgbe_ethdev.c              |  12 +-
>  drivers/net/axgbe/axgbe_mdio.c                |   6 +-
>  drivers/net/bnx2x/bnx2x_ethdev.c              |  10 +-
>  drivers/net/bnxt/bnxt_ethdev.c                |  33 +-
>  drivers/net/bnxt/bnxt_irq.c                   |   4 +-
>  drivers/net/dpaa/dpaa_ethdev.c                |  48 +-
>  drivers/net/dpaa2/dpaa2_ethdev.c              |  10 +-
>  drivers/net/e1000/em_ethdev.c                 |  23 +-
>  drivers/net/e1000/igb_ethdev.c                |  79 +--
>  drivers/net/ena/ena_ethdev.c                  |  35 +-
>  drivers/net/enic/enic_main.c                  |  26 +-
>  drivers/net/failsafe/failsafe.c               |  21 +-
>  drivers/net/failsafe/failsafe_intr.c          |  43 +-
>  drivers/net/failsafe/failsafe_ops.c           |  19 +-
>  drivers/net/failsafe/failsafe_private.h       |   2 +-
>  drivers/net/fm10k/fm10k_ethdev.c              |  32 +-
>  drivers/net/hinic/hinic_pmd_ethdev.c          |  10 +-
>  drivers/net/hns3/hns3_ethdev.c                |  57 +-
>  drivers/net/hns3/hns3_ethdev_vf.c             |  64 +-
>  drivers/net/hns3/hns3_rxtx.c                  |   2 +-
>  drivers/net/i40e/i40e_ethdev.c                |  53 +-
>  drivers/net/iavf/iavf_ethdev.c                |  42 +-
>  drivers/net/iavf/iavf_vchnl.c                 |   4 +-
>  drivers/net/ice/ice_dcf.c                     |  10 +-
>  drivers/net/ice/ice_dcf_ethdev.c              |  21 +-
>  drivers/net/ice/ice_ethdev.c                  |  49 +-
>  drivers/net/igc/igc_ethdev.c                  |  45 +-
>  drivers/net/ionic/ionic_ethdev.c              |  17 +-
>  drivers/net/ixgbe/ixgbe_ethdev.c              |  66 +-
>  drivers/net/memif/memif_socket.c              | 108 ++-
>  drivers/net/memif/memif_socket.h              |   4 +-
>  drivers/net/memif/rte_eth_memif.c             |  56 +-
>  drivers/net/memif/rte_eth_memif.h             |   2 +-
>  drivers/net/mlx4/mlx4.c                       |  19 +-
>  drivers/net/mlx4/mlx4.h                       |   2 +-
>  drivers/net/mlx4/mlx4_intr.c                  |  47 +-
>  drivers/net/mlx5/linux/mlx5_os.c              |  55 +-
>  drivers/net/mlx5/linux/mlx5_socket.c          |  25 +-
>  drivers/net/mlx5/mlx5.h                       |   6 +-
>  drivers/net/mlx5/mlx5_rxq.c                   |  43 +-
>  drivers/net/mlx5/mlx5_trigger.c               |   4 +-
>  drivers/net/mlx5/mlx5_txpp.c                  |  25 +-
>  drivers/net/netvsc/hn_ethdev.c                |   4 +-
>  drivers/net/nfp/nfp_common.c                  |  34 +-
>  drivers/net/nfp/nfp_ethdev.c                  |  13 +-
>  drivers/net/nfp/nfp_ethdev_vf.c               |  13 +-
>  drivers/net/ngbe/ngbe_ethdev.c                |  29 +-
>  drivers/net/octeontx2/otx2_ethdev_irq.c       |  35 +-
>  drivers/net/qede/qede_ethdev.c                |  16 +-
>  drivers/net/sfc/sfc_intr.c                    |  30 +-
>  drivers/net/tap/rte_eth_tap.c                 |  33 +-
>  drivers/net/tap/rte_eth_tap.h                 |   2 +-
>  drivers/net/tap/tap_intr.c                    |  33 +-
>  drivers/net/thunderx/nicvf_ethdev.c           |  10 +
>  drivers/net/thunderx/nicvf_struct.h           |   2 +-
>  drivers/net/txgbe/txgbe_ethdev.c              |  38 +-
>  drivers/net/txgbe/txgbe_ethdev_vf.c           |  33 +-
>  drivers/net/vhost/rte_eth_vhost.c             |  80 ++-
>  drivers/net/virtio/virtio_ethdev.c            |  21 +-
>  .../net/virtio/virtio_user/virtio_user_dev.c  |  56 +-
>  drivers/net/vmxnet3/vmxnet3_ethdev.c          |  43 +-
>  drivers/raw/ifpga/ifpga_rawdev.c              |  62 +-
>  drivers/raw/ntb/ntb.c                         |   9 +-
>  .../regex/octeontx2/otx2_regexdev_hw_access.c |   4 +-
>  drivers/vdpa/ifc/ifcvf_vdpa.c                 |   5 +-
>  drivers/vdpa/mlx5/mlx5_vdpa.c                 |   8 +
>  drivers/vdpa/mlx5/mlx5_vdpa.h                 |   4 +-
>  drivers/vdpa/mlx5/mlx5_vdpa_event.c           |  21 +-
>  drivers/vdpa/mlx5/mlx5_vdpa_virtq.c           |  44 +-
>  lib/bbdev/rte_bbdev.c                         |   4 +-
>  lib/eal/common/eal_common_interrupts.c        | 500 ++++++++++++++
>  lib/eal/common/eal_interrupts.h               |  30 +
>  lib/eal/common/eal_private.h                  |  10 +
>  lib/eal/common/meson.build                    |   1 +
>  lib/eal/freebsd/eal.c                         |   1 +
>  lib/eal/freebsd/eal_alarm.c                   |  35 +-
>  lib/eal/freebsd/eal_interrupts.c              |  85 ++-
>  lib/eal/include/meson.build                   |   2 +-
>  lib/eal/include/rte_eal_interrupts.h          | 269 --------
>  lib/eal/include/rte_eal_trace.h               |  10 +-
>  lib/eal/include/rte_epoll.h                   | 118 ++++
>  lib/eal/include/rte_interrupts.h              | 651 +++++++++++++++++-
>  lib/eal/linux/eal.c                           |   1 +
>  lib/eal/linux/eal_alarm.c                     |  32 +-
>  lib/eal/linux/eal_dev.c                       |  57 +-
>  lib/eal/linux/eal_interrupts.c                | 304 ++++----
>  lib/eal/version.map                           |  45 +-
>  lib/ethdev/ethdev_pci.h                       |   2 +-
>  lib/ethdev/rte_ethdev.c                       |  14 +-
>  132 files changed, 3449 insertions(+), 1748 deletions(-)  create mode 100644
> lib/eal/common/eal_common_interrupts.c
>  create mode 100644 lib/eal/common/eal_interrupts.h  delete mode 100644
> lib/eal/include/rte_eal_interrupts.h
>  create mode 100644 lib/eal/include/rte_epoll.h
> 
> --
> 2.23.0

Tested-by: Raslan Darawsheh <rasland@nvidia.com>

Kindest regards,
Raslan Darawsheh


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal
  2021-10-22 20:49  4% ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
                     ` (2 preceding siblings ...)
  2021-10-25 13:34  4%   ` [dpdk-dev] [PATCH v7 0/9] " David Marchand
@ 2021-10-25 14:27  4%   ` David Marchand
  2021-10-25 14:32  0%     ` Raslan Darawsheh
  2021-10-25 19:24  0%     ` David Marchand
  3 siblings, 2 replies; 200+ results
From: David Marchand @ 2021-10-25 14:27 UTC (permalink / raw)
  To: hkalra, dev; +Cc: dmitry.kozliuk, rasland, thomas

Move struct rte_intr_handle to be an internal structure to
avoid any ABI breakage in the future, since this structure defines
some static arrays and changing the respective macros breaks the ABI.
E.g.: currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of at most 512
MSI-X interrupts that can be defined for a PCI device, while the PCI
specification allows up to 2048 MSI-X interrupts to be used.
If some PCI device requires more than 512 vectors, either change the
RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on the
PCI device MSI-X size at probe time. Either way it's an ABI breakage.

Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0

This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get/set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get/set APIs.
A new eal_common_interrupts.c is introduced where all these APIs are
defined and the struct rte_intr_handle definition is hidden.
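
As a usage sketch, based on the accessors introduced in patch 1 of this
series (the device fd and the VFIO MSI-X handle type below are
placeholders for whatever the caller has):

#include <rte_interrupts.h>

static struct rte_intr_handle *
make_handle(int dev_fd /* placeholder fd */)
{
	struct rte_intr_handle *handle;

	/* allocate an instance instead of embedding the struct */
	handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
	if (handle == NULL)
		return NULL;
	/* fields are only reachable through the get/set accessors */
	if (rte_intr_fd_set(handle, dev_fd) != 0 ||
	    rte_intr_type_set(handle, RTE_INTR_HANDLE_VFIO_MSIX) != 0) {
		rte_intr_instance_free(handle);
		return NULL;
	}
	return handle;
}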

v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif

v2:
* Merged the prototype and implementation patch to 1.
* Restricting allocation of single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library specific APIs as internal.

v3:
* Removed flag from instance alloc API, rather auto detect
if memory should be allocated using glibc malloc APIs or
rte_malloc*
* Added APIs for get/set windows handle.
* Defined macros for repeated checks.

v4:
* Rectified some typo in the APIs documentation.
* Better names for some internal variables.

v5:
* Reverted back to passing flag to instance alloc API, as
with auto detect some multiprocess issues existing in the
library were causing tests failure.
* Rebased to top of tree.

v6:
* renamed RTE_INTR_INSTANCE_F_UNSHARED as RTE_INTR_INSTANCE_F_PRIVATE,
* changed API and removed need for alloc_flag content exposure
  (see rte_intr_instance_dup() in patch 1 and 2),
* exported all symbols for Windows,
* fixed leak in unit tests in case of alloc failure,
* split (previously) patch 4 into three patches
  * (now) patch 4 only concerns alarm and (previously) patch 6 cleanup bits
    are squashed in it,
  * (now) patch 5 concerns other libraries updates,
  * (now) patch 6 concerns drivers updates:
    * instance allocation is moved to probing for auxiliary,
    * there might be a bug for PCI drivers non requesting
      RTE_PCI_DRV_NEED_MAPPING, but code is left as v5,
* split (previously) patch 5 into three patches
  * (now) patch 7 only hides structure, but keep it in a EAL private
    header, this makes it possible to keep info in tracepoints,
  * (now) patch 8 deals with VFIO/UIO internal fds merge,
  * (now) patch 9 extends event list,

v7:
* fixed compilation on FreeBSD,
* removed unused interrupt handle in FreeBSD alarm code,
* fixed interrupt handle allocation for PCI drivers without
  RTE_PCI_DRV_NEED_MAPPING,

v8:
* lowered logs level to DEBUG in sanity checks,
* fixed corner case with vector list access,

-- 
David Marchand

Harman Kalra (9):
  interrupts: add allocator and accessors
  interrupts: remove direct access to interrupt handle
  test/interrupts: remove direct access to interrupt handle
  alarm: remove direct access to interrupt handle
  lib: remove direct access to interrupt handle
  drivers: remove direct access to interrupt handle
  interrupts: make interrupt handle structure opaque
  interrupts: rename device specific file descriptor
  interrupts: extend event list

 MAINTAINERS                                   |   1 +
 app/test/test_interrupts.c                    | 164 +++--
 drivers/baseband/acc100/rte_acc100_pmd.c      |  14 +-
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |  24 +-
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |  24 +-
 drivers/bus/auxiliary/auxiliary_common.c      |  17 +-
 drivers/bus/auxiliary/rte_bus_auxiliary.h     |   2 +-
 drivers/bus/dpaa/dpaa_bus.c                   |  28 +-
 drivers/bus/dpaa/rte_dpaa_bus.h               |   2 +-
 drivers/bus/fslmc/fslmc_bus.c                 |  14 +-
 drivers/bus/fslmc/fslmc_vfio.c                |  30 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |  18 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |   2 +-
 drivers/bus/fslmc/rte_fslmc.h                 |   2 +-
 drivers/bus/ifpga/ifpga_bus.c                 |  13 +-
 drivers/bus/ifpga/rte_bus_ifpga.h             |   2 +-
 drivers/bus/pci/bsd/pci.c                     |  20 +-
 drivers/bus/pci/linux/pci.c                   |   4 +-
 drivers/bus/pci/linux/pci_uio.c               |  69 +-
 drivers/bus/pci/linux/pci_vfio.c              | 108 ++-
 drivers/bus/pci/pci_common.c                  |  47 +-
 drivers/bus/pci/pci_common_uio.c              |  21 +-
 drivers/bus/pci/rte_bus_pci.h                 |   4 +-
 drivers/bus/vmbus/linux/vmbus_bus.c           |   6 +
 drivers/bus/vmbus/linux/vmbus_uio.c           |  35 +-
 drivers/bus/vmbus/rte_bus_vmbus.h             |   2 +-
 drivers/bus/vmbus/vmbus_common_uio.c          |  23 +-
 drivers/common/cnxk/roc_cpt.c                 |   8 +-
 drivers/common/cnxk/roc_dev.c                 |  14 +-
 drivers/common/cnxk/roc_irq.c                 | 107 +--
 drivers/common/cnxk/roc_nix_inl_dev_irq.c     |   8 +-
 drivers/common/cnxk/roc_nix_irq.c             |  36 +-
 drivers/common/cnxk/roc_npa.c                 |   2 +-
 drivers/common/cnxk/roc_platform.h            |  49 +-
 drivers/common/cnxk/roc_sso.c                 |   4 +-
 drivers/common/cnxk/roc_tim.c                 |   4 +-
 drivers/common/octeontx2/otx2_dev.c           |  14 +-
 drivers/common/octeontx2/otx2_irq.c           | 117 ++--
 .../octeontx2/otx2_cryptodev_hw_access.c      |   4 +-
 drivers/event/octeontx2/otx2_evdev_irq.c      |  12 +-
 drivers/mempool/octeontx2/otx2_mempool.c      |   2 +-
 drivers/net/atlantic/atl_ethdev.c             |  20 +-
 drivers/net/avp/avp_ethdev.c                  |   8 +-
 drivers/net/axgbe/axgbe_ethdev.c              |  12 +-
 drivers/net/axgbe/axgbe_mdio.c                |   6 +-
 drivers/net/bnx2x/bnx2x_ethdev.c              |  10 +-
 drivers/net/bnxt/bnxt_ethdev.c                |  33 +-
 drivers/net/bnxt/bnxt_irq.c                   |   4 +-
 drivers/net/dpaa/dpaa_ethdev.c                |  48 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  10 +-
 drivers/net/e1000/em_ethdev.c                 |  23 +-
 drivers/net/e1000/igb_ethdev.c                |  79 +--
 drivers/net/ena/ena_ethdev.c                  |  35 +-
 drivers/net/enic/enic_main.c                  |  26 +-
 drivers/net/failsafe/failsafe.c               |  21 +-
 drivers/net/failsafe/failsafe_intr.c          |  43 +-
 drivers/net/failsafe/failsafe_ops.c           |  19 +-
 drivers/net/failsafe/failsafe_private.h       |   2 +-
 drivers/net/fm10k/fm10k_ethdev.c              |  32 +-
 drivers/net/hinic/hinic_pmd_ethdev.c          |  10 +-
 drivers/net/hns3/hns3_ethdev.c                |  57 +-
 drivers/net/hns3/hns3_ethdev_vf.c             |  64 +-
 drivers/net/hns3/hns3_rxtx.c                  |   2 +-
 drivers/net/i40e/i40e_ethdev.c                |  53 +-
 drivers/net/iavf/iavf_ethdev.c                |  42 +-
 drivers/net/iavf/iavf_vchnl.c                 |   4 +-
 drivers/net/ice/ice_dcf.c                     |  10 +-
 drivers/net/ice/ice_dcf_ethdev.c              |  21 +-
 drivers/net/ice/ice_ethdev.c                  |  49 +-
 drivers/net/igc/igc_ethdev.c                  |  45 +-
 drivers/net/ionic/ionic_ethdev.c              |  17 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |  66 +-
 drivers/net/memif/memif_socket.c              | 108 ++-
 drivers/net/memif/memif_socket.h              |   4 +-
 drivers/net/memif/rte_eth_memif.c             |  56 +-
 drivers/net/memif/rte_eth_memif.h             |   2 +-
 drivers/net/mlx4/mlx4.c                       |  19 +-
 drivers/net/mlx4/mlx4.h                       |   2 +-
 drivers/net/mlx4/mlx4_intr.c                  |  47 +-
 drivers/net/mlx5/linux/mlx5_os.c              |  55 +-
 drivers/net/mlx5/linux/mlx5_socket.c          |  25 +-
 drivers/net/mlx5/mlx5.h                       |   6 +-
 drivers/net/mlx5/mlx5_rxq.c                   |  43 +-
 drivers/net/mlx5/mlx5_trigger.c               |   4 +-
 drivers/net/mlx5/mlx5_txpp.c                  |  25 +-
 drivers/net/netvsc/hn_ethdev.c                |   4 +-
 drivers/net/nfp/nfp_common.c                  |  34 +-
 drivers/net/nfp/nfp_ethdev.c                  |  13 +-
 drivers/net/nfp/nfp_ethdev_vf.c               |  13 +-
 drivers/net/ngbe/ngbe_ethdev.c                |  29 +-
 drivers/net/octeontx2/otx2_ethdev_irq.c       |  35 +-
 drivers/net/qede/qede_ethdev.c                |  16 +-
 drivers/net/sfc/sfc_intr.c                    |  30 +-
 drivers/net/tap/rte_eth_tap.c                 |  33 +-
 drivers/net/tap/rte_eth_tap.h                 |   2 +-
 drivers/net/tap/tap_intr.c                    |  33 +-
 drivers/net/thunderx/nicvf_ethdev.c           |  10 +
 drivers/net/thunderx/nicvf_struct.h           |   2 +-
 drivers/net/txgbe/txgbe_ethdev.c              |  38 +-
 drivers/net/txgbe/txgbe_ethdev_vf.c           |  33 +-
 drivers/net/vhost/rte_eth_vhost.c             |  80 ++-
 drivers/net/virtio/virtio_ethdev.c            |  21 +-
 .../net/virtio/virtio_user/virtio_user_dev.c  |  56 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.c          |  43 +-
 drivers/raw/ifpga/ifpga_rawdev.c              |  62 +-
 drivers/raw/ntb/ntb.c                         |   9 +-
 .../regex/octeontx2/otx2_regexdev_hw_access.c |   4 +-
 drivers/vdpa/ifc/ifcvf_vdpa.c                 |   5 +-
 drivers/vdpa/mlx5/mlx5_vdpa.c                 |   8 +
 drivers/vdpa/mlx5/mlx5_vdpa.h                 |   4 +-
 drivers/vdpa/mlx5/mlx5_vdpa_event.c           |  21 +-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c           |  44 +-
 lib/bbdev/rte_bbdev.c                         |   4 +-
 lib/eal/common/eal_common_interrupts.c        | 500 ++++++++++++++
 lib/eal/common/eal_interrupts.h               |  30 +
 lib/eal/common/eal_private.h                  |  10 +
 lib/eal/common/meson.build                    |   1 +
 lib/eal/freebsd/eal.c                         |   1 +
 lib/eal/freebsd/eal_alarm.c                   |  35 +-
 lib/eal/freebsd/eal_interrupts.c              |  85 ++-
 lib/eal/include/meson.build                   |   2 +-
 lib/eal/include/rte_eal_interrupts.h          | 269 --------
 lib/eal/include/rte_eal_trace.h               |  10 +-
 lib/eal/include/rte_epoll.h                   | 118 ++++
 lib/eal/include/rte_interrupts.h              | 651 +++++++++++++++++-
 lib/eal/linux/eal.c                           |   1 +
 lib/eal/linux/eal_alarm.c                     |  32 +-
 lib/eal/linux/eal_dev.c                       |  57 +-
 lib/eal/linux/eal_interrupts.c                | 304 ++++----
 lib/eal/version.map                           |  45 +-
 lib/ethdev/ethdev_pci.h                       |   2 +-
 lib/ethdev/rte_ethdev.c                       |  14 +-
 132 files changed, 3449 insertions(+), 1748 deletions(-)
 create mode 100644 lib/eal/common/eal_common_interrupts.c
 create mode 100644 lib/eal/common/eal_interrupts.h
 delete mode 100644 lib/eal/include/rte_eal_interrupts.h
 create mode 100644 lib/eal/include/rte_epoll.h

-- 
2.23.0


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v7 0/9] make rte_intr_handle internal
  2021-10-22 20:49  4% ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
  2021-10-24 20:04  4%   ` [dpdk-dev] [PATCH v6 0/9] " David Marchand
  2021-10-25 13:04  0%   ` [dpdk-dev] [PATCH v5 0/6] " Raslan Darawsheh
@ 2021-10-25 13:34  4%   ` David Marchand
  2021-10-25 14:27  4%   ` [dpdk-dev] [PATCH v8 " David Marchand
  3 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-10-25 13:34 UTC (permalink / raw)
  To: hkalra, dev; +Cc: dmitry.kozliuk, rasland, thomas

Move struct rte_intr_handle to be an internal structure to
avoid any ABI breakage in the future, since this structure defines
some static arrays and changing the respective macros breaks the ABI.
E.g.: currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of at most 512
MSI-X interrupts that can be defined for a PCI device, while the PCI
specification allows up to 2048 MSI-X interrupts to be used.
If some PCI device requires more than 512 vectors, either change the
RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on the
PCI device MSI-X size at probe time. Either way it's an ABI breakage.

Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0

This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get/set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get/set APIs.
A new eal_common_interrupts.c is introduced where all these APIs are
defined and the struct rte_intr_handle definition is hidden.

v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif

v2:
* Merged the prototype and implementation patch to 1.
* Restricting allocation of single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library specific APIs as internal.

v3:
* Removed flag from instance alloc API, rather auto detect
if memory should be allocated using glibc malloc APIs or
rte_malloc*
* Added APIs for get/set windows handle.
* Defined macros for repeated checks.

v4:
* Rectified some typo in the APIs documentation.
* Better names for some internal variables.

v5:
* Reverted back to passing flag to instance alloc API, as
with auto detect some multiprocess issues existing in the
library were causing tests failure.
* Rebased to top of tree.

v6:
* renamed RTE_INTR_INSTANCE_F_UNSHARED as RTE_INTR_INSTANCE_F_PRIVATE,
* changed API and removed need for alloc_flag content exposure
  (see rte_intr_instance_dup() in patch 1 and 2),
* exported all symbols for Windows,
* fixed leak in unit tests in case of alloc failure,
* split (previously) patch 4 into three patches
  * (now) patch 4 only concerns alarm and (previously) patch 6 cleanup bits
    are squashed in it,
  * (now) patch 5 concerns other libraries updates,
  * (now) patch 6 concerns drivers updates:
    * instance allocation is moved to probing for auxiliary,
    * there might be a bug for PCI drivers not requesting
      RTE_PCI_DRV_NEED_MAPPING, but the code is left as in v5,
* split (previously) patch 5 into three patches
  * (now) patch 7 only hides structure, but keep it in a EAL private
    header, this makes it possible to keep info in tracepoints,
  * (now) patch 8 deals with VFIO/UIO internal fds merge,
  * (now) patch 9 extends event list,

v7:
* fixed compilation on FreeBSD,
* removed unused interrupt handle in FreeBSD alarm code,
* fixed interrupt handle allocation for PCI drivers without
  RTE_PCI_DRV_NEED_MAPPING,

-- 
David Marchand

Harman Kalra (9):
  interrupts: add allocator and accessors
  interrupts: remove direct access to interrupt handle
  test/interrupts: remove direct access to interrupt handle
  alarm: remove direct access to interrupt handle
  lib: remove direct access to interrupt handle
  drivers: remove direct access to interrupt handle
  interrupts: make interrupt handle structure opaque
  interrupts: rename device specific file descriptor
  interrupts: extend event list

 MAINTAINERS                                   |   1 +
 app/test/test_interrupts.c                    | 164 +++--
 drivers/baseband/acc100/rte_acc100_pmd.c      |  14 +-
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |  24 +-
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |  24 +-
 drivers/bus/auxiliary/auxiliary_common.c      |  17 +-
 drivers/bus/auxiliary/rte_bus_auxiliary.h     |   2 +-
 drivers/bus/dpaa/dpaa_bus.c                   |  28 +-
 drivers/bus/dpaa/rte_dpaa_bus.h               |   2 +-
 drivers/bus/fslmc/fslmc_bus.c                 |  14 +-
 drivers/bus/fslmc/fslmc_vfio.c                |  30 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |  18 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |   2 +-
 drivers/bus/fslmc/rte_fslmc.h                 |   2 +-
 drivers/bus/ifpga/ifpga_bus.c                 |  13 +-
 drivers/bus/ifpga/rte_bus_ifpga.h             |   2 +-
 drivers/bus/pci/bsd/pci.c                     |  20 +-
 drivers/bus/pci/linux/pci.c                   |   4 +-
 drivers/bus/pci/linux/pci_uio.c               |  69 +-
 drivers/bus/pci/linux/pci_vfio.c              | 108 ++-
 drivers/bus/pci/pci_common.c                  |  47 +-
 drivers/bus/pci/pci_common_uio.c              |  21 +-
 drivers/bus/pci/rte_bus_pci.h                 |   4 +-
 drivers/bus/vmbus/linux/vmbus_bus.c           |   6 +
 drivers/bus/vmbus/linux/vmbus_uio.c           |  35 +-
 drivers/bus/vmbus/rte_bus_vmbus.h             |   2 +-
 drivers/bus/vmbus/vmbus_common_uio.c          |  23 +-
 drivers/common/cnxk/roc_cpt.c                 |   8 +-
 drivers/common/cnxk/roc_dev.c                 |  14 +-
 drivers/common/cnxk/roc_irq.c                 | 107 +--
 drivers/common/cnxk/roc_nix_inl_dev_irq.c     |   8 +-
 drivers/common/cnxk/roc_nix_irq.c             |  36 +-
 drivers/common/cnxk/roc_npa.c                 |   2 +-
 drivers/common/cnxk/roc_platform.h            |  49 +-
 drivers/common/cnxk/roc_sso.c                 |   4 +-
 drivers/common/cnxk/roc_tim.c                 |   4 +-
 drivers/common/octeontx2/otx2_dev.c           |  14 +-
 drivers/common/octeontx2/otx2_irq.c           | 117 ++--
 .../octeontx2/otx2_cryptodev_hw_access.c      |   4 +-
 drivers/event/octeontx2/otx2_evdev_irq.c      |  12 +-
 drivers/mempool/octeontx2/otx2_mempool.c      |   2 +-
 drivers/net/atlantic/atl_ethdev.c             |  20 +-
 drivers/net/avp/avp_ethdev.c                  |   8 +-
 drivers/net/axgbe/axgbe_ethdev.c              |  12 +-
 drivers/net/axgbe/axgbe_mdio.c                |   6 +-
 drivers/net/bnx2x/bnx2x_ethdev.c              |  10 +-
 drivers/net/bnxt/bnxt_ethdev.c                |  33 +-
 drivers/net/bnxt/bnxt_irq.c                   |   4 +-
 drivers/net/dpaa/dpaa_ethdev.c                |  48 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  10 +-
 drivers/net/e1000/em_ethdev.c                 |  23 +-
 drivers/net/e1000/igb_ethdev.c                |  79 +--
 drivers/net/ena/ena_ethdev.c                  |  35 +-
 drivers/net/enic/enic_main.c                  |  26 +-
 drivers/net/failsafe/failsafe.c               |  21 +-
 drivers/net/failsafe/failsafe_intr.c          |  43 +-
 drivers/net/failsafe/failsafe_ops.c           |  19 +-
 drivers/net/failsafe/failsafe_private.h       |   2 +-
 drivers/net/fm10k/fm10k_ethdev.c              |  32 +-
 drivers/net/hinic/hinic_pmd_ethdev.c          |  10 +-
 drivers/net/hns3/hns3_ethdev.c                |  57 +-
 drivers/net/hns3/hns3_ethdev_vf.c             |  64 +-
 drivers/net/hns3/hns3_rxtx.c                  |   2 +-
 drivers/net/i40e/i40e_ethdev.c                |  53 +-
 drivers/net/iavf/iavf_ethdev.c                |  42 +-
 drivers/net/iavf/iavf_vchnl.c                 |   4 +-
 drivers/net/ice/ice_dcf.c                     |  10 +-
 drivers/net/ice/ice_dcf_ethdev.c              |  21 +-
 drivers/net/ice/ice_ethdev.c                  |  49 +-
 drivers/net/igc/igc_ethdev.c                  |  45 +-
 drivers/net/ionic/ionic_ethdev.c              |  17 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |  66 +-
 drivers/net/memif/memif_socket.c              | 108 ++-
 drivers/net/memif/memif_socket.h              |   4 +-
 drivers/net/memif/rte_eth_memif.c             |  56 +-
 drivers/net/memif/rte_eth_memif.h             |   2 +-
 drivers/net/mlx4/mlx4.c                       |  19 +-
 drivers/net/mlx4/mlx4.h                       |   2 +-
 drivers/net/mlx4/mlx4_intr.c                  |  47 +-
 drivers/net/mlx5/linux/mlx5_os.c              |  55 +-
 drivers/net/mlx5/linux/mlx5_socket.c          |  25 +-
 drivers/net/mlx5/mlx5.h                       |   6 +-
 drivers/net/mlx5/mlx5_rxq.c                   |  43 +-
 drivers/net/mlx5/mlx5_trigger.c               |   4 +-
 drivers/net/mlx5/mlx5_txpp.c                  |  25 +-
 drivers/net/netvsc/hn_ethdev.c                |   4 +-
 drivers/net/nfp/nfp_common.c                  |  34 +-
 drivers/net/nfp/nfp_ethdev.c                  |  13 +-
 drivers/net/nfp/nfp_ethdev_vf.c               |  13 +-
 drivers/net/ngbe/ngbe_ethdev.c                |  29 +-
 drivers/net/octeontx2/otx2_ethdev_irq.c       |  35 +-
 drivers/net/qede/qede_ethdev.c                |  16 +-
 drivers/net/sfc/sfc_intr.c                    |  30 +-
 drivers/net/tap/rte_eth_tap.c                 |  33 +-
 drivers/net/tap/rte_eth_tap.h                 |   2 +-
 drivers/net/tap/tap_intr.c                    |  33 +-
 drivers/net/thunderx/nicvf_ethdev.c           |  10 +
 drivers/net/thunderx/nicvf_struct.h           |   2 +-
 drivers/net/txgbe/txgbe_ethdev.c              |  38 +-
 drivers/net/txgbe/txgbe_ethdev_vf.c           |  33 +-
 drivers/net/vhost/rte_eth_vhost.c             |  80 ++-
 drivers/net/virtio/virtio_ethdev.c            |  21 +-
 .../net/virtio/virtio_user/virtio_user_dev.c  |  56 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.c          |  43 +-
 drivers/raw/ifpga/ifpga_rawdev.c              |  62 +-
 drivers/raw/ntb/ntb.c                         |   9 +-
 .../regex/octeontx2/otx2_regexdev_hw_access.c |   4 +-
 drivers/vdpa/ifc/ifcvf_vdpa.c                 |   5 +-
 drivers/vdpa/mlx5/mlx5_vdpa.c                 |   8 +
 drivers/vdpa/mlx5/mlx5_vdpa.h                 |   4 +-
 drivers/vdpa/mlx5/mlx5_vdpa_event.c           |  21 +-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c           |  44 +-
 lib/bbdev/rte_bbdev.c                         |   4 +-
 lib/eal/common/eal_common_interrupts.c        | 504 ++++++++++++++
 lib/eal/common/eal_interrupts.h               |  30 +
 lib/eal/common/eal_private.h                  |  10 +
 lib/eal/common/meson.build                    |   1 +
 lib/eal/freebsd/eal.c                         |   1 +
 lib/eal/freebsd/eal_alarm.c                   |  35 +-
 lib/eal/freebsd/eal_interrupts.c              |  85 ++-
 lib/eal/include/meson.build                   |   2 +-
 lib/eal/include/rte_eal_interrupts.h          | 269 --------
 lib/eal/include/rte_eal_trace.h               |  10 +-
 lib/eal/include/rte_epoll.h                   | 118 ++++
 lib/eal/include/rte_interrupts.h              | 651 +++++++++++++++++-
 lib/eal/linux/eal.c                           |   1 +
 lib/eal/linux/eal_alarm.c                     |  32 +-
 lib/eal/linux/eal_dev.c                       |  57 +-
 lib/eal/linux/eal_interrupts.c                | 304 ++++----
 lib/eal/version.map                           |  45 +-
 lib/ethdev/ethdev_pci.h                       |   2 +-
 lib/ethdev/rte_ethdev.c                       |  14 +-
 132 files changed, 3453 insertions(+), 1748 deletions(-)
 create mode 100644 lib/eal/common/eal_common_interrupts.c
 create mode 100644 lib/eal/common/eal_interrupts.h
 delete mode 100644 lib/eal/include/rte_eal_interrupts.h
 create mode 100644 lib/eal/include/rte_epoll.h

-- 
2.23.0


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal
  2021-10-25 13:04  0%   ` [dpdk-dev] [PATCH v5 0/6] " Raslan Darawsheh
@ 2021-10-25 13:09  0%     ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-10-25 13:09 UTC (permalink / raw)
  To: Raslan Darawsheh
  Cc: Harman Kalra, dev, dmitry.kozliuk, mdr, NBU-Contact-Thomas Monjalon

On Mon, Oct 25, 2021 at 3:04 PM Raslan Darawsheh <rasland@nvidia.com> wrote:
>
> Hi,
>
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Harman Kalra
> > Sent: Friday, October 22, 2021 11:49 PM
> > To: dev@dpdk.org
> > Cc: david.marchand@redhat.com; dmitry.kozliuk@gmail.com;
> > mdr@ashroe.eu; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> > Harman Kalra <hkalra@marvell.com>
> > Subject: [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal
> >
> > Moving struct rte_intr_handle as an internal structure to
> > avoid any ABI breakages in future. Since this structure defines
> > some static arrays and changing respective macros breaks the ABI.
> > Eg:
> > Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> > MSI-X interrupts that can be defined for a PCI device, while PCI
> > specification allows maximum 2048 MSI-X interrupts that can be used.
> > If some PCI device requires more than 512 vectors, either change the
> > RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
> > PCI device MSI-X size at probe time. Either way, it's an ABI breakage.
> >
> > Change already included in 21.11 ABI improvement spreadsheet (item 42):
> > https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
> >
> > This series makes struct rte_intr_handle totally opaque to the outside
> > world by wrapping it inside a .c file and providing get/set wrapper APIs
> > to read or manipulate its fields. Any changes to be made to any of the
> > fields should be done via these get/set APIs.
> > A new eal_common_interrupts.c is introduced where all these APIs are
> > defined; it also hides the struct rte_intr_handle definition.
> >
> > Details on each patch of the series:
> > Patch 1: eal/interrupts: implement get set APIs
> > This patch provides prototypes and implementation of all the new
> > get set APIs. Alloc APIs are implemented to allocate memory for
> > interrupt handle instance. Currently most of the drivers define the
> > interrupt handle instance as static, but now it can't be static as
> > the size of rte_intr_handle is unknown to the drivers. Drivers are
> > expected to allocate interrupt instances during initialization
> > and free these instances during the cleanup phase.
> > This patch also rearranges the headers related to the interrupt
> > framework. Epoll-related definitions and prototypes are moved into a
> > new header, rte_epoll.h, and the driver-specific APIs defined in
> > rte_eal_interrupts.h are moved to rte_interrupts.h (as they were
> > accessible and used outside the DPDK library anyway). Later in the
> > series rte_eal_interrupts.h is removed.
> >
> > Patch 2: eal/interrupts: avoid direct access to interrupt handle
> > Modifying the interrupt framework for Linux and FreeBSD to use these
> > get/set/alloc APIs as required and avoid accessing the fields
> > directly.
> >
> > Patch 3: test/interrupt: apply get set interrupt handle APIs
> > Updating interrupt test suite to use interrupt handle APIs.
> >
> > Patch 4: drivers: remove direct access to interrupt handle fields
> > Modifying all the drivers and libraries which are currently directly
> > accessing the interrupt handle fields. Drivers are expected to
> > allocated the interrupt instance, use get set APIs with the allocated
> > interrupt handle and free it on cleanup.
> >
> > Patch 5: eal/interrupts: make interrupt handle structure opaque
> > In this patch rte_eal_interrupts.h is removed, and the struct
> > rte_intr_handle definition is moved to a .c file to make it completely
> > opaque. As part of interrupt handle allocation, arrays like efds and
> > elist (which are currently static) are dynamically allocated with a
> > default size (RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be
> > reallocated as per device requirements using the new API
> > rte_intr_handle_event_list_update(). E.g., on PCI device probing the
> > MSI-X size can be queried and these arrays reallocated accordingly.
> >
> > Patch 6: eal/alarm: introduce alarm fini routine
> > Introducing an alarm fini routine, as the memory allocated for the alarm
> > interrupt instance can be freed in alarm fini.
> >
> > Testing performed:
> > 1. Validated the series by running the interrupts and alarm test suites.
> > 2. Validated l3fwd-power functionality with octeontx2 and Intel i40e cards,
> >    where interrupts are expected on packet arrival.
> >
> > v1:
> > * Fixed freebsd compilation failure
> > * Fixed seg fault in case of memif
> >
> > v2:
> > * Merged the prototype and implementation patch to 1.
> > * Restricting allocation of single interrupt instance.
> > * Removed base APIs, as they were exposing internally
> > allocated memory information.
> > * Fixed some memory leak issues.
> > * Marked some library specific APIs as internal.
> >
> > v3:
> > * Removed flag from instance alloc API; rather, auto-detect
> > whether memory should be allocated using glibc malloc APIs or
> > rte_malloc*
> > * Added APIs for get/set windows handle.
> > * Defined macros for repeated checks.
> >
> > v4:
> > * Rectified some typos in the API documentation.
> > * Better names for some internal variables.
> >
> > v5:
> > * Reverted to passing a flag to the instance alloc API, as
> > with auto-detection some multiprocess issues existing in the
> > library were causing test failures.
> > * Rebased to top of tree.
> >
> > Harman Kalra (6):
> >   eal/interrupts: implement get set APIs
> >   eal/interrupts: avoid direct access to interrupt handle
> >   test/interrupt: apply get set interrupt handle APIs
> >   drivers: remove direct access to interrupt handle
> >   eal/interrupts: make interrupt handle structure opaque
> >   eal/alarm: introduce alarm fini routine
> >
> >  MAINTAINERS                                   |   1 +
> >  app/test/test_interrupts.c                    | 163 +++--
> >  drivers/baseband/acc100/rte_acc100_pmd.c      |  18 +-
> >  .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |  21 +-
> >  drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |  21 +-
> >  drivers/bus/auxiliary/auxiliary_common.c      |   2 +
> >  drivers/bus/auxiliary/linux/auxiliary.c       |  10 +
> >  drivers/bus/auxiliary/rte_bus_auxiliary.h     |   2 +-
> >  drivers/bus/dpaa/dpaa_bus.c                   |  28 +-
> >  drivers/bus/dpaa/rte_dpaa_bus.h               |   2 +-
> >  drivers/bus/fslmc/fslmc_bus.c                 |  16 +-
> >  drivers/bus/fslmc/fslmc_vfio.c                |  32 +-
> >  drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |  20 +-
> >  drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |   2 +-
> >  drivers/bus/fslmc/rte_fslmc.h                 |   2 +-
> >  drivers/bus/ifpga/ifpga_bus.c                 |  15 +-
> >  drivers/bus/ifpga/rte_bus_ifpga.h             |   2 +-
> >  drivers/bus/pci/bsd/pci.c                     |  21 +-
> >  drivers/bus/pci/linux/pci.c                   |   4 +-
> >  drivers/bus/pci/linux/pci_uio.c               |  73 +-
> >  drivers/bus/pci/linux/pci_vfio.c              | 115 ++-
> >  drivers/bus/pci/pci_common.c                  |  29 +-
> >  drivers/bus/pci/pci_common_uio.c              |  21 +-
> >  drivers/bus/pci/rte_bus_pci.h                 |   4 +-
> >  drivers/bus/vmbus/linux/vmbus_bus.c           |   6 +
> >  drivers/bus/vmbus/linux/vmbus_uio.c           |  37 +-
> >  drivers/bus/vmbus/rte_bus_vmbus.h             |   2 +-
> >  drivers/bus/vmbus/vmbus_common_uio.c          |  24 +-
> >  drivers/common/cnxk/roc_cpt.c                 |   8 +-
> >  drivers/common/cnxk/roc_dev.c                 |  14 +-
> >  drivers/common/cnxk/roc_irq.c                 | 108 +--
> >  drivers/common/cnxk/roc_nix_inl_dev_irq.c     |   8 +-
> >  drivers/common/cnxk/roc_nix_irq.c             |  36 +-
> >  drivers/common/cnxk/roc_npa.c                 |   2 +-
> >  drivers/common/cnxk/roc_platform.h            |  49 +-
> >  drivers/common/cnxk/roc_sso.c                 |   4 +-
> >  drivers/common/cnxk/roc_tim.c                 |   4 +-
> >  drivers/common/octeontx2/otx2_dev.c           |  14 +-
> >  drivers/common/octeontx2/otx2_irq.c           | 117 +--
> >  .../octeontx2/otx2_cryptodev_hw_access.c      |   4 +-
> >  drivers/event/octeontx2/otx2_evdev_irq.c      |  12 +-
> >  drivers/mempool/octeontx2/otx2_mempool.c      |   2 +-
> >  drivers/net/atlantic/atl_ethdev.c             |  20 +-
> >  drivers/net/avp/avp_ethdev.c                  |   8 +-
> >  drivers/net/axgbe/axgbe_ethdev.c              |  12 +-
> >  drivers/net/axgbe/axgbe_mdio.c                |   6 +-
> >  drivers/net/bnx2x/bnx2x_ethdev.c              |  10 +-
> >  drivers/net/bnxt/bnxt_ethdev.c                |  33 +-
> >  drivers/net/bnxt/bnxt_irq.c                   |   4 +-
> >  drivers/net/dpaa/dpaa_ethdev.c                |  47 +-
> >  drivers/net/dpaa2/dpaa2_ethdev.c              |  10 +-
> >  drivers/net/e1000/em_ethdev.c                 |  23 +-
> >  drivers/net/e1000/igb_ethdev.c                |  79 +--
> >  drivers/net/ena/ena_ethdev.c                  |  35 +-
> >  drivers/net/enic/enic_main.c                  |  26 +-
> >  drivers/net/failsafe/failsafe.c               |  23 +-
> >  drivers/net/failsafe/failsafe_intr.c          |  43 +-
> >  drivers/net/failsafe/failsafe_ops.c           |  19 +-
> >  drivers/net/failsafe/failsafe_private.h       |   2 +-
> >  drivers/net/fm10k/fm10k_ethdev.c              |  32 +-
> >  drivers/net/hinic/hinic_pmd_ethdev.c          |  10 +-
> >  drivers/net/hns3/hns3_ethdev.c                |  57 +-
> >  drivers/net/hns3/hns3_ethdev_vf.c             |  64 +-
> >  drivers/net/hns3/hns3_rxtx.c                  |   2 +-
> >  drivers/net/i40e/i40e_ethdev.c                |  53 +-
> >  drivers/net/iavf/iavf_ethdev.c                |  42 +-
> >  drivers/net/iavf/iavf_vchnl.c                 |   4 +-
> >  drivers/net/ice/ice_dcf.c                     |  10 +-
> >  drivers/net/ice/ice_dcf_ethdev.c              |  21 +-
> >  drivers/net/ice/ice_ethdev.c                  |  49 +-
> >  drivers/net/igc/igc_ethdev.c                  |  45 +-
> >  drivers/net/ionic/ionic_ethdev.c              |  17 +-
> >  drivers/net/ixgbe/ixgbe_ethdev.c              |  66 +-
> >  drivers/net/memif/memif_socket.c              | 111 ++-
> >  drivers/net/memif/memif_socket.h              |   4 +-
> >  drivers/net/memif/rte_eth_memif.c             |  61 +-
> >  drivers/net/memif/rte_eth_memif.h             |   2 +-
> >  drivers/net/mlx4/mlx4.c                       |  19 +-
> >  drivers/net/mlx4/mlx4.h                       |   2 +-
> >  drivers/net/mlx4/mlx4_intr.c                  |  47 +-
> >  drivers/net/mlx5/linux/mlx5_os.c              |  53 +-
> >  drivers/net/mlx5/linux/mlx5_socket.c          |  25 +-
> >  drivers/net/mlx5/mlx5.h                       |   6 +-
> >  drivers/net/mlx5/mlx5_rxq.c                   |  42 +-
> >  drivers/net/mlx5/mlx5_trigger.c               |   4 +-
> >  drivers/net/mlx5/mlx5_txpp.c                  |  26 +-
> >  drivers/net/netvsc/hn_ethdev.c                |   4 +-
> >  drivers/net/nfp/nfp_common.c                  |  34 +-
> >  drivers/net/nfp/nfp_ethdev.c                  |  13 +-
> >  drivers/net/nfp/nfp_ethdev_vf.c               |  13 +-
> >  drivers/net/ngbe/ngbe_ethdev.c                |  29 +-
> >  drivers/net/octeontx2/otx2_ethdev_irq.c       |  35 +-
> >  drivers/net/qede/qede_ethdev.c                |  16 +-
> >  drivers/net/sfc/sfc_intr.c                    |  30 +-
> >  drivers/net/tap/rte_eth_tap.c                 |  36 +-
> >  drivers/net/tap/rte_eth_tap.h                 |   2 +-
> >  drivers/net/tap/tap_intr.c                    |  32 +-
> >  drivers/net/thunderx/nicvf_ethdev.c           |  12 +
> >  drivers/net/thunderx/nicvf_struct.h           |   2 +-
> >  drivers/net/txgbe/txgbe_ethdev.c              |  38 +-
> >  drivers/net/txgbe/txgbe_ethdev_vf.c           |  33 +-
> >  drivers/net/vhost/rte_eth_vhost.c             |  76 +-
> >  drivers/net/virtio/virtio_ethdev.c            |  21 +-
> >  .../net/virtio/virtio_user/virtio_user_dev.c  |  48 +-
> >  drivers/net/vmxnet3/vmxnet3_ethdev.c          |  43 +-
> >  drivers/raw/ifpga/ifpga_rawdev.c              |  62 +-
> >  drivers/raw/ntb/ntb.c                         |   9 +-
> >  .../regex/octeontx2/otx2_regexdev_hw_access.c |   4 +-
> >  drivers/vdpa/ifc/ifcvf_vdpa.c                 |   5 +-
> >  drivers/vdpa/mlx5/mlx5_vdpa.c                 |  10 +
> >  drivers/vdpa/mlx5/mlx5_vdpa.h                 |   4 +-
> >  drivers/vdpa/mlx5/mlx5_vdpa_event.c           |  22 +-
> >  drivers/vdpa/mlx5/mlx5_vdpa_virtq.c           |  45 +-
> >  lib/bbdev/rte_bbdev.c                         |   4 +-
> >  lib/eal/common/eal_common_interrupts.c        | 588 +++++++++++++++
> >  lib/eal/common/eal_private.h                  |  11 +
> >  lib/eal/common/meson.build                    |   1 +
> >  lib/eal/freebsd/eal.c                         |   1 +
> >  lib/eal/freebsd/eal_alarm.c                   |  53 +-
> >  lib/eal/freebsd/eal_interrupts.c              | 112 ++-
> >  lib/eal/include/meson.build                   |   2 +-
> >  lib/eal/include/rte_eal_interrupts.h          | 269 -------
> >  lib/eal/include/rte_eal_trace.h               |  24 +-
> >  lib/eal/include/rte_epoll.h                   | 118 ++++
> >  lib/eal/include/rte_interrupts.h              | 668 +++++++++++++++++-
> >  lib/eal/linux/eal.c                           |   1 +
> >  lib/eal/linux/eal_alarm.c                     |  37 +-
> >  lib/eal/linux/eal_dev.c                       |  63 +-
> >  lib/eal/linux/eal_interrupts.c                | 303 +++++---
> >  lib/eal/version.map                           |  46 +-
> >  lib/ethdev/ethdev_pci.h                       |   2 +-
> >  lib/ethdev/rte_ethdev.c                       |  14 +-
> >  132 files changed, 3631 insertions(+), 1713 deletions(-)
> >  create mode 100644 lib/eal/common/eal_common_interrupts.c
> >  delete mode 100644 lib/eal/include/rte_eal_interrupts.h
> >  create mode 100644 lib/eal/include/rte_epoll.h
> >
> > --
> > 2.18.0
>
> This series is causing this seg fault with MLX5 pmd:
> Thread 1 "dpdk-l3fwd-powe" received signal SIGSEGV, Segmentation fault.
> rte_intr_free_epoll_fd (intr_handle=0x0) at ../lib/eal/linux/eal_interrupts.c:1512
> 1512                    if (__atomic_load_n(&rev->status,
> (gdb) bt
> #0  rte_intr_free_epoll_fd (intr_handle=0x0) at ../lib/eal/linux/eal_interrupts.c:1512
> #1  0x0000555556de7814 in mlx5_rx_intr_vec_disable (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_rxq.c:934
> #2  0x0000555556de73da in mlx5_rx_intr_vec_enable (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_rxq.c:836
> #3  0x0000555556e04012 in mlx5_dev_start (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_trigger.c:1146
> #4  0x0000555555b82da7 in rte_eth_dev_start (port_id=0) at ../lib/ethdev/rte_ethdev.c:1823
> #5  0x000055555575e66d in main (argc=7, argv=0x7fffffffe3f0) at ../examples/l3fwd-power/main.c:2811
> (gdb) f 1
> #1  0x0000555556de7814 in mlx5_rx_intr_vec_disable (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_rxq.c:934
> 934             rte_intr_free_epoll_fd(intr_handle);
>
>
> It can be easily reproduced as following:
> dpdk-l3fwd-power -n 4 -a 0000:08:00.0,txq_inline_mpw=439,rx_vec_en=1 -a 0000:08:00.,txq_inline_mpw=439,rx_vec_en=1 -c 0xfffffff -- -p 0x3 -P --interrupt-only --parse-ptype --config='(0, 0, 0)(1, 0, 1)(0, 1, 2)(1, 1, 3)(0, 2, 4)(1, 2, 5)(0, 3, 6)(1, 3, 7)'
>

That confirms my suspicion about the pci bus update that looks at
RTE_PCI_DRV_NEED_MAPPING.
v7 incoming.
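
For context, the crash above comes from a NULL dev->intr_handle; a
minimal sketch of the kind of fix involved (illustrative only, not the
actual v7 diff): allocate the instance at probe time for every PCI
device, not only those requesting RTE_PCI_DRV_NEED_MAPPING.

#include <rte_bus_pci.h>
#include <rte_interrupts.h>

static int
pci_probe_intr_alloc(struct rte_pci_device *dev)
{
        /* Unconditional allocation keeps dev->intr_handle non-NULL for
         * later calls such as rte_intr_free_epoll_fd().
         */
        dev->intr_handle =
                rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
        if (dev->intr_handle == NULL)
                return -1;
        return 0;
}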


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal
  2021-10-22 20:49  4% ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
  2021-10-24 20:04  4%   ` [dpdk-dev] [PATCH v6 0/9] " David Marchand
@ 2021-10-25 13:04  0%   ` Raslan Darawsheh
  2021-10-25 13:09  0%     ` David Marchand
  2021-10-25 13:34  4%   ` [dpdk-dev] [PATCH v7 0/9] " David Marchand
  2021-10-25 14:27  4%   ` [dpdk-dev] [PATCH v8 " David Marchand
  3 siblings, 1 reply; 200+ results
From: Raslan Darawsheh @ 2021-10-25 13:04 UTC (permalink / raw)
  To: Harman Kalra, dev
  Cc: david.marchand, dmitry.kozliuk, mdr, NBU-Contact-Thomas Monjalon

Hi,

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Harman Kalra
> Sent: Friday, October 22, 2021 11:49 PM
> To: dev@dpdk.org
> Cc: david.marchand@redhat.com; dmitry.kozliuk@gmail.com;
> mdr@ashroe.eu; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> Harman Kalra <hkalra@marvell.com>
> Subject: [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal
> 
> Moving struct rte_intr_handle as an internal structure to
> avoid any ABI breakages in future. Since this structure defines
> some static arrays and changing respective macros breaks the ABI.
> Eg:
> Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> MSI-X interrupts that can be defined for a PCI device, while PCI
> specification allows maximum 2048 MSI-X interrupts that can be used.
> If some PCI device requires more than 512 vectors, either change the
> RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
> PCI device MSI-X size at probe time. Either way, it's an ABI breakage.
> 
> Change already included in 21.11 ABI improvement spreadsheet (item 42):
> https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
> 
> This series makes struct rte_intr_handle totally opaque to the outside
> world by wrapping it inside a .c file and providing get/set wrapper APIs
> to read or manipulate its fields. Any changes to be made to any of the
> fields should be done via these get/set APIs.
> A new eal_common_interrupts.c is introduced where all these APIs are
> defined; it also hides the struct rte_intr_handle definition.
> 
> Details on each patch of the series:
> Patch 1: eal/interrupts: implement get set APIs
> This patch provides prototypes and implementation of all the new
> get set APIs. Alloc APIs are implemented to allocate memory for
> interrupt handle instance. Currently most of the drivers define the
> interrupt handle instance as static, but now it can't be static as
> the size of rte_intr_handle is unknown to the drivers. Drivers are
> expected to allocate interrupt instances during initialization
> and free these instances during the cleanup phase.
> This patch also rearranges the headers related to the interrupt
> framework. Epoll-related definitions and prototypes are moved into a
> new header, rte_epoll.h, and the driver-specific APIs defined in
> rte_eal_interrupts.h are moved to rte_interrupts.h (as they were
> accessible and used outside the DPDK library anyway). Later in the
> series rte_eal_interrupts.h is removed.
> 
> Patch 2: eal/interrupts: avoid direct access to interrupt handle
> Modifying the interrupt framework for Linux and FreeBSD to use these
> get/set/alloc APIs as required and avoid accessing the fields
> directly.
> 
> Patch 3: test/interrupt: apply get set interrupt handle APIs
> Updating interrupt test suite to use interrupt handle APIs.
> 
> Patch 4: drivers: remove direct access to interrupt handle fields
> Modifying all the drivers and libraries which are currently directly
> accessing the interrupt handle fields. Drivers are expected to
> allocated the interrupt instance, use get set APIs with the allocated
> interrupt handle and free it on cleanup.
> 
> Patch 5: eal/interrupts: make interrupt handle structure opaque
> In this patch rte_eal_interrupts.h is removed, and the struct
> rte_intr_handle definition is moved to a .c file to make it completely
> opaque. As part of interrupt handle allocation, arrays like efds and
> elist (which are currently static) are dynamically allocated with a
> default size (RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be
> reallocated as per device requirements using the new API
> rte_intr_handle_event_list_update(). E.g., on PCI device probing the
> MSI-X size can be queried and these arrays reallocated accordingly.
> 
> Patch 6: eal/alarm: introduce alarm fini routine
> Introducing an alarm fini routine, as the memory allocated for the alarm
> interrupt instance can be freed in alarm fini.
> 
> Testing performed:
> 1. Validated the series by running the interrupts and alarm test suites.
> 2. Validated l3fwd-power functionality with octeontx2 and Intel i40e cards,
>    where interrupts are expected on packet arrival.
> 
> v1:
> * Fixed freebsd compilation failure
> * Fixed seg fault in case of memif
> 
> v2:
> * Merged the prototype and implementation patch to 1.
> * Restricting allocation of single interrupt instance.
> * Removed base APIs, as they were exposing internally
> allocated memory information.
> * Fixed some memory leak issues.
> * Marked some library specific APIs as internal.
> 
> v3:
> * Removed flag from instance alloc API; rather, auto-detect
> whether memory should be allocated using glibc malloc APIs or
> rte_malloc*
> * Added APIs for get/set windows handle.
> * Defined macros for repeated checks.
> 
> v4:
> * Rectified some typos in the API documentation.
> * Better names for some internal variables.
> 
> v5:
> * Reverted to passing a flag to the instance alloc API, as
> with auto-detection some multiprocess issues existing in the
> library were causing test failures.
> * Rebased to top of tree.
> 
> Harman Kalra (6):
>   eal/interrupts: implement get set APIs
>   eal/interrupts: avoid direct access to interrupt handle
>   test/interrupt: apply get set interrupt handle APIs
>   drivers: remove direct access to interrupt handle
>   eal/interrupts: make interrupt handle structure opaque
>   eal/alarm: introduce alarm fini routine
> 
>  MAINTAINERS                                   |   1 +
>  app/test/test_interrupts.c                    | 163 +++--
>  drivers/baseband/acc100/rte_acc100_pmd.c      |  18 +-
>  .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |  21 +-
>  drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |  21 +-
>  drivers/bus/auxiliary/auxiliary_common.c      |   2 +
>  drivers/bus/auxiliary/linux/auxiliary.c       |  10 +
>  drivers/bus/auxiliary/rte_bus_auxiliary.h     |   2 +-
>  drivers/bus/dpaa/dpaa_bus.c                   |  28 +-
>  drivers/bus/dpaa/rte_dpaa_bus.h               |   2 +-
>  drivers/bus/fslmc/fslmc_bus.c                 |  16 +-
>  drivers/bus/fslmc/fslmc_vfio.c                |  32 +-
>  drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |  20 +-
>  drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |   2 +-
>  drivers/bus/fslmc/rte_fslmc.h                 |   2 +-
>  drivers/bus/ifpga/ifpga_bus.c                 |  15 +-
>  drivers/bus/ifpga/rte_bus_ifpga.h             |   2 +-
>  drivers/bus/pci/bsd/pci.c                     |  21 +-
>  drivers/bus/pci/linux/pci.c                   |   4 +-
>  drivers/bus/pci/linux/pci_uio.c               |  73 +-
>  drivers/bus/pci/linux/pci_vfio.c              | 115 ++-
>  drivers/bus/pci/pci_common.c                  |  29 +-
>  drivers/bus/pci/pci_common_uio.c              |  21 +-
>  drivers/bus/pci/rte_bus_pci.h                 |   4 +-
>  drivers/bus/vmbus/linux/vmbus_bus.c           |   6 +
>  drivers/bus/vmbus/linux/vmbus_uio.c           |  37 +-
>  drivers/bus/vmbus/rte_bus_vmbus.h             |   2 +-
>  drivers/bus/vmbus/vmbus_common_uio.c          |  24 +-
>  drivers/common/cnxk/roc_cpt.c                 |   8 +-
>  drivers/common/cnxk/roc_dev.c                 |  14 +-
>  drivers/common/cnxk/roc_irq.c                 | 108 +--
>  drivers/common/cnxk/roc_nix_inl_dev_irq.c     |   8 +-
>  drivers/common/cnxk/roc_nix_irq.c             |  36 +-
>  drivers/common/cnxk/roc_npa.c                 |   2 +-
>  drivers/common/cnxk/roc_platform.h            |  49 +-
>  drivers/common/cnxk/roc_sso.c                 |   4 +-
>  drivers/common/cnxk/roc_tim.c                 |   4 +-
>  drivers/common/octeontx2/otx2_dev.c           |  14 +-
>  drivers/common/octeontx2/otx2_irq.c           | 117 +--
>  .../octeontx2/otx2_cryptodev_hw_access.c      |   4 +-
>  drivers/event/octeontx2/otx2_evdev_irq.c      |  12 +-
>  drivers/mempool/octeontx2/otx2_mempool.c      |   2 +-
>  drivers/net/atlantic/atl_ethdev.c             |  20 +-
>  drivers/net/avp/avp_ethdev.c                  |   8 +-
>  drivers/net/axgbe/axgbe_ethdev.c              |  12 +-
>  drivers/net/axgbe/axgbe_mdio.c                |   6 +-
>  drivers/net/bnx2x/bnx2x_ethdev.c              |  10 +-
>  drivers/net/bnxt/bnxt_ethdev.c                |  33 +-
>  drivers/net/bnxt/bnxt_irq.c                   |   4 +-
>  drivers/net/dpaa/dpaa_ethdev.c                |  47 +-
>  drivers/net/dpaa2/dpaa2_ethdev.c              |  10 +-
>  drivers/net/e1000/em_ethdev.c                 |  23 +-
>  drivers/net/e1000/igb_ethdev.c                |  79 +--
>  drivers/net/ena/ena_ethdev.c                  |  35 +-
>  drivers/net/enic/enic_main.c                  |  26 +-
>  drivers/net/failsafe/failsafe.c               |  23 +-
>  drivers/net/failsafe/failsafe_intr.c          |  43 +-
>  drivers/net/failsafe/failsafe_ops.c           |  19 +-
>  drivers/net/failsafe/failsafe_private.h       |   2 +-
>  drivers/net/fm10k/fm10k_ethdev.c              |  32 +-
>  drivers/net/hinic/hinic_pmd_ethdev.c          |  10 +-
>  drivers/net/hns3/hns3_ethdev.c                |  57 +-
>  drivers/net/hns3/hns3_ethdev_vf.c             |  64 +-
>  drivers/net/hns3/hns3_rxtx.c                  |   2 +-
>  drivers/net/i40e/i40e_ethdev.c                |  53 +-
>  drivers/net/iavf/iavf_ethdev.c                |  42 +-
>  drivers/net/iavf/iavf_vchnl.c                 |   4 +-
>  drivers/net/ice/ice_dcf.c                     |  10 +-
>  drivers/net/ice/ice_dcf_ethdev.c              |  21 +-
>  drivers/net/ice/ice_ethdev.c                  |  49 +-
>  drivers/net/igc/igc_ethdev.c                  |  45 +-
>  drivers/net/ionic/ionic_ethdev.c              |  17 +-
>  drivers/net/ixgbe/ixgbe_ethdev.c              |  66 +-
>  drivers/net/memif/memif_socket.c              | 111 ++-
>  drivers/net/memif/memif_socket.h              |   4 +-
>  drivers/net/memif/rte_eth_memif.c             |  61 +-
>  drivers/net/memif/rte_eth_memif.h             |   2 +-
>  drivers/net/mlx4/mlx4.c                       |  19 +-
>  drivers/net/mlx4/mlx4.h                       |   2 +-
>  drivers/net/mlx4/mlx4_intr.c                  |  47 +-
>  drivers/net/mlx5/linux/mlx5_os.c              |  53 +-
>  drivers/net/mlx5/linux/mlx5_socket.c          |  25 +-
>  drivers/net/mlx5/mlx5.h                       |   6 +-
>  drivers/net/mlx5/mlx5_rxq.c                   |  42 +-
>  drivers/net/mlx5/mlx5_trigger.c               |   4 +-
>  drivers/net/mlx5/mlx5_txpp.c                  |  26 +-
>  drivers/net/netvsc/hn_ethdev.c                |   4 +-
>  drivers/net/nfp/nfp_common.c                  |  34 +-
>  drivers/net/nfp/nfp_ethdev.c                  |  13 +-
>  drivers/net/nfp/nfp_ethdev_vf.c               |  13 +-
>  drivers/net/ngbe/ngbe_ethdev.c                |  29 +-
>  drivers/net/octeontx2/otx2_ethdev_irq.c       |  35 +-
>  drivers/net/qede/qede_ethdev.c                |  16 +-
>  drivers/net/sfc/sfc_intr.c                    |  30 +-
>  drivers/net/tap/rte_eth_tap.c                 |  36 +-
>  drivers/net/tap/rte_eth_tap.h                 |   2 +-
>  drivers/net/tap/tap_intr.c                    |  32 +-
>  drivers/net/thunderx/nicvf_ethdev.c           |  12 +
>  drivers/net/thunderx/nicvf_struct.h           |   2 +-
>  drivers/net/txgbe/txgbe_ethdev.c              |  38 +-
>  drivers/net/txgbe/txgbe_ethdev_vf.c           |  33 +-
>  drivers/net/vhost/rte_eth_vhost.c             |  76 +-
>  drivers/net/virtio/virtio_ethdev.c            |  21 +-
>  .../net/virtio/virtio_user/virtio_user_dev.c  |  48 +-
>  drivers/net/vmxnet3/vmxnet3_ethdev.c          |  43 +-
>  drivers/raw/ifpga/ifpga_rawdev.c              |  62 +-
>  drivers/raw/ntb/ntb.c                         |   9 +-
>  .../regex/octeontx2/otx2_regexdev_hw_access.c |   4 +-
>  drivers/vdpa/ifc/ifcvf_vdpa.c                 |   5 +-
>  drivers/vdpa/mlx5/mlx5_vdpa.c                 |  10 +
>  drivers/vdpa/mlx5/mlx5_vdpa.h                 |   4 +-
>  drivers/vdpa/mlx5/mlx5_vdpa_event.c           |  22 +-
>  drivers/vdpa/mlx5/mlx5_vdpa_virtq.c           |  45 +-
>  lib/bbdev/rte_bbdev.c                         |   4 +-
>  lib/eal/common/eal_common_interrupts.c        | 588 +++++++++++++++
>  lib/eal/common/eal_private.h                  |  11 +
>  lib/eal/common/meson.build                    |   1 +
>  lib/eal/freebsd/eal.c                         |   1 +
>  lib/eal/freebsd/eal_alarm.c                   |  53 +-
>  lib/eal/freebsd/eal_interrupts.c              | 112 ++-
>  lib/eal/include/meson.build                   |   2 +-
>  lib/eal/include/rte_eal_interrupts.h          | 269 -------
>  lib/eal/include/rte_eal_trace.h               |  24 +-
>  lib/eal/include/rte_epoll.h                   | 118 ++++
>  lib/eal/include/rte_interrupts.h              | 668 +++++++++++++++++-
>  lib/eal/linux/eal.c                           |   1 +
>  lib/eal/linux/eal_alarm.c                     |  37 +-
>  lib/eal/linux/eal_dev.c                       |  63 +-
>  lib/eal/linux/eal_interrupts.c                | 303 +++++---
>  lib/eal/version.map                           |  46 +-
>  lib/ethdev/ethdev_pci.h                       |   2 +-
>  lib/ethdev/rte_ethdev.c                       |  14 +-
>  132 files changed, 3631 insertions(+), 1713 deletions(-)
>  create mode 100644 lib/eal/common/eal_common_interrupts.c
>  delete mode 100644 lib/eal/include/rte_eal_interrupts.h
>  create mode 100644 lib/eal/include/rte_epoll.h
> 
> --
> 2.18.0

This series is causing this seg fault with MLX5 pmd:
Thread 1 "dpdk-l3fwd-powe" received signal SIGSEGV, Segmentation fault.
rte_intr_free_epoll_fd (intr_handle=0x0) at ../lib/eal/linux/eal_interrupts.c:1512
1512                    if (__atomic_load_n(&rev->status,
(gdb) bt
#0  rte_intr_free_epoll_fd (intr_handle=0x0) at ../lib/eal/linux/eal_interrupts.c:1512
#1  0x0000555556de7814 in mlx5_rx_intr_vec_disable (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_rxq.c:934
#2  0x0000555556de73da in mlx5_rx_intr_vec_enable (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_rxq.c:836
#3  0x0000555556e04012 in mlx5_dev_start (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_trigger.c:1146
#4  0x0000555555b82da7 in rte_eth_dev_start (port_id=0) at ../lib/ethdev/rte_ethdev.c:1823
#5  0x000055555575e66d in main (argc=7, argv=0x7fffffffe3f0) at ../examples/l3fwd-power/main.c:2811
(gdb) f 1
#1  0x0000555556de7814 in mlx5_rx_intr_vec_disable (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_rxq.c:934
934             rte_intr_free_epoll_fd(intr_handle);


It can be easily reproduced as following:
dpdk-l3fwd-power -n 4 -a 0000:08:00.0,txq_inline_mpw=439,rx_vec_en=1 -a 0000:08:00.,txq_inline_mpw=439,rx_vec_en=1 -c 0xfffffff -- -p 0x3 -P --interrupt-only --parse-ptype --config='(0, 0, 0)(1, 0, 1)(0, 1, 2)(1, 1, 3)(0, 2, 4)(1, 2, 5)(0, 3, 6)(1, 3, 7)'


Kindest regards,
Raslan Darawsheh

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v18 0/5] Add PIE support for HQoS library
  2021-10-20  7:49  3%       ` [dpdk-dev] [PATCH v17 " Liguzinski, WojciechX
@ 2021-10-25 11:32  3%         ` Liguzinski, WojciechX
  2021-10-26  8:24  3%           ` Liu, Yu Y
  2021-10-28 10:17  3%           ` [dpdk-dev] [PATCH v19 " Liguzinski, WojciechX
  0 siblings, 2 replies; 200+ results
From: Liguzinski, WojciechX @ 2021-10-25 11:32 UTC (permalink / raw)
  To: dev, jasvinder.singh, cristian.dumitrescu; +Cc: megha.ajmera

The DPDK sched library is equipped with a mechanism that protects it from the
bufferbloat problem, a situation in which excess buffers in the network cause
high latency and latency variation. Currently, it supports RED for active
queue management. However, more advanced queue management is required to
address this problem and provide a desirable quality of service to users.

This solution (RFC) proposes the use of a new algorithm called "PIE"
(Proportional Integral controller Enhanced) that can effectively and directly
control queuing latency to address the bufferbloat problem.

The implementation of the mentioned functionality includes modification of
existing data structures and the addition of a new set of data structures to
the library, along with new PIE-related APIs. This affects structures in the
public API/ABI. That is why a deprecation notice is going to be prepared and
sent.
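
For readers unfamiliar with the algorithm, the core of PIE (RFC 8033) is a
periodic drop-probability update driven by the deviation of measured queuing
delay from a target; a minimal sketch of that control law (illustrative only,
not the rte_pie implementation, and without RFC 8033's auto-tuning of
alpha/beta):

/* Update the drop probability p once per update interval
 * (T_UPDATE in RFC 8033 terms).
 */
static double
pie_drop_prob_update(double p, double qdelay, double qdelay_old,
                     double qdelay_ref, double alpha, double beta)
{
        p += alpha * (qdelay - qdelay_ref) + beta * (qdelay - qdelay_old);
        if (p < 0.0)
                p = 0.0;
        else if (p > 1.0)
                p = 1.0;
        return p;
}

Each arriving packet is then dropped with probability p before enqueue when
the queue is congested.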

Liguzinski, WojciechX (5):
  sched: add PIE based congestion management
  example/qos_sched: add PIE support
  example/ip_pipeline: add PIE support
  doc/guides/prog_guide: added PIE
  app/test: add tests for PIE

 app/test/meson.build                         |    4 +
 app/test/test_pie.c                          | 1065 ++++++++++++++++++
 config/rte_config.h                          |    1 -
 doc/guides/prog_guide/glossary.rst           |    3 +
 doc/guides/prog_guide/qos_framework.rst      |   64 +-
 doc/guides/prog_guide/traffic_management.rst |   13 +-
 drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
 examples/ip_pipeline/tmgr.c                  |  142 +--
 examples/qos_sched/cfg_file.c                |  127 ++-
 examples/qos_sched/cfg_file.h                |    5 +
 examples/qos_sched/init.c                    |   27 +-
 examples/qos_sched/main.h                    |    3 +
 examples/qos_sched/profile.cfg               |  196 ++--
 lib/sched/meson.build                        |    3 +-
 lib/sched/rte_pie.c                          |   86 ++
 lib/sched/rte_pie.h                          |  398 +++++++
 lib/sched/rte_sched.c                        |  241 ++--
 lib/sched/rte_sched.h                        |   63 +-
 lib/sched/version.map                        |    4 +
 19 files changed, 2172 insertions(+), 279 deletions(-)
 create mode 100644 app/test/test_pie.c
 create mode 100644 lib/sched/rte_pie.c
 create mode 100644 lib/sched/rte_pie.h

-- 
2.25.1

Series-acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v6 0/9] make rte_intr_handle internal
  2021-10-22 20:49  4% ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
@ 2021-10-24 20:04  4%   ` David Marchand
  2021-10-25 13:04  0%   ` [dpdk-dev] [PATCH v5 0/6] " Raslan Darawsheh
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-10-24 20:04 UTC (permalink / raw)
  To: hkalra, dev; +Cc: dmitry.kozliuk

Moving struct rte_intr_handle to an internal structure to
avoid any ABI breakage in the future. This structure defines
some static arrays, and changing the respective macros breaks the ABI.
E.g.:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of 512
MSI-X interrupts that can be defined for a PCI device, while the PCI
specification allows a maximum of 2048 MSI-X interrupts.
If some PCI device requires more than 512 vectors, either change the
RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on the
PCI device MSI-X size at probe time. Either way, it's an ABI breakage.

Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0

This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get/set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get/set APIs.
A new eal_common_interrupts.c is introduced where all these APIs are
defined; it also hides the struct rte_intr_handle definition.
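
As a small illustration of the v6 API change noted below, copying an
instance now goes through a dedicated helper instead of exposing the
allocation flags (a sketch assuming the rte_intr_instance_dup() prototype
from this series):

#include <rte_interrupts.h>

static struct rte_intr_handle *
intr_copy_example(const struct rte_intr_handle *src, int new_fd)
{
        struct rte_intr_handle *copy;

        /* Duplicate src (allocation flags included) without the caller
         * ever seeing those flags, then point the copy at another fd.
         */
        copy = rte_intr_instance_dup(src);
        if (copy != NULL && rte_intr_fd_set(copy, new_fd) != 0) {
                rte_intr_instance_free(copy);
                copy = NULL;
        }
        return copy;
}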

v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif

v2:
* Merged the prototype and implementation patch to 1.
* Restricting allocation of single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library specific APIs as internal.

v3:
* Removed flag from instance alloc API; rather, auto-detect
whether memory should be allocated using glibc malloc APIs or
rte_malloc*
* Added APIs for get/set windows handle.
* Defined macros for repeated checks.

v4:
* Rectified some typos in the API documentation.
* Better names for some internal variables.

v5:
* Reverted to passing a flag to the instance alloc API, as
with auto-detection some multiprocess issues existing in the
library were causing test failures.
* Rebased to top of tree.

v6:
* renamed RTE_INTR_INSTANCE_F_UNSHARED to RTE_INTR_INSTANCE_F_PRIVATE,
* changed API and removed need for alloc_flag content exposure
  (see rte_intr_instance_dup() in patch 1 and 2),
* exported all symbols for Windows,
* fixed leak in unit tests in case of alloc failure,
* split (previously) patch 4 into three patches
  * (now) patch 4 only concerns alarm and (previously) patch 6 cleanup bits
    are squashed in it,
  * (now) patch 5 concerns other libraries updates,
  * (now) patch 6 concerns drivers updates:
    * instance allocation is moved to probing for auxiliary,
    * there might be a bug for PCI drivers not requesting
      RTE_PCI_DRV_NEED_MAPPING, but the code is left as in v5,
* split (previously) patch 5 into three patches
  * (now) patch 7 only hides structure, but keep it in a EAL private
    header, this makes it possible to keep info in tracepoints,
  * (now) patch 8 deals with VFIO/UIO internal fds merge,
  * (now) patch 9 extends event list,


-- 
David Marchand

Harman Kalra (9):
  interrupts: add allocator and accessors
  interrupts: remove direct access to interrupt handle
  test/interrupts: remove direct access to interrupt handle
  alarm: remove direct access to interrupt handle
  lib: remove direct access to interrupt handle
  drivers: remove direct access to interrupt handle
  interrupts: make interrupt handle structure opaque
  interrupts: rename device specific file descriptor
  interrupts: extend event list

 MAINTAINERS                                   |   1 +
 app/test/test_interrupts.c                    | 164 +++--
 drivers/baseband/acc100/rte_acc100_pmd.c      |  14 +-
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |  24 +-
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |  24 +-
 drivers/bus/auxiliary/auxiliary_common.c      |  17 +-
 drivers/bus/auxiliary/rte_bus_auxiliary.h     |   2 +-
 drivers/bus/dpaa/dpaa_bus.c                   |  28 +-
 drivers/bus/dpaa/rte_dpaa_bus.h               |   2 +-
 drivers/bus/fslmc/fslmc_bus.c                 |  14 +-
 drivers/bus/fslmc/fslmc_vfio.c                |  30 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |  18 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |   2 +-
 drivers/bus/fslmc/rte_fslmc.h                 |   2 +-
 drivers/bus/ifpga/ifpga_bus.c                 |  13 +-
 drivers/bus/ifpga/rte_bus_ifpga.h             |   2 +-
 drivers/bus/pci/bsd/pci.c                     |  20 +-
 drivers/bus/pci/linux/pci.c                   |   4 +-
 drivers/bus/pci/linux/pci_uio.c               |  69 +-
 drivers/bus/pci/linux/pci_vfio.c              | 108 ++-
 drivers/bus/pci/pci_common.c                  |  28 +-
 drivers/bus/pci/pci_common_uio.c              |  21 +-
 drivers/bus/pci/rte_bus_pci.h                 |   4 +-
 drivers/bus/vmbus/linux/vmbus_bus.c           |   6 +
 drivers/bus/vmbus/linux/vmbus_uio.c           |  35 +-
 drivers/bus/vmbus/rte_bus_vmbus.h             |   2 +-
 drivers/bus/vmbus/vmbus_common_uio.c          |  23 +-
 drivers/common/cnxk/roc_cpt.c                 |   8 +-
 drivers/common/cnxk/roc_dev.c                 |  14 +-
 drivers/common/cnxk/roc_irq.c                 | 107 +--
 drivers/common/cnxk/roc_nix_inl_dev_irq.c     |   8 +-
 drivers/common/cnxk/roc_nix_irq.c             |  36 +-
 drivers/common/cnxk/roc_npa.c                 |   2 +-
 drivers/common/cnxk/roc_platform.h            |  49 +-
 drivers/common/cnxk/roc_sso.c                 |   4 +-
 drivers/common/cnxk/roc_tim.c                 |   4 +-
 drivers/common/octeontx2/otx2_dev.c           |  14 +-
 drivers/common/octeontx2/otx2_irq.c           | 117 ++--
 .../octeontx2/otx2_cryptodev_hw_access.c      |   4 +-
 drivers/event/octeontx2/otx2_evdev_irq.c      |  12 +-
 drivers/mempool/octeontx2/otx2_mempool.c      |   2 +-
 drivers/net/atlantic/atl_ethdev.c             |  20 +-
 drivers/net/avp/avp_ethdev.c                  |   8 +-
 drivers/net/axgbe/axgbe_ethdev.c              |  12 +-
 drivers/net/axgbe/axgbe_mdio.c                |   6 +-
 drivers/net/bnx2x/bnx2x_ethdev.c              |  10 +-
 drivers/net/bnxt/bnxt_ethdev.c                |  33 +-
 drivers/net/bnxt/bnxt_irq.c                   |   4 +-
 drivers/net/dpaa/dpaa_ethdev.c                |  48 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  10 +-
 drivers/net/e1000/em_ethdev.c                 |  23 +-
 drivers/net/e1000/igb_ethdev.c                |  79 +--
 drivers/net/ena/ena_ethdev.c                  |  35 +-
 drivers/net/enic/enic_main.c                  |  26 +-
 drivers/net/failsafe/failsafe.c               |  21 +-
 drivers/net/failsafe/failsafe_intr.c          |  43 +-
 drivers/net/failsafe/failsafe_ops.c           |  19 +-
 drivers/net/failsafe/failsafe_private.h       |   2 +-
 drivers/net/fm10k/fm10k_ethdev.c              |  32 +-
 drivers/net/hinic/hinic_pmd_ethdev.c          |  10 +-
 drivers/net/hns3/hns3_ethdev.c                |  57 +-
 drivers/net/hns3/hns3_ethdev_vf.c             |  64 +-
 drivers/net/hns3/hns3_rxtx.c                  |   2 +-
 drivers/net/i40e/i40e_ethdev.c                |  53 +-
 drivers/net/iavf/iavf_ethdev.c                |  42 +-
 drivers/net/iavf/iavf_vchnl.c                 |   4 +-
 drivers/net/ice/ice_dcf.c                     |  10 +-
 drivers/net/ice/ice_dcf_ethdev.c              |  21 +-
 drivers/net/ice/ice_ethdev.c                  |  49 +-
 drivers/net/igc/igc_ethdev.c                  |  45 +-
 drivers/net/ionic/ionic_ethdev.c              |  17 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |  66 +-
 drivers/net/memif/memif_socket.c              | 108 ++-
 drivers/net/memif/memif_socket.h              |   4 +-
 drivers/net/memif/rte_eth_memif.c             |  56 +-
 drivers/net/memif/rte_eth_memif.h             |   2 +-
 drivers/net/mlx4/mlx4.c                       |  19 +-
 drivers/net/mlx4/mlx4.h                       |   2 +-
 drivers/net/mlx4/mlx4_intr.c                  |  47 +-
 drivers/net/mlx5/linux/mlx5_os.c              |  55 +-
 drivers/net/mlx5/linux/mlx5_socket.c          |  25 +-
 drivers/net/mlx5/mlx5.h                       |   6 +-
 drivers/net/mlx5/mlx5_rxq.c                   |  43 +-
 drivers/net/mlx5/mlx5_trigger.c               |   4 +-
 drivers/net/mlx5/mlx5_txpp.c                  |  25 +-
 drivers/net/netvsc/hn_ethdev.c                |   4 +-
 drivers/net/nfp/nfp_common.c                  |  34 +-
 drivers/net/nfp/nfp_ethdev.c                  |  13 +-
 drivers/net/nfp/nfp_ethdev_vf.c               |  13 +-
 drivers/net/ngbe/ngbe_ethdev.c                |  29 +-
 drivers/net/octeontx2/otx2_ethdev_irq.c       |  35 +-
 drivers/net/qede/qede_ethdev.c                |  16 +-
 drivers/net/sfc/sfc_intr.c                    |  30 +-
 drivers/net/tap/rte_eth_tap.c                 |  33 +-
 drivers/net/tap/rte_eth_tap.h                 |   2 +-
 drivers/net/tap/tap_intr.c                    |  33 +-
 drivers/net/thunderx/nicvf_ethdev.c           |  10 +
 drivers/net/thunderx/nicvf_struct.h           |   2 +-
 drivers/net/txgbe/txgbe_ethdev.c              |  38 +-
 drivers/net/txgbe/txgbe_ethdev_vf.c           |  33 +-
 drivers/net/vhost/rte_eth_vhost.c             |  80 ++-
 drivers/net/virtio/virtio_ethdev.c            |  21 +-
 .../net/virtio/virtio_user/virtio_user_dev.c  |  56 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.c          |  43 +-
 drivers/raw/ifpga/ifpga_rawdev.c              |  62 +-
 drivers/raw/ntb/ntb.c                         |   9 +-
 .../regex/octeontx2/otx2_regexdev_hw_access.c |   4 +-
 drivers/vdpa/ifc/ifcvf_vdpa.c                 |   5 +-
 drivers/vdpa/mlx5/mlx5_vdpa.c                 |   8 +
 drivers/vdpa/mlx5/mlx5_vdpa.h                 |   4 +-
 drivers/vdpa/mlx5/mlx5_vdpa_event.c           |  21 +-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c           |  44 +-
 lib/bbdev/rte_bbdev.c                         |   4 +-
 lib/eal/common/eal_common_interrupts.c        | 528 ++++++++++++++
 lib/eal/common/eal_interrupts.h               |  30 +
 lib/eal/common/eal_private.h                  |  10 +
 lib/eal/common/meson.build                    |   1 +
 lib/eal/freebsd/eal.c                         |   1 +
 lib/eal/freebsd/eal_alarm.c                   |  44 +-
 lib/eal/freebsd/eal_interrupts.c              |  85 ++-
 lib/eal/include/meson.build                   |   2 +-
 lib/eal/include/rte_eal_interrupts.h          | 269 --------
 lib/eal/include/rte_eal_trace.h               |  10 +-
 lib/eal/include/rte_epoll.h                   | 118 ++++
 lib/eal/include/rte_interrupts.h              | 651 +++++++++++++++++-
 lib/eal/linux/eal.c                           |   1 +
 lib/eal/linux/eal_alarm.c                     |  32 +-
 lib/eal/linux/eal_dev.c                       |  57 +-
 lib/eal/linux/eal_interrupts.c                | 304 ++++----
 lib/eal/version.map                           |  45 +-
 lib/ethdev/ethdev_pci.h                       |   2 +-
 lib/ethdev/rte_ethdev.c                       |  14 +-
 132 files changed, 3473 insertions(+), 1742 deletions(-)
 create mode 100644 lib/eal/common/eal_common_interrupts.c
 create mode 100644 lib/eal/common/eal_interrupts.h
 delete mode 100644 lib/eal/include/rte_eal_interrupts.h
 create mode 100644 lib/eal/include/rte_epoll.h

-- 
2.23.0


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v5 0/2] cmdline: reduce ABI
  @ 2021-10-22 21:24  4%   ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-10-22 21:24 UTC (permalink / raw)
  To: Dmitry Kozlyuk; +Cc: dev

> Dmitry Kozlyuk (2):
>   cmdline: make struct cmdline opaque
>   cmdline: make struct rdline opaque

Applied, thanks.




^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal
    2021-10-18 19:37  4% ` [dpdk-dev] [PATCH v3 " Harman Kalra
  2021-10-19 18:35  4% ` [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal Harman Kalra
@ 2021-10-22 20:49  4% ` Harman Kalra
  2021-10-24 20:04  4%   ` [dpdk-dev] [PATCH v6 0/9] " David Marchand
                     ` (3 more replies)
  2 siblings, 4 replies; 200+ results
From: Harman Kalra @ 2021-10-22 20:49 UTC (permalink / raw)
  To: dev; +Cc: david.marchand, dmitry.kozliuk, mdr, thomas, Harman Kalra

Moving struct rte_intr_handle to an internal structure to
avoid any ABI breakage in the future, since this structure defines
some static arrays and changing the respective macros breaks the ABI.
E.g.:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
MSI-X interrupts that can be defined for a PCI device, while the PCI
specification allows a maximum of 2048 MSI-X interrupts to be used.
If some PCI device requires more than 512 vectors, either change the
RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on the
PCI device MSI-X size at probe time. Either way it is an ABI breakage.
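
Roughly, the exposure is that the public handle embeds fixed-size
arrays, so sizeof(struct rte_intr_handle) changes whenever the macro
does (abridged sketch; the real struct has more fields):

    struct rte_intr_handle {
            int fd;
            uint32_t max_intr;
            uint32_t nb_efd;
            /* any change to RTE_MAX_RXTX_INTR_VEC_ID changes sizeof() */
            int efds[RTE_MAX_RXTX_INTR_VEC_ID];
            struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
            int *intr_vec;
    };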

Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0

This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get/set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get/set APIs.
A new eal_common_interrupts.c is introduced where all these APIs are
defined and the struct rte_intr_handle definition is hidden.

Details on each patch of the series:
Patch 1: eal/interrupts: implement get set APIs
This patch provides prototypes and implementations of all the new
get/set APIs. Alloc APIs are implemented to allocate memory for an
interrupt handle instance. Currently most drivers define the
interrupt handle instance as static, but now it cannot be static as
the size of rte_intr_handle is unknown to the drivers. Drivers are
expected to allocate interrupt instances during initialization
and free these instances during the cleanup phase.
This patch also rearranges the headers related to the interrupt
framework. Epoll-related definitions and prototypes are moved into a
new header, i.e. rte_epoll.h, and APIs defined in rte_eal_interrupts.h
which were driver specific are moved to rte_interrupts.h (as they were
anyway accessible and used outside the DPDK library). Later in the
series rte_eal_interrupts.h is removed.
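
For illustration, a minimal sketch of the expected driver flow with the
new APIs (names as introduced by this series; exact signatures are
indicative, and vfio_dev_fd is an assumed local):

    /* probe: the handle can no longer be a static driver member */
    struct rte_intr_handle *intr_handle;

    intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
    if (intr_handle == NULL)
            return -ENOMEM;
    rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX);
    rte_intr_fd_set(intr_handle, vfio_dev_fd);

    /* cleanup */
    rte_intr_instance_free(intr_handle);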

Patch 2: eal/interrupts: avoid direct access to interrupt handle
Modifying the interrupt framework for Linux and FreeBSD to use these
get/set/alloc APIs as per requirement and to avoid accessing the fields
directly.

Patch 3: test/interrupt: apply get set interrupt handle APIs
Updating the interrupt test suite to use the interrupt handle APIs.

Patch 4: drivers: remove direct access to interrupt handle fields
Modifying all the drivers and libraries which currently access the
interrupt handle fields directly. Drivers are expected to
allocate the interrupt instance, use get/set APIs with the allocated
interrupt handle, and free it on cleanup.

Patch 5: eal/interrupts: make interrupt handle structure opaque
In this patch rte_eal_interrupts.h is removed and the struct
rte_intr_handle definition is moved to a .c file to make it completely
opaque. As part of interrupt handle allocation, arrays like efds and
elist (which are currently static) are dynamically allocated with a
default size (RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be
reallocated as per device requirements using the new API
rte_intr_handle_event_list_update().
E.g., on PCI device probe the MSI-X size can be queried and these
arrays reallocated accordingly.
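
As a sketch of that probe-time resize (query_msix_count() is a
hypothetical helper; only the update API above is named by this
series):

    int msix_count = query_msix_count(pci_dev);  /* hypothetical */

    if (msix_count > RTE_MAX_RXTX_INTR_VEC_ID)
            rte_intr_handle_event_list_update(intr_handle, msix_count);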

Patch 6: eal/alarm: introduce alarm fini routine
Introducing an alarm fini routine, so that the memory allocated for the
alarm interrupt instance can be freed in alarm fini.

Testing performed:
1. Validated the series by running the interrupt and alarm test suites.
2. Validated l3fwd-power functionality with octeontx2 and Intel i40e
   cards, where interrupts are expected on packet arrival.

v1:
* Fixed FreeBSD compilation failure
* Fixed seg fault in case of memif

v2:
* Merged the prototype and implementation patch to 1.
* Restricted allocation to a single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library specific APIs as internal.

v3:
* Removed flag from instance alloc API; instead auto-detect
whether memory should be allocated using glibc malloc APIs or
rte_malloc*
* Added APIs for get/set windows handle.
* Defined macros for repeated checks.

v4:
* Rectified some typos in the API documentation.
* Better names for some internal variables.

v5:
* Reverted to passing a flag to the instance alloc API, as
with auto-detection some multi-process issues existing in the
library were causing test failures.
* Rebased to top of tree.

Harman Kalra (6):
  eal/interrupts: implement get set APIs
  eal/interrupts: avoid direct access to interrupt handle
  test/interrupt: apply get set interrupt handle APIs
  drivers: remove direct access to interrupt handle
  eal/interrupts: make interrupt handle structure opaque
  eal/alarm: introduce alarm fini routine

 MAINTAINERS                                   |   1 +
 app/test/test_interrupts.c                    | 163 +++--
 drivers/baseband/acc100/rte_acc100_pmd.c      |  18 +-
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |  21 +-
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |  21 +-
 drivers/bus/auxiliary/auxiliary_common.c      |   2 +
 drivers/bus/auxiliary/linux/auxiliary.c       |  10 +
 drivers/bus/auxiliary/rte_bus_auxiliary.h     |   2 +-
 drivers/bus/dpaa/dpaa_bus.c                   |  28 +-
 drivers/bus/dpaa/rte_dpaa_bus.h               |   2 +-
 drivers/bus/fslmc/fslmc_bus.c                 |  16 +-
 drivers/bus/fslmc/fslmc_vfio.c                |  32 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |  20 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |   2 +-
 drivers/bus/fslmc/rte_fslmc.h                 |   2 +-
 drivers/bus/ifpga/ifpga_bus.c                 |  15 +-
 drivers/bus/ifpga/rte_bus_ifpga.h             |   2 +-
 drivers/bus/pci/bsd/pci.c                     |  21 +-
 drivers/bus/pci/linux/pci.c                   |   4 +-
 drivers/bus/pci/linux/pci_uio.c               |  73 +-
 drivers/bus/pci/linux/pci_vfio.c              | 115 ++-
 drivers/bus/pci/pci_common.c                  |  29 +-
 drivers/bus/pci/pci_common_uio.c              |  21 +-
 drivers/bus/pci/rte_bus_pci.h                 |   4 +-
 drivers/bus/vmbus/linux/vmbus_bus.c           |   6 +
 drivers/bus/vmbus/linux/vmbus_uio.c           |  37 +-
 drivers/bus/vmbus/rte_bus_vmbus.h             |   2 +-
 drivers/bus/vmbus/vmbus_common_uio.c          |  24 +-
 drivers/common/cnxk/roc_cpt.c                 |   8 +-
 drivers/common/cnxk/roc_dev.c                 |  14 +-
 drivers/common/cnxk/roc_irq.c                 | 108 +--
 drivers/common/cnxk/roc_nix_inl_dev_irq.c     |   8 +-
 drivers/common/cnxk/roc_nix_irq.c             |  36 +-
 drivers/common/cnxk/roc_npa.c                 |   2 +-
 drivers/common/cnxk/roc_platform.h            |  49 +-
 drivers/common/cnxk/roc_sso.c                 |   4 +-
 drivers/common/cnxk/roc_tim.c                 |   4 +-
 drivers/common/octeontx2/otx2_dev.c           |  14 +-
 drivers/common/octeontx2/otx2_irq.c           | 117 +--
 .../octeontx2/otx2_cryptodev_hw_access.c      |   4 +-
 drivers/event/octeontx2/otx2_evdev_irq.c      |  12 +-
 drivers/mempool/octeontx2/otx2_mempool.c      |   2 +-
 drivers/net/atlantic/atl_ethdev.c             |  20 +-
 drivers/net/avp/avp_ethdev.c                  |   8 +-
 drivers/net/axgbe/axgbe_ethdev.c              |  12 +-
 drivers/net/axgbe/axgbe_mdio.c                |   6 +-
 drivers/net/bnx2x/bnx2x_ethdev.c              |  10 +-
 drivers/net/bnxt/bnxt_ethdev.c                |  33 +-
 drivers/net/bnxt/bnxt_irq.c                   |   4 +-
 drivers/net/dpaa/dpaa_ethdev.c                |  47 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  10 +-
 drivers/net/e1000/em_ethdev.c                 |  23 +-
 drivers/net/e1000/igb_ethdev.c                |  79 +--
 drivers/net/ena/ena_ethdev.c                  |  35 +-
 drivers/net/enic/enic_main.c                  |  26 +-
 drivers/net/failsafe/failsafe.c               |  23 +-
 drivers/net/failsafe/failsafe_intr.c          |  43 +-
 drivers/net/failsafe/failsafe_ops.c           |  19 +-
 drivers/net/failsafe/failsafe_private.h       |   2 +-
 drivers/net/fm10k/fm10k_ethdev.c              |  32 +-
 drivers/net/hinic/hinic_pmd_ethdev.c          |  10 +-
 drivers/net/hns3/hns3_ethdev.c                |  57 +-
 drivers/net/hns3/hns3_ethdev_vf.c             |  64 +-
 drivers/net/hns3/hns3_rxtx.c                  |   2 +-
 drivers/net/i40e/i40e_ethdev.c                |  53 +-
 drivers/net/iavf/iavf_ethdev.c                |  42 +-
 drivers/net/iavf/iavf_vchnl.c                 |   4 +-
 drivers/net/ice/ice_dcf.c                     |  10 +-
 drivers/net/ice/ice_dcf_ethdev.c              |  21 +-
 drivers/net/ice/ice_ethdev.c                  |  49 +-
 drivers/net/igc/igc_ethdev.c                  |  45 +-
 drivers/net/ionic/ionic_ethdev.c              |  17 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |  66 +-
 drivers/net/memif/memif_socket.c              | 111 ++-
 drivers/net/memif/memif_socket.h              |   4 +-
 drivers/net/memif/rte_eth_memif.c             |  61 +-
 drivers/net/memif/rte_eth_memif.h             |   2 +-
 drivers/net/mlx4/mlx4.c                       |  19 +-
 drivers/net/mlx4/mlx4.h                       |   2 +-
 drivers/net/mlx4/mlx4_intr.c                  |  47 +-
 drivers/net/mlx5/linux/mlx5_os.c              |  53 +-
 drivers/net/mlx5/linux/mlx5_socket.c          |  25 +-
 drivers/net/mlx5/mlx5.h                       |   6 +-
 drivers/net/mlx5/mlx5_rxq.c                   |  42 +-
 drivers/net/mlx5/mlx5_trigger.c               |   4 +-
 drivers/net/mlx5/mlx5_txpp.c                  |  26 +-
 drivers/net/netvsc/hn_ethdev.c                |   4 +-
 drivers/net/nfp/nfp_common.c                  |  34 +-
 drivers/net/nfp/nfp_ethdev.c                  |  13 +-
 drivers/net/nfp/nfp_ethdev_vf.c               |  13 +-
 drivers/net/ngbe/ngbe_ethdev.c                |  29 +-
 drivers/net/octeontx2/otx2_ethdev_irq.c       |  35 +-
 drivers/net/qede/qede_ethdev.c                |  16 +-
 drivers/net/sfc/sfc_intr.c                    |  30 +-
 drivers/net/tap/rte_eth_tap.c                 |  36 +-
 drivers/net/tap/rte_eth_tap.h                 |   2 +-
 drivers/net/tap/tap_intr.c                    |  32 +-
 drivers/net/thunderx/nicvf_ethdev.c           |  12 +
 drivers/net/thunderx/nicvf_struct.h           |   2 +-
 drivers/net/txgbe/txgbe_ethdev.c              |  38 +-
 drivers/net/txgbe/txgbe_ethdev_vf.c           |  33 +-
 drivers/net/vhost/rte_eth_vhost.c             |  76 +-
 drivers/net/virtio/virtio_ethdev.c            |  21 +-
 .../net/virtio/virtio_user/virtio_user_dev.c  |  48 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.c          |  43 +-
 drivers/raw/ifpga/ifpga_rawdev.c              |  62 +-
 drivers/raw/ntb/ntb.c                         |   9 +-
 .../regex/octeontx2/otx2_regexdev_hw_access.c |   4 +-
 drivers/vdpa/ifc/ifcvf_vdpa.c                 |   5 +-
 drivers/vdpa/mlx5/mlx5_vdpa.c                 |  10 +
 drivers/vdpa/mlx5/mlx5_vdpa.h                 |   4 +-
 drivers/vdpa/mlx5/mlx5_vdpa_event.c           |  22 +-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c           |  45 +-
 lib/bbdev/rte_bbdev.c                         |   4 +-
 lib/eal/common/eal_common_interrupts.c        | 588 +++++++++++++++
 lib/eal/common/eal_private.h                  |  11 +
 lib/eal/common/meson.build                    |   1 +
 lib/eal/freebsd/eal.c                         |   1 +
 lib/eal/freebsd/eal_alarm.c                   |  53 +-
 lib/eal/freebsd/eal_interrupts.c              | 112 ++-
 lib/eal/include/meson.build                   |   2 +-
 lib/eal/include/rte_eal_interrupts.h          | 269 -------
 lib/eal/include/rte_eal_trace.h               |  24 +-
 lib/eal/include/rte_epoll.h                   | 118 ++++
 lib/eal/include/rte_interrupts.h              | 668 +++++++++++++++++-
 lib/eal/linux/eal.c                           |   1 +
 lib/eal/linux/eal_alarm.c                     |  37 +-
 lib/eal/linux/eal_dev.c                       |  63 +-
 lib/eal/linux/eal_interrupts.c                | 303 +++++---
 lib/eal/version.map                           |  46 +-
 lib/ethdev/ethdev_pci.h                       |   2 +-
 lib/ethdev/rte_ethdev.c                       |  14 +-
 132 files changed, 3631 insertions(+), 1713 deletions(-)
 create mode 100644 lib/eal/common/eal_common_interrupts.c
 delete mode 100644 lib/eal/include/rte_eal_interrupts.h
 create mode 100644 lib/eal/include/rte_epoll.h

-- 
2.18.0


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v7] ethdev: add namespace
  2021-10-22  2:02  1%     ` [dpdk-dev] [PATCH v6] " Ferruh Yigit
@ 2021-10-22 11:03  1%       ` Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-10-22 11:03 UTC (permalink / raw)
  To: Maryam Tahhan, Reshma Pattan, Jerin Jacob, Wisam Jaddo,
	Cristian Dumitrescu, Xiaoyun Li, Thomas Monjalon,
	Andrew Rybchenko, Jay Jayatheerthan, Chas Williams,
	Min Hu (Connor),
	Pavan Nikhilesh, Shijith Thotton, Ajit Khaparde, Somnath Kotur,
	John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, Beilei Xing,
	Haiyue Wang, Matan Azrad, Viacheslav Ovsiienko, Keith Wiles,
	Jiayu Hu, Olivier Matz, Ori Kam, Akhil Goyal, Declan Doherty,
	Ray Kinsella, Radu Nicolau, Hemant Agrawal, Sachin Saxena,
	Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	John W. Linville, Ciara Loftus, Shepard Siegel, Ed Czeck,
	John Miller, Igor Russkikh, Steven Webster, Matt Peters,
	Chandubabu Namburu, Rasesh Mody, Shahed Shaikh, Bruce Richardson,
	Konstantin Ananyev, Ruifeng Wang, Rahul Lakkireddy,
	Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
	Igor Chauskin, Gagandeep Singh, Gaetan Rivet, Ziyang Xuan,
	Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou, Jingjing Wu,
	Qiming Yang, Andrew Boyer, Rosen Xu,
	Srisivasubramanian Srinivasan, Jakub Grajciar, Zyta Szpak,
	Liron Himi, Stephen Hemminger, Long Li, Martin Spinler,
	Heinrich Kuhn, Jiawen Wu, Tetsuya Mukawa, Harman Kalra,
	Anoob Joseph, Nalla Pradeep, Radha Mohan Chintakuntla,
	Veerasenareddy Burru, Devendra Singh Rawat, Jasvinder Singh,
	Maciej Czekaj, Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang,
	Nicolas Chautru, David Hunt, Harry van Haaren, Bernard Iremonger,
	Anatoly Burakov, John McNamara, Kirill Rybalchenko, Byron Marohn,
	Yipeng Wang
  Cc: Ferruh Yigit, dev, Tyler Retzlaff, David Marchand

Add 'RTE_ETH' namespace to all enums & macros in a backward compatible
Add 'RTE_ETH' namespace to all enums & macros in a backward compatible
way. The macros for backward compatibility can be removed in the next LTS.
Also updated some struct names to have 'rte_eth' prefix.

All internal components switched to using new names.

Syntax fixed on lines that this patch touches.
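
For example, application code migrates as in the renames below (the
old names remain as compatibility aliases until their removal):

    /* before */
    port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
    port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;

    /* after */
    port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
    port_conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;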

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
Cc: David Marchand <david.marchand@redhat.com>
Cc: Thomas Monjalon <thomas@monjalon.net>

v2:
* Updated internal components
* Removed deprecation notice

v3:
* Updated missing macros / structs that David highlighted
* Added release notes update

v4:
* rebased on latest next-net
* depends on https://patches.dpdk.org/user/todo/dpdk/?series=19744
* Not able to complete scripts to update user code, although some were
  shared by Aman:
  https://patches.dpdk.org/project/dpdk/patch/20211008102949.70716-1-aman.deep.singh@intel.com/
  Sending a new version as a possible option to get this patch into -rc1
  and to work on the scripts later, before the release.

v5:
* rebased on latest next-net

v6:
* rebased on latest next-net

v7:
* Removed mirror structures which were rebase residue
* rebased on latest next-net
---
 app/proc-info/main.c                          |   8 +-
 app/test-eventdev/test_perf_common.c          |   4 +-
 app/test-eventdev/test_pipeline_common.c      |  10 +-
 app/test-flow-perf/config.h                   |   2 +-
 app/test-pipeline/init.c                      |   8 +-
 app/test-pmd/cmdline.c                        | 286 ++---
 app/test-pmd/config.c                         | 200 ++--
 app/test-pmd/csumonly.c                       |  28 +-
 app/test-pmd/flowgen.c                        |   6 +-
 app/test-pmd/macfwd.c                         |   6 +-
 app/test-pmd/macswap_common.h                 |   6 +-
 app/test-pmd/parameters.c                     |  54 +-
 app/test-pmd/testpmd.c                        |  52 +-
 app/test-pmd/testpmd.h                        |   2 +-
 app/test-pmd/txonly.c                         |   6 +-
 app/test/test_ethdev_link.c                   |  68 +-
 app/test/test_event_eth_rx_adapter.c          |   4 +-
 app/test/test_kni.c                           |   2 +-
 app/test/test_link_bonding.c                  |   4 +-
 app/test/test_link_bonding_mode4.c            |   4 +-
 app/test/test_link_bonding_rssconf.c          |  28 +-
 app/test/test_pmd_perf.c                      |  12 +-
 app/test/virtual_pmd.c                        |  10 +-
 doc/guides/eventdevs/cnxk.rst                 |   2 +-
 doc/guides/eventdevs/octeontx2.rst            |   2 +-
 doc/guides/nics/af_packet.rst                 |   2 +-
 doc/guides/nics/bnxt.rst                      |  24 +-
 doc/guides/nics/enic.rst                      |   2 +-
 doc/guides/nics/features.rst                  | 114 +-
 doc/guides/nics/fm10k.rst                     |   6 +-
 doc/guides/nics/intel_vf.rst                  |  10 +-
 doc/guides/nics/ixgbe.rst                     |  12 +-
 doc/guides/nics/mlx5.rst                      |   4 +-
 doc/guides/nics/tap.rst                       |   2 +-
 .../generic_segmentation_offload_lib.rst      |   8 +-
 doc/guides/prog_guide/mbuf_lib.rst            |  18 +-
 doc/guides/prog_guide/poll_mode_drv.rst       |   8 +-
 doc/guides/prog_guide/rte_flow.rst            |  34 +-
 doc/guides/prog_guide/rte_security.rst        |   2 +-
 doc/guides/rel_notes/deprecation.rst          |  10 +-
 doc/guides/rel_notes/release_21_11.rst        |   3 +
 doc/guides/sample_app_ug/ipsec_secgw.rst      |   4 +-
 doc/guides/testpmd_app_ug/run_app.rst         |   2 +-
 drivers/bus/dpaa/include/process.h            |  16 +-
 drivers/common/cnxk/roc_npc.h                 |   2 +-
 drivers/net/af_packet/rte_eth_af_packet.c     |  20 +-
 drivers/net/af_xdp/rte_eth_af_xdp.c           |  12 +-
 drivers/net/ark/ark_ethdev.c                  |  16 +-
 drivers/net/atlantic/atl_ethdev.c             |  88 +-
 drivers/net/atlantic/atl_ethdev.h             |  18 +-
 drivers/net/atlantic/atl_rxtx.c               |   6 +-
 drivers/net/avp/avp_ethdev.c                  |  26 +-
 drivers/net/axgbe/axgbe_dev.c                 |   6 +-
 drivers/net/axgbe/axgbe_ethdev.c              | 104 +-
 drivers/net/axgbe/axgbe_ethdev.h              |  12 +-
 drivers/net/axgbe/axgbe_mdio.c                |   2 +-
 drivers/net/axgbe/axgbe_rxtx.c                |   6 +-
 drivers/net/bnx2x/bnx2x_ethdev.c              |  12 +-
 drivers/net/bnxt/bnxt.h                       |  62 +-
 drivers/net/bnxt/bnxt_ethdev.c                | 172 +--
 drivers/net/bnxt/bnxt_flow.c                  |   6 +-
 drivers/net/bnxt/bnxt_hwrm.c                  | 112 +-
 drivers/net/bnxt/bnxt_reps.c                  |   2 +-
 drivers/net/bnxt/bnxt_ring.c                  |   4 +-
 drivers/net/bnxt/bnxt_rxq.c                   |  28 +-
 drivers/net/bnxt/bnxt_rxr.c                   |   4 +-
 drivers/net/bnxt/bnxt_rxtx_vec_avx2.c         |   2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_common.h       |   2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_neon.c         |   2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_sse.c          |   2 +-
 drivers/net/bnxt/bnxt_txr.c                   |   4 +-
 drivers/net/bnxt/bnxt_vnic.c                  |  30 +-
 drivers/net/bnxt/rte_pmd_bnxt.c               |   8 +-
 drivers/net/bonding/eth_bond_private.h        |   4 +-
 drivers/net/bonding/rte_eth_bond_8023ad.c     |  16 +-
 drivers/net/bonding/rte_eth_bond_api.c        |   6 +-
 drivers/net/bonding/rte_eth_bond_pmd.c        |  50 +-
 drivers/net/cnxk/cn10k_ethdev.c               |  42 +-
 drivers/net/cnxk/cn10k_rte_flow.c             |   2 +-
 drivers/net/cnxk/cn10k_rx.c                   |   4 +-
 drivers/net/cnxk/cn10k_tx.c                   |   4 +-
 drivers/net/cnxk/cn9k_ethdev.c                |  60 +-
 drivers/net/cnxk/cn9k_rx.c                    |   4 +-
 drivers/net/cnxk/cn9k_tx.c                    |   4 +-
 drivers/net/cnxk/cnxk_ethdev.c                | 112 +-
 drivers/net/cnxk/cnxk_ethdev.h                |  49 +-
 drivers/net/cnxk/cnxk_ethdev_devargs.c        |   6 +-
 drivers/net/cnxk/cnxk_ethdev_ops.c            | 106 +-
 drivers/net/cnxk/cnxk_link.c                  |  14 +-
 drivers/net/cnxk/cnxk_ptp.c                   |   4 +-
 drivers/net/cnxk/cnxk_rte_flow.c              |   2 +-
 drivers/net/cxgbe/cxgbe.h                     |  46 +-
 drivers/net/cxgbe/cxgbe_ethdev.c              |  42 +-
 drivers/net/cxgbe/cxgbe_main.c                |  12 +-
 drivers/net/dpaa/dpaa_ethdev.c                | 180 ++--
 drivers/net/dpaa/dpaa_ethdev.h                |  10 +-
 drivers/net/dpaa/dpaa_flow.c                  |  32 +-
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c        |  47 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              | 138 +--
 drivers/net/dpaa2/dpaa2_ethdev.h              |  22 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |   8 +-
 drivers/net/e1000/e1000_ethdev.h              |  18 +-
 drivers/net/e1000/em_ethdev.c                 |  64 +-
 drivers/net/e1000/em_rxtx.c                   |  38 +-
 drivers/net/e1000/igb_ethdev.c                | 158 +--
 drivers/net/e1000/igb_pf.c                    |   2 +-
 drivers/net/e1000/igb_rxtx.c                  | 116 +--
 drivers/net/ena/ena_ethdev.c                  |  70 +-
 drivers/net/ena/ena_ethdev.h                  |   4 +-
 drivers/net/ena/ena_rss.c                     |  74 +-
 drivers/net/enetc/enetc_ethdev.c              |  30 +-
 drivers/net/enic/enic.h                       |   2 +-
 drivers/net/enic/enic_ethdev.c                |  88 +-
 drivers/net/enic/enic_main.c                  |  40 +-
 drivers/net/enic/enic_res.c                   |  50 +-
 drivers/net/failsafe/failsafe.c               |   8 +-
 drivers/net/failsafe/failsafe_intr.c          |   4 +-
 drivers/net/failsafe/failsafe_ops.c           |  78 +-
 drivers/net/fm10k/fm10k.h                     |   4 +-
 drivers/net/fm10k/fm10k_ethdev.c              | 146 +--
 drivers/net/fm10k/fm10k_rxtx_vec.c            |   6 +-
 drivers/net/hinic/base/hinic_pmd_hwdev.c      |  22 +-
 drivers/net/hinic/hinic_pmd_ethdev.c          | 136 +--
 drivers/net/hinic/hinic_pmd_rx.c              |  36 +-
 drivers/net/hinic/hinic_pmd_rx.h              |  22 +-
 drivers/net/hns3/hns3_dcb.c                   |  14 +-
 drivers/net/hns3/hns3_ethdev.c                | 352 +++----
 drivers/net/hns3/hns3_ethdev.h                |  12 +-
 drivers/net/hns3/hns3_ethdev_vf.c             | 100 +-
 drivers/net/hns3/hns3_flow.c                  |   6 +-
 drivers/net/hns3/hns3_ptp.c                   |   2 +-
 drivers/net/hns3/hns3_rss.c                   | 108 +-
 drivers/net/hns3/hns3_rss.h                   |  28 +-
 drivers/net/hns3/hns3_rxtx.c                  |  30 +-
 drivers/net/hns3/hns3_rxtx.h                  |   2 +-
 drivers/net/hns3/hns3_rxtx_vec.c              |  10 +-
 drivers/net/i40e/i40e_ethdev.c                | 272 ++---
 drivers/net/i40e/i40e_ethdev.h                |  24 +-
 drivers/net/i40e/i40e_flow.c                  |  32 +-
 drivers/net/i40e/i40e_hash.c                  | 158 +--
 drivers/net/i40e/i40e_pf.c                    |  14 +-
 drivers/net/i40e/i40e_rxtx.c                  |   8 +-
 drivers/net/i40e/i40e_rxtx.h                  |   4 +-
 drivers/net/i40e/i40e_rxtx_vec_avx512.c       |   2 +-
 drivers/net/i40e/i40e_rxtx_vec_common.h       |   8 +-
 drivers/net/i40e/i40e_vf_representor.c        |  48 +-
 drivers/net/iavf/iavf.h                       |  24 +-
 drivers/net/iavf/iavf_ethdev.c                | 178 ++--
 drivers/net/iavf/iavf_hash.c                  | 320 +++---
 drivers/net/iavf/iavf_rxtx.c                  |   2 +-
 drivers/net/iavf/iavf_rxtx.h                  |  24 +-
 drivers/net/iavf/iavf_rxtx_vec_avx2.c         |   4 +-
 drivers/net/iavf/iavf_rxtx_vec_avx512.c       |   6 +-
 drivers/net/iavf/iavf_rxtx_vec_sse.c          |   2 +-
 drivers/net/ice/ice_dcf.c                     |   2 +-
 drivers/net/ice/ice_dcf_ethdev.c              |  86 +-
 drivers/net/ice/ice_dcf_vf_representor.c      |  56 +-
 drivers/net/ice/ice_ethdev.c                  | 180 ++--
 drivers/net/ice/ice_ethdev.h                  |  26 +-
 drivers/net/ice/ice_hash.c                    | 290 +++---
 drivers/net/ice/ice_rxtx.c                    |  16 +-
 drivers/net/ice/ice_rxtx_vec_avx2.c           |   2 +-
 drivers/net/ice/ice_rxtx_vec_avx512.c         |   4 +-
 drivers/net/ice/ice_rxtx_vec_common.h         |  28 +-
 drivers/net/ice/ice_rxtx_vec_sse.c            |   2 +-
 drivers/net/igc/igc_ethdev.c                  | 138 +--
 drivers/net/igc/igc_ethdev.h                  |  54 +-
 drivers/net/igc/igc_txrx.c                    |  48 +-
 drivers/net/ionic/ionic_ethdev.c              | 138 +--
 drivers/net/ionic/ionic_ethdev.h              |  12 +-
 drivers/net/ionic/ionic_lif.c                 |  36 +-
 drivers/net/ionic/ionic_rxtx.c                |  10 +-
 drivers/net/ipn3ke/ipn3ke_representor.c       |  64 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              | 285 +++--
 drivers/net/ixgbe/ixgbe_ethdev.h              |  18 +-
 drivers/net/ixgbe/ixgbe_fdir.c                |  24 +-
 drivers/net/ixgbe/ixgbe_flow.c                |   2 +-
 drivers/net/ixgbe/ixgbe_ipsec.c               |  12 +-
 drivers/net/ixgbe/ixgbe_pf.c                  |  34 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                | 249 +++--
 drivers/net/ixgbe/ixgbe_rxtx.h                |   4 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h     |   2 +-
 drivers/net/ixgbe/ixgbe_tm.c                  |  16 +-
 drivers/net/ixgbe/ixgbe_vf_representor.c      |  16 +-
 drivers/net/ixgbe/rte_pmd_ixgbe.c             |  14 +-
 drivers/net/ixgbe/rte_pmd_ixgbe.h             |   4 +-
 drivers/net/kni/rte_eth_kni.c                 |   8 +-
 drivers/net/liquidio/lio_ethdev.c             | 114 +-
 drivers/net/memif/memif_socket.c              |   2 +-
 drivers/net/memif/rte_eth_memif.c             |  16 +-
 drivers/net/mlx4/mlx4_ethdev.c                |  32 +-
 drivers/net/mlx4/mlx4_flow.c                  |  30 +-
 drivers/net/mlx4/mlx4_intr.c                  |   8 +-
 drivers/net/mlx4/mlx4_rxq.c                   |  18 +-
 drivers/net/mlx4/mlx4_txq.c                   |  24 +-
 drivers/net/mlx5/linux/mlx5_ethdev_os.c       |  54 +-
 drivers/net/mlx5/linux/mlx5_os.c              |   6 +-
 drivers/net/mlx5/mlx5.c                       |   4 +-
 drivers/net/mlx5/mlx5.h                       |   2 +-
 drivers/net/mlx5/mlx5_defs.h                  |   6 +-
 drivers/net/mlx5/mlx5_ethdev.c                |   6 +-
 drivers/net/mlx5/mlx5_flow.c                  |  54 +-
 drivers/net/mlx5/mlx5_flow.h                  |  12 +-
 drivers/net/mlx5/mlx5_flow_dv.c               |  44 +-
 drivers/net/mlx5/mlx5_flow_verbs.c            |   4 +-
 drivers/net/mlx5/mlx5_rss.c                   |  10 +-
 drivers/net/mlx5/mlx5_rxq.c                   |  40 +-
 drivers/net/mlx5/mlx5_rxtx_vec.h              |   8 +-
 drivers/net/mlx5/mlx5_tx.c                    |  30 +-
 drivers/net/mlx5/mlx5_txq.c                   |  58 +-
 drivers/net/mlx5/mlx5_vlan.c                  |   4 +-
 drivers/net/mlx5/windows/mlx5_os.c            |   4 +-
 drivers/net/mvneta/mvneta_ethdev.c            |  32 +-
 drivers/net/mvneta/mvneta_ethdev.h            |  10 +-
 drivers/net/mvneta/mvneta_rxtx.c              |   2 +-
 drivers/net/mvpp2/mrvl_ethdev.c               | 112 +-
 drivers/net/netvsc/hn_ethdev.c                |  70 +-
 drivers/net/netvsc/hn_rndis.c                 |  50 +-
 drivers/net/nfb/nfb_ethdev.c                  |  20 +-
 drivers/net/nfb/nfb_rx.c                      |   2 +-
 drivers/net/nfp/nfp_common.c                  | 122 +--
 drivers/net/nfp/nfp_ethdev.c                  |   2 +-
 drivers/net/nfp/nfp_ethdev_vf.c               |   2 +-
 drivers/net/ngbe/ngbe_ethdev.c                |  50 +-
 drivers/net/null/rte_eth_null.c               |  28 +-
 drivers/net/octeontx/octeontx_ethdev.c        |  74 +-
 drivers/net/octeontx/octeontx_ethdev.h        |  30 +-
 drivers/net/octeontx/octeontx_ethdev_ops.c    |  26 +-
 drivers/net/octeontx2/otx2_ethdev.c           |  96 +-
 drivers/net/octeontx2/otx2_ethdev.h           |  64 +-
 drivers/net/octeontx2/otx2_ethdev_devargs.c   |  12 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c       |  14 +-
 drivers/net/octeontx2/otx2_ethdev_sec.c       |   8 +-
 drivers/net/octeontx2/otx2_flow.c             |   2 +-
 drivers/net/octeontx2/otx2_flow_ctrl.c        |  36 +-
 drivers/net/octeontx2/otx2_flow_parse.c       |   4 +-
 drivers/net/octeontx2/otx2_link.c             |  40 +-
 drivers/net/octeontx2/otx2_mcast.c            |   2 +-
 drivers/net/octeontx2/otx2_ptp.c              |   4 +-
 drivers/net/octeontx2/otx2_rss.c              |  70 +-
 drivers/net/octeontx2/otx2_rx.c               |   4 +-
 drivers/net/octeontx2/otx2_tx.c               |   2 +-
 drivers/net/octeontx2/otx2_vlan.c             |  42 +-
 drivers/net/octeontx_ep/otx_ep_ethdev.c       |   6 +-
 drivers/net/octeontx_ep/otx_ep_rxtx.c         |   6 +-
 drivers/net/pcap/pcap_ethdev.c                |  12 +-
 drivers/net/pfe/pfe_ethdev.c                  |  18 +-
 drivers/net/qede/base/mcp_public.h            |   4 +-
 drivers/net/qede/qede_ethdev.c                | 156 +--
 drivers/net/qede/qede_filter.c                |  42 +-
 drivers/net/qede/qede_rxtx.c                  |   2 +-
 drivers/net/qede/qede_rxtx.h                  |  16 +-
 drivers/net/ring/rte_eth_ring.c               |  20 +-
 drivers/net/sfc/sfc.c                         |  30 +-
 drivers/net/sfc/sfc_ef100_rx.c                |  10 +-
 drivers/net/sfc/sfc_ef100_tx.c                |  20 +-
 drivers/net/sfc/sfc_ef10_essb_rx.c            |   4 +-
 drivers/net/sfc/sfc_ef10_rx.c                 |   8 +-
 drivers/net/sfc/sfc_ef10_tx.c                 |  32 +-
 drivers/net/sfc/sfc_ethdev.c                  |  50 +-
 drivers/net/sfc/sfc_flow.c                    |   2 +-
 drivers/net/sfc/sfc_port.c                    |  52 +-
 drivers/net/sfc/sfc_repr.c                    |  10 +-
 drivers/net/sfc/sfc_rx.c                      |  50 +-
 drivers/net/sfc/sfc_tx.c                      |  50 +-
 drivers/net/softnic/rte_eth_softnic.c         |  12 +-
 drivers/net/szedata2/rte_eth_szedata2.c       |  14 +-
 drivers/net/tap/rte_eth_tap.c                 | 104 +-
 drivers/net/tap/tap_rss.h                     |   2 +-
 drivers/net/thunderx/nicvf_ethdev.c           | 102 +-
 drivers/net/thunderx/nicvf_ethdev.h           |  40 +-
 drivers/net/txgbe/txgbe_ethdev.c              | 242 ++---
 drivers/net/txgbe/txgbe_ethdev.h              |  18 +-
 drivers/net/txgbe/txgbe_ethdev_vf.c           |  24 +-
 drivers/net/txgbe/txgbe_fdir.c                |  20 +-
 drivers/net/txgbe/txgbe_flow.c                |   2 +-
 drivers/net/txgbe/txgbe_ipsec.c               |  12 +-
 drivers/net/txgbe/txgbe_pf.c                  |  34 +-
 drivers/net/txgbe/txgbe_rxtx.c                | 308 +++---
 drivers/net/txgbe/txgbe_rxtx.h                |   4 +-
 drivers/net/txgbe/txgbe_tm.c                  |  16 +-
 drivers/net/vhost/rte_eth_vhost.c             |  16 +-
 drivers/net/virtio/virtio_ethdev.c            | 124 +--
 drivers/net/vmxnet3/vmxnet3_ethdev.c          |  72 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.h          |  16 +-
 drivers/net/vmxnet3/vmxnet3_rxtx.c            |  16 +-
 examples/bbdev_app/main.c                     |   6 +-
 examples/bond/main.c                          |  14 +-
 examples/distributor/main.c                   |  12 +-
 examples/ethtool/ethtool-app/main.c           |   2 +-
 examples/ethtool/lib/rte_ethtool.c            |  18 +-
 .../pipeline_worker_generic.c                 |  16 +-
 .../eventdev_pipeline/pipeline_worker_tx.c    |  12 +-
 examples/flow_classify/flow_classify.c        |   4 +-
 examples/flow_filtering/main.c                |  16 +-
 examples/ioat/ioatfwd.c                       |   8 +-
 examples/ip_fragmentation/main.c              |  12 +-
 examples/ip_pipeline/link.c                   |  20 +-
 examples/ip_reassembly/main.c                 |  18 +-
 examples/ipsec-secgw/ipsec-secgw.c            |  32 +-
 examples/ipsec-secgw/sa.c                     |   8 +-
 examples/ipv4_multicast/main.c                |   6 +-
 examples/kni/main.c                           |   8 +-
 examples/l2fwd-crypto/main.c                  |  10 +-
 examples/l2fwd-event/l2fwd_common.c           |  10 +-
 examples/l2fwd-event/main.c                   |   2 +-
 examples/l2fwd-jobstats/main.c                |   8 +-
 examples/l2fwd-keepalive/main.c               |   8 +-
 examples/l2fwd/main.c                         |   8 +-
 examples/l3fwd-acl/main.c                     |  18 +-
 examples/l3fwd-graph/main.c                   |  14 +-
 examples/l3fwd-power/main.c                   |  16 +-
 examples/l3fwd/l3fwd_event.c                  |   4 +-
 examples/l3fwd/main.c                         |  18 +-
 examples/link_status_interrupt/main.c         |  10 +-
 .../client_server_mp/mp_server/init.c         |   4 +-
 examples/multi_process/symmetric_mp/main.c    |  14 +-
 examples/ntb/ntb_fwd.c                        |   6 +-
 examples/packet_ordering/main.c               |   4 +-
 .../performance-thread/l3fwd-thread/main.c    |  16 +-
 examples/pipeline/obj.c                       |  20 +-
 examples/ptpclient/ptpclient.c                |  10 +-
 examples/qos_meter/main.c                     |  16 +-
 examples/qos_sched/init.c                     |   6 +-
 examples/rxtx_callbacks/main.c                |   8 +-
 examples/server_node_efd/server/init.c        |   8 +-
 examples/skeleton/basicfwd.c                  |   4 +-
 examples/vhost/main.c                         |  26 +-
 examples/vm_power_manager/main.c              |   6 +-
 examples/vmdq/main.c                          |  20 +-
 examples/vmdq_dcb/main.c                      |  40 +-
 lib/ethdev/ethdev_driver.h                    |  36 +-
 lib/ethdev/rte_ethdev.c                       | 181 ++--
 lib/ethdev/rte_ethdev.h                       | 986 +++++++++++-------
 lib/ethdev/rte_flow.h                         |   2 +-
 lib/gso/rte_gso.c                             |  20 +-
 lib/gso/rte_gso.h                             |   4 +-
 lib/mbuf/rte_mbuf_core.h                      |   8 +-
 lib/mbuf/rte_mbuf_dyn.h                       |   2 +-
 339 files changed, 6601 insertions(+), 6385 deletions(-)

diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index bfe5ce825b70..a4271047e693 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -757,11 +757,11 @@ show_port(void)
 		}
 
 		ret = rte_eth_dev_flow_ctrl_get(i, &fc_conf);
-		if (ret == 0 && fc_conf.mode != RTE_FC_NONE)  {
+		if (ret == 0 && fc_conf.mode != RTE_ETH_FC_NONE)  {
 			printf("\t  -- flow control mode %s%s high %u low %u pause %u%s%s\n",
-			       fc_conf.mode == RTE_FC_RX_PAUSE ? "rx " :
-			       fc_conf.mode == RTE_FC_TX_PAUSE ? "tx " :
-			       fc_conf.mode == RTE_FC_FULL ? "full" : "???",
+			       fc_conf.mode == RTE_ETH_FC_RX_PAUSE ? "rx " :
+			       fc_conf.mode == RTE_ETH_FC_TX_PAUSE ? "tx " :
+			       fc_conf.mode == RTE_ETH_FC_FULL ? "full" : "???",
 			       fc_conf.autoneg ? " auto" : "",
 			       fc_conf.high_water,
 			       fc_conf.low_water,
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index 660d5a0364b6..31d1b0e14653 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -668,13 +668,13 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 	struct test_perf *t = evt_test_priv(test);
 	struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 			.split_hdr_size = 0,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 			},
 		},
 	};
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 2775e72c580d..d202091077a6 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -176,12 +176,12 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 	struct rte_eth_rxconf rx_conf;
 	struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 			},
 		},
 	};
@@ -223,7 +223,7 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 
 		if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT))
 			local_port_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_RSS_HASH;
+				RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 		ret = rte_eth_dev_info_get(i, &dev_info);
 		if (ret != 0) {
@@ -233,9 +233,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 		}
 
 		/* Enable mbuf fast free if PMD has the capability. */
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		rx_conf = dev_info.default_rxconf;
 		rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h
index a14d4e05e185..4249b6175b82 100644
--- a/app/test-flow-perf/config.h
+++ b/app/test-flow-perf/config.h
@@ -5,7 +5,7 @@
 #define FLOW_ITEM_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ACTION_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ATTR_MASK(_x) (UINT64_C(1) << _x)
-#define GET_RSS_HF() (ETH_RSS_IP)
+#define GET_RSS_HF() (RTE_ETH_RSS_IP)
 
 /* Configuration */
 #define RXQ_NUM 4
diff --git a/app/test-pipeline/init.c b/app/test-pipeline/init.c
index fe37d63730c6..c73801904103 100644
--- a/app/test-pipeline/init.c
+++ b/app/test-pipeline/init.c
@@ -70,16 +70,16 @@ struct app_params app = {
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -178,7 +178,7 @@ app_ports_check_link(void)
 		RTE_LOG(INFO, USER1, "Port %u %s\n",
 			port,
 			link_status_text);
-		if (link.link_status == ETH_LINK_DOWN)
+		if (link.link_status == RTE_ETH_LINK_DOWN)
 			all_ports_up = 0;
 	}
 
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 3221f6e1aa40..ebea13f86ab0 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1478,51 +1478,51 @@ parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
 	int duplex;
 
 	if (!strcmp(duplexstr, "half")) {
-		duplex = ETH_LINK_HALF_DUPLEX;
+		duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	} else if (!strcmp(duplexstr, "full")) {
-		duplex = ETH_LINK_FULL_DUPLEX;
+		duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	} else if (!strcmp(duplexstr, "auto")) {
-		duplex = ETH_LINK_FULL_DUPLEX;
+		duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	} else {
 		fprintf(stderr, "Unknown duplex parameter\n");
 		return -1;
 	}
 
 	if (!strcmp(speedstr, "10")) {
-		*speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
-				ETH_LINK_SPEED_10M_HD : ETH_LINK_SPEED_10M;
+		*speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+				RTE_ETH_LINK_SPEED_10M_HD : RTE_ETH_LINK_SPEED_10M;
 	} else if (!strcmp(speedstr, "100")) {
-		*speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
-				ETH_LINK_SPEED_100M_HD : ETH_LINK_SPEED_100M;
+		*speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+				RTE_ETH_LINK_SPEED_100M_HD : RTE_ETH_LINK_SPEED_100M;
 	} else {
-		if (duplex != ETH_LINK_FULL_DUPLEX) {
+		if (duplex != RTE_ETH_LINK_FULL_DUPLEX) {
 			fprintf(stderr, "Invalid speed/duplex parameters\n");
 			return -1;
 		}
 		if (!strcmp(speedstr, "1000")) {
-			*speed = ETH_LINK_SPEED_1G;
+			*speed = RTE_ETH_LINK_SPEED_1G;
 		} else if (!strcmp(speedstr, "10000")) {
-			*speed = ETH_LINK_SPEED_10G;
+			*speed = RTE_ETH_LINK_SPEED_10G;
 		} else if (!strcmp(speedstr, "25000")) {
-			*speed = ETH_LINK_SPEED_25G;
+			*speed = RTE_ETH_LINK_SPEED_25G;
 		} else if (!strcmp(speedstr, "40000")) {
-			*speed = ETH_LINK_SPEED_40G;
+			*speed = RTE_ETH_LINK_SPEED_40G;
 		} else if (!strcmp(speedstr, "50000")) {
-			*speed = ETH_LINK_SPEED_50G;
+			*speed = RTE_ETH_LINK_SPEED_50G;
 		} else if (!strcmp(speedstr, "100000")) {
-			*speed = ETH_LINK_SPEED_100G;
+			*speed = RTE_ETH_LINK_SPEED_100G;
 		} else if (!strcmp(speedstr, "200000")) {
-			*speed = ETH_LINK_SPEED_200G;
+			*speed = RTE_ETH_LINK_SPEED_200G;
 		} else if (!strcmp(speedstr, "auto")) {
-			*speed = ETH_LINK_SPEED_AUTONEG;
+			*speed = RTE_ETH_LINK_SPEED_AUTONEG;
 		} else {
 			fprintf(stderr, "Unknown speed parameter\n");
 			return -1;
 		}
 	}
 
-	if (*speed != ETH_LINK_SPEED_AUTONEG)
-		*speed |= ETH_LINK_SPEED_FIXED;
+	if (*speed != RTE_ETH_LINK_SPEED_AUTONEG)
+		*speed |= RTE_ETH_LINK_SPEED_FIXED;
 
 	return 0;
 }
@@ -2166,33 +2166,33 @@ cmd_config_rss_parsed(void *parsed_result,
 	int ret;
 
 	if (!strcmp(res->value, "all"))
-		rss_conf.rss_hf = ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP |
-			ETH_RSS_TCP | ETH_RSS_UDP | ETH_RSS_SCTP |
-			ETH_RSS_L2_PAYLOAD | ETH_RSS_L2TPV3 | ETH_RSS_ESP |
-			ETH_RSS_AH | ETH_RSS_PFCP | ETH_RSS_GTPU |
-			ETH_RSS_ECPRI;
+		rss_conf.rss_hf = RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP |
+			RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP |
+			RTE_ETH_RSS_L2_PAYLOAD | RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP |
+			RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP | RTE_ETH_RSS_GTPU |
+			RTE_ETH_RSS_ECPRI;
 	else if (!strcmp(res->value, "eth"))
-		rss_conf.rss_hf = ETH_RSS_ETH;
+		rss_conf.rss_hf = RTE_ETH_RSS_ETH;
 	else if (!strcmp(res->value, "vlan"))
-		rss_conf.rss_hf = ETH_RSS_VLAN;
+		rss_conf.rss_hf = RTE_ETH_RSS_VLAN;
 	else if (!strcmp(res->value, "ip"))
-		rss_conf.rss_hf = ETH_RSS_IP;
+		rss_conf.rss_hf = RTE_ETH_RSS_IP;
 	else if (!strcmp(res->value, "udp"))
-		rss_conf.rss_hf = ETH_RSS_UDP;
+		rss_conf.rss_hf = RTE_ETH_RSS_UDP;
 	else if (!strcmp(res->value, "tcp"))
-		rss_conf.rss_hf = ETH_RSS_TCP;
+		rss_conf.rss_hf = RTE_ETH_RSS_TCP;
 	else if (!strcmp(res->value, "sctp"))
-		rss_conf.rss_hf = ETH_RSS_SCTP;
+		rss_conf.rss_hf = RTE_ETH_RSS_SCTP;
 	else if (!strcmp(res->value, "ether"))
-		rss_conf.rss_hf = ETH_RSS_L2_PAYLOAD;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2_PAYLOAD;
 	else if (!strcmp(res->value, "port"))
-		rss_conf.rss_hf = ETH_RSS_PORT;
+		rss_conf.rss_hf = RTE_ETH_RSS_PORT;
 	else if (!strcmp(res->value, "vxlan"))
-		rss_conf.rss_hf = ETH_RSS_VXLAN;
+		rss_conf.rss_hf = RTE_ETH_RSS_VXLAN;
 	else if (!strcmp(res->value, "geneve"))
-		rss_conf.rss_hf = ETH_RSS_GENEVE;
+		rss_conf.rss_hf = RTE_ETH_RSS_GENEVE;
 	else if (!strcmp(res->value, "nvgre"))
-		rss_conf.rss_hf = ETH_RSS_NVGRE;
+		rss_conf.rss_hf = RTE_ETH_RSS_NVGRE;
 	else if (!strcmp(res->value, "l3-pre32"))
 		rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE32;
 	else if (!strcmp(res->value, "l3-pre40"))
@@ -2206,46 +2206,46 @@ cmd_config_rss_parsed(void *parsed_result,
 	else if (!strcmp(res->value, "l3-pre96"))
 		rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE96;
 	else if (!strcmp(res->value, "l3-src-only"))
-		rss_conf.rss_hf = ETH_RSS_L3_SRC_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L3_SRC_ONLY;
 	else if (!strcmp(res->value, "l3-dst-only"))
-		rss_conf.rss_hf = ETH_RSS_L3_DST_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L3_DST_ONLY;
 	else if (!strcmp(res->value, "l4-src-only"))
-		rss_conf.rss_hf = ETH_RSS_L4_SRC_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L4_SRC_ONLY;
 	else if (!strcmp(res->value, "l4-dst-only"))
-		rss_conf.rss_hf = ETH_RSS_L4_DST_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L4_DST_ONLY;
 	else if (!strcmp(res->value, "l2-src-only"))
-		rss_conf.rss_hf = ETH_RSS_L2_SRC_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2_SRC_ONLY;
 	else if (!strcmp(res->value, "l2-dst-only"))
-		rss_conf.rss_hf = ETH_RSS_L2_DST_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2_DST_ONLY;
 	else if (!strcmp(res->value, "l2tpv3"))
-		rss_conf.rss_hf = ETH_RSS_L2TPV3;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2TPV3;
 	else if (!strcmp(res->value, "esp"))
-		rss_conf.rss_hf = ETH_RSS_ESP;
+		rss_conf.rss_hf = RTE_ETH_RSS_ESP;
 	else if (!strcmp(res->value, "ah"))
-		rss_conf.rss_hf = ETH_RSS_AH;
+		rss_conf.rss_hf = RTE_ETH_RSS_AH;
 	else if (!strcmp(res->value, "pfcp"))
-		rss_conf.rss_hf = ETH_RSS_PFCP;
+		rss_conf.rss_hf = RTE_ETH_RSS_PFCP;
 	else if (!strcmp(res->value, "pppoe"))
-		rss_conf.rss_hf = ETH_RSS_PPPOE;
+		rss_conf.rss_hf = RTE_ETH_RSS_PPPOE;
 	else if (!strcmp(res->value, "gtpu"))
-		rss_conf.rss_hf = ETH_RSS_GTPU;
+		rss_conf.rss_hf = RTE_ETH_RSS_GTPU;
 	else if (!strcmp(res->value, "ecpri"))
-		rss_conf.rss_hf = ETH_RSS_ECPRI;
+		rss_conf.rss_hf = RTE_ETH_RSS_ECPRI;
 	else if (!strcmp(res->value, "mpls"))
-		rss_conf.rss_hf = ETH_RSS_MPLS;
+		rss_conf.rss_hf = RTE_ETH_RSS_MPLS;
 	else if (!strcmp(res->value, "ipv4-chksum"))
-		rss_conf.rss_hf = ETH_RSS_IPV4_CHKSUM;
+		rss_conf.rss_hf = RTE_ETH_RSS_IPV4_CHKSUM;
 	else if (!strcmp(res->value, "none"))
 		rss_conf.rss_hf = 0;
 	else if (!strcmp(res->value, "level-default")) {
-		rss_hf &= (~ETH_RSS_LEVEL_MASK);
-		rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_PMD_DEFAULT);
+		rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+		rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_PMD_DEFAULT);
 	} else if (!strcmp(res->value, "level-outer")) {
-		rss_hf &= (~ETH_RSS_LEVEL_MASK);
-		rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_OUTERMOST);
+		rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+		rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_OUTERMOST);
 	} else if (!strcmp(res->value, "level-inner")) {
-		rss_hf &= (~ETH_RSS_LEVEL_MASK);
-		rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_INNERMOST);
+		rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+		rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_INNERMOST);
 	} else if (!strcmp(res->value, "default"))
 		use_default = 1;
 	else if (isdigit(res->value[0]) && atoi(res->value) > 0 &&
@@ -2982,8 +2982,8 @@ parse_reta_config(const char *str,
 			return -1;
 		}
 
-		idx = hash_index / RTE_RETA_GROUP_SIZE;
-		shift = hash_index % RTE_RETA_GROUP_SIZE;
+		idx = hash_index / RTE_ETH_RETA_GROUP_SIZE;
+		shift = hash_index % RTE_ETH_RETA_GROUP_SIZE;
 		reta_conf[idx].mask |= (1ULL << shift);
 		reta_conf[idx].reta[shift] = nb_queue;
 	}
@@ -3012,10 +3012,10 @@ cmd_set_rss_reta_parsed(void *parsed_result,
 	} else
 		printf("The reta size of port %d is %u\n",
 			res->port_id, dev_info.reta_size);
-	if (dev_info.reta_size > ETH_RSS_RETA_SIZE_512) {
+	if (dev_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
 		fprintf(stderr,
 			"Currently do not support more than %u entries of redirection table\n",
-			ETH_RSS_RETA_SIZE_512);
+			RTE_ETH_RSS_RETA_SIZE_512);
 		return;
 	}
 
@@ -3086,8 +3086,8 @@ showport_parse_reta_config(struct rte_eth_rss_reta_entry64 *conf,
 	char *end;
 	char *str_fld[8];
 	uint16_t i;
-	uint16_t num = (nb_entries + RTE_RETA_GROUP_SIZE - 1) /
-			RTE_RETA_GROUP_SIZE;
+	uint16_t num = (nb_entries + RTE_ETH_RETA_GROUP_SIZE - 1) /
+			RTE_ETH_RETA_GROUP_SIZE;
 	int ret;
 
 	p = strchr(p0, '(');
@@ -3132,7 +3132,7 @@ cmd_showport_reta_parsed(void *parsed_result,
 	if (ret != 0)
 		return;
 
-	max_reta_size = RTE_MIN(dev_info.reta_size, ETH_RSS_RETA_SIZE_512);
+	max_reta_size = RTE_MIN(dev_info.reta_size, RTE_ETH_RSS_RETA_SIZE_512);
 	if (res->size == 0 || res->size > max_reta_size) {
 		fprintf(stderr, "Invalid redirection table size: %u (1-%u)\n",
 			res->size, max_reta_size);
@@ -3272,7 +3272,7 @@ cmd_config_dcb_parsed(void *parsed_result,
 		return;
 	}
 
-	if ((res->num_tcs != ETH_4_TCS) && (res->num_tcs != ETH_8_TCS)) {
+	if ((res->num_tcs != RTE_ETH_4_TCS) && (res->num_tcs != RTE_ETH_8_TCS)) {
 		fprintf(stderr,
 			"The invalid number of traffic class, only 4 or 8 allowed.\n");
 		return;
@@ -4276,9 +4276,9 @@ cmd_vlan_tpid_parsed(void *parsed_result,
 	enum rte_vlan_type vlan_type;
 
 	if (!strcmp(res->vlan_type, "inner"))
-		vlan_type = ETH_VLAN_TYPE_INNER;
+		vlan_type = RTE_ETH_VLAN_TYPE_INNER;
 	else if (!strcmp(res->vlan_type, "outer"))
-		vlan_type = ETH_VLAN_TYPE_OUTER;
+		vlan_type = RTE_ETH_VLAN_TYPE_OUTER;
 	else {
 		fprintf(stderr, "Unknown vlan type\n");
 		return;
@@ -4615,55 +4615,55 @@ csum_show(int port_id)
 	printf("Parse tunnel is %s\n",
 		(ports[port_id].parse_tunnel) ? "on" : "off");
 	printf("IP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
 	printf("UDP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
 	printf("TCP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
 	printf("SCTP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
 	printf("Outer-Ip checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
 	printf("Outer-Udp checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
 
 	/* display warnings if configuration is not supported by the NIC */
 	ret = eth_dev_info_get_print_err(port_id, &dev_info);
 	if (ret != 0)
 		return;
 
-	if ((tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware IP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware UDP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware TCP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_SCTP_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware SCTP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware outer IP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 			== 0) {
 		fprintf(stderr,
 			"Warning: hardware outer UDP checksum enabled but not supported by port %d\n",
@@ -4713,8 +4713,8 @@ cmd_csum_parsed(void *parsed_result,
 
 		if (!strcmp(res->proto, "ip")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_IPV4_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+						RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 			} else {
 				fprintf(stderr,
 					"IP checksum offload is not supported by port %u\n",
@@ -4722,8 +4722,8 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "udp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_UDP_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"UDP checksum offload is not supported by port %u\n",
@@ -4731,8 +4731,8 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "tcp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_TCP_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"TCP checksum offload is not supported by port %u\n",
@@ -4740,8 +4740,8 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "sctp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_SCTP_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_SCTP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"SCTP checksum offload is not supported by port %u\n",
@@ -4749,9 +4749,9 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "outer-ip")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-					DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+					RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 				csum_offloads |=
-						DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+						RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 			} else {
 				fprintf(stderr,
 					"Outer IP checksum offload is not supported by port %u\n",
@@ -4759,9 +4759,9 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "outer-udp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-					DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+					RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
 				csum_offloads |=
-						DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"Outer UDP checksum offload is not supported by port %u\n",
@@ -4916,7 +4916,7 @@ cmd_tso_set_parsed(void *parsed_result,
 		return;
 
 	if ((ports[res->port_id].tso_segsz != 0) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
 		fprintf(stderr, "Error: TSO is not supported by port %d\n",
 			res->port_id);
 		return;
@@ -4924,11 +4924,11 @@ cmd_tso_set_parsed(void *parsed_result,
 
 	if (ports[res->port_id].tso_segsz == 0) {
 		ports[res->port_id].dev_conf.txmode.offloads &=
-						~DEV_TX_OFFLOAD_TCP_TSO;
+						~RTE_ETH_TX_OFFLOAD_TCP_TSO;
 		printf("TSO for non-tunneled packets is disabled\n");
 	} else {
 		ports[res->port_id].dev_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_TCP_TSO;
+						RTE_ETH_TX_OFFLOAD_TCP_TSO;
 		printf("TSO segment size for non-tunneled packets is %d\n",
 			ports[res->port_id].tso_segsz);
 	}
@@ -4940,7 +4940,7 @@ cmd_tso_set_parsed(void *parsed_result,
 		return;
 
 	if ((ports[res->port_id].tso_segsz != 0) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
 		fprintf(stderr,
 			"Warning: TSO enabled but not supported by port %d\n",
 			res->port_id);
@@ -5011,27 +5011,27 @@ check_tunnel_tso_nic_support(portid_t port_id)
 	if (eth_dev_info_get_print_err(port_id, &dev_info) != 0)
 		return dev_info;
 
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VXLAN_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO))
 		fprintf(stderr,
 			"Warning: VXLAN TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		fprintf(stderr,
 			"Warning: GRE TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPIP_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO))
 		fprintf(stderr,
 			"Warning: IPIP TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
 		fprintf(stderr,
 			"Warning: GENEVE TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IP_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IP_TNL_TSO))
 		fprintf(stderr,
 			"Warning: IP TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO))
 		fprintf(stderr,
 			"Warning: UDP TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
@@ -5059,20 +5059,20 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
 	dev_info = check_tunnel_tso_nic_support(res->port_id);
 	if (ports[res->port_id].tunnel_tso_segsz == 0) {
 		ports[res->port_id].dev_conf.txmode.offloads &=
-			~(DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			  DEV_TX_OFFLOAD_GRE_TNL_TSO |
-			  DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-			  DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-			  DEV_TX_OFFLOAD_IP_TNL_TSO |
-			  DEV_TX_OFFLOAD_UDP_TNL_TSO);
+			~(RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 		printf("TSO for tunneled packets is disabled\n");
 	} else {
-		uint64_t tso_offloads = (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-					 DEV_TX_OFFLOAD_GRE_TNL_TSO |
-					 DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-					 DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-					 DEV_TX_OFFLOAD_IP_TNL_TSO |
-					 DEV_TX_OFFLOAD_UDP_TNL_TSO);
+		uint64_t tso_offloads = (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 
 		ports[res->port_id].dev_conf.txmode.offloads |=
 			(tso_offloads & dev_info.tx_offload_capa);
@@ -5095,7 +5095,7 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
 			fprintf(stderr,
 				"Warning: csum parse_tunnel must be set so that tunneled packets are recognized\n");
 		if (!(ports[res->port_id].dev_conf.txmode.offloads &
-		      DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+		      RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
 			fprintf(stderr,
 				"Warning: csum set outer-ip must be set to hw if outer L3 is IPv4; not necessary for IPv6\n");
 	}
@@ -7227,9 +7227,9 @@ cmd_link_flow_ctrl_show_parsed(void *parsed_result,
 		return;
 	}
 
-	if (fc_conf.mode == RTE_FC_RX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+	if (fc_conf.mode == RTE_ETH_FC_RX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
 		rx_fc_en = true;
-	if (fc_conf.mode == RTE_FC_TX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+	if (fc_conf.mode == RTE_ETH_FC_TX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
 		tx_fc_en = true;
 
 	printf("\n%s Flow control infos for port %-2d %s\n",
@@ -7507,12 +7507,12 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
 
 	/*
 	 * Rx on/off, flow control is enabled/disabled on RX side. This can indicate
-	 * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+	 * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
 	 * Tx on/off, flow control is enabled/disabled on TX side. This can indicate
-	 * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+	 * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
 	 */
 	static enum rte_eth_fc_mode rx_tx_onoff_2_lfc_mode[2][2] = {
-			{RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+			{RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
 	};
 
 	/* Partial command line, retrieve current configuration */
@@ -7525,11 +7525,11 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
 			return;
 		}
 
-		if ((fc_conf.mode == RTE_FC_RX_PAUSE) ||
-		    (fc_conf.mode == RTE_FC_FULL))
+		if ((fc_conf.mode == RTE_ETH_FC_RX_PAUSE) ||
+		    (fc_conf.mode == RTE_ETH_FC_FULL))
 			rx_fc_en = 1;
-		if ((fc_conf.mode == RTE_FC_TX_PAUSE) ||
-		    (fc_conf.mode == RTE_FC_FULL))
+		if ((fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ||
+		    (fc_conf.mode == RTE_ETH_FC_FULL))
 			tx_fc_en = 1;
 	}
 
@@ -7597,12 +7597,12 @@ cmd_priority_flow_ctrl_set_parsed(void *parsed_result,
 
 	/*
 	 * Rx on/off, flow control is enabled/disabled on RX side. This can indicate
-	 * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+	 * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
 	 * Tx on/off, flow control is enabled/disabled on TX side. This can indicate
-	 * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+	 * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
 	 */
 	static enum rte_eth_fc_mode rx_tx_onoff_2_pfc_mode[2][2] = {
-		{RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+		{RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
 	};
 
 	memset(&pfc_conf, 0, sizeof(struct rte_eth_pfc_conf));
@@ -9250,13 +9250,13 @@ cmd_set_vf_rxmode_parsed(void *parsed_result,
 	int is_on = (strcmp(res->on, "on") == 0) ? 1 : 0;
 	if (!strcmp(res->what,"rxmode")) {
 		if (!strcmp(res->mode, "AUPE"))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_UNTAG;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_UNTAG;
 		else if (!strcmp(res->mode, "ROPE"))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_HASH_UC;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_HASH_UC;
 		else if (!strcmp(res->mode, "BAM"))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_BROADCAST;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_BROADCAST;
 		else if (!strncmp(res->mode, "MPE",3))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_MULTICAST;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_MULTICAST;
 	}
 
 	RTE_SET_USED(is_on);
@@ -9656,7 +9656,7 @@ cmd_tunnel_udp_config_parsed(void *parsed_result,
 	int ret;
 
 	tunnel_udp.udp_port = res->udp_port;
-	tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+	tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
 
 	if (!strcmp(res->what, "add"))
 		ret = rte_eth_dev_udp_tunnel_port_add(res->port_id,
@@ -9722,13 +9722,13 @@ cmd_cfg_tunnel_udp_port_parsed(void *parsed_result,
 	tunnel_udp.udp_port = res->udp_port;
 
 	if (!strcmp(res->tunnel_type, "vxlan")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
 	} else if (!strcmp(res->tunnel_type, "geneve")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_GENEVE;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_GENEVE;
 	} else if (!strcmp(res->tunnel_type, "vxlan-gpe")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN_GPE;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN_GPE;
 	} else if (!strcmp(res->tunnel_type, "ecpri")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_ECPRI;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_ECPRI;
 	} else {
 		fprintf(stderr, "Invalid tunnel type\n");
 		return;
@@ -11859,7 +11859,7 @@ cmd_set_macsec_offload_on_parsed(
 	if (ret != 0)
 		return;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
 #ifdef RTE_NET_IXGBE
 		ret = rte_pmd_ixgbe_macsec_enable(port_id, en, rp);
 #endif
@@ -11870,7 +11870,7 @@ cmd_set_macsec_offload_on_parsed(
 	switch (ret) {
 	case 0:
 		ports[port_id].dev_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_MACSEC_INSERT;
+						RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 		cmd_reconfig_device_queue(port_id, 1, 1);
 		break;
 	case -ENODEV:
@@ -11956,7 +11956,7 @@ cmd_set_macsec_offload_off_parsed(
 	if (ret != 0)
 		return;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
 #ifdef RTE_NET_IXGBE
 		ret = rte_pmd_ixgbe_macsec_disable(port_id);
 #endif
@@ -11964,7 +11964,7 @@ cmd_set_macsec_offload_off_parsed(
 	switch (ret) {
 	case 0:
 		ports[port_id].dev_conf.txmode.offloads &=
-						~DEV_TX_OFFLOAD_MACSEC_INSERT;
+						~RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 		cmd_reconfig_device_queue(port_id, 1, 1);
 		break;
 	case -ENODEV:
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index cad78350dcc9..a18871d461c4 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -86,62 +86,62 @@ static const struct {
 };
 
 const struct rss_type_info rss_type_table[] = {
-	{ "all", ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP | ETH_RSS_TCP |
-		ETH_RSS_UDP | ETH_RSS_SCTP | ETH_RSS_L2_PAYLOAD |
-		ETH_RSS_L2TPV3 | ETH_RSS_ESP | ETH_RSS_AH | ETH_RSS_PFCP |
-		ETH_RSS_GTPU | ETH_RSS_ECPRI | ETH_RSS_MPLS},
+	{ "all", RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP |
+		RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_L2_PAYLOAD |
+		RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP | RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP |
+		RTE_ETH_RSS_GTPU | RTE_ETH_RSS_ECPRI | RTE_ETH_RSS_MPLS},
 	{ "none", 0 },
-	{ "eth", ETH_RSS_ETH },
-	{ "l2-src-only", ETH_RSS_L2_SRC_ONLY },
-	{ "l2-dst-only", ETH_RSS_L2_DST_ONLY },
-	{ "vlan", ETH_RSS_VLAN },
-	{ "s-vlan", ETH_RSS_S_VLAN },
-	{ "c-vlan", ETH_RSS_C_VLAN },
-	{ "ipv4", ETH_RSS_IPV4 },
-	{ "ipv4-frag", ETH_RSS_FRAG_IPV4 },
-	{ "ipv4-tcp", ETH_RSS_NONFRAG_IPV4_TCP },
-	{ "ipv4-udp", ETH_RSS_NONFRAG_IPV4_UDP },
-	{ "ipv4-sctp", ETH_RSS_NONFRAG_IPV4_SCTP },
-	{ "ipv4-other", ETH_RSS_NONFRAG_IPV4_OTHER },
-	{ "ipv6", ETH_RSS_IPV6 },
-	{ "ipv6-frag", ETH_RSS_FRAG_IPV6 },
-	{ "ipv6-tcp", ETH_RSS_NONFRAG_IPV6_TCP },
-	{ "ipv6-udp", ETH_RSS_NONFRAG_IPV6_UDP },
-	{ "ipv6-sctp", ETH_RSS_NONFRAG_IPV6_SCTP },
-	{ "ipv6-other", ETH_RSS_NONFRAG_IPV6_OTHER },
-	{ "l2-payload", ETH_RSS_L2_PAYLOAD },
-	{ "ipv6-ex", ETH_RSS_IPV6_EX },
-	{ "ipv6-tcp-ex", ETH_RSS_IPV6_TCP_EX },
-	{ "ipv6-udp-ex", ETH_RSS_IPV6_UDP_EX },
-	{ "port", ETH_RSS_PORT },
-	{ "vxlan", ETH_RSS_VXLAN },
-	{ "geneve", ETH_RSS_GENEVE },
-	{ "nvgre", ETH_RSS_NVGRE },
-	{ "ip", ETH_RSS_IP },
-	{ "udp", ETH_RSS_UDP },
-	{ "tcp", ETH_RSS_TCP },
-	{ "sctp", ETH_RSS_SCTP },
-	{ "tunnel", ETH_RSS_TUNNEL },
+	{ "eth", RTE_ETH_RSS_ETH },
+	{ "l2-src-only", RTE_ETH_RSS_L2_SRC_ONLY },
+	{ "l2-dst-only", RTE_ETH_RSS_L2_DST_ONLY },
+	{ "vlan", RTE_ETH_RSS_VLAN },
+	{ "s-vlan", RTE_ETH_RSS_S_VLAN },
+	{ "c-vlan", RTE_ETH_RSS_C_VLAN },
+	{ "ipv4", RTE_ETH_RSS_IPV4 },
+	{ "ipv4-frag", RTE_ETH_RSS_FRAG_IPV4 },
+	{ "ipv4-tcp", RTE_ETH_RSS_NONFRAG_IPV4_TCP },
+	{ "ipv4-udp", RTE_ETH_RSS_NONFRAG_IPV4_UDP },
+	{ "ipv4-sctp", RTE_ETH_RSS_NONFRAG_IPV4_SCTP },
+	{ "ipv4-other", RTE_ETH_RSS_NONFRAG_IPV4_OTHER },
+	{ "ipv6", RTE_ETH_RSS_IPV6 },
+	{ "ipv6-frag", RTE_ETH_RSS_FRAG_IPV6 },
+	{ "ipv6-tcp", RTE_ETH_RSS_NONFRAG_IPV6_TCP },
+	{ "ipv6-udp", RTE_ETH_RSS_NONFRAG_IPV6_UDP },
+	{ "ipv6-sctp", RTE_ETH_RSS_NONFRAG_IPV6_SCTP },
+	{ "ipv6-other", RTE_ETH_RSS_NONFRAG_IPV6_OTHER },
+	{ "l2-payload", RTE_ETH_RSS_L2_PAYLOAD },
+	{ "ipv6-ex", RTE_ETH_RSS_IPV6_EX },
+	{ "ipv6-tcp-ex", RTE_ETH_RSS_IPV6_TCP_EX },
+	{ "ipv6-udp-ex", RTE_ETH_RSS_IPV6_UDP_EX },
+	{ "port", RTE_ETH_RSS_PORT },
+	{ "vxlan", RTE_ETH_RSS_VXLAN },
+	{ "geneve", RTE_ETH_RSS_GENEVE },
+	{ "nvgre", RTE_ETH_RSS_NVGRE },
+	{ "ip", RTE_ETH_RSS_IP },
+	{ "udp", RTE_ETH_RSS_UDP },
+	{ "tcp", RTE_ETH_RSS_TCP },
+	{ "sctp", RTE_ETH_RSS_SCTP },
+	{ "tunnel", RTE_ETH_RSS_TUNNEL },
 	{ "l3-pre32", RTE_ETH_RSS_L3_PRE32 },
 	{ "l3-pre40", RTE_ETH_RSS_L3_PRE40 },
 	{ "l3-pre48", RTE_ETH_RSS_L3_PRE48 },
 	{ "l3-pre56", RTE_ETH_RSS_L3_PRE56 },
 	{ "l3-pre64", RTE_ETH_RSS_L3_PRE64 },
 	{ "l3-pre96", RTE_ETH_RSS_L3_PRE96 },
-	{ "l3-src-only", ETH_RSS_L3_SRC_ONLY },
-	{ "l3-dst-only", ETH_RSS_L3_DST_ONLY },
-	{ "l4-src-only", ETH_RSS_L4_SRC_ONLY },
-	{ "l4-dst-only", ETH_RSS_L4_DST_ONLY },
-	{ "esp", ETH_RSS_ESP },
-	{ "ah", ETH_RSS_AH },
-	{ "l2tpv3", ETH_RSS_L2TPV3 },
-	{ "pfcp", ETH_RSS_PFCP },
-	{ "pppoe", ETH_RSS_PPPOE },
-	{ "gtpu", ETH_RSS_GTPU },
-	{ "ecpri", ETH_RSS_ECPRI },
-	{ "mpls", ETH_RSS_MPLS },
-	{ "ipv4-chksum", ETH_RSS_IPV4_CHKSUM },
-	{ "l4-chksum", ETH_RSS_L4_CHKSUM },
+	{ "l3-src-only", RTE_ETH_RSS_L3_SRC_ONLY },
+	{ "l3-dst-only", RTE_ETH_RSS_L3_DST_ONLY },
+	{ "l4-src-only", RTE_ETH_RSS_L4_SRC_ONLY },
+	{ "l4-dst-only", RTE_ETH_RSS_L4_DST_ONLY },
+	{ "esp", RTE_ETH_RSS_ESP },
+	{ "ah", RTE_ETH_RSS_AH },
+	{ "l2tpv3", RTE_ETH_RSS_L2TPV3 },
+	{ "pfcp", RTE_ETH_RSS_PFCP },
+	{ "pppoe", RTE_ETH_RSS_PPPOE },
+	{ "gtpu", RTE_ETH_RSS_GTPU },
+	{ "ecpri", RTE_ETH_RSS_ECPRI },
+	{ "mpls", RTE_ETH_RSS_MPLS },
+	{ "ipv4-chksum", RTE_ETH_RSS_IPV4_CHKSUM },
+	{ "l4-chksum", RTE_ETH_RSS_L4_CHKSUM },
 	{ NULL, 0 },
 };
 
@@ -538,39 +538,39 @@ static void
 device_infos_display_speeds(uint32_t speed_capa)
 {
 	printf("\n\tDevice speed capability:");
-	if (speed_capa == ETH_LINK_SPEED_AUTONEG)
+	if (speed_capa == RTE_ETH_LINK_SPEED_AUTONEG)
 		printf(" Autonegotiate (all speeds)");
-	if (speed_capa & ETH_LINK_SPEED_FIXED)
+	if (speed_capa & RTE_ETH_LINK_SPEED_FIXED)
 		printf(" Disable autonegotiate (fixed speed)  ");
-	if (speed_capa & ETH_LINK_SPEED_10M_HD)
+	if (speed_capa & RTE_ETH_LINK_SPEED_10M_HD)
 		printf(" 10 Mbps half-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_10M)
+	if (speed_capa & RTE_ETH_LINK_SPEED_10M)
 		printf(" 10 Mbps full-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_100M_HD)
+	if (speed_capa & RTE_ETH_LINK_SPEED_100M_HD)
 		printf(" 100 Mbps half-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_100M)
+	if (speed_capa & RTE_ETH_LINK_SPEED_100M)
 		printf(" 100 Mbps full-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_1G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_1G)
 		printf(" 1 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_2_5G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_2_5G)
 		printf(" 2.5 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_5G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_5G)
 		printf(" 5 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_10G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_10G)
 		printf(" 10 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_20G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_20G)
 		printf(" 20 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_25G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_25G)
 		printf(" 25 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_40G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_40G)
 		printf(" 40 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_50G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_50G)
 		printf(" 50 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_56G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_56G)
 		printf(" 56 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_100G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_100G)
 		printf(" 100 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_200G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_200G)
 		printf(" 200 Gbps  ");
 }
 
@@ -723,9 +723,9 @@ port_infos_display(portid_t port_id)
 
 	printf("\nLink status: %s\n", (link.link_status) ? ("up") : ("down"));
 	printf("Link speed: %s\n", rte_eth_link_speed_to_str(link.link_speed));
-	printf("Link duplex: %s\n", (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+	printf("Link duplex: %s\n", (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 	       ("full-duplex") : ("half-duplex"));
-	printf("Autoneg status: %s\n", (link.link_autoneg == ETH_LINK_AUTONEG) ?
+	printf("Autoneg status: %s\n", (link.link_autoneg == RTE_ETH_LINK_AUTONEG) ?
 	       ("On") : ("Off"));
 
 	if (!rte_eth_dev_get_mtu(port_id, &mtu))
@@ -743,22 +743,22 @@ port_infos_display(portid_t port_id)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 	if (vlan_offload >= 0){
 		printf("VLAN offload: \n");
-		if (vlan_offload & ETH_VLAN_STRIP_OFFLOAD)
+		if (vlan_offload & RTE_ETH_VLAN_STRIP_OFFLOAD)
 			printf("  strip on, ");
 		else
 			printf("  strip off, ");
 
-		if (vlan_offload & ETH_VLAN_FILTER_OFFLOAD)
+		if (vlan_offload & RTE_ETH_VLAN_FILTER_OFFLOAD)
 			printf("filter on, ");
 		else
 			printf("filter off, ");
 
-		if (vlan_offload & ETH_VLAN_EXTEND_OFFLOAD)
+		if (vlan_offload & RTE_ETH_VLAN_EXTEND_OFFLOAD)
 			printf("extend on, ");
 		else
 			printf("extend off, ");
 
-		if (vlan_offload & ETH_QINQ_STRIP_OFFLOAD)
+		if (vlan_offload & RTE_ETH_QINQ_STRIP_OFFLOAD)
 			printf("qinq strip on\n");
 		else
 			printf("qinq strip off\n");
@@ -2953,8 +2953,8 @@ port_rss_reta_info(portid_t port_id,
 	}
 
 	for (i = 0; i < nb_entries; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (!(reta_conf[idx].mask & (1ULL << shift)))
 			continue;
 		printf("RSS RETA configuration: hash index=%u, queue=%u\n",
@@ -3427,7 +3427,7 @@ dcb_fwd_config_setup(void)
 	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
 		fwd_lcores[lc_id]->stream_nb = 0;
 		fwd_lcores[lc_id]->stream_idx = sm_id;
-		for (i = 0; i < ETH_MAX_VMDQ_POOL; i++) {
+		for (i = 0; i < RTE_ETH_MAX_VMDQ_POOL; i++) {
 			/* if the nb_queue is zero, means this tc is
 			 * not enabled on the POOL
 			 */
@@ -4490,11 +4490,11 @@ vlan_extend_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_VLAN_EXTEND_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+		vlan_offload |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	} else {
-		vlan_offload &= ~ETH_VLAN_EXTEND_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
+		vlan_offload &= ~RTE_ETH_VLAN_EXTEND_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4520,11 +4520,11 @@ rx_vlan_strip_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
-		vlan_offload &= ~ETH_VLAN_STRIP_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		vlan_offload &= ~RTE_ETH_VLAN_STRIP_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4565,11 +4565,11 @@ rx_vlan_filter_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_VLAN_FILTER_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+		vlan_offload |= RTE_ETH_VLAN_FILTER_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	} else {
-		vlan_offload &= ~ETH_VLAN_FILTER_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+		vlan_offload &= ~RTE_ETH_VLAN_FILTER_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4595,11 +4595,11 @@ rx_vlan_qinq_strip_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_QINQ_STRIP_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+		vlan_offload |= RTE_ETH_QINQ_STRIP_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 	} else {
-		vlan_offload &= ~ETH_QINQ_STRIP_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+		vlan_offload &= ~RTE_ETH_QINQ_STRIP_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4669,7 +4669,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
 		return;
 
 	if (ports[port_id].dev_conf.txmode.offloads &
-	    DEV_TX_OFFLOAD_QINQ_INSERT) {
+	    RTE_ETH_TX_OFFLOAD_QINQ_INSERT) {
 		fprintf(stderr, "Error, as QinQ has been enabled.\n");
 		return;
 	}
@@ -4678,7 +4678,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
 	if (ret != 0)
 		return;
 
-	if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VLAN_INSERT) == 0) {
+	if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) == 0) {
 		fprintf(stderr,
 			"Error: vlan insert is not supported by port %d\n",
 			port_id);
@@ -4686,7 +4686,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
 	}
 
 	tx_vlan_reset(port_id);
-	ports[port_id].dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
+	ports[port_id].dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	ports[port_id].tx_vlan_id = vlan_id;
 }
 
@@ -4705,7 +4705,7 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
 	if (ret != 0)
 		return;
 
-	if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_QINQ_INSERT) == 0) {
+	if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) == 0) {
 		fprintf(stderr,
 			"Error: qinq insert not supported by port %d\n",
 			port_id);
@@ -4713,8 +4713,8 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
 	}
 
 	tx_vlan_reset(port_id);
-	ports[port_id].dev_conf.txmode.offloads |= (DEV_TX_OFFLOAD_VLAN_INSERT |
-						    DEV_TX_OFFLOAD_QINQ_INSERT);
+	ports[port_id].dev_conf.txmode.offloads |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+						    RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
 	ports[port_id].tx_vlan_id = vlan_id;
 	ports[port_id].tx_vlan_id_outer = vlan_id_outer;
 }
@@ -4723,8 +4723,8 @@ void
 tx_vlan_reset(portid_t port_id)
 {
 	ports[port_id].dev_conf.txmode.offloads &=
-				~(DEV_TX_OFFLOAD_VLAN_INSERT |
-				  DEV_TX_OFFLOAD_QINQ_INSERT);
+				~(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				  RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
 	ports[port_id].tx_vlan_id = 0;
 	ports[port_id].tx_vlan_id_outer = 0;
 }
@@ -5130,7 +5130,7 @@ set_queue_rate_limit(portid_t port_id, uint16_t queue_idx, uint16_t rate)
 	ret = eth_link_get_nowait_print_err(port_id, &link);
 	if (ret < 0)
 		return 1;
-	if (link.link_speed != ETH_SPEED_NUM_UNKNOWN &&
+	if (link.link_speed != RTE_ETH_SPEED_NUM_UNKNOWN &&
 	    rate > link.link_speed) {
 		fprintf(stderr,
 			"Invalid rate value:%u bigger than link speed: %u\n",
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 090797318a35..75b24487e72e 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -485,7 +485,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		if (info->l4_proto == IPPROTO_TCP && tso_segsz) {
 			ol_flags |= PKT_TX_IP_CKSUM;
 		} else {
-			if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+			if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
 				ol_flags |= PKT_TX_IP_CKSUM;
 			} else {
 				ipv4_hdr->hdr_checksum = 0;
@@ -502,7 +502,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		udp_hdr = (struct rte_udp_hdr *)((char *)l3_hdr + info->l3_len);
 		/* do not recalculate udp cksum if it was 0 */
 		if (udp_hdr->dgram_cksum != 0) {
-			if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+			if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
 				ol_flags |= PKT_TX_UDP_CKSUM;
 			} else {
 				udp_hdr->dgram_cksum = 0;
@@ -517,7 +517,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		tcp_hdr = (struct rte_tcp_hdr *)((char *)l3_hdr + info->l3_len);
 		if (tso_segsz)
 			ol_flags |= PKT_TX_TCP_SEG;
-		else if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+		else if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
 			ol_flags |= PKT_TX_TCP_CKSUM;
 		} else {
 			tcp_hdr->cksum = 0;
@@ -532,7 +532,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 			((char *)l3_hdr + info->l3_len);
 		/* sctp payload must be a multiple of 4 to be
 		 * offloaded */
-		if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
+		if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
 			((ipv4_hdr->total_length & 0x3) == 0)) {
 			ol_flags |= PKT_TX_SCTP_CKSUM;
 		} else {
@@ -559,7 +559,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 		ipv4_hdr->hdr_checksum = 0;
 		ol_flags |= PKT_TX_OUTER_IPV4;
 
-		if (tx_offloads	& DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+		if (tx_offloads	& RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 			ol_flags |= PKT_TX_OUTER_IP_CKSUM;
 		else
 			ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
@@ -576,7 +576,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 		ol_flags |= PKT_TX_TCP_SEG;
 
 	/* Skip SW outer UDP checksum generation if HW supports it */
-	if (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) {
 		if (info->outer_ethertype == _htons(RTE_ETHER_TYPE_IPV4))
 			udp_hdr->dgram_cksum
 				= rte_ipv4_phdr_cksum(ipv4_hdr, ol_flags);
@@ -959,9 +959,9 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		if (info.is_tunnel == 1) {
 			if (info.tunnel_tso_segsz ||
 			    (tx_offloads &
-			     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+			     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
 			    (tx_offloads &
-			     DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+			     RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
 				m->outer_l2_len = info.outer_l2_len;
 				m->outer_l3_len = info.outer_l3_len;
 				m->l2_len = info.l2_len;
@@ -1022,19 +1022,19 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 					rte_be_to_cpu_16(info.outer_ethertype),
 					info.outer_l3_len);
 			/* dump tx packet info */
-			if ((tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-					    DEV_TX_OFFLOAD_UDP_CKSUM |
-					    DEV_TX_OFFLOAD_TCP_CKSUM |
-					    DEV_TX_OFFLOAD_SCTP_CKSUM)) ||
+			if ((tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+					    RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+					    RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+					    RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) ||
 				info.tso_segsz != 0)
 				printf("tx: m->l2_len=%d m->l3_len=%d "
 					"m->l4_len=%d\n",
 					m->l2_len, m->l3_len, m->l4_len);
 			if (info.is_tunnel == 1) {
 				if ((tx_offloads &
-				    DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+				    RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
 				    (tx_offloads &
-				    DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
+				    RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
 				    (tx_ol_flags & PKT_TX_OUTER_IPV6))
 					printf("tx: m->outer_l2_len=%d "
 						"m->outer_l3_len=%d\n",
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 7ebed9fed334..03d026dec169 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -99,11 +99,11 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 	vlan_tci_outer = ports[fs->tx_port].tx_vlan_id_outer;
 
 	tx_offloads = ports[fs->tx_port].dev_conf.txmode.offloads;
-	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ol_flags |= PKT_TX_VLAN_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		ol_flags |= PKT_TX_QINQ_PKT;
-	if (tx_offloads	& DEV_TX_OFFLOAD_MACSEC_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 
 	for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index ee76df7f0323..57e00bca20e7 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -72,11 +72,11 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 	fs->rx_packets += nb_rx;
 	txp = &ports[fs->tx_port];
 	tx_offloads = txp->dev_conf.txmode.offloads;
-	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ol_flags = PKT_TX_VLAN_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		ol_flags |= PKT_TX_QINQ_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 	for (i = 0; i < nb_rx; i++) {
 		if (likely(i < nb_rx - 1))
diff --git a/app/test-pmd/macswap_common.h b/app/test-pmd/macswap_common.h
index 7e9a3590a436..7ade9a686b7c 100644
--- a/app/test-pmd/macswap_common.h
+++ b/app/test-pmd/macswap_common.h
@@ -10,11 +10,11 @@ ol_flags_init(uint64_t tx_offload)
 {
 	uint64_t ol_flags = 0;
 
-	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_VLAN_INSERT) ?
+	ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) ?
 			PKT_TX_VLAN : 0;
-	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_QINQ_INSERT) ?
+	ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) ?
 			PKT_TX_QINQ : 0;
-	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_MACSEC_INSERT) ?
+	ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) ?
 			PKT_TX_MACSEC : 0;
 
 	return ol_flags;
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index afc75f6bd213..cb40917077ea 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -547,29 +547,29 @@ parse_xstats_list(const char *in_str, struct rte_eth_xstat_name **xstats,
 static int
 parse_link_speed(int n)
 {
-	uint32_t speed = ETH_LINK_SPEED_FIXED;
+	uint32_t speed = RTE_ETH_LINK_SPEED_FIXED;
 
 	switch (n) {
 	case 1000:
-		speed |= ETH_LINK_SPEED_1G;
+		speed |= RTE_ETH_LINK_SPEED_1G;
 		break;
 	case 10000:
-		speed |= ETH_LINK_SPEED_10G;
+		speed |= RTE_ETH_LINK_SPEED_10G;
 		break;
 	case 25000:
-		speed |= ETH_LINK_SPEED_25G;
+		speed |= RTE_ETH_LINK_SPEED_25G;
 		break;
 	case 40000:
-		speed |= ETH_LINK_SPEED_40G;
+		speed |= RTE_ETH_LINK_SPEED_40G;
 		break;
 	case 50000:
-		speed |= ETH_LINK_SPEED_50G;
+		speed |= RTE_ETH_LINK_SPEED_50G;
 		break;
 	case 100000:
-		speed |= ETH_LINK_SPEED_100G;
+		speed |= RTE_ETH_LINK_SPEED_100G;
 		break;
 	case 200000:
-		speed |= ETH_LINK_SPEED_200G;
+		speed |= RTE_ETH_LINK_SPEED_200G;
 		break;
 	case 100:
 	case 10:
@@ -1002,13 +1002,13 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "pkt-filter-size")) {
 				if (!strcmp(optarg, "64K"))
 					fdir_conf.pballoc =
-						RTE_FDIR_PBALLOC_64K;
+						RTE_ETH_FDIR_PBALLOC_64K;
 				else if (!strcmp(optarg, "128K"))
 					fdir_conf.pballoc =
-						RTE_FDIR_PBALLOC_128K;
+						RTE_ETH_FDIR_PBALLOC_128K;
 				else if (!strcmp(optarg, "256K"))
 					fdir_conf.pballoc =
-						RTE_FDIR_PBALLOC_256K;
+						RTE_ETH_FDIR_PBALLOC_256K;
 				else
 					rte_exit(EXIT_FAILURE, "pkt-filter-size %s invalid -"
 						 " must be: 64K or 128K or 256K\n",
@@ -1050,34 +1050,34 @@ launch_args_parse(int argc, char** argv)
 			}
 #endif
 			if (!strcmp(lgopts[opt_idx].name, "disable-crc-strip"))
-				rx_offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 			if (!strcmp(lgopts[opt_idx].name, "enable-lro"))
-				rx_offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 			if (!strcmp(lgopts[opt_idx].name, "enable-scatter"))
-				rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 			if (!strcmp(lgopts[opt_idx].name, "enable-rx-cksum"))
-				rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-rx-timestamp"))
-				rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 			if (!strcmp(lgopts[opt_idx].name, "enable-hw-vlan"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-vlan-filter"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-vlan-strip"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-vlan-extend"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-qinq-strip"))
-				rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 
 			if (!strcmp(lgopts[opt_idx].name, "enable-drop-en"))
 				rx_drop_en = 1;
@@ -1099,13 +1099,13 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "forward-mode"))
 				set_pkt_forwarding_mode(optarg);
 			if (!strcmp(lgopts[opt_idx].name, "rss-ip"))
-				rss_hf = ETH_RSS_IP;
+				rss_hf = RTE_ETH_RSS_IP;
 			if (!strcmp(lgopts[opt_idx].name, "rss-udp"))
-				rss_hf = ETH_RSS_UDP;
+				rss_hf = RTE_ETH_RSS_UDP;
 			if (!strcmp(lgopts[opt_idx].name, "rss-level-inner"))
-				rss_hf |= ETH_RSS_LEVEL_INNERMOST;
+				rss_hf |= RTE_ETH_RSS_LEVEL_INNERMOST;
 			if (!strcmp(lgopts[opt_idx].name, "rss-level-outer"))
-				rss_hf |= ETH_RSS_LEVEL_OUTERMOST;
+				rss_hf |= RTE_ETH_RSS_LEVEL_OUTERMOST;
 			if (!strcmp(lgopts[opt_idx].name, "rxq")) {
 				n = atoi(optarg);
 				if (n >= 0 && check_nb_rxq((queueid_t)n) == 0)
@@ -1495,12 +1495,12 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "rx-mq-mode")) {
 				char *end = NULL;
 				n = strtoul(optarg, &end, 16);
-				if (n >= 0 && n <= ETH_MQ_RX_VMDQ_DCB_RSS)
+				if (n >= 0 && n <= RTE_ETH_MQ_RX_VMDQ_DCB_RSS)
 					rx_mq_mode = (enum rte_eth_rx_mq_mode)n;
 				else
 					rte_exit(EXIT_FAILURE,
 						 "rx-mq-mode must be >= 0 and <= %d\n",
-						 ETH_MQ_RX_VMDQ_DCB_RSS);
+						 RTE_ETH_MQ_RX_VMDQ_DCB_RSS);
 			}
 			if (!strcmp(lgopts[opt_idx].name, "record-core-cycles"))
 				record_core_cycles = 1;
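
parse_link_speed() keeps its original logic: one RTE_ETH_LINK_SPEED_*
capability bit is ORed with RTE_ETH_LINK_SPEED_FIXED, which disables
autonegotiation for that speed. A hypothetical application would apply the
resulting mask through the link_speeds field of rte_eth_conf:

    /* Sketch: force a fixed 10G link on port_id (names are assumptions). */
    struct rte_eth_conf conf = { 0 };

    conf.link_speeds = RTE_ETH_LINK_SPEED_FIXED | RTE_ETH_LINK_SPEED_10G;
    if (rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf) != 0)
            rte_exit(EXIT_FAILURE, "cannot configure port %u\n", port_id);
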
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 2b835a27bcd9..a66dfb297c65 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -349,7 +349,7 @@ uint64_t noisy_lkup_num_reads_writes;
 /*
  * Receive Side Scaling (RSS) configuration.
  */
-uint64_t rss_hf = ETH_RSS_IP; /* RSS IP by default. */
+uint64_t rss_hf = RTE_ETH_RSS_IP; /* RSS IP by default. */
 
 /*
  * Port topology configuration
@@ -460,12 +460,12 @@ lcoreid_t latencystats_lcore_id = -1;
 struct rte_eth_rxmode rx_mode;
 
 struct rte_eth_txmode tx_mode = {
-	.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
+	.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
 };
 
-struct rte_fdir_conf fdir_conf = {
+struct rte_eth_fdir_conf fdir_conf = {
 	.mode = RTE_FDIR_MODE_NONE,
-	.pballoc = RTE_FDIR_PBALLOC_64K,
+	.pballoc = RTE_ETH_FDIR_PBALLOC_64K,
 	.status = RTE_FDIR_REPORT_STATUS,
 	.mask = {
 		.vlan_tci_mask = 0xFFEF,
@@ -524,7 +524,7 @@ uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
 /*
  * hexadecimal bitmask of RX mq mode can be enabled.
  */
-enum rte_eth_rx_mq_mode rx_mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
+enum rte_eth_rx_mq_mode rx_mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
 
 /*
  * Used to set forced link speed
@@ -1578,9 +1578,9 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
 	if (ret != 0)
 		rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
 
-	if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(port->dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		port->dev_conf.txmode.offloads &=
-			~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Apply Rx offloads configuration */
 	for (i = 0; i < port->dev_info.max_rx_queues; i++)
@@ -1717,8 +1717,8 @@ init_config(void)
 
 	init_port_config();
 
-	gso_types = DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_UDP_TSO;
+	gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO;
 	/*
 	 * Records which Mbuf pool to use by each logical core, if needed.
 	 */
@@ -3466,7 +3466,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -3769,17 +3769,17 @@ init_port_config(void)
 			if (port->dev_conf.rx_adv_conf.rss_conf.rss_hf != 0) {
 				port->dev_conf.rxmode.mq_mode =
 					(enum rte_eth_rx_mq_mode)
-						(rx_mq_mode & ETH_MQ_RX_RSS);
+						(rx_mq_mode & RTE_ETH_MQ_RX_RSS);
 			} else {
-				port->dev_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+				port->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
 				port->dev_conf.rxmode.offloads &=
-						~DEV_RX_OFFLOAD_RSS_HASH;
+						~RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 				for (i = 0;
 				     i < port->dev_info.nb_rx_queues;
 				     i++)
 					port->rx_conf[i].offloads &=
-						~DEV_RX_OFFLOAD_RSS_HASH;
+						~RTE_ETH_RX_OFFLOAD_RSS_HASH;
 			}
 		}
 
@@ -3867,9 +3867,9 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 		vmdq_rx_conf->enable_default_pool = 0;
 		vmdq_rx_conf->default_pool = 0;
 		vmdq_rx_conf->nb_queue_pools =
-			(num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+			(num_tcs ==  RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
 		vmdq_tx_conf->nb_queue_pools =
-			(num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+			(num_tcs ==  RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
 
 		vmdq_rx_conf->nb_pool_maps = vmdq_rx_conf->nb_queue_pools;
 		for (i = 0; i < vmdq_rx_conf->nb_pool_maps; i++) {
@@ -3877,7 +3877,7 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 			vmdq_rx_conf->pool_map[i].pools =
 				1 << (i % vmdq_rx_conf->nb_queue_pools);
 		}
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			vmdq_rx_conf->dcb_tc[i] = i % num_tcs;
 			vmdq_tx_conf->dcb_tc[i] = i % num_tcs;
 		}
@@ -3885,8 +3885,8 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 		/* set DCB mode of RX and TX of multiple queues */
 		eth_conf->rxmode.mq_mode =
 				(enum rte_eth_rx_mq_mode)
-					(rx_mq_mode & ETH_MQ_RX_VMDQ_DCB);
-		eth_conf->txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+					(rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB);
+		eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
 	} else {
 		struct rte_eth_dcb_rx_conf *rx_conf =
 				&eth_conf->rx_adv_conf.dcb_rx_conf;
@@ -3902,23 +3902,23 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 		rx_conf->nb_tcs = num_tcs;
 		tx_conf->nb_tcs = num_tcs;
 
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			rx_conf->dcb_tc[i] = i % num_tcs;
 			tx_conf->dcb_tc[i] = i % num_tcs;
 		}
 
 		eth_conf->rxmode.mq_mode =
 				(enum rte_eth_rx_mq_mode)
-					(rx_mq_mode & ETH_MQ_RX_DCB_RSS);
+					(rx_mq_mode & RTE_ETH_MQ_RX_DCB_RSS);
 		eth_conf->rx_adv_conf.rss_conf = rss_conf;
-		eth_conf->txmode.mq_mode = ETH_MQ_TX_DCB;
+		eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_DCB;
 	}
 
 	if (pfc_en)
 		eth_conf->dcb_capability_en =
-				ETH_DCB_PG_SUPPORT | ETH_DCB_PFC_SUPPORT;
+				RTE_ETH_DCB_PG_SUPPORT | RTE_ETH_DCB_PFC_SUPPORT;
 	else
-		eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
+		eth_conf->dcb_capability_en = RTE_ETH_DCB_PG_SUPPORT;
 
 	return 0;
 }
@@ -3947,7 +3947,7 @@ init_port_dcb_config(portid_t pid,
 	retval = get_eth_dcb_conf(pid, &port_conf, dcb_mode, num_tcs, pfc_en);
 	if (retval < 0)
 		return retval;
-	port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	/* re-configure the device . */
 	retval = rte_eth_dev_configure(pid, nb_rxq, nb_rxq, &port_conf);
@@ -3997,7 +3997,7 @@ init_port_dcb_config(portid_t pid,
 
 	rxtx_port_config(pid);
 	/* VLAN filter */
-	rte_port->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	rte_port->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	for (i = 0; i < RTE_DIM(vlan_tags); i++)
 		rx_vft_set(pid, vlan_tags[i], 1);
 
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 071e4e7d63a3..669ce1e87d79 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -493,7 +493,7 @@ extern lcoreid_t bitrate_lcore_id;
 extern uint8_t bitrate_enabled;
 #endif
 
-extern struct rte_fdir_conf fdir_conf;
+extern struct rte_eth_fdir_conf fdir_conf;
 
 extern uint32_t max_rx_pkt_len;
 
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index e45f8840c91c..9eb7992815e8 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -354,11 +354,11 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	tx_offloads = txp->dev_conf.txmode.offloads;
 	vlan_tci = txp->tx_vlan_id;
 	vlan_tci_outer = txp->tx_vlan_id_outer;
-	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ol_flags = PKT_TX_VLAN_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		ol_flags |= PKT_TX_QINQ_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 
 	/*
diff --git a/app/test/test_ethdev_link.c b/app/test/test_ethdev_link.c
index ee11987bae28..6248aea49abd 100644
--- a/app/test/test_ethdev_link.c
+++ b/app/test/test_ethdev_link.c
@@ -14,10 +14,10 @@ test_link_status_up_default(void)
 {
 	int ret = 0;
 	struct rte_eth_link link_status = {
-		.link_speed = ETH_SPEED_NUM_2_5G,
-		.link_status = ETH_LINK_UP,
-		.link_autoneg = ETH_LINK_AUTONEG,
-		.link_duplex = ETH_LINK_FULL_DUPLEX
+		.link_speed = RTE_ETH_SPEED_NUM_2_5G,
+		.link_status = RTE_ETH_LINK_UP,
+		.link_autoneg = RTE_ETH_LINK_AUTONEG,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -27,9 +27,9 @@ test_link_status_up_default(void)
 	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg",
 		text, strlen(text), "Invalid default link status string");
 
-	link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link_status.link_autoneg = ETH_LINK_FIXED;
-	link_status.link_speed = ETH_SPEED_NUM_10M,
+	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link_status.link_autoneg = RTE_ETH_LINK_FIXED;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_10M;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #2: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -37,7 +37,7 @@ test_link_status_up_default(void)
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
-	link_status.link_speed = ETH_SPEED_NUM_UNKNOWN;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -45,7 +45,7 @@ test_link_status_up_default(void)
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
-	link_status.link_speed = ETH_SPEED_NUM_NONE;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -54,9 +54,9 @@ test_link_status_up_default(void)
 		"string with HDX");
 
 	/* test max str len */
-	link_status.link_speed = ETH_SPEED_NUM_200G;
-	link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link_status.link_autoneg = ETH_LINK_AUTONEG;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_200G;
+	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link_status.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #4:len = %d, %s\n", ret, text);
 	RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
@@ -69,10 +69,10 @@ test_link_status_down_default(void)
 {
 	int ret = 0;
 	struct rte_eth_link link_status = {
-		.link_speed = ETH_SPEED_NUM_2_5G,
-		.link_status = ETH_LINK_DOWN,
-		.link_autoneg = ETH_LINK_AUTONEG,
-		.link_duplex = ETH_LINK_FULL_DUPLEX
+		.link_speed = RTE_ETH_SPEED_NUM_2_5G,
+		.link_status = RTE_ETH_LINK_DOWN,
+		.link_autoneg = RTE_ETH_LINK_AUTONEG,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -90,9 +90,9 @@ test_link_status_invalid(void)
 	int ret = 0;
 	struct rte_eth_link link_status = {
 		.link_speed = 55555,
-		.link_status = ETH_LINK_UP,
-		.link_autoneg = ETH_LINK_AUTONEG,
-		.link_duplex = ETH_LINK_FULL_DUPLEX
+		.link_status = RTE_ETH_LINK_UP,
+		.link_autoneg = RTE_ETH_LINK_AUTONEG,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -116,21 +116,21 @@ test_link_speed_all_values(void)
 		const char *value;
 		uint32_t link_speed;
 	} speed_str_map[] = {
-		{ "None",   ETH_SPEED_NUM_NONE },
-		{ "10 Mbps",  ETH_SPEED_NUM_10M },
-		{ "100 Mbps", ETH_SPEED_NUM_100M },
-		{ "1 Gbps",   ETH_SPEED_NUM_1G },
-		{ "2.5 Gbps", ETH_SPEED_NUM_2_5G },
-		{ "5 Gbps",   ETH_SPEED_NUM_5G },
-		{ "10 Gbps",  ETH_SPEED_NUM_10G },
-		{ "20 Gbps",  ETH_SPEED_NUM_20G },
-		{ "25 Gbps",  ETH_SPEED_NUM_25G },
-		{ "40 Gbps",  ETH_SPEED_NUM_40G },
-		{ "50 Gbps",  ETH_SPEED_NUM_50G },
-		{ "56 Gbps",  ETH_SPEED_NUM_56G },
-		{ "100 Gbps", ETH_SPEED_NUM_100G },
-		{ "200 Gbps", ETH_SPEED_NUM_200G },
-		{ "Unknown",  ETH_SPEED_NUM_UNKNOWN },
+		{ "None",   RTE_ETH_SPEED_NUM_NONE },
+		{ "10 Mbps",  RTE_ETH_SPEED_NUM_10M },
+		{ "100 Mbps", RTE_ETH_SPEED_NUM_100M },
+		{ "1 Gbps",   RTE_ETH_SPEED_NUM_1G },
+		{ "2.5 Gbps", RTE_ETH_SPEED_NUM_2_5G },
+		{ "5 Gbps",   RTE_ETH_SPEED_NUM_5G },
+		{ "10 Gbps",  RTE_ETH_SPEED_NUM_10G },
+		{ "20 Gbps",  RTE_ETH_SPEED_NUM_20G },
+		{ "25 Gbps",  RTE_ETH_SPEED_NUM_25G },
+		{ "40 Gbps",  RTE_ETH_SPEED_NUM_40G },
+		{ "50 Gbps",  RTE_ETH_SPEED_NUM_50G },
+		{ "56 Gbps",  RTE_ETH_SPEED_NUM_56G },
+		{ "100 Gbps", RTE_ETH_SPEED_NUM_100G },
+		{ "200 Gbps", RTE_ETH_SPEED_NUM_200G },
+		{ "Unknown",  RTE_ETH_SPEED_NUM_UNKNOWN },
 		{ "Invalid",   50505 }
 	};
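
test_ethdev_link.c only feeds statically built rte_eth_link structures to
rte_eth_link_to_str(); formatting the live status of a port with the renamed
constants would look like the following sketch (port_id assumed valid):

    /* Sketch: print a port's link status using the new speed/duplex names. */
    struct rte_eth_link link;
    char text[RTE_ETH_LINK_MAX_STR_LEN];

    if (rte_eth_link_get_nowait(port_id, &link) == 0 &&
        rte_eth_link_to_str(text, sizeof(text), &link) > 0)
            printf("port %u: %s (%s)\n", port_id, text,
                   link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
                   "FDX" : "HDX");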
 
diff --git a/app/test/test_event_eth_rx_adapter.c b/app/test/test_event_eth_rx_adapter.c
index add4d8a67821..a09253e91814 100644
--- a/app/test/test_event_eth_rx_adapter.c
+++ b/app/test/test_event_eth_rx_adapter.c
@@ -103,7 +103,7 @@ port_init_rx_intr(uint16_t port, struct rte_mempool *mp)
 {
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_NONE,
+			.mq_mode = RTE_ETH_MQ_RX_NONE,
 		},
 		.intr_conf = {
 			.rxq = 1,
@@ -118,7 +118,7 @@ port_init(uint16_t port, struct rte_mempool *mp)
 {
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_NONE,
+			.mq_mode = RTE_ETH_MQ_RX_NONE,
 		},
 	};
 
diff --git a/app/test/test_kni.c b/app/test/test_kni.c
index 96733554b6c4..40ab0d5c4ca4 100644
--- a/app/test/test_kni.c
+++ b/app/test/test_kni.c
@@ -74,7 +74,7 @@ static const struct rte_eth_txconf tx_conf = {
 
 static const struct rte_eth_conf port_conf = {
 	.txmode = {
-		.mq_mode = ETH_DCB_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 5388d18125a6..8a9ef851789f 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -134,11 +134,11 @@ static uint16_t vlan_id = 0x100;
 
 static struct rte_eth_conf default_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 189d2430f27e..351129de2f9b 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -107,11 +107,11 @@ static struct link_bonding_unittest_params test_params  = {
 
 static struct rte_eth_conf default_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index e7bb0497b663..f9eae9397386 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -52,7 +52,7 @@ struct slave_conf {
 
 	struct rte_eth_rss_conf rss_conf;
 	uint8_t rss_key[40];
-	struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
 
 	uint8_t is_slave;
 	struct rte_ring *rxtx_queue[RXTX_QUEUE_COUNT];
@@ -61,7 +61,7 @@ struct slave_conf {
 struct link_bonding_rssconf_unittest_params {
 	uint8_t bond_port_id;
 	struct rte_eth_dev_info bond_dev_info;
-	struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
 	struct slave_conf slave_ports[SLAVE_COUNT];
 
 	struct rte_mempool *mbuf_pool;
@@ -80,27 +80,27 @@ static struct link_bonding_rssconf_unittest_params test_params  = {
  */
 static struct rte_eth_conf default_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
 
 static struct rte_eth_conf rss_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IPV6,
+			.rss_hf = RTE_ETH_RSS_IPV6,
 		},
 	},
 	.lpbk_mode = 0,
@@ -207,13 +207,13 @@ bond_slaves(void)
 static int
 reta_set(uint16_t port_id, uint8_t value, int reta_size)
 {
-	struct rte_eth_rss_reta_entry64 reta_conf[512/RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[512/RTE_ETH_RETA_GROUP_SIZE];
 	int i, j;
 
-	for (i = 0; i < reta_size / RTE_RETA_GROUP_SIZE; i++) {
+	for (i = 0; i < reta_size / RTE_ETH_RETA_GROUP_SIZE; i++) {
 		/* select all fields to set */
 		reta_conf[i].mask = ~0LL;
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			reta_conf[i].reta[j] = value;
 	}
 
@@ -232,8 +232,8 @@ reta_check_synced(struct slave_conf *port)
 	for (i = 0; i < test_params.bond_dev_info.reta_size;
 			i++) {
 
-		int index = i / RTE_RETA_GROUP_SIZE;
-		int shift = i % RTE_RETA_GROUP_SIZE;
+		int index = i / RTE_ETH_RETA_GROUP_SIZE;
+		int shift = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (port->reta_conf[index].reta[shift] !=
 				test_params.bond_reta_conf[index].reta[shift])
@@ -251,7 +251,7 @@ static int
 bond_reta_fetch(void) {
 	unsigned j;
 
-	for (j = 0; j < test_params.bond_dev_info.reta_size / RTE_RETA_GROUP_SIZE;
+	for (j = 0; j < test_params.bond_dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE;
 			j++)
 		test_params.bond_reta_conf[j].mask = ~0LL;
 
@@ -268,7 +268,7 @@ static int
 slave_reta_fetch(struct slave_conf *port) {
 	unsigned j;
 
-	for (j = 0; j < port->dev_info.reta_size / RTE_RETA_GROUP_SIZE; j++)
+	for (j = 0; j < port->dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE; j++)
 		port->reta_conf[j].mask = ~0LL;
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_rss_reta_query(port->port_id,
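
rss_pmd_conf above uses the static-initializer form; at runtime an
application would typically mask the requested hash types against what the
device reports, e.g. (sketch, port_id assumed):

    /* Sketch: request IPv6 RSS, keeping only hash types the HW supports. */
    struct rte_eth_dev_info dev_info;
    struct rte_eth_conf conf = {
            .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
    };

    if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
            return;
    conf.rx_adv_conf.rss_conf.rss_hf =
            RTE_ETH_RSS_IPV6 & dev_info.flow_type_rss_offloads;
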
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index a3b4f52c65e6..1df86ce080e5 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -62,11 +62,11 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 1,  /* enable loopback */
 };
@@ -155,7 +155,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -822,7 +822,7 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
 		/* bulk alloc rx, full-featured tx */
 		tx_conf.tx_rs_thresh = 32;
 		tx_conf.tx_free_thresh = 32;
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 		return 0;
 	} else if (!strcmp(mode, "hybrid")) {
 		/* bulk alloc rx, vector tx
@@ -831,13 +831,13 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
 		 */
 		tx_conf.tx_rs_thresh = 32;
 		tx_conf.tx_free_thresh = 32;
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 		return 0;
 	} else if (!strcmp(mode, "full")) {
 		/* full feature rx,tx pair */
 		tx_conf.tx_rs_thresh = 32;
 		tx_conf.tx_free_thresh = 32;
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 		return 0;
 	}
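
Note that RTE_ETH_RX_OFFLOAD_CHECKSUM, used by test_set_rxtx_conf() above,
is a convenience mask rather than a single capability bit; as before the
rename it covers the three L3/L4 Rx checksum offloads, roughly:

    /* As defined in rte_ethdev.h (formatting approximate). */
    #define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
                                         RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
                                         RTE_ETH_RX_OFFLOAD_TCP_CKSUM)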
 
diff --git a/app/test/virtual_pmd.c b/app/test/virtual_pmd.c
index 7e15b47eb0fb..d9f2e4f66bde 100644
--- a/app/test/virtual_pmd.c
+++ b/app/test/virtual_pmd.c
@@ -53,7 +53,7 @@ static int  virtual_ethdev_stop(struct rte_eth_dev *eth_dev __rte_unused)
 	void *pkt = NULL;
 	struct virtual_ethdev_private *prv = eth_dev->data->dev_private;
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 0;
 	while (rte_ring_dequeue(prv->rx_queue, &pkt) != -ENOENT)
 		rte_pktmbuf_free(pkt);
@@ -168,7 +168,7 @@ virtual_ethdev_link_update_success(struct rte_eth_dev *bonded_eth_dev,
 		int wait_to_complete __rte_unused)
 {
 	if (!bonded_eth_dev->data->dev_started)
-		bonded_eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		bonded_eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -562,9 +562,9 @@ virtual_ethdev_create(const char *name, struct rte_ether_addr *mac_addr,
 	eth_dev->data->nb_rx_queues = (uint16_t)1;
 	eth_dev->data->nb_tx_queues = (uint16_t)1;
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
-	eth_dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
-	eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	eth_dev->data->mac_addrs = rte_zmalloc(name, RTE_ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 53560d3830d7..1c0ea988f239 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -42,7 +42,7 @@ Features of the OCTEON cnxk SSO PMD are:
 - HW managed packets enqueued from ethdev to eventdev exposed through event eth
   RX adapter.
 - N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
   capability while maintaining receive packet order.
 - Full Rx/Tx offload support defined through ethdev queue configuration.
 - HW managed event vectorization on CN10K for packets enqueued from ethdev to
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 11fbebfcd243..0fa57abfa3e0 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -35,7 +35,7 @@ Features of the OCTEON TX2 SSO PMD are:
 - HW managed packets enqueued from ethdev to eventdev exposed through event eth
   RX adapter.
 - N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
   capability while maintaining receive packet order.
 - Full Rx/Tx offload support defined through ethdev queue config.
 
diff --git a/doc/guides/nics/af_packet.rst b/doc/guides/nics/af_packet.rst
index bdd6e7263c85..54feffdef4bd 100644
--- a/doc/guides/nics/af_packet.rst
+++ b/doc/guides/nics/af_packet.rst
@@ -70,5 +70,5 @@ Features and Limitations
 ------------------------
 
 The PMD will re-insert the VLAN tag transparently to the packet if the kernel
-strips it, as long as the ``DEV_RX_OFFLOAD_VLAN_STRIP`` is not enabled by the
+strips it, as long as the ``RTE_ETH_RX_OFFLOAD_VLAN_STRIP`` is not enabled by the
 application.
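
Whether that re-insertion matters to an application can be checked against
its own configuration; a hypothetical guard:

    /* Sketch: the af_packet PMD re-inserts tags only when strip is off. */
    if (!(port_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP))
            printf("kernel-stripped VLAN tags will be re-inserted\n");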
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index aa6032889a55..b3d10f30dc77 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -877,21 +877,21 @@ processing. This improved performance is derived from a number of optimizations:
     * TX: only the following reduced set of transmit offloads is supported in
       vector mode::
 
-       DEV_TX_OFFLOAD_MBUF_FAST_FREE
+       RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
 
     * RX: only the following reduced set of receive offloads is supported in
       vector mode (note that jumbo MTU is allowed only when the MTU setting
-      does not require `DEV_RX_OFFLOAD_SCATTER` to be enabled)::
-
-       DEV_RX_OFFLOAD_VLAN_STRIP
-       DEV_RX_OFFLOAD_KEEP_CRC
-       DEV_RX_OFFLOAD_IPV4_CKSUM
-       DEV_RX_OFFLOAD_UDP_CKSUM
-       DEV_RX_OFFLOAD_TCP_CKSUM
-       DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
-       DEV_RX_OFFLOAD_OUTER_UDP_CKSUM
-       DEV_RX_OFFLOAD_RSS_HASH
-       DEV_RX_OFFLOAD_VLAN_FILTER
+      does not require `RTE_ETH_RX_OFFLOAD_SCATTER` to be enabled)::
+
+       RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+       RTE_ETH_RX_OFFLOAD_KEEP_CRC
+       RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+       RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+       RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+       RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+       RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+       RTE_ETH_RX_OFFLOAD_RSS_HASH
+       RTE_ETH_RX_OFFLOAD_VLAN_FILTER
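
A hedged sketch of applying these constraints (the helper name is
illustrative; the flags are exactly those listed above):

    #include <rte_ethdev.h>

    /* Keep Tx and Rx offloads within the bnxt vector-mode subsets. */
    static void bnxt_vector_friendly(struct rte_eth_conf *conf)
    {
        conf->txmode.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
        conf->rxmode.offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
                                RTE_ETH_RX_OFFLOAD_KEEP_CRC |
                                RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
                                RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
                                RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
                                RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
                                RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
                                RTE_ETH_RX_OFFLOAD_RSS_HASH |
                                RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
    }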
 
 The BNXT Vector PMD is enabled in DPDK builds by default. The decision to enable
 vector processing is made at run-time when the port is started; if no transmit
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index 91bdcd065a95..0209730b904a 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -432,7 +432,7 @@ Limitations
 .. code-block:: console
 
      vlan_offload = rte_eth_dev_get_vlan_offload(port);
-     vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
+     vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
      rte_eth_dev_set_vlan_offload(port, vlan_offload);
 
 Another alternative is modify the adapter's ingress VLAN rewrite mode so that
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index d35751d5b5a7..594e98a6b803 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -30,7 +30,7 @@ Speed capabilities
 
 Supports getting the speed capabilities that the current device is capable of.
 
-* **[provides] rte_eth_dev_info**: ``speed_capa:ETH_LINK_SPEED_*``.
+* **[provides] rte_eth_dev_info**: ``speed_capa:RTE_ETH_LINK_SPEED_*``.
 * **[related]  API**: ``rte_eth_dev_info_get()``.
 
 
@@ -101,11 +101,11 @@ Supports Rx interrupts.
 Lock-free Tx queue
 ------------------
 
-If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+If a PMD advertises the RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capability, multiple threads can
 invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
 
-* **[uses]    rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
+* **[uses]    rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
 * **[related]  API**: ``rte_eth_tx_burst()``.
 
 
@@ -117,8 +117,8 @@ Fast mbuf free
 Supports optimization for fast release of mbufs following successful Tx.
 Requires that, per queue, all mbufs come from the same mempool and have refcnt = 1.
 
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
-* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
 
 
 .. _nic_features_free_tx_mbuf_on_demand:
@@ -177,7 +177,7 @@ Scattered Rx
 
 Supports receiving segmented mbufs.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SCATTER``.
 * **[implements] datapath**: ``Scattered Rx function``.
 * **[implements] rte_eth_dev_data**: ``scattered_rx``.
 * **[provides]   eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -205,12 +205,12 @@ LRO
 
 Supports Large Receive Offload.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
   ``dev_conf.rxmode.max_lro_pkt_size``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
 * **[provides]   rte_eth_dev_info**: ``max_lro_pkt_size``.
 
 
@@ -221,12 +221,12 @@ TSO
 
 Supports TCP Segmentation Offloading.
 
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_TCP_TSO``.
 * **[uses]       rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:`` ``PKT_TX_TCP_SEG``, ``PKT_TX_IPV4``, ``PKT_TX_IPV6``, ``PKT_TX_IP_CKSUM``.
 * **[uses]       mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
 * **[implements] datapath**: ``TSO functionality``.
-* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_TCP_TSO,RTE_ETH_TX_OFFLOAD_UDP_TSO``.
 
 
 .. _nic_features_promiscuous_mode:
@@ -287,9 +287,9 @@ RSS hash
 
 Supports RSS hashing on RX.
 
-* **[uses]     user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_RSS_FLAG``.
+* **[uses]     user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_RSS_FLAG``.
 * **[uses]     user config**: ``dev_conf.rx_adv_conf.rss_conf``.
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
 * **[provides] rte_eth_dev_info**: ``flow_type_rss_offloads``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
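
For illustration, a minimal configuration (helper name hypothetical) that
satisfies the RSS hash feature contract above:

    #include <rte_ethdev.h>

    /* RSS mq mode, the RSS hash offload and a set of hash types. */
    static void enable_rss_hash(struct rte_eth_conf *conf)
    {
        conf->rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
        conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
        conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;
    }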
 
@@ -302,7 +302,7 @@ Inner RSS
 Supports RX RSS hashing on Inner headers.
 
 * **[uses]    rte_flow_action_rss**: ``level``.
-* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
 
 
@@ -339,7 +339,7 @@ VMDq
 
 Supports Virtual Machine Device Queues (VMDq).
 
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_VMDQ_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_VMDQ_FLAG``.
 * **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
 * **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_rx_conf``.
 * **[uses] user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -362,7 +362,7 @@ DCB
 
 Supports Data Center Bridging (DCB).
 
-* **[uses]       user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_DCB_FLAG``.
+* **[uses]       user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_DCB_FLAG``.
 * **[uses]       user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
 * **[uses]       user config**: ``dev_conf.rx_adv_conf.dcb_rx_conf``.
 * **[uses]       user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -378,7 +378,7 @@ VLAN filter
 
 Supports filtering of a VLAN Tag identifier.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_FILTER``.
 * **[implements] eth_dev_ops**: ``vlan_filter_set``.
 * **[related]    API**: ``rte_eth_dev_vlan_filter()``.
 
@@ -416,13 +416,13 @@ Supports inline crypto processing defined by rte_security library to perform cry
 operations of the security protocol while the packet is received in the NIC. The NIC is not aware
 of the protocol operations. See the Security library and PMD documentation for more details.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[uses]       mbuf**: ``mbuf.l2_len``.
 * **[implements] rte_security_ops**: ``session_create``, ``session_update``,
   ``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
   ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO``
@@ -438,14 +438,14 @@ protocol processing for the security protocol (e.g. IPsec, MACSEC) while the
 packet is received at the NIC. The NIC is capable of understanding the security
 protocol operations. See security library and PMD documentation for more details.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[uses]       mbuf**: ``mbuf.l2_len``.
 * **[implements] rte_security_ops**: ``session_create``, ``session_update``,
   ``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``get_userdata``,
   ``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
   ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL``
@@ -459,7 +459,7 @@ CRC offload
 Supports CRC stripping by hardware.
 A PMD is assumed to support CRC stripping by default. A PMD should advertise if it supports keeping the CRC.
 
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_KEEP_CRC``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_KEEP_CRC``.
 
 
 .. _nic_features_vlan_offload:
@@ -469,13 +469,13 @@ VLAN offload
 
 Supports VLAN offload to hardware.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_STRIP,RTE_ETH_RX_OFFLOAD_VLAN_FILTER,RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:PKT_TX_VLAN``, ``mbuf.vlan_tci``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN`` ``mbuf.vlan_tci``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_VLAN_STRIP``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
   ``rte_eth_dev_get_vlan_offload()``.
 
@@ -487,14 +487,14 @@ QinQ offload
 
 Supports QinQ (queue in queue) offload.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ``, ``mbuf.vlan_tci_outer``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.ol_flags:PKT_RX_QINQ``,
   ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN``
   ``mbuf.vlan_tci``, ``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
 
 
 .. _nic_features_fec:
@@ -508,7 +508,7 @@ information to correct the bit errors generated during data packet transmission
 improves signal quality but also brings a delay to signals. This function can be enabled or disabled as required.
 
 * **[implements] eth_dev_ops**: ``fec_get_capability``, ``fec_get``, ``fec_set``.
-* **[provides]   rte_eth_fec_capa**: ``speed:ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
+* **[provides]   rte_eth_fec_capa**: ``speed:RTE_ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
 * **[related]    API**: ``rte_eth_fec_get_capability()``, ``rte_eth_fec_get()``, ``rte_eth_fec_set()``.
 
 
@@ -519,16 +519,16 @@ L3 checksum offload
 
 Supports L3 checksum offload.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[uses]     mbuf**: ``mbuf.l2_len``, ``mbuf.l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
   ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
   ``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
 
 
 .. _nic_features_l4_checksum_offload:
@@ -538,8 +538,8 @@ L4 checksum offload
 
 Supports L4 checksum offload.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -547,8 +547,8 @@ Supports L4 checksum offload.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
   ``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
 
 .. _nic_features_hw_timestamp:
 
@@ -557,10 +557,10 @@ Timestamp offload
 
 Supports Timestamp.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_TIMESTAMP``.
 * **[provides] mbuf**: ``mbuf.timestamp``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
 * **[related] eth_dev_ops**: ``read_clock``.
 
 .. _nic_features_macsec_offload:
@@ -570,11 +570,11 @@ MACsec offload
 
 Supports MACsec.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
 
 
 .. _nic_features_inner_l3_checksum:
@@ -584,16 +584,16 @@ Inner L3 checksum
 
 Supports inner packet L3 checksum.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_IP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
 
 .. _nic_features_inner_l4_checksum:
@@ -603,15 +603,15 @@ Inner L4 checksum
 
 Supports inner packet L4 checksum.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_OUTER_L4_CKSUM_BAD`` | ``PKT_RX_OUTER_L4_CKSUM_GOOD`` | ``PKT_RX_OUTER_L4_CKSUM_INVALID``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
   ``mbuf.ol_flags:PKT_TX_OUTER_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
 
 
 .. _nic_features_shared_rx_queue:
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index ed6afd62703d..bba53f5a64ee 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -78,11 +78,11 @@ To enable via ``RX_OLFLAGS`` use ``RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y``.
 To guarantee the constraint, the following capabilities in ``dev_conf.rxmode.offloads``
 will be checked:
 
-*   ``DEV_RX_OFFLOAD_VLAN_EXTEND``
+*   ``RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``
 
-*   ``DEV_RX_OFFLOAD_CHECKSUM``
+*   ``RTE_ETH_RX_OFFLOAD_CHECKSUM``
 
-*   ``DEV_RX_OFFLOAD_HEADER_SPLIT``
+*   ``RTE_ETH_RX_OFFLOAD_HEADER_SPLIT``
 
 *   ``fdir_conf->mode``
 
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 2efdd1a41bb4..a1e236ad75e5 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -216,21 +216,21 @@ For example,
     *   If the max number of VFs (max_vfs) is set in the range of 1 to 32:
 
         If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then there are a total of 32
-        pools (ETH_32_POOLS), and each VF could have 4 Rx queues;
+        pools (RTE_ETH_32_POOLS), and each VF could have 4 Rx queues;
 
         If the number of Rx queues is specified as 2 (``--rxq=2`` in testpmd), then there are a total of 32
-        pools (ETH_32_POOLS), and each VF could have 2 Rx queues;
+        pools (RTE_ETH_32_POOLS), and each VF could have 2 Rx queues;
 
     *   If the max number of VFs (max_vfs) is in the range of 33 to 64:
 
         If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then an error message is expected
         as ``rxq`` is not correct in this case;
 
-        If the number of rxq is 2 (``--rxq=2`` in testpmd), then there is totally 64 pools (ETH_64_POOLS),
+        If the number of rxq is 2 (``--rxq=2`` in testpmd), then there are 64 pools in total (RTE_ETH_64_POOLS),
         and each VF has 2 Rx queues;
 
-    On host, to enable VF RSS functionality, rx mq mode should be set as ETH_MQ_RX_VMDQ_RSS
-    or ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
+    On host, to enable VF RSS functionality, rx mq mode should be set as RTE_ETH_MQ_RX_VMDQ_RSS
+    or RTE_ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
     It is also necessary to configure the VF RSS information, such as the hash function, RSS key and RSS key length.
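
    A minimal host-side sketch of the above (the 40-byte key is a placeholder
    and should be replaced by a device-appropriate one):

        #include <rte_ethdev.h>

        static uint8_t vf_rss_key[40]; /* placeholder RSS key */

        /* Enable VF RSS on the host: VMDQ+RSS mq mode plus RSS information. */
        static void host_vf_rss_conf(struct rte_eth_conf *conf)
        {
            conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
            conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;
            conf->rx_adv_conf.rss_conf.rss_key = vf_rss_key;
            conf->rx_adv_conf.rss_conf.rss_key_len = sizeof(vf_rss_key);
        }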
 
 .. note::
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index 20a74b9b5bcd..148d2f5fc2be 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -89,13 +89,13 @@ Other features are supported using optional MACRO configuration. They include:
 
 To guarantee the constraint, capabilities in dev_conf.rxmode.offloads will be checked:
 
-*   DEV_RX_OFFLOAD_VLAN_STRIP
+*   RTE_ETH_RX_OFFLOAD_VLAN_STRIP
 
-*   DEV_RX_OFFLOAD_VLAN_EXTEND
+*   RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
 
-*   DEV_RX_OFFLOAD_CHECKSUM
+*   RTE_ETH_RX_OFFLOAD_CHECKSUM
 
-*   DEV_RX_OFFLOAD_HEADER_SPLIT
+*   RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
 
 *   dev_conf
 
@@ -163,13 +163,13 @@ l3fwd
 ~~~~~
 
 When running l3fwd with vPMD, there is one thing to note.
-In the configuration, ensure that DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
+In the configuration, ensure that RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
 Otherwise, by default, RX vPMD is disabled.
 
 load_balancer
 ~~~~~~~~~~~~~
 
-As in the case of l3fwd, to enable vPMD, do NOT set DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
+As in the case of l3fwd, to enable vPMD, do NOT set RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
 In addition, for improved performance, use -bsz "(32,32),(64,64),(32,32)" in load_balancer to avoid using the default burst size of 144.
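
A one-line sketch of the constraint (helper name hypothetical):

    #include <rte_ethdev.h>

    /* Clear Rx checksum offloads so the ixgbe Rx vPMD stays enabled. */
    static void keep_rx_vpmd(struct rte_eth_conf *port_conf)
    {
        port_conf->rxmode.offloads &= ~(uint64_t)RTE_ETH_RX_OFFLOAD_CHECKSUM;
    }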
 
 
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index dd059b227d8e..86927a0b56b0 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -371,7 +371,7 @@ Limitations
 
 - CRC:
 
-  - ``DEV_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
+  - ``RTE_ETH_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
     for some NICs (such as ConnectX-6 Dx, ConnectX-6 Lx, and BlueField-2).
     The capability bit ``scatter_fcs_w_decap_disable`` shows NIC support.
 
@@ -611,7 +611,7 @@ Driver options
   small-packet traffic.
 
   When MPRQ is enabled, MTU can be larger than the size of
-  user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
+  user-provided mbuf even if RTE_ETH_RX_OFFLOAD_SCATTER isn't enabled. The PMD will
   configure a stride size large enough to accommodate the MTU as long as the
   device allows. Note that this can waste system memory compared to enabling Rx
   scatter and multi-segment packet.
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 3ce696b605d1..681010d9ed7d 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -275,7 +275,7 @@ An example utility for eBPF instruction generation in the format of C arrays wil
 be added in next releases
 
 TAP reports on supported RSS functions as part of dev_infos_get callback:
-``ETH_RSS_IP``, ``ETH_RSS_UDP`` and ``ETH_RSS_TCP``.
+``RTE_ETH_RSS_IP``, ``RTE_ETH_RSS_UDP`` and ``RTE_ETH_RSS_TCP``.
 **Known limitation:** TAP supports all of the above hash functions together
 and not in partial combinations.
 
diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
index 7bff0aef0b74..9b2c31a2f0bc 100644
--- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
@@ -194,11 +194,11 @@ To segment an outgoing packet, an application must:
 
    - the bit mask of required GSO types. The GSO library uses the same macros as
      those that describe a physical device's TX offloading capabilities (i.e.
-     ``DEV_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
+     ``RTE_ETH_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
      wants to segment TCP/IPv4 packets, it should set gso_types to
-     ``DEV_TX_OFFLOAD_TCP_TSO``. The only other supported values currently
-     supported for gso_types are ``DEV_TX_OFFLOAD_VXLAN_TNL_TSO``, and
-     ``DEV_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
+     ``RTE_ETH_TX_OFFLOAD_TCP_TSO``. The only other values currently
+     supported for gso_types are ``RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO`` and
+     ``RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
      allowed.
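
     A hedged sketch of a GSO context set up this way (pool creation, the
     segment size and the IPID policy are application-specific assumptions):

         #include <rte_gso.h>

         /* TCP/IPv4 GSO context; gso_types reuses the Tx capability macro. */
         static void init_gso_ctx(struct rte_gso_ctx *ctx,
                 struct rte_mempool *direct, struct rte_mempool *indirect)
         {
             ctx->direct_pool = direct;
             ctx->indirect_pool = indirect;
             ctx->gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO;
             ctx->gso_size = 1460;                /* output segment size */
             ctx->flag = RTE_GSO_FLAG_IPID_FIXED; /* fixed IPv4 IDs */
         }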
 
     - a flag that indicates whether the IPv4 headers of output segments should
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 2f190b40e43a..dc6186a44ae2 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -137,7 +137,7 @@ a vxlan-encapsulated tcp packet:
     mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM
     set out_ip checksum to 0 in the packet
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
 
 - calculate checksum of out_ip and out_udp::
 
@@ -147,8 +147,8 @@ a vxlan-encapsulated tcp packet:
     set out_ip checksum to 0 in the packet
     set out_udp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM
-  and DEV_TX_OFFLOAD_UDP_CKSUM.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+  and RTE_ETH_TX_OFFLOAD_UDP_CKSUM.
 
 - calculate checksum of in_ip::
 
@@ -158,7 +158,7 @@ a vxlan-encapsulated tcp packet:
     set in_ip checksum to 0 in the packet
 
   This is similar to case 1), but l2_len is different. It is supported
-  on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+  on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
   Note that it can only work if outer L4 checksum is 0.
 
 - calculate checksum of in_ip and in_tcp::
@@ -170,8 +170,8 @@ a vxlan-encapsulated tcp packet:
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
   This is similar to case 2), but l2_len is different. It is supported
-  on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM and
-  DEV_TX_OFFLOAD_TCP_CKSUM.
+  on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM and
+  RTE_ETH_TX_OFFLOAD_TCP_CKSUM.
   Note that it can only work if outer L4 checksum is 0.
 
 - segment inner TCP::
@@ -185,7 +185,7 @@ a vxlan-encapsulated tcp packet:
     set in_tcp checksum to pseudo header without including the IP
       payload length using rte_ipv4_phdr_cksum()
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_TCP_TSO.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_TCP_TSO.
   Note that it can only work if outer L4 checksum is 0.
 
 - calculate checksum of out_ip, in_ip, in_tcp::
@@ -200,8 +200,8 @@ a vxlan-encapsulated tcp packet:
     set in_ip checksum to 0 in the packet
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM,
-  DEV_TX_OFFLOAD_UDP_CKSUM and DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
+  RTE_ETH_TX_OFFLOAD_UDP_CKSUM and RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM.
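
  As a hedged sketch of one of these cases (the inner IP + inner TCP case;
  header lengths assume Ether/IPv4/UDP/VXLAN/Ether encapsulation):

      #include <rte_ip.h>
      #include <rte_mbuf.h>
      #include <rte_tcp.h>

      /* l2_len covers every header preceding the inner IPv4 header. */
      static void prep_inner_csums(struct rte_mbuf *m,
              struct rte_ipv4_hdr *in_ip, struct rte_tcp_hdr *in_tcp)
      {
          m->l2_len = 14 + 20 + 8 + 8 + 14;   /* outer L2/L3/L4 + tunnel + inner L2 */
          m->l3_len = sizeof(*in_ip);         /* 20, no IPv4 options */
          m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
          in_ip->hdr_checksum = 0;
          in_tcp->cksum = rte_ipv4_phdr_cksum(in_ip, m->ol_flags);
      }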
 
 The list of flags and their precise meaning is described in the mbuf API
 documentation (rte_mbuf.h). Also refer to the testpmd source code
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 0d4ac77a7ccf..68312898448c 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -57,7 +57,7 @@ Whenever needed and appropriate, asynchronous communication should be introduced
 
 Avoiding lock contention is a key issue in a multi-core environment.
 To address this issue, PMDs are designed to work with per-core private resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable.
+For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
 In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
 
 To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
@@ -119,7 +119,7 @@ This is also true for the pipe-line model provided all logical cores used are lo
 
 Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
 
-If the PMD is ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
+If the PMD is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
 concurrently on the same tx queue without SW lock. This PMD feature, found in some NICs, is useful in the following use cases:
 
 *  Remove explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
@@ -127,7 +127,7 @@ concurrently on the same tx queue without SW lock. This PMD feature found in som
 *  In the eventdev use case, avoid dedicating a separate TX core for transmitting and thus
    enables more scaling as all workers can send the packets.
 
-See `Hardware Offload`_ for ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
+See `Hardware Offload`_ for ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
 
 Device Identification, Ownership and Configuration
 --------------------------------------------------
@@ -311,7 +311,7 @@ The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get(
 The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offloading capabilities.
 Supported offloads can be either per-port or per-queue.
 
-Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
+Offloads are enabled using the existing ``RTE_ETH_TX_OFFLOAD_*`` or ``RTE_ETH_RX_OFFLOAD_*`` flags.
 Any requested offloading by an application must be within the device capabilities.
 Any offloading is disabled by default if it is not set in the parameter
 ``dev_conf->[rt]xmode.offloads`` to ``rte_eth_dev_configure()`` and
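
A minimal sketch of honoring the capability rule above (helper name
hypothetical):

    #include <rte_ethdev.h>

    /* Request an offload set clamped to what the device advertises. */
    static uint64_t clamp_tx_offloads(uint16_t port_id, uint64_t wanted)
    {
        struct rte_eth_dev_info info;

        if (rte_eth_dev_info_get(port_id, &info) != 0)
            return 0;
        return wanted & info.tx_offload_capa;
    }
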
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index a2169517c3f9..d798adb83e1d 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1993,23 +1993,23 @@ only matching traffic goes through.
 
 .. table:: RSS
 
-   +---------------+---------------------------------------------+
-   | Field         | Value                                       |
-   +===============+=============================================+
-   | ``func``      | RSS hash function to apply                  |
-   +---------------+---------------------------------------------+
-   | ``level``     | encapsulation level for ``types``           |
-   +---------------+---------------------------------------------+
-   | ``types``     | specific RSS hash types (see ``ETH_RSS_*``) |
-   +---------------+---------------------------------------------+
-   | ``key_len``   | hash key length in bytes                    |
-   +---------------+---------------------------------------------+
-   | ``queue_num`` | number of entries in ``queue``              |
-   +---------------+---------------------------------------------+
-   | ``key``       | hash key                                    |
-   +---------------+---------------------------------------------+
-   | ``queue``     | queue indices to use                        |
-   +---------------+---------------------------------------------+
+   +---------------+-------------------------------------------------+
+   | Field         | Value                                           |
+   +===============+=================================================+
+   | ``func``      | RSS hash function to apply                      |
+   +---------------+-------------------------------------------------+
+   | ``level``     | encapsulation level for ``types``               |
+   +---------------+-------------------------------------------------+
+   | ``types``     | specific RSS hash types (see ``RTE_ETH_RSS_*``) |
+   +---------------+-------------------------------------------------+
+   | ``key_len``   | hash key length in bytes                        |
+   +---------------+-------------------------------------------------+
+   | ``queue_num`` | number of entries in ``queue``                  |
+   +---------------+-------------------------------------------------+
+   | ``key``       | hash key                                        |
+   +---------------+-------------------------------------------------+
+   | ``queue``     | queue indices to use                            |
+   +---------------+-------------------------------------------------+
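
For illustration, a minimal RSS action matching this table (queue indices
and hash types are example choices):

    #include <rte_flow.h>

    /* Spread matched traffic over queues 0 and 1, hashing on L3 addresses. */
    static const uint16_t rss_queues[2] = { 0, 1 };
    static const struct rte_flow_action_rss rss_action = {
        .func = RTE_ETH_HASH_FUNCTION_DEFAULT,
        .level = 0,              /* hash on the outermost encapsulation */
        .types = RTE_ETH_RSS_IP, /* see RTE_ETH_RSS_* */
        .queue_num = 2,
        .queue = rss_queues,
    };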
 
 Action: ``PF``
 ^^^^^^^^^^^^^^
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index ad92c16868c1..46c9b51d1bf9 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -569,7 +569,7 @@ created by the application is attached to the security session by the API
 
 For Inline Crypto and Inline protocol offload, device specific defined metadata is
 updated in the mbuf using ``rte_security_set_pkt_metadata()`` if
-``DEV_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
+``RTE_ETH_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
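
A minimal sketch of that Tx-side call (session, context and mbuf setup are
assumed; the device-specific parameter is left NULL here):

    #include <rte_mbuf.h>
    #include <rte_security.h>

    /* Attach device-specific metadata to the packet before transmission. */
    static int attach_sec_metadata(struct rte_security_ctx *ctx,
            struct rte_security_session *sess, struct rte_mbuf *m)
    {
        return rte_security_set_pkt_metadata(ctx, sess, m, NULL);
    }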
 
 For inline protocol offloaded ingress traffic, the application can register a
 pointer, ``userdata``, in the security session. When the packet is received,
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index cc2b89850b07..f11550dc78ac 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -69,22 +69,16 @@ Deprecation Notices
   ``RTE_ETH_FLOW_MAX`` is one sample of the mentioned case, adding a new flow
   type will break the ABI because of ``flex_mask[RTE_ETH_FLOW_MAX]`` array
   usage in following public struct hierarchy:
-  ``rte_eth_fdir_flex_conf -> rte_fdir_conf -> rte_eth_conf (in the middle)``.
+  ``rte_eth_fdir_flex_conf -> rte_eth_fdir_conf -> rte_eth_conf (in the middle)``.
   Need to identify this kind of usage and fix it in 20.11, otherwise this blocks
   us from extending existing enums/defines.
   One solution can be to use a fixed-size array instead of a ``.*MAX.*`` value.
 
-* ethdev: Will add ``RTE_ETH_`` prefix to all ethdev macros/enums in v21.11.
-  Macros will be added for backward compatibility.
-  Backward compatibility macros will be removed on v22.11.
-  A few old backward compatibility macros from 2013 that does not have
-  proper prefix will be removed on v21.11.
-
 * ethdev: The flow director API, including ``rte_eth_conf.fdir_conf`` field,
   and the related structures (``rte_fdir_*`` and ``rte_eth_fdir_*``),
   will be removed in DPDK 20.11.
 
-* ethdev: New offload flags ``DEV_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
+* ethdev: New offload flag ``RTE_ETH_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
   This will allow the application to enable or disable PMD updates of
   ``rte_mbuf::hash::fdir``.
   This scheme will allow PMDs to avoid writes to ``rte_mbuf`` fields on Rx and
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 569d3c00b9ee..b327c2bfca1c 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -446,6 +446,9 @@ ABI Changes
 * bbdev: Added capability related to more comprehensive CRC options,
   shifting values of the ``enum rte_bbdev_op_ldpcdec_flag_bitmasks``.
 
+* ethdev: All enums and macros were updated to have an ``RTE_ETH`` prefix, and structures
+  to have an ``rte_eth`` prefix. DPDK components were updated to use the new names.
+
 
 Known Issues
 ------------
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 78171b25f96e..782574dd39d5 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -209,12 +209,12 @@ Where:
     device will ensure the ordering. Ordering will be lost when tried in PARALLEL.
 
 *   ``--rxoffload MASK``: RX HW offload capabilities to enable/use on this port
-    (bitmask of DEV_RX_OFFLOAD_* values). It is an optional parameter and
+    (bitmask of RTE_ETH_RX_OFFLOAD_* values). It is an optional parameter and
     allows the user to disable some of the RX HW offload capabilities.
     By default all HW RX offloads are enabled.
 
 *   ``--txoffload MASK``: TX HW offload capabilities to enable/use on this port
-    (bitmask of DEV_TX_OFFLOAD_* values). It is an optional parameter and
+    (bitmask of RTE_ETH_TX_OFFLOAD_* values). It is an optional parameter and
     allows the user to disable some of the TX HW offload capabilities.
     By default all HW TX offloads are enabled.
 
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index d23e0b6a7a2e..30edef07ea20 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -546,7 +546,7 @@ The command line options are:
     Set the hexadecimal bitmask of RX multi queue mode which can be enabled.
     The default value is 0x7::
 
-       ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG | ETH_MQ_RX_VMDQ_FLAG
+       RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG
 
 *   ``--record-core-cycles``
 
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index be52e6f72dab..a922988607ef 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -90,20 +90,20 @@ int dpaa_intr_disable(char *if_name);
 struct usdpaa_ioctl_link_status_args_old {
 	/* network device node name */
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link status(ETH_LINK_UP/DOWN) */
+	/* link status(RTE_ETH_LINK_UP/DOWN) */
 	int     link_status;
 };
 
 struct usdpaa_ioctl_link_status_args {
 	/* network device node name */
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link status(ETH_LINK_UP/DOWN) */
+	/* link status(RTE_ETH_LINK_UP/DOWN) */
 	int     link_status;
-	/* link speed (ETH_SPEED_NUM_)*/
+	/* link speed (RTE_ETH_SPEED_NUM_)*/
 	int     link_speed;
-	/* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+	/* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
 	int     link_duplex;
-	/* link autoneg (ETH_LINK_AUTONEG/FIXED)*/
+	/* link autoneg (RTE_ETH_LINK_AUTONEG/FIXED)*/
 	int     link_autoneg;
 
 };
@@ -111,16 +111,16 @@ struct usdpaa_ioctl_link_status_args {
 struct usdpaa_ioctl_update_link_status_args {
 	/* network device node name */
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link status(ETH_LINK_UP/DOWN) */
+	/* link status(RTE_ETH_LINK_UP/DOWN) */
 	int     link_status;
 };
 
 struct usdpaa_ioctl_update_link_speed {
 	/* network device node name*/
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link speed (ETH_SPEED_NUM_)*/
+	/* link speed (RTE_ETH_SPEED_NUM_)*/
 	int     link_speed;
-	/* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+	/* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
 	int     link_duplex;
 };
 
diff --git a/drivers/common/cnxk/roc_npc.h b/drivers/common/cnxk/roc_npc.h
index ef85073b17e1..e13d55713625 100644
--- a/drivers/common/cnxk/roc_npc.h
+++ b/drivers/common/cnxk/roc_npc.h
@@ -167,7 +167,7 @@ enum roc_npc_rss_hash_function {
 struct roc_npc_action_rss {
 	enum roc_npc_rss_hash_function func;
 	uint32_t level;
-	uint64_t types;	       /**< Specific RSS hash types (see ETH_RSS_*). */
+	uint64_t types;	       /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
 	uint32_t key_len;      /**< Hash key length in bytes. */
 	uint32_t queue_num;    /**< Number of entries in @p queue. */
 	const uint8_t *key;    /**< Hash key. */
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index a077376dc0fb..8f778f0c2419 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -93,10 +93,10 @@ static const char *valid_arguments[] = {
 };
 
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(af_packet_logtype, NOTICE);
@@ -290,7 +290,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 static int
 eth_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -320,7 +320,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
 		internals->tx_queue[i].sockfd = -1;
 	}
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
@@ -331,7 +331,7 @@ eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 	const struct rte_eth_rxmode *rxmode = &dev_conf->rxmode;
 	struct pmd_internals *internals = dev->data->dev_private;
 
-	internals->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	internals->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 	return 0;
 }
 
@@ -346,9 +346,9 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_queues = (uint16_t)internals->nb_queues;
 	dev_info->max_tx_queues = (uint16_t)internals->nb_queues;
 	dev_info->min_rx_bufsize = 0;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_VLAN_INSERT;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	return 0;
 }
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index b362ccdcd38c..e156246f24df 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -163,10 +163,10 @@ static const char * const valid_arguments[] = {
 };
 
 static const struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_AUTONEG
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_AUTONEG
 };
 
 /* List which tracks PMDs to facilitate sharing UMEMs across them. */
@@ -652,7 +652,7 @@ eth_af_xdp_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 static int
 eth_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -661,7 +661,7 @@ eth_dev_start(struct rte_eth_dev *dev)
 static int
 eth_dev_stop(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index 377299b14c7a..b618cba3f023 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -736,14 +736,14 @@ eth_ark_dev_info_get(struct rte_eth_dev *dev,
 		.nb_align = ARK_TX_MIN_QUEUE}; /* power of 2 */
 
 	/* ARK PMD supports all line rates, how do we indicate that here ?? */
-	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
-				ETH_LINK_SPEED_10G |
-				ETH_LINK_SPEED_25G |
-				ETH_LINK_SPEED_40G |
-				ETH_LINK_SPEED_50G |
-				ETH_LINK_SPEED_100G);
-
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_TIMESTAMP;
+	dev_info->speed_capa = (RTE_ETH_LINK_SPEED_1G |
+				RTE_ETH_LINK_SPEED_10G |
+				RTE_ETH_LINK_SPEED_25G |
+				RTE_ETH_LINK_SPEED_40G |
+				RTE_ETH_LINK_SPEED_50G |
+				RTE_ETH_LINK_SPEED_100G);
+
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	return 0;
 }
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 5a198f53fce7..f7bfac796c07 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -154,20 +154,20 @@ static struct rte_pci_driver rte_atl_pmd = {
 	.remove = eth_atl_pci_remove,
 };
 
-#define ATL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP \
-			| DEV_RX_OFFLOAD_IPV4_CKSUM \
-			| DEV_RX_OFFLOAD_UDP_CKSUM \
-			| DEV_RX_OFFLOAD_TCP_CKSUM \
-			| DEV_RX_OFFLOAD_MACSEC_STRIP \
-			| DEV_RX_OFFLOAD_VLAN_FILTER)
-
-#define ATL_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT \
-			| DEV_TX_OFFLOAD_IPV4_CKSUM \
-			| DEV_TX_OFFLOAD_UDP_CKSUM \
-			| DEV_TX_OFFLOAD_TCP_CKSUM \
-			| DEV_TX_OFFLOAD_TCP_TSO \
-			| DEV_TX_OFFLOAD_MACSEC_INSERT \
-			| DEV_TX_OFFLOAD_MULTI_SEGS)
+#define ATL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP \
+			| RTE_ETH_RX_OFFLOAD_IPV4_CKSUM \
+			| RTE_ETH_RX_OFFLOAD_UDP_CKSUM \
+			| RTE_ETH_RX_OFFLOAD_TCP_CKSUM \
+			| RTE_ETH_RX_OFFLOAD_MACSEC_STRIP \
+			| RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+
+#define ATL_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT \
+			| RTE_ETH_TX_OFFLOAD_IPV4_CKSUM \
+			| RTE_ETH_TX_OFFLOAD_UDP_CKSUM \
+			| RTE_ETH_TX_OFFLOAD_TCP_CKSUM \
+			| RTE_ETH_TX_OFFLOAD_TCP_TSO \
+			| RTE_ETH_TX_OFFLOAD_MACSEC_INSERT \
+			| RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define SFP_EEPROM_SIZE 0x100
 
@@ -488,7 +488,7 @@ atl_dev_start(struct rte_eth_dev *dev)
 	/* set adapter started */
 	hw->adapter_stopped = 0;
 
-	if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(ERR,
 		"Invalid link_speeds for port %u, fix speed not supported",
 				dev->data->port_id);
@@ -655,18 +655,18 @@ atl_dev_set_link_up(struct rte_eth_dev *dev)
 	uint32_t link_speeds = dev->data->dev_conf.link_speeds;
 	uint32_t speed_mask = 0;
 
-	if (link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		speed_mask = hw->aq_nic_cfg->link_speed_msk;
 	} else {
-		if (link_speeds & ETH_LINK_SPEED_10G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 			speed_mask |= AQ_NIC_RATE_10G;
-		if (link_speeds & ETH_LINK_SPEED_5G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_5G)
 			speed_mask |= AQ_NIC_RATE_5G;
-		if (link_speeds & ETH_LINK_SPEED_1G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed_mask |= AQ_NIC_RATE_1G;
-		if (link_speeds & ETH_LINK_SPEED_2_5G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_2_5G)
 			speed_mask |=  AQ_NIC_RATE_2G5;
-		if (link_speeds & ETH_LINK_SPEED_100M)
+		if (link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed_mask |= AQ_NIC_RATE_100M;
 	}
 
@@ -1127,10 +1127,10 @@ atl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->reta_size = HW_ATL_B0_RSS_REDIRECTION_MAX;
 	dev_info->flow_type_rss_offloads = ATL_RSS_OFFLOAD_ALL;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
-	dev_info->speed_capa |= ETH_LINK_SPEED_100M;
-	dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
-	dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
 
 	return 0;
 }
@@ -1175,10 +1175,10 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
 	u32 fc = AQ_NIC_FC_OFF;
 	int err = 0;
 
-	link.link_status = ETH_LINK_DOWN;
+	link.link_status = RTE_ETH_LINK_DOWN;
 	link.link_speed = 0;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_autoneg = hw->is_autoneg ? ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_autoneg = hw->is_autoneg ? RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
 	memset(&old, 0, sizeof(old));
 
 	/* load old link status */
@@ -1198,8 +1198,8 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
 		return 0;
 	}
 
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_speed = hw->aq_link_status.mbps;
 
 	rte_eth_linkstatus_set(dev, &link);
@@ -1333,7 +1333,7 @@ atl_dev_link_status_print(struct rte_eth_dev *dev)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned int)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -1532,13 +1532,13 @@ atl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	hw->aq_fw_ops->get_flow_control(hw, &fc);
 
 	if (fc == AQ_NIC_FC_OFF)
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	else if ((fc & AQ_NIC_FC_RX) && (fc & AQ_NIC_FC_TX))
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (fc & AQ_NIC_FC_RX)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (fc & AQ_NIC_FC_TX)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 
 	return 0;
 }
@@ -1553,13 +1553,13 @@ atl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	if (hw->aq_fw_ops->set_flow_control == NULL)
 		return -ENOTSUP;
 
-	if (fc_conf->mode == RTE_FC_NONE)
+	if (fc_conf->mode == RTE_ETH_FC_NONE)
 		hw->aq_nic_cfg->flow_control = AQ_NIC_FC_OFF;
-	else if (fc_conf->mode == RTE_FC_RX_PAUSE)
+	else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
 		hw->aq_nic_cfg->flow_control = AQ_NIC_FC_RX;
-	else if (fc_conf->mode == RTE_FC_TX_PAUSE)
+	else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
 		hw->aq_nic_cfg->flow_control = AQ_NIC_FC_TX;
-	else if (fc_conf->mode == RTE_FC_FULL)
+	else if (fc_conf->mode == RTE_ETH_FC_FULL)
 		hw->aq_nic_cfg->flow_control = (AQ_NIC_FC_RX | AQ_NIC_FC_TX);
 
 	if (old_flow_control != hw->aq_nic_cfg->flow_control)
@@ -1727,14 +1727,14 @@ atl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	PMD_INIT_FUNC_TRACE();
 
-	ret = atl_enable_vlan_filter(dev, mask & ETH_VLAN_FILTER_MASK);
+	ret = atl_enable_vlan_filter(dev, mask & RTE_ETH_VLAN_FILTER_MASK);
 
-	cfg->vlan_strip = !!(mask & ETH_VLAN_STRIP_MASK);
+	cfg->vlan_strip = !!(mask & RTE_ETH_VLAN_STRIP_MASK);
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++)
 		hw_atl_rpo_rx_desc_vlan_stripping_set(hw, cfg->vlan_strip, i);
 
-	if (mask & ETH_VLAN_EXTEND_MASK)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK)
 		ret = -ENOTSUP;
 
 	return ret;
@@ -1750,10 +1750,10 @@ atl_vlan_tpid_set(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 	PMD_INIT_FUNC_TRACE();
 
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
+	case RTE_ETH_VLAN_TYPE_INNER:
 		hw_atl_rpf_vlan_inner_etht_set(hw, tpid);
 		break;
-	case ETH_VLAN_TYPE_OUTER:
+	case RTE_ETH_VLAN_TYPE_OUTER:
 		hw_atl_rpf_vlan_outer_etht_set(hw, tpid);
 		break;
 	default:
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index fbc9917ed30d..ed9ef9f0cc52 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -11,15 +11,15 @@
 #include "hw_atl/hw_atl_utils.h"
 
 #define ATL_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define ATL_DEV_PRIVATE_TO_HW(adapter) \
 	(&((struct atl_adapter *)adapter)->hw)
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 0d3460383a50..2ff426892df2 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -145,10 +145,10 @@ atl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
 
 	rxq->l3_csum_enabled = dev->data->dev_conf.rxmode.offloads &
-		DEV_RX_OFFLOAD_IPV4_CKSUM;
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 	rxq->l4_csum_enabled = dev->data->dev_conf.rxmode.offloads &
-		(DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM);
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		(RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		PMD_DRV_LOG(ERR, "PMD does not support KEEP_CRC offload");
 
 	/* allocate memory for the software ring */
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 932ec90265cf..5d94db02c506 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1998,9 +1998,9 @@ avp_dev_configure(struct rte_eth_dev *eth_dev)
 	/* Setup required number of queues */
 	_avp_set_queue_counts(eth_dev);
 
-	mask = (ETH_VLAN_STRIP_MASK |
-		ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK);
+	mask = (RTE_ETH_VLAN_STRIP_MASK |
+		RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK);
 	ret = avp_vlan_offload_set(eth_dev, mask);
 	if (ret < 0) {
 		PMD_DRV_LOG(ERR, "VLAN offload set failed by host, ret=%d\n",
@@ -2140,8 +2140,8 @@ avp_dev_link_update(struct rte_eth_dev *eth_dev,
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	struct rte_eth_link *link = &eth_dev->data->dev_link;
 
-	link->link_speed = ETH_SPEED_NUM_10G;
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_speed = RTE_ETH_SPEED_NUM_10G;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link->link_status = !!(avp->flags & AVP_F_LINKUP);
 
 	return -1;
@@ -2191,8 +2191,8 @@ avp_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->max_rx_pktlen = avp->max_rx_pkt_len;
 	dev_info->max_mac_addrs = AVP_MAX_MAC_ADDRS;
 	if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
-		dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
-		dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+		dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	}
 
 	return 0;
@@ -2205,9 +2205,9 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 	struct rte_eth_conf *dev_conf = &eth_dev->data->dev_conf;
 	uint64_t offloads = dev_conf->rxmode.offloads;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
-			if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 				avp->features |= RTE_AVP_FEATURE_VLAN_OFFLOAD;
 			else
 				avp->features &= ~RTE_AVP_FEATURE_VLAN_OFFLOAD;
@@ -2216,13 +2216,13 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 		}
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			PMD_DRV_LOG(ERR, "VLAN filter offload not supported\n");
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			PMD_DRV_LOG(ERR, "VLAN extend offload not supported\n");
 	}
 
diff --git a/drivers/net/axgbe/axgbe_dev.c b/drivers/net/axgbe/axgbe_dev.c
index ca32ad641873..3aaa2193272f 100644
--- a/drivers/net/axgbe/axgbe_dev.c
+++ b/drivers/net/axgbe/axgbe_dev.c
@@ -840,11 +840,11 @@ static void axgbe_rss_options(struct axgbe_port *pdata)
 	pdata->rss_hf = rss_conf->rss_hf;
 	rss_hf = rss_conf->rss_hf;
 
-	if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+	if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
-	if (rss_hf & (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+	if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
-	if (rss_hf & (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+	if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
 }
 
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 0250256830ac..dab0c6775d1d 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -326,7 +326,7 @@ axgbe_dev_configure(struct rte_eth_dev *dev)
 	struct axgbe_port *pdata =  dev->data->dev_private;
 	/* Checksum offload to hardware */
 	pdata->rx_csum_enable = dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_CHECKSUM;
+				RTE_ETH_RX_OFFLOAD_CHECKSUM;
 	return 0;
 }
 
@@ -335,9 +335,9 @@ axgbe_dev_rx_mq_config(struct rte_eth_dev *dev)
 {
 	struct axgbe_port *pdata = dev->data->dev_private;
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 		pdata->rss_enable = 1;
-	else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+	else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
 		pdata->rss_enable = 0;
 	else
 		return  -1;
@@ -385,7 +385,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
 	rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
 
 	max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 				max_pkt_len > pdata->rx_buf_size)
 		dev_data->scattered_rx = 1;
 
@@ -521,8 +521,8 @@ axgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & (1ULL << shift)) == 0)
 			continue;
 		pdata->rss_table[i] = reta_conf[idx].reta[shift];
@@ -552,8 +552,8 @@ axgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & (1ULL << shift)) == 0)
 			continue;
 		reta_conf[idx].reta[shift] = pdata->rss_table[i];
@@ -590,13 +590,13 @@ axgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 
 	pdata->rss_hf = rss_conf->rss_hf & AXGBE_RSS_OFFLOAD;
 
-	if (pdata->rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+	if (pdata->rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
 	if (pdata->rss_hf &
-	    (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+	    (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
 	if (pdata->rss_hf &
-	    (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+	    (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
 
 	/* Set the RSS options */
@@ -765,7 +765,7 @@ axgbe_dev_link_update(struct rte_eth_dev *dev,
 	link.link_status = pdata->phy_link;
 	link.link_speed = pdata->phy_speed;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			      ETH_LINK_SPEED_FIXED);
+			      RTE_ETH_LINK_SPEED_FIXED);
 	ret = rte_eth_linkstatus_set(dev, &link);
 	if (ret == -1)
 		PMD_DRV_LOG(ERR, "No change in link status\n");
@@ -1208,24 +1208,24 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_pktlen = AXGBE_RX_MAX_BUF_SIZE;
 	dev_info->max_mac_addrs = pdata->hw_feat.addn_mac + 1;
 	dev_info->max_hash_mac_addrs = pdata->hw_feat.hash_table_size;
-	dev_info->speed_capa =  ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM  |
-		DEV_RX_OFFLOAD_TCP_CKSUM  |
-		DEV_RX_OFFLOAD_SCATTER	  |
-		DEV_RX_OFFLOAD_KEEP_CRC;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_SCATTER	  |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if (pdata->hw_feat.rss) {
 		dev_info->flow_type_rss_offloads = AXGBE_RSS_OFFLOAD;
@@ -1262,13 +1262,13 @@ axgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	fc.autoneg = pdata->pause_autoneg;
 
 	if (pdata->rx_pause && pdata->tx_pause)
-		fc.mode = RTE_FC_FULL;
+		fc.mode = RTE_ETH_FC_FULL;
 	else if (pdata->rx_pause)
-		fc.mode = RTE_FC_RX_PAUSE;
+		fc.mode = RTE_ETH_FC_RX_PAUSE;
 	else if (pdata->tx_pause)
-		fc.mode = RTE_FC_TX_PAUSE;
+		fc.mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc.mode = RTE_FC_NONE;
+		fc.mode = RTE_ETH_FC_NONE;
 
 	fc_conf->high_water =  (1024 + (fc.low_water[0] << 9)) / 1024;
 	fc_conf->low_water =  (1024 + (fc.high_water[0] << 9)) / 1024;
@@ -1298,13 +1298,13 @@ axgbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	AXGMAC_IOWRITE(pdata, reg, reg_val);
 	fc.mode = fc_conf->mode;
 
-	if (fc.mode == RTE_FC_FULL) {
+	if (fc.mode == RTE_ETH_FC_FULL) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 1;
-	} else if (fc.mode == RTE_FC_RX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
 		pdata->tx_pause = 0;
 		pdata->rx_pause = 1;
-	} else if (fc.mode == RTE_FC_TX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 0;
 	} else {
@@ -1386,15 +1386,15 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
 
 	fc.mode = pfc_conf->fc.mode;
 
-	if (fc.mode == RTE_FC_FULL) {
+	if (fc.mode == RTE_ETH_FC_FULL) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 1;
 		AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
-	} else if (fc.mode == RTE_FC_RX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
 		pdata->tx_pause = 0;
 		pdata->rx_pause = 1;
 		AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
-	} else if (fc.mode == RTE_FC_TX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 0;
 		AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 0);
@@ -1830,8 +1830,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 	PMD_DRV_LOG(DEBUG, "EDVLP: qinq = 0x%x\n", qinq);
 
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
-		PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_INNER\n");
+	case RTE_ETH_VLAN_TYPE_INNER:
+		PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_INNER\n");
 		if (qinq) {
 			if (tpid != 0x8100 && tpid != 0x88a8)
 				PMD_DRV_LOG(ERR,
@@ -1848,8 +1848,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 				    "Inner type not supported in single tag\n");
 		}
 		break;
-	case ETH_VLAN_TYPE_OUTER:
-		PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_OUTER\n");
+	case RTE_ETH_VLAN_TYPE_OUTER:
+		PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_OUTER\n");
 		if (qinq) {
 			PMD_DRV_LOG(DEBUG, "double tagging is enabled\n");
 			/*Enable outer VLAN tag*/
@@ -1866,11 +1866,11 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 					    "tag supported 0x8100/0x88A8\n");
 		}
 		break;
-	case ETH_VLAN_TYPE_MAX:
-		PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_MAX\n");
+	case RTE_ETH_VLAN_TYPE_MAX:
+		PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_MAX\n");
 		break;
-	case ETH_VLAN_TYPE_UNKNOWN:
-		PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_UNKNOWN\n");
+	case RTE_ETH_VLAN_TYPE_UNKNOWN:
+		PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_UNKNOWN\n");
 		break;
 	}
 	return 0;
@@ -1904,8 +1904,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, CSVL, 0);
 	AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, VLTI, 1);
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 			PMD_DRV_LOG(DEBUG, "Strip ON for device = %s\n",
 				    pdata->eth_dev->device->name);
 			pdata->hw_if.enable_rx_vlan_stripping(pdata);
@@ -1915,8 +1915,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			pdata->hw_if.disable_rx_vlan_stripping(pdata);
 		}
 	}
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 			PMD_DRV_LOG(DEBUG, "Filter ON for device = %s\n",
 				    pdata->eth_dev->device->name);
 			pdata->hw_if.enable_rx_vlan_filtering(pdata);
@@ -1926,14 +1926,14 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			pdata->hw_if.disable_rx_vlan_filtering(pdata);
 		}
 	}
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
 			PMD_DRV_LOG(DEBUG, "enabling vlan extended mode\n");
 			axgbe_vlan_extend_enable(pdata);
 			/* Set global registers with default ethertype*/
-			axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+			axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					    RTE_ETHER_TYPE_VLAN);
-			axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+			axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
 					    RTE_ETHER_TYPE_VLAN);
 		} else {
 			PMD_DRV_LOG(DEBUG, "disabling vlan extended mode\n");
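The flow-control hunks in this file all encode the same two-bit state; the mapping between rx/tx pause and the renamed enum fits in one compact statement (hypothetical helper):

#include <stdbool.h>
#include <rte_ethdev.h>

static enum rte_eth_fc_mode
example_fc_mode(bool rx_pause, bool tx_pause)
{
	if (rx_pause && tx_pause)
		return RTE_ETH_FC_FULL;
	if (rx_pause)
		return RTE_ETH_FC_RX_PAUSE;
	if (tx_pause)
		return RTE_ETH_FC_TX_PAUSE;
	return RTE_ETH_FC_NONE;
}

The set path inverts this mapping, as in axgbe_flow_ctrl_set() above.
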
diff --git a/drivers/net/axgbe/axgbe_ethdev.h b/drivers/net/axgbe/axgbe_ethdev.h
index a6226729fe4d..0a3e1c59df1a 100644
--- a/drivers/net/axgbe/axgbe_ethdev.h
+++ b/drivers/net/axgbe/axgbe_ethdev.h
@@ -97,12 +97,12 @@
 
 /* Receive Side Scaling */
 #define AXGBE_RSS_OFFLOAD  ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define AXGBE_RSS_HASH_KEY_SIZE		40
 #define AXGBE_RSS_MAX_TABLE_SIZE	256
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 4f98e695ae74..59fa9175aded 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -597,7 +597,7 @@ static void axgbe_an73_state_machine(struct axgbe_port *pdata)
 		pdata->an_int = 0;
 		axgbe_an73_clear_interrupts(pdata);
 		pdata->eth_dev->data->dev_link.link_status =
-			ETH_LINK_DOWN;
+			RTE_ETH_LINK_DOWN;
 	} else if (pdata->an_state == AXGBE_AN_ERROR) {
 		PMD_DRV_LOG(ERR, "error during auto-negotiation, state=%u\n",
 			    cur_state);
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index c8618d2d6daa..aa2c27ebaa49 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -75,7 +75,7 @@ int axgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		(DMA_CH_INC * rxq->queue_id));
 	rxq->dma_tail_reg = (volatile uint32_t *)((uint8_t *)rxq->dma_regs +
 						  DMA_CH_RDTR_LO);
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -286,7 +286,7 @@ axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
-				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+				if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
 				else
 					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
@@ -430,7 +430,7 @@ uint16_t eth_axgbe_recv_scattered_pkts(void *rx_queue,
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
-				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+				if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
 				else
 					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
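One idiom recurs across the Rx queue setup hunks: whether received frames keep the Ethernet FCS is driven by a single offload bit, which the PMD mirrors into a per-queue CRC length. A self-contained sketch, with example_rxq as a stand-in for the driver's queue struct:

#include <ethdev_driver.h>

struct example_rxq {
	uint8_t crc_len;	/* 0 or RTE_ETHER_CRC_LEN */
};

static void
example_set_crc_len(struct rte_eth_dev *dev, struct example_rxq *rxq)
{
	rxq->crc_len = (dev->data->dev_conf.rxmode.offloads &
			RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
			RTE_ETHER_CRC_LEN : 0;
}
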
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 567ea2382864..78fc717ec44a 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -94,14 +94,14 @@ bnx2x_link_update(struct rte_eth_dev *dev)
 	link.link_speed = sc->link_vars.line_speed;
 	switch (sc->link_vars.duplex) {
 		case DUPLEX_FULL:
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			break;
 		case DUPLEX_HALF:
-			link.link_duplex = ETH_LINK_HALF_DUPLEX;
+			link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 			break;
 	}
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+		 RTE_ETH_LINK_SPEED_FIXED);
 	link.link_status = sc->link_vars.link_up;
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -408,7 +408,7 @@ bnx2xvf_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_comple
 	if (sc->old_bulletin.valid_bitmap & (1 << CHANNEL_DOWN)) {
 		PMD_DRV_LOG(ERR, sc, "PF indicated channel is down."
 				"VF device is no longer operational");
-		dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	}
 
 	return ret;
@@ -534,7 +534,7 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_rx_bufsize = BNX2X_MIN_RX_BUF_SIZE;
 	dev_info->max_rx_pktlen  = BNX2X_MAX_RX_PKT_LEN;
 	dev_info->max_mac_addrs  = BNX2X_MAX_MAC_ADDRS;
-	dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G;
 
 	dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
 	dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
@@ -669,7 +669,7 @@ bnx2x_common_dev_init(struct rte_eth_dev *eth_dev, int is_vf)
 	bnx2x_load_firmware(sc);
 	assert(sc->firmware);
 
-	if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		sc->udp_rss = 1;
 
 	sc->rx_budget = BNX2X_RX_BUDGET;
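For reference, a complete link_update skeleton using only the renamed constants; the speed and duplex values here are placeholders for whatever the hardware reports:

#include <ethdev_driver.h>

static int
example_link_update(struct rte_eth_dev *dev)
{
	struct rte_eth_link link = {
		.link_speed = RTE_ETH_SPEED_NUM_10G,	/* placeholder */
		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
		.link_status = RTE_ETH_LINK_UP,
		.link_autoneg = !(dev->data->dev_conf.link_speeds &
				  RTE_ETH_LINK_SPEED_FIXED),
	};

	/* Returns 0 if the status changed, -1 if it was unchanged. */
	return rte_eth_linkstatus_set(dev, &link);
}
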
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 6743cf92b0e6..39bd739c7bc9 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -569,37 +569,37 @@ struct bnxt_rep_info {
 #define BNXT_FW_STATUS_SHUTDOWN		0x100000
 
 #define BNXT_ETH_RSS_SUPPORT (	\
-	ETH_RSS_IPV4 |		\
-	ETH_RSS_NONFRAG_IPV4_TCP |	\
-	ETH_RSS_NONFRAG_IPV4_UDP |	\
-	ETH_RSS_IPV6 |		\
-	ETH_RSS_NONFRAG_IPV6_TCP |	\
-	ETH_RSS_NONFRAG_IPV6_UDP |	\
-	ETH_RSS_LEVEL_MASK)
-
-#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_CKSUM | \
-				     DEV_TX_OFFLOAD_UDP_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_TSO | \
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GRE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_QINQ_INSERT | \
-				     DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
-				     DEV_RX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_UDP_CKSUM | \
-				     DEV_RX_OFFLOAD_TCP_CKSUM | \
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
-				     DEV_RX_OFFLOAD_KEEP_CRC | \
-				     DEV_RX_OFFLOAD_VLAN_EXTEND | \
-				     DEV_RX_OFFLOAD_TCP_LRO | \
-				     DEV_RX_OFFLOAD_SCATTER | \
-				     DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RSS_IPV4 |		\
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP |	\
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP |	\
+	RTE_ETH_RSS_IPV6 |		\
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP |	\
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP |	\
+	RTE_ETH_RSS_LEVEL_MASK)
+
+#define BNXT_DEV_TX_OFFLOAD_SUPPORT (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+				     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+				     RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define BNXT_DEV_RX_OFFLOAD_SUPPORT (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+				     RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_KEEP_CRC | \
+				     RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+				     RTE_ETH_RX_OFFLOAD_TCP_LRO | \
+				     RTE_ETH_RX_OFFLOAD_SCATTER | \
+				     RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
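These capability masks feed straight into the dev_infos_get callback; trimmed to the parts touched by the rename, the reporting looks roughly like this (simplified sketch, assuming the bnxt definitions above are in scope):

#include <ethdev_driver.h>

static int
example_dev_infos_get(struct rte_eth_dev *dev,
		      struct rte_eth_dev_info *dev_info)
{
	RTE_SET_USED(dev);
	dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
	dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT;
	dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
	return 0;
}
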
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index f385723a9f65..2791a5c62db1 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -426,7 +426,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 		goto err_out;
 
 	/* Alloc RSS context only if RSS mode is enabled */
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
 		int j, nr_ctxs = bnxt_rss_ctxts(bp);
 
 		/* RSS table size in Thor is 512.
@@ -458,7 +458,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 	 * setting is not available at this time, it will not be
 	 * configured correctly in the CFA.
 	 */
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 		vnic->vlan_strip = true;
 	else
 		vnic->vlan_strip = false;
@@ -493,7 +493,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 	bnxt_hwrm_vnic_plcmode_cfg(bp, vnic);
 
 	rc = bnxt_hwrm_vnic_tpa_cfg(bp, vnic,
-				    (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) ?
+				    (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) ?
 				    true : false);
 	if (rc)
 		goto err_out;
@@ -923,35 +923,35 @@ uint32_t bnxt_get_speed_capabilities(struct bnxt *bp)
 		link_speed = bp->link_info->support_pam4_speeds;
 
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB)
-		speed_capa |= ETH_LINK_SPEED_100M;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100MBHD)
-		speed_capa |= ETH_LINK_SPEED_100M_HD;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_1GB)
-		speed_capa |= ETH_LINK_SPEED_1G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_2_5GB)
-		speed_capa |= ETH_LINK_SPEED_2_5G;
+		speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_10GB)
-		speed_capa |= ETH_LINK_SPEED_10G;
+		speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_20GB)
-		speed_capa |= ETH_LINK_SPEED_20G;
+		speed_capa |= RTE_ETH_LINK_SPEED_20G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_25GB)
-		speed_capa |= ETH_LINK_SPEED_25G;
+		speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_40GB)
-		speed_capa |= ETH_LINK_SPEED_40G;
+		speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_50GB)
-		speed_capa |= ETH_LINK_SPEED_50G;
+		speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100GB)
-		speed_capa |= ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_50G)
-		speed_capa |= ETH_LINK_SPEED_50G;
+		speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_100G)
-		speed_capa |= ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_200G)
-		speed_capa |= ETH_LINK_SPEED_200G;
+		speed_capa |= RTE_ETH_LINK_SPEED_200G;
 
 	if (bp->link_info->auto_mode ==
 	    HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE)
-		speed_capa |= ETH_LINK_SPEED_FIXED;
+		speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
 
 	return speed_capa;
 }
@@ -995,14 +995,14 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 
 	dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
 	if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	if (bp->vnic_cap_flags & BNXT_VNIC_CAP_VLAN_RX_STRIP)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_STRIP;
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT |
 				    dev_info->tx_queue_offload_capa;
 	if (bp->fw_cap & BNXT_FW_CAP_VLAN_TX_INSERT)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_VLAN_INSERT;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
 
 	dev_info->speed_capa = bnxt_get_speed_capabilities(bp);
@@ -1049,8 +1049,8 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	 */
 
 	/* VMDq resources */
-	vpool = 64; /* ETH_64_POOLS */
-	vrxq = 128; /* ETH_VMDQ_DCB_NUM_QUEUES */
+	vpool = 64; /* RTE_ETH_64_POOLS */
+	vrxq = 128; /* RTE_ETH_VMDQ_DCB_NUM_QUEUES */
 	for (i = 0; i < 4; vpool >>= 1, i++) {
 		if (max_vnics > vpool) {
 			for (j = 0; j < 5; vrxq >>= 1, j++) {
@@ -1145,15 +1145,15 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
 	    (uint32_t)(eth_dev->data->nb_rx_queues) > bp->max_ring_grps)
 		goto resource_error;
 
-	if (!(eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) &&
+	if (!(eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) &&
 	    bp->max_vnics < eth_dev->data->nb_rx_queues)
 		goto resource_error;
 
 	bp->rx_cp_nr_rings = bp->rx_nr_rings;
 	bp->tx_cp_nr_rings = bp->tx_nr_rings;
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rx_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
 
 	bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
@@ -1182,7 +1182,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
 		PMD_DRV_LOG(INFO, "Port %d Link Up - speed %u Mbps - %s\n",
 			eth_dev->data->port_id,
 			(uint32_t)link->link_speed,
-			(link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+			(link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 			("full-duplex") : ("half-duplex\n"));
 	else
 		PMD_DRV_LOG(INFO, "Port %d Link Down\n",
@@ -1199,10 +1199,10 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
 	uint16_t buf_size;
 	int i;
 
-	if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		return 1;
 
-	if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO)
+	if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		return 1;
 
 	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1247,15 +1247,15 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
 	 * a limited subset have been enabled.
 	 */
 	if (eth_dev->data->dev_conf.rxmode.offloads &
-		~(DEV_RX_OFFLOAD_VLAN_STRIP |
-		  DEV_RX_OFFLOAD_KEEP_CRC |
-		  DEV_RX_OFFLOAD_IPV4_CKSUM |
-		  DEV_RX_OFFLOAD_UDP_CKSUM |
-		  DEV_RX_OFFLOAD_TCP_CKSUM |
-		  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		  DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-		  DEV_RX_OFFLOAD_RSS_HASH |
-		  DEV_RX_OFFLOAD_VLAN_FILTER))
+		~(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		  RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		  RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_RSS_HASH |
+		  RTE_ETH_RX_OFFLOAD_VLAN_FILTER))
 		goto use_scalar_rx;
 
 #if defined(RTE_ARCH_X86) && defined(CC_AVX2_SUPPORT)
@@ -1307,7 +1307,7 @@ bnxt_transmit_function(struct rte_eth_dev *eth_dev)
 	 * or tx offloads.
 	 */
 	if (eth_dev->data->scattered_rx ||
-	    (offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) ||
+	    (offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) ||
 	    BNXT_TRUFLOW_EN(bp))
 		goto use_scalar_tx;
 
@@ -1608,10 +1608,10 @@ static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
 
 	bnxt_link_update_op(eth_dev, 1);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
-		vlan_mask |= ETH_VLAN_FILTER_MASK;
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-		vlan_mask |= ETH_VLAN_STRIP_MASK;
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+		vlan_mask |= RTE_ETH_VLAN_FILTER_MASK;
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+		vlan_mask |= RTE_ETH_VLAN_STRIP_MASK;
 	rc = bnxt_vlan_offload_set_op(eth_dev, vlan_mask);
 	if (rc)
 		goto error;
@@ -1833,8 +1833,8 @@ int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete)
 		/* Retrieve link info from hardware */
 		rc = bnxt_get_hwrm_link_config(bp, &new);
 		if (rc) {
-			new.link_speed = ETH_LINK_SPEED_100M;
-			new.link_duplex = ETH_LINK_FULL_DUPLEX;
+			new.link_speed = RTE_ETH_LINK_SPEED_100M;
+			new.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR,
 				"Failed to retrieve link rc = 0x%x!\n", rc);
 			goto out;
@@ -2028,7 +2028,7 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
 	if (!vnic->rss_table)
 		return -EINVAL;
 
-	if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+	if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 		return -EINVAL;
 
 	if (reta_size != tbl_size) {
@@ -2041,8 +2041,8 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
 	for (i = 0; i < reta_size; i++) {
 		struct bnxt_rx_queue *rxq;
 
-		idx = i / RTE_RETA_GROUP_SIZE;
-		sft = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		sft = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (!(reta_conf[idx].mask & (1ULL << sft)))
 			continue;
@@ -2095,8 +2095,8 @@ static int bnxt_reta_query_op(struct rte_eth_dev *eth_dev,
 	}
 
 	for (idx = 0, i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		sft = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		sft = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (reta_conf[idx].mask & (1ULL << sft)) {
 			uint16_t qid;
@@ -2134,7 +2134,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
 	 * If RSS enablement were different than dev_configure,
 	 * then return -EINVAL
 	 */
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (!rss_conf->rss_hf)
 			PMD_DRV_LOG(ERR, "Hash type NONE\n");
 	} else {
@@ -2152,7 +2152,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
 	vnic->hash_type = bnxt_rte_to_hwrm_hash_types(rss_conf->rss_hf);
 	vnic->hash_mode =
 		bnxt_rte_to_hwrm_hash_level(bp, rss_conf->rss_hf,
-					    ETH_RSS_LEVEL(rss_conf->rss_hf));
+					    RTE_ETH_RSS_LEVEL(rss_conf->rss_hf));
 
 	/*
 	 * If hashkey is not specified, use the previously configured
@@ -2197,30 +2197,30 @@ static int bnxt_rss_hash_conf_get_op(struct rte_eth_dev *eth_dev,
 		hash_types = vnic->hash_type;
 		rss_conf->rss_hf = 0;
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4) {
-			rss_conf->rss_hf |= ETH_RSS_IPV4;
+			rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
 			hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6) {
-			rss_conf->rss_hf |= ETH_RSS_IPV6;
+			rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
 			hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
 		}
@@ -2260,17 +2260,17 @@ static int bnxt_flow_ctrl_get_op(struct rte_eth_dev *dev,
 		fc_conf->autoneg = 1;
 	switch (bp->link_info->pause) {
 	case 0:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case (HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX |
 			HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX):
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	}
 	return 0;
@@ -2293,11 +2293,11 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		bp->link_info->auto_pause = 0;
 		bp->link_info->force_pause = 0;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		if (fc_conf->autoneg) {
 			bp->link_info->auto_pause =
 					HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_RX;
@@ -2308,7 +2308,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
 					HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_RX;
 		}
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		if (fc_conf->autoneg) {
 			bp->link_info->auto_pause =
 					HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX;
@@ -2319,7 +2319,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
 					HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_TX;
 		}
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		if (fc_conf->autoneg) {
 			bp->link_info->auto_pause =
 					HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX |
@@ -2350,7 +2350,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
 		return rc;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (bp->vxlan_port_cnt) {
 			PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
 				udp_tunnel->udp_port);
@@ -2364,7 +2364,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
 		tunnel_type =
 			HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_VXLAN;
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (bp->geneve_port_cnt) {
 			PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
 				udp_tunnel->udp_port);
@@ -2413,7 +2413,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
 		return rc;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (!bp->vxlan_port_cnt) {
 			PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
 			return -EINVAL;
@@ -2430,7 +2430,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
 			HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_VXLAN;
 		port = bp->vxlan_fw_dst_port_id;
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (!bp->geneve_port_cnt) {
 			PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
 			return -EINVAL;
@@ -2608,7 +2608,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
 	int rc;
 
 	vnic = BNXT_GET_DEFAULT_VNIC(bp);
-	if (!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)) {
+	if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
 		/* Remove any VLAN filters programmed */
 		for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
 			bnxt_del_vlan_filter(bp, i);
@@ -2628,7 +2628,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
 		bnxt_add_vlan_filter(bp, 0);
 	}
 	PMD_DRV_LOG(DEBUG, "VLAN Filtering: %d\n",
-		    !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER));
+		    !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER));
 
 	return 0;
 }
@@ -2641,7 +2641,7 @@ static int bnxt_free_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 
 	/* Destroy vnic filters and vnic */
 	if (bp->eth_dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_VLAN_FILTER) {
+	    RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
 			bnxt_del_vlan_filter(bp, i);
 	}
@@ -2680,7 +2680,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
 		return rc;
 
 	if (bp->eth_dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_VLAN_FILTER) {
+	    RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		rc = bnxt_add_vlan_filter(bp, 0);
 		if (rc)
 			return rc;
@@ -2698,7 +2698,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
 		return rc;
 
 	PMD_DRV_LOG(DEBUG, "VLAN Strip Offload: %d\n",
-		    !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP));
+		    !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP));
 
 	return rc;
 }
@@ -2718,22 +2718,22 @@ bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask)
 	if (!dev->data->dev_started)
 		return 0;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* Enable or disable VLAN filtering */
 		rc = bnxt_config_vlan_hw_filter(bp, rx_offloads);
 		if (rc)
 			return rc;
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
 		rc = bnxt_config_vlan_hw_stripping(bp, rx_offloads);
 		if (rc)
 			return rc;
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			PMD_DRV_LOG(DEBUG, "Extend VLAN supported\n");
 		else
 			PMD_DRV_LOG(INFO, "Extend VLAN unsupported\n");
@@ -2748,10 +2748,10 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 {
 	struct bnxt *bp = dev->data->dev_private;
 	int qinq = dev->data->dev_conf.rxmode.offloads &
-		   DEV_RX_OFFLOAD_VLAN_EXTEND;
+		   RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
-	if (vlan_type != ETH_VLAN_TYPE_INNER &&
-	    vlan_type != ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+	    vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
 		PMD_DRV_LOG(ERR,
 			    "Unsupported vlan type.");
 		return -EINVAL;
@@ -2763,7 +2763,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 		return -EINVAL;
 	}
 
-	if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		switch (tpid) {
 		case RTE_ETHER_TYPE_QINQ:
 			bp->outer_tpid_bd =
@@ -2791,7 +2791,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 		}
 		bp->outer_tpid_bd |= tpid;
 		PMD_DRV_LOG(INFO, "outer_tpid_bd = %x\n", bp->outer_tpid_bd);
-	} else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+	} else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
 		PMD_DRV_LOG(ERR,
 			    "Can accelerate only outer vlan in QinQ\n");
 		return -EINVAL;
@@ -2831,7 +2831,7 @@ bnxt_set_default_mac_addr_op(struct rte_eth_dev *dev,
 	bnxt_del_dflt_mac_filter(bp, vnic);
 
 	memcpy(bp->mac_addr, addr, RTE_ETHER_ADDR_LEN);
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		/* This filter will allow only untagged packets */
 		rc = bnxt_add_vlan_filter(bp, 0);
 	} else {
@@ -6556,4 +6556,4 @@ bool is_bnxt_supported(struct rte_eth_dev *dev)
 RTE_LOG_REGISTER_SUFFIX(bnxt_logtype_driver, driver, NOTICE);
 RTE_PMD_REGISTER_PCI(net_bnxt, bnxt_rte_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_bnxt, bnxt_pci_id_map);
-RTE_PMD_REGISTER_KMOD_DEP(net_bnxt, "* igb_uio | uio_pci_generic | vfio-pci");
+
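The reta_update/query loops above share one indexing scheme: the redirection table is presented as 64-entry groups, each guarded by a bitmask. Condensed into a hypothetical helper:

#include <rte_ethdev.h>

static void
example_reta_update(const struct rte_eth_rss_reta_entry64 *reta_conf,
		    uint16_t reta_size, uint16_t *tbl)
{
	uint16_t i, idx, sft;

	for (i = 0; i < reta_size; i++) {
		idx = i / RTE_ETH_RETA_GROUP_SIZE;	/* which group */
		sft = i % RTE_ETH_RETA_GROUP_SIZE;	/* bit in group */
		if (!(reta_conf[idx].mask & (1ULL << sft)))
			continue;	/* entry not selected by caller */
		tbl[i] = reta_conf[idx].reta[sft];
	}
}
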
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index b2ebb5634e3a..ced697a73980 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -978,7 +978,7 @@ static int bnxt_vnic_prep(struct bnxt *bp, struct bnxt_vnic_info *vnic,
 		}
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 		vnic->vlan_strip = true;
 	else
 		vnic->vlan_strip = false;
@@ -1177,7 +1177,7 @@ bnxt_vnic_rss_cfg_update(struct bnxt *bp,
 	}
 
 	/* If RSS types is 0, use a best effort configuration */
-	types = rss->types ? rss->types : ETH_RSS_IPV4;
+	types = rss->types ? rss->types : RTE_ETH_RSS_IPV4;
 
 	hash_type = bnxt_rte_to_hwrm_hash_types(types);
 
@@ -1322,7 +1322,7 @@ bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
 
 		rxq = bp->rx_queues[act_q->index];
 
-		if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) && rxq &&
+		if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) && rxq &&
 		    vnic->fw_vnic_id != INVALID_HW_RING_ID)
 			goto use_vnic;
 
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 181e607d7bf8..82e89b7c8af7 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -628,7 +628,7 @@ int bnxt_hwrm_set_l2_filter(struct bnxt *bp,
 	uint16_t j = dst_id - 1;
 
 	//TODO: Is there a better way to add VLANs to each VNIC in case of VMDQ
-	if ((dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) &&
+	if ((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) &&
 	    conf->pool_map[j].pools & (1UL << j)) {
 		PMD_DRV_LOG(DEBUG,
 			"Add vlan %u to vmdq pool %u\n",
@@ -2979,12 +2979,12 @@ static uint16_t bnxt_parse_eth_link_duplex(uint32_t conf_link_speed)
 {
 	uint8_t hw_link_duplex = HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
 
-	if ((conf_link_speed & ETH_LINK_SPEED_FIXED) == ETH_LINK_SPEED_AUTONEG)
+	if ((conf_link_speed & RTE_ETH_LINK_SPEED_FIXED) == RTE_ETH_LINK_SPEED_AUTONEG)
 		return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
 
 	switch (conf_link_speed) {
-	case ETH_LINK_SPEED_10M_HD:
-	case ETH_LINK_SPEED_100M_HD:
+	case RTE_ETH_LINK_SPEED_10M_HD:
+	case RTE_ETH_LINK_SPEED_100M_HD:
 		/* FALLTHROUGH */
 		return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF;
 	}
@@ -3001,51 +3001,51 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
 {
 	uint16_t eth_link_speed = 0;
 
-	if (conf_link_speed == ETH_LINK_SPEED_AUTONEG)
-		return ETH_LINK_SPEED_AUTONEG;
+	if (conf_link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
+		return RTE_ETH_LINK_SPEED_AUTONEG;
 
-	switch (conf_link_speed & ~ETH_LINK_SPEED_FIXED) {
-	case ETH_LINK_SPEED_100M:
-	case ETH_LINK_SPEED_100M_HD:
+	switch (conf_link_speed & ~RTE_ETH_LINK_SPEED_FIXED) {
+	case RTE_ETH_LINK_SPEED_100M:
+	case RTE_ETH_LINK_SPEED_100M_HD:
 		/* FALLTHROUGH */
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_100MB;
 		break;
-	case ETH_LINK_SPEED_1G:
+	case RTE_ETH_LINK_SPEED_1G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_1GB;
 		break;
-	case ETH_LINK_SPEED_2_5G:
+	case RTE_ETH_LINK_SPEED_2_5G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_2_5GB;
 		break;
-	case ETH_LINK_SPEED_10G:
+	case RTE_ETH_LINK_SPEED_10G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_10GB;
 		break;
-	case ETH_LINK_SPEED_20G:
+	case RTE_ETH_LINK_SPEED_20G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_20GB;
 		break;
-	case ETH_LINK_SPEED_25G:
+	case RTE_ETH_LINK_SPEED_25G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_25GB;
 		break;
-	case ETH_LINK_SPEED_40G:
+	case RTE_ETH_LINK_SPEED_40G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_40GB;
 		break;
-	case ETH_LINK_SPEED_50G:
+	case RTE_ETH_LINK_SPEED_50G:
 		eth_link_speed = pam4_link ?
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_50GB :
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_50GB;
 		break;
-	case ETH_LINK_SPEED_100G:
+	case RTE_ETH_LINK_SPEED_100G:
 		eth_link_speed = pam4_link ?
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_100GB :
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_100GB;
 		break;
-	case ETH_LINK_SPEED_200G:
+	case RTE_ETH_LINK_SPEED_200G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
 		break;
@@ -3058,11 +3058,11 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
 	return eth_link_speed;
 }
 
-#define BNXT_SUPPORTED_SPEEDS (ETH_LINK_SPEED_100M | ETH_LINK_SPEED_100M_HD | \
-		ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G | \
-		ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G | ETH_LINK_SPEED_25G | \
-		ETH_LINK_SPEED_40G | ETH_LINK_SPEED_50G | \
-		ETH_LINK_SPEED_100G | ETH_LINK_SPEED_200G)
+#define BNXT_SUPPORTED_SPEEDS (RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_100M_HD | \
+		RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G | \
+		RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G | RTE_ETH_LINK_SPEED_25G | \
+		RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_50G | \
+		RTE_ETH_LINK_SPEED_100G | RTE_ETH_LINK_SPEED_200G)
 
 static int bnxt_validate_link_speed(struct bnxt *bp)
 {
@@ -3071,13 +3071,13 @@ static int bnxt_validate_link_speed(struct bnxt *bp)
 	uint32_t link_speed_capa;
 	uint32_t one_speed;
 
-	if (link_speed == ETH_LINK_SPEED_AUTONEG)
+	if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
 		return 0;
 
 	link_speed_capa = bnxt_get_speed_capabilities(bp);
 
-	if (link_speed & ETH_LINK_SPEED_FIXED) {
-		one_speed = link_speed & ~ETH_LINK_SPEED_FIXED;
+	if (link_speed & RTE_ETH_LINK_SPEED_FIXED) {
+		one_speed = link_speed & ~RTE_ETH_LINK_SPEED_FIXED;
 
 		if (one_speed & (one_speed - 1)) {
 			PMD_DRV_LOG(ERR,
@@ -3107,71 +3107,71 @@ bnxt_parse_eth_link_speed_mask(struct bnxt *bp, uint32_t link_speed)
 {
 	uint16_t ret = 0;
 
-	if (link_speed == ETH_LINK_SPEED_AUTONEG) {
+	if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG) {
 		if (bp->link_info->support_speeds)
 			return bp->link_info->support_speeds;
 		link_speed = BNXT_SUPPORTED_SPEEDS;
 	}
 
-	if (link_speed & ETH_LINK_SPEED_100M)
+	if (link_speed & RTE_ETH_LINK_SPEED_100M)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
-	if (link_speed & ETH_LINK_SPEED_100M_HD)
+	if (link_speed & RTE_ETH_LINK_SPEED_100M_HD)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
-	if (link_speed & ETH_LINK_SPEED_1G)
+	if (link_speed & RTE_ETH_LINK_SPEED_1G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_1GB;
-	if (link_speed & ETH_LINK_SPEED_2_5G)
+	if (link_speed & RTE_ETH_LINK_SPEED_2_5G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_2_5GB;
-	if (link_speed & ETH_LINK_SPEED_10G)
+	if (link_speed & RTE_ETH_LINK_SPEED_10G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_10GB;
-	if (link_speed & ETH_LINK_SPEED_20G)
+	if (link_speed & RTE_ETH_LINK_SPEED_20G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_20GB;
-	if (link_speed & ETH_LINK_SPEED_25G)
+	if (link_speed & RTE_ETH_LINK_SPEED_25G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_25GB;
-	if (link_speed & ETH_LINK_SPEED_40G)
+	if (link_speed & RTE_ETH_LINK_SPEED_40G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_40GB;
-	if (link_speed & ETH_LINK_SPEED_50G)
+	if (link_speed & RTE_ETH_LINK_SPEED_50G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_50GB;
-	if (link_speed & ETH_LINK_SPEED_100G)
+	if (link_speed & RTE_ETH_LINK_SPEED_100G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100GB;
-	if (link_speed & ETH_LINK_SPEED_200G)
+	if (link_speed & RTE_ETH_LINK_SPEED_200G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
 	return ret;
 }
 
 static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
 {
-	uint32_t eth_link_speed = ETH_SPEED_NUM_NONE;
+	uint32_t eth_link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	switch (hw_link_speed) {
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB:
-		eth_link_speed = ETH_SPEED_NUM_100M;
+		eth_link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_1GB:
-		eth_link_speed = ETH_SPEED_NUM_1G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2_5GB:
-		eth_link_speed = ETH_SPEED_NUM_2_5G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_10GB:
-		eth_link_speed = ETH_SPEED_NUM_10G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_20GB:
-		eth_link_speed = ETH_SPEED_NUM_20G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_25GB:
-		eth_link_speed = ETH_SPEED_NUM_25G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_40GB:
-		eth_link_speed = ETH_SPEED_NUM_40G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_50GB:
-		eth_link_speed = ETH_SPEED_NUM_50G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100GB:
-		eth_link_speed = ETH_SPEED_NUM_100G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_200GB:
-		eth_link_speed = ETH_SPEED_NUM_200G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_200G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2GB:
 	default:
@@ -3184,16 +3184,16 @@ static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
 
 static uint16_t bnxt_parse_hw_link_duplex(uint16_t hw_link_duplex)
 {
-	uint16_t eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+	uint16_t eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (hw_link_duplex) {
 	case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH:
 	case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_FULL:
 		/* FALLTHROUGH */
-		eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+		eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF:
-		eth_link_duplex = ETH_LINK_HALF_DUPLEX;
+		eth_link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "HWRM link duplex %d not defined\n",
@@ -3222,12 +3222,12 @@ int bnxt_get_hwrm_link_config(struct bnxt *bp, struct rte_eth_link *link)
 		link->link_speed =
 			bnxt_parse_hw_link_speed(link_info->link_speed);
 	else
-		link->link_speed = ETH_SPEED_NUM_NONE;
+		link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 	link->link_duplex = bnxt_parse_hw_link_duplex(link_info->duplex);
 	link->link_status = link_info->link_up;
 	link->link_autoneg = link_info->auto_mode ==
 		HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE ?
-		ETH_LINK_FIXED : ETH_LINK_AUTONEG;
+		RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
 exit:
 	return rc;
 }
@@ -3253,7 +3253,7 @@ int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up)
 
 	autoneg = bnxt_check_eth_link_autoneg(dev_conf->link_speeds);
 	if (BNXT_CHIP_P5(bp) &&
-	    dev_conf->link_speeds == ETH_LINK_SPEED_40G) {
+	    dev_conf->link_speeds == RTE_ETH_LINK_SPEED_40G) {
 		/* 40G is not supported as part of media auto detect.
 		 * The speed should be forced and autoneg disabled
 		 * to configure 40G speed.
@@ -3344,7 +3344,7 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 
 	HWRM_CHECK_RESULT();
 
-	bp->vlan = rte_le_to_cpu_16(resp->vlan) & ETH_VLAN_ID_MAX;
+	bp->vlan = rte_le_to_cpu_16(resp->vlan) & RTE_ETH_VLAN_ID_MAX;
 
 	svif_info = rte_le_to_cpu_16(resp->svif_info);
 	if (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID)
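The speed-parsing hunks lean on the link_speeds convention that the rename leaves intact: 0 (RTE_ETH_LINK_SPEED_AUTONEG) means negotiate everything, while RTE_ETH_LINK_SPEED_FIXED must be paired with exactly one speed bit. A sketch of the validation, mirroring bnxt_validate_link_speed() above:

#include <errno.h>
#include <rte_ethdev.h>

static int
example_check_link_speeds(uint32_t link_speeds)
{
	uint32_t one;

	if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG)
		return 0;
	if (link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
		one = link_speeds & ~RTE_ETH_LINK_SPEED_FIXED;
		/* Power-of-two test: exactly one speed may remain. */
		if (one == 0 || (one & (one - 1)) != 0)
			return -EINVAL;
	}
	return 0;
}
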
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index b7e88e013a84..1c07db3ca9c5 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -537,7 +537,7 @@ int bnxt_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
 
 	dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
 	if (parent_bp->flags & BNXT_FLAG_PTP_SUPPORTED)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT;
 	dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
 
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 08cefa1baaef..7940d489a102 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -187,7 +187,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 			rx_ring_info->rx_ring_struct->ring_size *
 			AGG_RING_SIZE_FACTOR)) : 0;
 
-		if (rx_ring_info && (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+		if (rx_ring_info && (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 			int tpa_max = BNXT_TPA_MAX_AGGS(bp);
 
 			tpa_info_len = tpa_max * sizeof(struct bnxt_tpa_info);
@@ -283,7 +283,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 					    ag_bitmap_start, ag_bitmap_len);
 
 			/* TPA info */
-			if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+			if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 				rx_ring_info->tpa_info =
 					((struct bnxt_tpa_info *)
 					 ((char *)mz->addr + tpa_info_start));
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 38ec4aa14b77..1456f8b54ffa 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -52,13 +52,13 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 	bp->nr_vnics = 0;
 
 	/* Multi-queue mode */
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB_RSS) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
 		/* VMDq ONLY, VMDq+RSS, VMDq+DCB, VMDq+DCB+RSS */
 
 		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_RSS:
-		case ETH_MQ_RX_VMDQ_ONLY:
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
 			/* FALLTHROUGH */
 			/* ETH_8/64_POOLs */
 			pools = conf->nb_queue_pools;
@@ -66,14 +66,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 			max_pools = RTE_MIN(bp->max_vnics,
 					    RTE_MIN(bp->max_l2_ctx,
 					    RTE_MIN(bp->max_rsscos_ctx,
-						    ETH_64_POOLS)));
+						    RTE_ETH_64_POOLS)));
 			PMD_DRV_LOG(DEBUG,
 				    "pools = %u max_pools = %u\n",
 				    pools, max_pools);
 			if (pools > max_pools)
 				pools = max_pools;
 			break;
-		case ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_RSS:
 			pools = bp->rx_cosq_cnt ? bp->rx_cosq_cnt : 1;
 			break;
 		default:
@@ -111,7 +111,7 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 				    ring_idx, rxq, i, vnic);
 		}
 		if (i == 0) {
-			if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB) {
+			if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB) {
 				bp->eth_dev->data->promiscuous = 1;
 				vnic->flags |= BNXT_VNIC_INFO_PROMISC;
 			}
@@ -121,8 +121,8 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 		vnic->end_grp_id = end_grp_id;
 
 		if (i) {
-			if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB ||
-			    !(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS))
+			if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB ||
+			    !(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS))
 				vnic->rss_dflt_cr = true;
 			goto skip_filter_allocation;
 		}
@@ -147,14 +147,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 
 	bp->rx_num_qs_per_vnic = nb_q_per_grp;
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		struct rte_eth_rss_conf *rss = &dev_conf->rx_adv_conf.rss_conf;
 
 		if (bp->flags & BNXT_FLAG_UPDATE_HASH)
 			bp->flags &= ~BNXT_FLAG_UPDATE_HASH;
 
 		for (i = 0; i < bp->nr_vnics; i++) {
-			uint32_t lvl = ETH_RSS_LEVEL(rss->rss_hf);
+			uint32_t lvl = RTE_ETH_RSS_LEVEL(rss->rss_hf);
 
 			vnic = &bp->vnic_info[i];
 			vnic->hash_type =
@@ -363,7 +363,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 	PMD_DRV_LOG(DEBUG, "RX Buf size is %d\n", rxq->rx_buf_size);
 	rxq->queue_id = queue_idx;
 	rxq->port_id = eth_dev->data->port_id;
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -478,7 +478,7 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 	PMD_DRV_LOG(INFO, "Rx queue started %d\n", rx_queue_id);
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		vnic = rxq->vnic;
 
 		if (BNXT_HAS_RING_GRPS(bp)) {
@@ -549,7 +549,7 @@ int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	rxq->rx_started = false;
 	PMD_DRV_LOG(DEBUG, "Rx queue stopped\n");
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (BNXT_HAS_RING_GRPS(bp))
 			vnic->fw_grp_ids[rx_queue_id] = INVALID_HW_RING_ID;
 
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index aeacc60a0127..eb555c4545e6 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -566,8 +566,8 @@ bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
 	dev_conf = &rxq->bp->eth_dev->data->dev_conf;
 	offloads = dev_conf->rxmode.offloads;
 
-	outer_cksum_enabled = !!(offloads & (DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-					     DEV_RX_OFFLOAD_OUTER_UDP_CKSUM));
+	outer_cksum_enabled = !!(offloads & (RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+					     RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM));
 
 	/* Initialize ol_flags table. */
 	pt = rxr->ol_flags_table;
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
index d08854ff61e2..e4905b4fd169 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
@@ -416,7 +416,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_common.h b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
index 9b9489a695a2..0627fd212d0a 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_common.h
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
@@ -96,7 +96,7 @@ bnxt_rxq_rearm(struct bnxt_rx_queue *rxq, struct bnxt_rx_ring_info *rxr)
 }
 
 /*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
  * is enabled.
  */
 static inline void
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
index 13211060cf0e..f15e2d3b4ed4 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
@@ -352,7 +352,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
index 6e563053260a..ffd560166cac 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
@@ -333,7 +333,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 9e45ddd7a82e..f2fcaf53021c 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -353,7 +353,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 }
 
 /*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
  * is enabled.
  */
 static void bnxt_tx_cmp_fast(struct bnxt_tx_queue *txq, int nr_pkts)
@@ -479,7 +479,7 @@ static int bnxt_handle_tx_cp(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index 26253a7e17f2..c63cf4b943fa 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -239,17 +239,17 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
 {
 	uint16_t hwrm_type = 0;
 
-	if (rte_type & ETH_RSS_IPV4)
+	if (rte_type & RTE_ETH_RSS_IPV4)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
-	if (rte_type & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
-	if (rte_type & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
-	if (rte_type & ETH_RSS_IPV6)
+	if (rte_type & RTE_ETH_RSS_IPV6)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
-	if (rte_type & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
-	if (rte_type & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
 
 	return hwrm_type;
@@ -258,11 +258,11 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
 int bnxt_rte_to_hwrm_hash_level(struct bnxt *bp, uint64_t hash_f, uint32_t lvl)
 {
 	uint32_t mode = HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_DEFAULT;
-	bool l3 = (hash_f & (ETH_RSS_IPV4 | ETH_RSS_IPV6));
-	bool l4 = (hash_f & (ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV6_UDP |
-			     ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV6_TCP));
+	bool l3 = (hash_f & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6));
+	bool l4 = (hash_f & (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_TCP));
 	bool l3_only = l3 && !l4;
 	bool l3_and_l4 = l3 && l4;
 
@@ -307,16 +307,16 @@ uint64_t bnxt_hwrm_to_rte_rss_level(struct bnxt *bp, uint32_t mode)
 	 * return default hash mode.
 	 */
 	if (!(bp->vnic_cap_flags & BNXT_VNIC_CAP_OUTER_RSS))
-		return ETH_RSS_LEVEL_PMD_DEFAULT;
+		return RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
 
 	if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_2 ||
 	    mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_4)
-		rss_level |= ETH_RSS_LEVEL_OUTERMOST;
+		rss_level |= RTE_ETH_RSS_LEVEL_OUTERMOST;
 	else if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_2 ||
 		 mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_4)
-		rss_level |= ETH_RSS_LEVEL_INNERMOST;
+		rss_level |= RTE_ETH_RSS_LEVEL_INNERMOST;
 	else
-		rss_level |= ETH_RSS_LEVEL_PMD_DEFAULT;
+		rss_level |= RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
 
 	return rss_level;
 }
diff --git a/drivers/net/bnxt/rte_pmd_bnxt.c b/drivers/net/bnxt/rte_pmd_bnxt.c
index f71543810970..77ecbef04c3d 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt.c
+++ b/drivers/net/bnxt/rte_pmd_bnxt.c
@@ -421,18 +421,18 @@ int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf,
 	if (vf >= bp->pdev->max_vfs)
 		return -EINVAL;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG) {
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG) {
 		PMD_DRV_LOG(ERR, "Currently cannot toggle this setting\n");
 		return -ENOTSUP;
 	}
 
 	/* Is this really the correct mapping?  VFd seems to think it is. */
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 		flag |= BNXT_VNIC_INFO_PROMISC;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 		flag |= BNXT_VNIC_INFO_BCAST;
-	if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 		flag |= BNXT_VNIC_INFO_ALLMULTI | BNXT_VNIC_INFO_MCAST;
 
 	if (on)
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index fc179a2732ac..8b104b639184 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -167,8 +167,8 @@ struct bond_dev_private {
 	struct rte_eth_desc_lim tx_desc_lim;	/**< Tx descriptor limits */
 
 	uint16_t reta_size;
-	struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_512 /
-			RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
+			RTE_ETH_RETA_GROUP_SIZE];
 
 	uint8_t rss_key[52];				/**< 52-byte hash key buffer. */
 	uint8_t rss_key_len;				/**< hash key length in bytes. */
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 2029955c1092..ca50583d62d8 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -770,25 +770,25 @@ link_speed_key(uint16_t speed) {
 	uint16_t key_speed;
 
 	switch (speed) {
-	case ETH_SPEED_NUM_NONE:
+	case RTE_ETH_SPEED_NUM_NONE:
 		key_speed = 0x00;
 		break;
-	case ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_10M:
 		key_speed = BOND_LINK_SPEED_KEY_10M;
 		break;
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		key_speed = BOND_LINK_SPEED_KEY_100M;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		key_speed = BOND_LINK_SPEED_KEY_1000M;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		key_speed = BOND_LINK_SPEED_KEY_10G;
 		break;
-	case ETH_SPEED_NUM_20G:
+	case RTE_ETH_SPEED_NUM_20G:
 		key_speed = BOND_LINK_SPEED_KEY_20G;
 		break;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		key_speed = BOND_LINK_SPEED_KEY_40G;
 		break;
 	default:
@@ -887,7 +887,7 @@ bond_mode_8023ad_periodic_cb(void *arg)
 
 		if (ret >= 0 && link_info.link_status != 0) {
 			key = link_speed_key(link_info.link_speed) << 1;
-			if (link_info.link_duplex == ETH_LINK_FULL_DUPLEX)
+			if (link_info.link_duplex == RTE_ETH_LINK_FULL_DUPLEX)
 				key |= BOND_LINK_FULL_DUPLEX_KEY;
 		} else {
 			key = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 5140ef14c2ee..84943cffe2bb 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -204,7 +204,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
 
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	if ((bonded_eth_dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_VLAN_FILTER) == 0)
+			RTE_ETH_RX_OFFLOAD_VLAN_FILTER) == 0)
 		return 0;
 
 	internals = bonded_eth_dev->data->dev_private;
@@ -592,7 +592,7 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
 			return -1;
 		}
 
-		 if (link_props.link_status == ETH_LINK_UP) {
+		if (link_props.link_status == RTE_ETH_LINK_UP) {
 			if (internals->active_slave_count == 0 &&
 			    !internals->user_defined_primary_port)
 				bond_ethdev_primary_set(internals,
@@ -727,7 +727,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
 		internals->tx_offload_capa = 0;
 		internals->rx_queue_offload_capa = 0;
 		internals->tx_queue_offload_capa = 0;
-		internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+		internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
 		internals->reta_size = 0;
 		internals->candidate_max_rx_pktlen = 0;
 		internals->max_rx_pktlen = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 8d038ba6b6c4..834a5937b3aa 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1369,8 +1369,8 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
 		 * In any other mode the link properties are set to default
 		 * values of AUTONEG/DUPLEX
 		 */
-		ethdev->data->dev_link.link_autoneg = ETH_LINK_AUTONEG;
-		ethdev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		ethdev->data->dev_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
+		ethdev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	}
 }
 
@@ -1700,7 +1700,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 		slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
 
 	/* If RSS is enabled for bonding, try to enable it for slaves  */
-	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		/* rss_key won't be empty if RSS is configured in bonded dev */
 		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
 					internals->rss_key_len;
@@ -1714,12 +1714,12 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 	}
 
 	if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_VLAN_FILTER)
+			RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 		slave_eth_dev->data->dev_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_VLAN_FILTER;
+				RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	else
 		slave_eth_dev->data->dev_conf.rxmode.offloads &=
-				~DEV_RX_OFFLOAD_VLAN_FILTER;
+				~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	slave_eth_dev->data->dev_conf.rxmode.mtu =
 			bonded_eth_dev->data->dev_conf.rxmode.mtu;
@@ -1823,7 +1823,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 	}
 
 	/* If RSS is enabled for bonding, synchronize RETA */
-	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
 		int i;
 		struct bond_dev_private *internals;
 
@@ -1946,7 +1946,7 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 		return -1;
 	}
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 1;
 
 	internals = eth_dev->data->dev_private;
@@ -2086,7 +2086,7 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 			tlb_last_obytets[internals->active_slaves[i]] = 0;
 	}
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 0;
 
 	internals->link_status_polling_enabled = 0;
@@ -2416,15 +2416,15 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 
 	bond_ctx = ethdev->data->dev_private;
 
-	ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+	ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	if (ethdev->data->dev_started == 0 ||
 			bond_ctx->active_slave_count == 0) {
-		ethdev->data->dev_link.link_status = ETH_LINK_DOWN;
+		ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 		return 0;
 	}
 
-	ethdev->data->dev_link.link_status = ETH_LINK_UP;
+	ethdev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	if (wait_to_complete)
 		link_update = rte_eth_link_get;
@@ -2449,7 +2449,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 					  &slave_link);
 			if (ret < 0) {
 				ethdev->data->dev_link.link_speed =
-					ETH_SPEED_NUM_NONE;
+					RTE_ETH_SPEED_NUM_NONE;
 				RTE_BOND_LOG(ERR,
 					"Slave (port %u) link get failed: %s",
 					bond_ctx->active_slaves[idx],
@@ -2491,7 +2491,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 		 * In these modes the maximum theoretical link speed is the sum
 		 * of all the slaves
 		 */
-		ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		one_link_update_succeeded = false;
 
 		for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
@@ -2865,7 +2865,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 			goto link_update;
 
 		/* check link state properties if bonded link is up*/
-		if (bonded_eth_dev->data->dev_link.link_status == ETH_LINK_UP) {
+		if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
 			if (link_properties_valid(bonded_eth_dev, &link) != 0)
 				RTE_BOND_LOG(ERR, "Invalid link properties "
 					     "for slave %d in bonding mode %d",
@@ -2881,7 +2881,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 		if (internals->active_slave_count < 1) {
 			/* If first active slave, then change link status */
 			bonded_eth_dev->data->dev_link.link_status =
-								ETH_LINK_UP;
+								RTE_ETH_LINK_UP;
 			internals->current_primary_port = port_id;
 			lsc_flag = 1;
 
@@ -2973,12 +2973,12 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	 /* Copy RETA table */
-	reta_count = (reta_size + RTE_RETA_GROUP_SIZE - 1) /
-			RTE_RETA_GROUP_SIZE;
+	reta_count = (reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) /
+			RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < reta_count; i++) {
 		internals->reta_conf[i].mask = reta_conf[i].mask;
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				internals->reta_conf[i].reta[j] = reta_conf[i].reta[j];
 	}
@@ -3011,8 +3011,8 @@ bond_ethdev_rss_reta_query(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	 /* Copy RETA table */
-	for (i = 0; i < reta_size / RTE_RETA_GROUP_SIZE; i++)
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < reta_size / RTE_ETH_RETA_GROUP_SIZE; i++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = internals->reta_conf[i].reta[j];
 
@@ -3274,7 +3274,7 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
 	internals->max_rx_pktlen = 0;
 
 	/* Initially allow to choose any offload type */
-	internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+	internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
 
 	memset(&internals->default_rxconf, 0,
 	       sizeof(internals->default_rxconf));
@@ -3501,7 +3501,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 	 * set key to the value specified in port RSS configuration.
 	 * Fall back to default RSS key if the key is not specified
 	 */
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
 		struct rte_eth_rss_conf *rss_conf =
 			&dev->data->dev_conf.rx_adv_conf.rss_conf;
 		if (rss_conf->rss_key != NULL) {
@@ -3526,9 +3526,9 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 
 		for (i = 0; i < RTE_DIM(internals->reta_conf); i++) {
 			internals->reta_conf[i].mask = ~0LL;
-			for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+			for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 				internals->reta_conf[i].reta[j] =
-						(i * RTE_RETA_GROUP_SIZE + j) %
+						(i * RTE_ETH_RETA_GROUP_SIZE + j) %
 						dev->data->nb_rx_queues;
 		}
 	}
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 25da5f6691d0..f7eb0f437b77 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -15,28 +15,28 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rxmode *rxmode = &conf->rxmode;
 	uint16_t flags = 0;
 
-	if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
-	    (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+	    (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		flags |= NIX_RX_OFFLOAD_RSS_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		flags |= NIX_RX_MULTI_SEG_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 
 	if (!dev->ptype_disable)
 		flags |= NIX_RX_OFFLOAD_PTYPE_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		flags |= NIX_RX_OFFLOAD_SECURITY_F;
 
 	return flags;
@@ -72,39 +72,39 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
 			 offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
 
-	if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
-	    conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+	    conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
 
-	if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= NIX_TX_MULTI_SEG_F;
 
 	/* Enable Inner checksum for TSO */
-	if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+	if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
 	/* Enable Inner and Outer checksum for Tunnel TSO */
-	if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		    DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
-	if (conf & DEV_TX_OFFLOAD_SECURITY)
+	if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
 		flags |= NIX_TX_OFFLOAD_SECURITY_F;
 
 	return flags;
diff --git a/drivers/net/cnxk/cn10k_rte_flow.c b/drivers/net/cnxk/cn10k_rte_flow.c
index 8c87452934eb..dff4c7746cf5 100644
--- a/drivers/net/cnxk/cn10k_rte_flow.c
+++ b/drivers/net/cnxk/cn10k_rte_flow.c
@@ -98,7 +98,7 @@ cn10k_rss_action_validate(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		plt_err("multi-queue mode is disabled");
 		return -ENOTSUP;
 	}
diff --git a/drivers/net/cnxk/cn10k_rx.c b/drivers/net/cnxk/cn10k_rx.c
index d6af54b56de6..5d603514c045 100644
--- a/drivers/net/cnxk/cn10k_rx.c
+++ b/drivers/net/cnxk/cn10k_rx.c
@@ -77,12 +77,12 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 			nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
 
 	if (dev->scalar_ena) {
-		if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 			return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
 		return pick_rx_func(eth_dev, nix_eth_rx_burst);
 	}
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
 	return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
 }
diff --git a/drivers/net/cnxk/cn10k_tx.c b/drivers/net/cnxk/cn10k_tx.c
index eb962ef08cab..5e6c5ee11188 100644
--- a/drivers/net/cnxk/cn10k_tx.c
+++ b/drivers/net/cnxk/cn10k_tx.c
@@ -78,11 +78,11 @@ cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 
 	if (dev->scalar_ena) {
 		pick_tx_func(eth_dev, nix_eth_tx_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
 	} else {
 		pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
 	}
 
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 08c86f9e6b7b..17f8f6debbc8 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -15,28 +15,28 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rxmode *rxmode = &conf->rxmode;
 	uint16_t flags = 0;
 
-	if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
-	    (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+	    (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		flags |= NIX_RX_OFFLOAD_RSS_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		flags |= NIX_RX_MULTI_SEG_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 
 	if (!dev->ptype_disable)
 		flags |= NIX_RX_OFFLOAD_PTYPE_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		flags |= NIX_RX_OFFLOAD_SECURITY_F;
 
 	return flags;
@@ -72,39 +72,39 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
 			 offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
 
-	if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
-	    conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+	    conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
 
-	if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= NIX_TX_MULTI_SEG_F;
 
 	/* Enable Inner checksum for TSO */
-	if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+	if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
 	/* Enable Inner and Outer checksum for Tunnel TSO */
-	if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		    DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
 		flags |= NIX_TX_OFFLOAD_SECURITY_F;
 
 	return flags;
@@ -298,9 +298,9 @@ cn9k_nix_configure(struct rte_eth_dev *eth_dev)
 
 	/* Platform specific checks */
 	if ((roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) &&
-	    (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
-	    ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
-	     (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+	    ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+	     (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
 		plt_err("Outer IP and SCTP checksum unsupported");
 		return -EINVAL;
 	}
@@ -553,17 +553,17 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 	 * TSO not supported for earlier chip revisions
 	 */
 	if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0())
-		dev->tx_offload_capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
-					  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-					  DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-					  DEV_TX_OFFLOAD_GRE_TNL_TSO);
+		dev->tx_offload_capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+					  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+					  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+					  RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
 
 	/* 50G and 100G to be supported for board version C0
 	 * and above of CN9K.
 	 */
 	if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) {
-		dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_50G;
-		dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_100G;
+		dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_50G;
+		dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_100G;
 	}
 
 	dev->hwcap = 0;
diff --git a/drivers/net/cnxk/cn9k_rx.c b/drivers/net/cnxk/cn9k_rx.c
index 5c4387e74e0b..8d504c4a6d92 100644
--- a/drivers/net/cnxk/cn9k_rx.c
+++ b/drivers/net/cnxk/cn9k_rx.c
@@ -77,12 +77,12 @@ cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 			nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
 
 	if (dev->scalar_ena) {
-		if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 			return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
 		return pick_rx_func(eth_dev, nix_eth_rx_burst);
 	}
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
 	return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
 }
diff --git a/drivers/net/cnxk/cn9k_tx.c b/drivers/net/cnxk/cn9k_tx.c
index e5691a2a7e16..f3f19fed9780 100644
--- a/drivers/net/cnxk/cn9k_tx.c
+++ b/drivers/net/cnxk/cn9k_tx.c
@@ -77,11 +77,11 @@ cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 
 	if (dev->scalar_ena) {
 		pick_tx_func(eth_dev, nix_eth_tx_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
 	} else {
 		pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
 	}
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 2e05d8bf1552..db54468dbca1 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -10,7 +10,7 @@ nix_get_rx_offload_capa(struct cnxk_eth_dev *dev)
 
 	if (roc_nix_is_vf_or_sdp(&dev->nix) ||
 	    dev->npc.switch_header_type == ROC_PRIV_FLAGS_HIGIG)
-		capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+		capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	return capa;
 }
@@ -28,11 +28,11 @@ nix_get_speed_capa(struct cnxk_eth_dev *dev)
 	uint32_t speed_capa;
 
 	/* Auto negotiation disabled */
-	speed_capa = ETH_LINK_SPEED_FIXED;
+	speed_capa = RTE_ETH_LINK_SPEED_FIXED;
 	if (!roc_nix_is_vf_or_sdp(&dev->nix) && !roc_nix_is_lbk(&dev->nix)) {
-		speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			      ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
-			      ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			      RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+			      RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
 	}
 
 	return speed_capa;
@@ -65,7 +65,7 @@ nix_security_setup(struct cnxk_eth_dev *dev)
 	struct roc_nix *nix = &dev->nix;
 	int i, rc = 0;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		/* Setup Inline Inbound */
 		rc = roc_nix_inl_inb_init(nix);
 		if (rc) {
@@ -80,8 +80,8 @@ nix_security_setup(struct cnxk_eth_dev *dev)
 		cnxk_nix_inb_mode_set(dev, true);
 	}
 
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
-	    dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY ||
+	    dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		struct plt_bitmap *bmap;
 		size_t bmap_sz;
 		void *mem;
@@ -100,8 +100,8 @@ nix_security_setup(struct cnxk_eth_dev *dev)
 
 		dev->outb.lf_base = roc_nix_inl_outb_lf_base_get(nix);
 
-		/* Skip the rest if DEV_TX_OFFLOAD_SECURITY is not enabled */
-		if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY))
+		/* Skip the rest if RTE_ETH_TX_OFFLOAD_SECURITY is not enabled */
+		if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY))
 			goto done;
 
 		rc = -ENOMEM;
@@ -136,7 +136,7 @@ nix_security_setup(struct cnxk_eth_dev *dev)
 done:
 	return 0;
 cleanup:
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		rc |= roc_nix_inl_inb_fini(nix);
 	return rc;
 }
@@ -182,7 +182,7 @@ nix_security_release(struct cnxk_eth_dev *dev)
 	int rc, ret = 0;
 
 	/* Cleanup Inline inbound */
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		/* Destroy inbound sessions */
 		tvar = NULL;
 		RTE_TAILQ_FOREACH_SAFE(eth_sec, &dev->inb.list, entry, tvar)
@@ -199,8 +199,8 @@ nix_security_release(struct cnxk_eth_dev *dev)
 	}
 
 	/* Cleanup Inline outbound */
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
-	    dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY ||
+	    dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		/* Destroy outbound sessions */
 		tvar = NULL;
 		RTE_TAILQ_FOREACH_SAFE(eth_sec, &dev->outb.list, entry, tvar)
@@ -242,8 +242,8 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
 	buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
 
 	if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD > buffsz) {
-		dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
-		dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+		dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	}
 }
 
@@ -273,7 +273,7 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
 	struct rte_eth_fc_conf fc_conf = {0};
 	int rc;
 
-	/* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+	/* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
 	 * by AF driver, update those info in PMD structure.
 	 */
 	rc = cnxk_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -281,10 +281,10 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
 		goto exit;
 
 	fc->mode = fc_conf.mode;
-	fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_RX_PAUSE);
-	fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_TX_PAUSE);
+	fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+	fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
 
 exit:
 	return rc;
@@ -305,11 +305,11 @@ nix_update_flow_ctrl_config(struct rte_eth_dev *eth_dev)
 	/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
 	if (roc_model_is_cn96_ax() &&
 	    dev->npc.switch_header_type != ROC_PRIV_FLAGS_HIGIG &&
-	    (fc_cfg.mode == RTE_FC_FULL || fc_cfg.mode == RTE_FC_RX_PAUSE)) {
+	    (fc_cfg.mode == RTE_ETH_FC_FULL || fc_cfg.mode == RTE_ETH_FC_RX_PAUSE)) {
 		fc_cfg.mode =
-				(fc_cfg.mode == RTE_FC_FULL ||
-				fc_cfg.mode == RTE_FC_TX_PAUSE) ?
-				RTE_FC_TX_PAUSE : RTE_FC_NONE;
+				(fc_cfg.mode == RTE_ETH_FC_FULL ||
+				fc_cfg.mode == RTE_ETH_FC_TX_PAUSE) ?
+				RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
 	}
 
 	return cnxk_nix_flow_ctrl_set(eth_dev, &fc_cfg);
@@ -352,7 +352,7 @@ nix_sq_max_sqe_sz(struct cnxk_eth_dev *dev)
 	 * Maximum three segments can be supported with W8; choose
 	 * NIX_MAXSQESZ_W16 for multi segment offload.
 	 */
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		return NIX_MAXSQESZ_W16;
 	else
 		return NIX_MAXSQESZ_W8;
@@ -380,7 +380,7 @@ cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	/* When Tx Security offload is enabled, increase tx desc count by
 	 * max possible outbound desc count.
 	 */
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
 		nb_desc += dev->outb.nb_desc;
 
 	/* Setup ROC SQ */
@@ -499,7 +499,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	 * to avoid meta packet drop as LBK does not currently support
 	 * backpressure.
 	 */
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY && roc_nix_is_lbk(nix)) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY && roc_nix_is_lbk(nix)) {
 		uint64_t pkt_pool_limit = roc_nix_inl_dev_rq_limit_get();
 
 		/* Use current RQ's aura limit if inl rq is not available */
@@ -561,7 +561,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rxq_sp->qconf.nb_desc = nb_desc;
 	rxq_sp->qconf.mp = mp;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		/* Setup rq reference for inline dev if present */
 		rc = roc_nix_inl_dev_rq_get(rq);
 		if (rc)
@@ -579,7 +579,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	 * These are needed in deriving raw clock value from tsc counter.
 	 * read_clock eth op returns raw clock value.
 	 */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
 		rc = cnxk_nix_tsc_convert(dev);
 		if (rc) {
 			plt_err("Failed to calculate delta and freq mult");
@@ -618,7 +618,7 @@ cnxk_nix_rx_queue_release(struct rte_eth_dev *eth_dev, uint16_t qid)
 	plt_nix_dbg("Releasing rxq %u", qid);
 
 	/* Release rq reference for inline dev if present */
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		roc_nix_inl_dev_rq_put(rq);
 
 	/* Cleanup ROC RQ */
@@ -657,24 +657,24 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
 
 	dev->ethdev_rss_hf = ethdev_rss;
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
 	    dev->npc.switch_header_type == ROC_PRIV_FLAGS_LEN_90B) {
 		flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
 	}
 
-	if (ethdev_rss & ETH_RSS_C_VLAN)
+	if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
 
-	if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
 
-	if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
 
-	if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
 
-	if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
 
 	if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -683,34 +683,34 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
 	if (ethdev_rss & RSS_IPV6_ENABLE)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
 
-	if (ethdev_rss & ETH_RSS_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_TCP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_UDP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_SCTP)
+	if (ethdev_rss & RTE_ETH_RSS_SCTP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
 
 	if (ethdev_rss & RSS_IPV6_EX_ENABLE)
 		flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
 
-	if (ethdev_rss & ETH_RSS_PORT)
+	if (ethdev_rss & RTE_ETH_RSS_PORT)
 		flowkey_cfg |= FLOW_KEY_TYPE_PORT;
 
-	if (ethdev_rss & ETH_RSS_NVGRE)
+	if (ethdev_rss & RTE_ETH_RSS_NVGRE)
 		flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
 
-	if (ethdev_rss & ETH_RSS_VXLAN)
+	if (ethdev_rss & RTE_ETH_RSS_VXLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
 
-	if (ethdev_rss & ETH_RSS_GENEVE)
+	if (ethdev_rss & RTE_ETH_RSS_GENEVE)
 		flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
 
-	if (ethdev_rss & ETH_RSS_GTPU)
+	if (ethdev_rss & RTE_ETH_RSS_GTPU)
 		flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
 
 	return flowkey_cfg;
@@ -746,7 +746,7 @@ nix_rss_default_setup(struct cnxk_eth_dev *dev)
 	uint64_t rss_hf;
 
 	rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
-	rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 
@@ -958,8 +958,8 @@ nix_lso_fmt_setup(struct cnxk_eth_dev *dev)
 
 	/* Nothing much to do if offload is not enabled */
 	if (!(dev->tx_offloads &
-	      (DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-	       DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO)))
+	      (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+	       RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))
 		return 0;
 
 	/* Setup LSO formats in AF. Its a no-op if other ethdev has
@@ -1007,13 +1007,13 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 		goto fail_configure;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-	    rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+	    rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		plt_err("Unsupported mq rx mode %d", rxmode->mq_mode);
 		goto fail_configure;
 	}
 
-	if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		plt_err("Unsupported mq tx mode %d", txmode->mq_mode);
 		goto fail_configure;
 	}
@@ -1054,7 +1054,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 	/* Prepare rx cfg */
 	rx_cfg = ROC_NIX_LF_RX_CFG_DIS_APAD;
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+	    (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
 		rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_OL4;
 		rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_IL4;
 	}
@@ -1062,7 +1062,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 		   ROC_NIX_LF_RX_CFG_LEN_IL4 | ROC_NIX_LF_RX_CFG_LEN_IL3 |
 		   ROC_NIX_LF_RX_CFG_LEN_OL4 | ROC_NIX_LF_RX_CFG_LEN_OL3);
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		rx_cfg |= ROC_NIX_LF_RX_CFG_IP6_UDP_OPT;
 		/* Disable drop re if rx offload security is enabled and
 		 * platform does not support it.
@@ -1454,12 +1454,12 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
 	 * enabled on PF owning this VF
 	 */
 	memset(&dev->tstamp, 0, sizeof(struct cnxk_timesync_info));
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
 		cnxk_eth_dev_ops.timesync_enable(eth_dev);
 	else
 		cnxk_eth_dev_ops.timesync_disable(eth_dev);
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 		rc = rte_mbuf_dyn_rx_timestamp_register
 			(&dev->tstamp.tstamp_dynfield_offset,
 			 &dev->tstamp.rx_tstamp_dynflag);
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 72f80ae948cf..29a3540ed3f8 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -58,41 +58,44 @@
 	 CNXK_NIX_TX_NB_SEG_MAX)
 
 #define CNXK_NIX_RSS_L3_L4_SRC_DST                                             \
-	(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | ETH_RSS_L4_SRC_ONLY |     \
-	 ETH_RSS_L4_DST_ONLY)
+	(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |                   \
+	 RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
 
 #define CNXK_NIX_RSS_OFFLOAD                                                   \
-	(ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP |               \
-	 ETH_RSS_SCTP | ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD |                  \
-	 CNXK_NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | ETH_RSS_C_VLAN)
+	(RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |                 \
+	 RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_TUNNEL |             \
+	 RTE_ETH_RSS_L2_PAYLOAD | CNXK_NIX_RSS_L3_L4_SRC_DST |                 \
+	 RTE_ETH_RSS_LEVEL_MASK | RTE_ETH_RSS_C_VLAN)
 
 #define CNXK_NIX_TX_OFFLOAD_CAPA                                               \
-	(DEV_TX_OFFLOAD_MBUF_FAST_FREE | DEV_TX_OFFLOAD_MT_LOCKFREE |          \
-	 DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT |             \
-	 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |    \
-	 DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM |                 \
-	 DEV_TX_OFFLOAD_SCTP_CKSUM | DEV_TX_OFFLOAD_TCP_TSO |                  \
-	 DEV_TX_OFFLOAD_VXLAN_TNL_TSO | DEV_TX_OFFLOAD_GENEVE_TNL_TSO |        \
-	 DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_MULTI_SEGS |              \
-	 DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_SECURITY)
+	(RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |          \
+	 RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT |             \
+	 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |    \
+	 RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM |                 \
+	 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO |                  \
+	 RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |        \
+	 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS |              \
+	 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_SECURITY)
 
 #define CNXK_NIX_RX_OFFLOAD_CAPA                                               \
-	(DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM |                 \
-	 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER |            \
-	 DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | DEV_RX_OFFLOAD_RSS_HASH |            \
-	 DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_VLAN_STRIP |                \
-	 DEV_RX_OFFLOAD_SECURITY)
+	(RTE_ETH_RX_OFFLOAD_CHECKSUM | RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |         \
+	 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_SCATTER |    \
+	 RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_RSS_HASH |    \
+	 RTE_ETH_RX_OFFLOAD_TIMESTAMP | RTE_ETH_RX_OFFLOAD_VLAN_STRIP |        \
+	 RTE_ETH_RX_OFFLOAD_SECURITY)
 
 #define RSS_IPV4_ENABLE                                                        \
-	(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP |         \
-	 ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_SCTP)
+	(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |                            \
+	 RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV4_TCP |         \
+	 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
 #define RSS_IPV6_ENABLE                                                        \
-	(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP |         \
-	 ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_SCTP)
+	(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |                            \
+	 RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |         \
+	 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 #define RSS_IPV6_EX_ENABLE                                                     \
-	(ETH_RSS_IPV6_EX | ETH_RSS_IPV6_TCP_EX | ETH_RSS_IPV6_UDP_EX)
+	(RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define RSS_MAX_LEVELS 3
 
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index c0b949e21ab0..e068f553495c 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -104,11 +104,11 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
 
 	val = atoi(value);
 
-	if (val <= ETH_RSS_RETA_SIZE_64)
+	if (val <= RTE_ETH_RSS_RETA_SIZE_64)
 		val = ROC_NIX_RSS_RETA_SZ_64;
-	else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
+	else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
 		val = ROC_NIX_RSS_RETA_SZ_128;
-	else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
+	else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
 		val = ROC_NIX_RSS_RETA_SZ_256;
 	else
 		val = ROC_NIX_RSS_RETA_SZ_64;
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index d0924df76152..67464302653d 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -81,24 +81,24 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
 		uint64_t flags;
 		const char *output;
 	} rx_offload_map[] = {
-		{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
-		{DEV_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
-		{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
-		{DEV_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
-		{DEV_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
-		{DEV_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
-		{DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
-		{DEV_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
-		{DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
-		{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
-		{DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
-		{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
-		{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
-		{DEV_RX_OFFLOAD_SECURITY, " Security,"},
-		{DEV_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
-		{DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
-		{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
-		{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+		{RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
+		{RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
+		{RTE_ETH_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
+		{RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
+		{RTE_ETH_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
+		{RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
+		{RTE_ETH_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
+		{RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+		{RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+		{RTE_ETH_RX_OFFLOAD_SECURITY, " Security,"},
+		{RTE_ETH_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
+		{RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
+		{RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
 	};
 	static const char *const burst_mode[] = {"Vector Neon, Rx Offloads:",
 						 "Scalar, Rx Offloads:"
@@ -142,28 +142,28 @@ cnxk_nix_tx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
 		uint64_t flags;
 		const char *output;
 	} tx_offload_map[] = {
-		{DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
-		{DEV_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
-		{DEV_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
-		{DEV_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
-		{DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
-		{DEV_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
-		{DEV_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
-		{DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
-		{DEV_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
-		{DEV_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
-		{DEV_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
-		{DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
-		{DEV_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
-		{DEV_TX_OFFLOAD_SECURITY, " Security,"},
-		{DEV_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
-		{DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
+		{RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+		{RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
+		{RTE_ETH_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
+		{RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
+		{RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
+		{RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
+		{RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
+		{RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
+		{RTE_ETH_TX_OFFLOAD_SECURITY, " Security,"},
+		{RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
 	};
 	static const char *const burst_mode[] = {"Vector Neon, Tx Offloads:",
 						 "Scalar, Tx Offloads:"
@@ -203,8 +203,8 @@ cnxk_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 	enum rte_eth_fc_mode mode_map[] = {
-					   RTE_FC_NONE, RTE_FC_RX_PAUSE,
-					   RTE_FC_TX_PAUSE, RTE_FC_FULL
+					   RTE_ETH_FC_NONE, RTE_ETH_FC_RX_PAUSE,
+					   RTE_ETH_FC_TX_PAUSE, RTE_ETH_FC_FULL
 					  };
 	struct roc_nix *nix = &dev->nix;
 	int mode;
@@ -264,10 +264,10 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	if (fc_conf->mode == fc->mode)
 		return 0;
 
-	rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_RX_PAUSE);
-	tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_TX_PAUSE);
+	rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+	tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
 
 	/* Check if TX pause frame is already enabled or not */
 	if (fc->tx_pause ^ tx_pause) {
@@ -408,13 +408,13 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	 * when this feature has not been enabled before.
 	 */
 	if (data->dev_started && frame_size > buffsz &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		plt_err("Scatter offload is not enabled for mtu");
 		goto exit;
 	}
 
 	/* Check <seg size> * <max_seg>  >= max_frame */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)	&&
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	&&
 	    frame_size > (buffsz * CNXK_NIX_RX_NB_SEG_MAX)) {
 		plt_err("Greater than maximum supported packet length");
 		goto exit;
@@ -734,8 +734,8 @@ cnxk_nix_reta_update(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Copy RETA table */
-	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta[idx] = reta_conf[i].reta[j];
 			idx++;
@@ -770,8 +770,8 @@ cnxk_nix_reta_query(struct rte_eth_dev *eth_dev,
 		goto fail;
 
 	/* Copy RETA table */
-	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = reta[idx];
 			idx++;
@@ -804,7 +804,7 @@ cnxk_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
 	if (rss_conf->rss_key)
 		roc_nix_rss_key_set(nix, rss_conf->rss_key);
 
-	rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 	flowkey_cfg =
diff --git a/drivers/net/cnxk/cnxk_link.c b/drivers/net/cnxk/cnxk_link.c
index 6a7080167598..f10a502826c6 100644
--- a/drivers/net/cnxk/cnxk_link.c
+++ b/drivers/net/cnxk/cnxk_link.c
@@ -38,7 +38,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
 		plt_info("Port %d: Link Up - speed %u Mbps - %s",
 			 (int)(eth_dev->data->port_id),
 			 (uint32_t)link->link_speed,
-			 link->link_duplex == ETH_LINK_FULL_DUPLEX
+			 link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX
 				 ? "full-duplex"
 				 : "half-duplex");
 	else
@@ -89,7 +89,7 @@ cnxk_eth_dev_link_status_cb(struct roc_nix *nix, struct roc_nix_link_info *link)
 
 	eth_link.link_status = link->status;
 	eth_link.link_speed = link->speed;
-	eth_link.link_autoneg = ETH_LINK_AUTONEG;
+	eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	eth_link.link_duplex = link->full_duplex;
 
 	/* Print link info */
@@ -117,17 +117,17 @@ cnxk_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
 		return 0;
 
 	if (roc_nix_is_lbk(&dev->nix)) {
-		link.link_status = ETH_LINK_UP;
-		link.link_speed = ETH_SPEED_NUM_100G;
-		link.link_autoneg = ETH_LINK_FIXED;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_status = RTE_ETH_LINK_UP;
+		link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	} else {
 		rc = roc_nix_mac_link_info_get(&dev->nix, &info);
 		if (rc)
 			return rc;
 		link.link_status = info.status;
 		link.link_speed = info.speed;
-		link.link_autoneg = ETH_LINK_AUTONEG;
+		link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 		if (info.full_duplex)
 			link.link_duplex = info.full_duplex;
 	}
diff --git a/drivers/net/cnxk/cnxk_ptp.c b/drivers/net/cnxk/cnxk_ptp.c
index 449489f599c4..139fea256ccd 100644
--- a/drivers/net/cnxk/cnxk_ptp.c
+++ b/drivers/net/cnxk/cnxk_ptp.c
@@ -227,7 +227,7 @@ cnxk_nix_timesync_enable(struct rte_eth_dev *eth_dev)
 	dev->rx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
 	dev->tx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
 
-	dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+	dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	rc = roc_nix_ptp_rx_ena_dis(nix, true);
 	if (!rc) {
@@ -257,7 +257,7 @@ int
 cnxk_nix_timesync_disable(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
-	uint64_t rx_offloads = DEV_RX_OFFLOAD_TIMESTAMP;
+	uint64_t rx_offloads = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	struct roc_nix *nix = &dev->nix;
 	int rc = 0;
 
diff --git a/drivers/net/cnxk/cnxk_rte_flow.c b/drivers/net/cnxk/cnxk_rte_flow.c
index dfc33ba8654a..b08d7c34faa9 100644
--- a/drivers/net/cnxk/cnxk_rte_flow.c
+++ b/drivers/net/cnxk/cnxk_rte_flow.c
@@ -69,7 +69,7 @@ npc_rss_action_validate(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		plt_err("multi-queue mode is disabled");
 		return -ENOTSUP;
 	}
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 37625c5bfb69..dbcbfaf68a30 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -28,31 +28,31 @@
 #define CXGBE_LINK_STATUS_POLL_CNT 100 /* Max number of times to poll */
 
 #define CXGBE_DEFAULT_RSS_KEY_LEN     40 /* 320-bits */
-#define CXGBE_RSS_HF_IPV4_MASK (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
-				ETH_RSS_NONFRAG_IPV4_OTHER)
-#define CXGBE_RSS_HF_IPV6_MASK (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
-				ETH_RSS_NONFRAG_IPV6_OTHER | \
-				ETH_RSS_IPV6_EX)
-#define CXGBE_RSS_HF_TCP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_TCP | \
-				    ETH_RSS_IPV6_TCP_EX)
-#define CXGBE_RSS_HF_UDP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_UDP | \
-				    ETH_RSS_IPV6_UDP_EX)
-#define CXGBE_RSS_HF_ALL (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP)
+#define CXGBE_RSS_HF_IPV4_MASK (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+				RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
+#define CXGBE_RSS_HF_IPV6_MASK (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
+				RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+				RTE_ETH_RSS_IPV6_EX)
+#define CXGBE_RSS_HF_TCP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+				    RTE_ETH_RSS_IPV6_TCP_EX)
+#define CXGBE_RSS_HF_UDP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+				    RTE_ETH_RSS_IPV6_UDP_EX)
+#define CXGBE_RSS_HF_ALL (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP)
 
 /* Tx/Rx Offloads supported */
-#define CXGBE_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT | \
-			   DEV_TX_OFFLOAD_IPV4_CKSUM | \
-			   DEV_TX_OFFLOAD_UDP_CKSUM | \
-			   DEV_TX_OFFLOAD_TCP_CKSUM | \
-			   DEV_TX_OFFLOAD_TCP_TSO | \
-			   DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define CXGBE_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP | \
-			   DEV_RX_OFFLOAD_IPV4_CKSUM | \
-			   DEV_RX_OFFLOAD_UDP_CKSUM | \
-			   DEV_RX_OFFLOAD_TCP_CKSUM | \
-			   DEV_RX_OFFLOAD_SCATTER | \
-			   DEV_RX_OFFLOAD_RSS_HASH)
+#define CXGBE_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+			   RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+			   RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+			   RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+			   RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define CXGBE_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+			   RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+			   RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+			   RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+			   RTE_ETH_RX_OFFLOAD_SCATTER | \
+			   RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 /* Devargs filtermode and filtermask representation */
 enum cxgbe_devargs_filter_mode_flags {
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index f77b2976002c..4758321778d1 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -231,9 +231,9 @@ int cxgbe_dev_link_update(struct rte_eth_dev *eth_dev,
 	}
 
 	new_link.link_status = cxgbe_force_linkup(adapter) ?
-			       ETH_LINK_UP : pi->link_cfg.link_ok;
+			       RTE_ETH_LINK_UP : pi->link_cfg.link_ok;
 	new_link.link_autoneg = (lc->link_caps & FW_PORT_CAP32_ANEG) ? 1 : 0;
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	new_link.link_speed = t4_fwcap_to_speed(lc->link_caps);
 
 	return rte_eth_linkstatus_set(eth_dev, &new_link);
@@ -374,7 +374,7 @@ int cxgbe_dev_start(struct rte_eth_dev *eth_dev)
 			goto out;
 	}
 
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		eth_dev->data->scattered_rx = 1;
 	else
 		eth_dev->data->scattered_rx = 0;
@@ -438,9 +438,9 @@ int cxgbe_dev_configure(struct rte_eth_dev *eth_dev)
 
 	CXGBE_FUNC_TRACE();
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (!(adapter->flags & FW_QUEUE_BOUND)) {
 		err = cxgbe_setup_sge_fwevtq(adapter);
@@ -1080,13 +1080,13 @@ static int cxgbe_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 		rx_pause = 1;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	return 0;
 }
 
@@ -1099,12 +1099,12 @@ static int cxgbe_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	u8 tx_pause = 0, rx_pause = 0;
 	int ret;
 
-	if (fc_conf->mode == RTE_FC_FULL) {
+	if (fc_conf->mode == RTE_ETH_FC_FULL) {
 		tx_pause = 1;
 		rx_pause = 1;
-	} else if (fc_conf->mode == RTE_FC_TX_PAUSE) {
+	} else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE) {
 		tx_pause = 1;
-	} else if (fc_conf->mode == RTE_FC_RX_PAUSE) {
+	} else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE) {
 		rx_pause = 1;
 	}
 
@@ -1200,9 +1200,9 @@ static int cxgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 		rss_hf |= CXGBE_RSS_HF_IPV6_MASK;
 
 	if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN) {
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		if (flags & F_FW_RSS_VI_CONFIG_CMD_UDPEN)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	}
 
 	if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN)
@@ -1246,8 +1246,8 @@ static int cxgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 
 	rte_memcpy(rss, pi->rss, pi->rss_size * sizeof(u16));
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (!(reta_conf[idx].mask & (1ULL << shift)))
 			continue;
 
@@ -1277,8 +1277,8 @@ static int cxgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (!(reta_conf[idx].mask & (1ULL << shift)))
 			continue;
 
@@ -1479,7 +1479,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
 
 	if (lc->pcaps & FW_PORT_CAP32_SPEED_100G) {
 		if (capa_arr) {
-			capa_arr[num].speed = ETH_SPEED_NUM_100G;
+			capa_arr[num].speed = RTE_ETH_SPEED_NUM_100G;
 			capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(RS);
 		}
@@ -1488,7 +1488,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
 
 	if (lc->pcaps & FW_PORT_CAP32_SPEED_50G) {
 		if (capa_arr) {
-			capa_arr[num].speed = ETH_SPEED_NUM_50G;
+			capa_arr[num].speed = RTE_ETH_SPEED_NUM_50G;
 			capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(BASER);
 		}
@@ -1497,7 +1497,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
 
 	if (lc->pcaps & FW_PORT_CAP32_SPEED_25G) {
 		if (capa_arr) {
-			capa_arr[num].speed = ETH_SPEED_NUM_25G;
+			capa_arr[num].speed = RTE_ETH_SPEED_NUM_25G;
 			capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(RS);
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 91d6bb9bbcb0..f1ac32270961 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -1670,7 +1670,7 @@ int cxgbe_link_start(struct port_info *pi)
 	 * that step explicitly.
 	 */
 	ret = t4_set_rxmode(adapter, adapter->mbox, pi->viid, mtu, -1, -1, -1,
-			    !!(conf_offloads & DEV_RX_OFFLOAD_VLAN_STRIP),
+			    !!(conf_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP),
 			    true);
 	if (ret == 0) {
 		ret = cxgbe_mpstcam_modify(pi, (int)pi->xact_addr_filt,
@@ -1694,7 +1694,7 @@ int cxgbe_link_start(struct port_info *pi)
 	}
 
 	if (ret == 0 && cxgbe_force_linkup(adapter))
-		pi->eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+		pi->eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return ret;
 }
 
@@ -1725,10 +1725,10 @@ int cxgbe_write_rss_conf(const struct port_info *pi, uint64_t rss_hf)
 	if (rss_hf & CXGBE_RSS_HF_IPV4_MASK)
 		flags |= F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN |
 			 F_FW_RSS_VI_CONFIG_CMD_UDPEN;
 
@@ -1865,7 +1865,7 @@ static void fw_caps_to_speed_caps(enum fw_port_type port_type,
 {
 #define SET_SPEED(__speed_name) \
 	do { \
-		*speed_caps |= ETH_LINK_ ## __speed_name; \
+		*speed_caps |= RTE_ETH_LINK_ ## __speed_name; \
 	} while (0)
 
 #define FW_CAPS_TO_SPEED(__fw_name) \
@@ -1952,7 +1952,7 @@ void cxgbe_get_speed_caps(struct port_info *pi, u32 *speed_caps)
 			      speed_caps);
 
 	if (!(pi->link_cfg.pcaps & FW_PORT_CAP32_ANEG))
-		*speed_caps |= ETH_LINK_SPEED_FIXED;
+		*speed_caps |= RTE_ETH_LINK_SPEED_FIXED;
 }
 
 /**
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index c79cdb8d8ad7..89ea7dd47c0b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -54,29 +54,29 @@
 
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
-		DEV_RX_OFFLOAD_SCATTER;
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 /* Rx offloads which cannot be disabled */
 static uint64_t dev_rx_offloads_nodis =
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 /* Supported Tx offloads */
 static uint64_t dev_tx_offloads_sup =
-		DEV_TX_OFFLOAD_MT_LOCKFREE |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 /* Tx offloads which cannot be disabled */
 static uint64_t dev_tx_offloads_nodis =
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 /* Keep track of whether QMAN and BMAN have been globally initialized */
 static int is_global_init;
@@ -238,7 +238,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 
 	fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		DPAA_PMD_DEBUG("enabling scatter mode");
 		fman_if_set_sg(dev->process_private, 1);
 		dev->data->scattered_rx = 1;
@@ -283,43 +283,43 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 
 	/* Configure link only if link is UP*/
 	if (link->link_status) {
-		if (eth_conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
+		if (eth_conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 			/* Start autoneg only if link is not in autoneg mode */
 			if (!link->link_autoneg)
 				dpaa_restart_link_autoneg(__fif->node_name);
-		} else if (eth_conf->link_speeds & ETH_LINK_SPEED_FIXED) {
-			switch (eth_conf->link_speeds & ~ETH_LINK_SPEED_FIXED) {
-			case ETH_LINK_SPEED_10M_HD:
-				speed = ETH_SPEED_NUM_10M;
-				duplex = ETH_LINK_HALF_DUPLEX;
+		} else if (eth_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+			switch (eth_conf->link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+			case RTE_ETH_LINK_SPEED_10M_HD:
+				speed = RTE_ETH_SPEED_NUM_10M;
+				duplex = RTE_ETH_LINK_HALF_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_10M:
-				speed = ETH_SPEED_NUM_10M;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_10M:
+				speed = RTE_ETH_SPEED_NUM_10M;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_100M_HD:
-				speed = ETH_SPEED_NUM_100M;
-				duplex = ETH_LINK_HALF_DUPLEX;
+			case RTE_ETH_LINK_SPEED_100M_HD:
+				speed = RTE_ETH_SPEED_NUM_100M;
+				duplex = RTE_ETH_LINK_HALF_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_100M:
-				speed = ETH_SPEED_NUM_100M;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_100M:
+				speed = RTE_ETH_SPEED_NUM_100M;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_1G:
-				speed = ETH_SPEED_NUM_1G;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_1G:
+				speed = RTE_ETH_SPEED_NUM_1G;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_2_5G:
-				speed = ETH_SPEED_NUM_2_5G;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_2_5G:
+				speed = RTE_ETH_SPEED_NUM_2_5G;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_10G:
-				speed = ETH_SPEED_NUM_10G;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_10G:
+				speed = RTE_ETH_SPEED_NUM_10G;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
 			default:
-				speed = ETH_SPEED_NUM_NONE;
-				duplex = ETH_LINK_FULL_DUPLEX;
+				speed = RTE_ETH_SPEED_NUM_NONE;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
 			}
 			/* Set link speed */
@@ -535,30 +535,30 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_mac_addrs = DPAA_MAX_MAC_FILTER;
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
-	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
 
 	if (fif->mac_type == fman_mac_1g) {
-		dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
-					| ETH_LINK_SPEED_10M
-					| ETH_LINK_SPEED_100M_HD
-					| ETH_LINK_SPEED_100M
-					| ETH_LINK_SPEED_1G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+					| RTE_ETH_LINK_SPEED_10M
+					| RTE_ETH_LINK_SPEED_100M_HD
+					| RTE_ETH_LINK_SPEED_100M
+					| RTE_ETH_LINK_SPEED_1G;
 	} else if (fif->mac_type == fman_mac_2_5g) {
-		dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
-					| ETH_LINK_SPEED_10M
-					| ETH_LINK_SPEED_100M_HD
-					| ETH_LINK_SPEED_100M
-					| ETH_LINK_SPEED_1G
-					| ETH_LINK_SPEED_2_5G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+					| RTE_ETH_LINK_SPEED_10M
+					| RTE_ETH_LINK_SPEED_100M_HD
+					| RTE_ETH_LINK_SPEED_100M
+					| RTE_ETH_LINK_SPEED_1G
+					| RTE_ETH_LINK_SPEED_2_5G;
 	} else if (fif->mac_type == fman_mac_10g) {
-		dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
-					| ETH_LINK_SPEED_10M
-					| ETH_LINK_SPEED_100M_HD
-					| ETH_LINK_SPEED_100M
-					| ETH_LINK_SPEED_1G
-					| ETH_LINK_SPEED_2_5G
-					| ETH_LINK_SPEED_10G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+					| RTE_ETH_LINK_SPEED_10M
+					| RTE_ETH_LINK_SPEED_100M_HD
+					| RTE_ETH_LINK_SPEED_100M
+					| RTE_ETH_LINK_SPEED_1G
+					| RTE_ETH_LINK_SPEED_2_5G
+					| RTE_ETH_LINK_SPEED_10G;
 	} else {
 		DPAA_PMD_ERR("invalid link_speed: %s, %d",
 			     dpaa_intf->name, fif->mac_type);
@@ -591,12 +591,12 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} rx_offload_map[] = {
-			{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
-			{DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
-			{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
-			{DEV_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
-			{DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+			{RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+			{RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+			{RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+			{RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+			{RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
 	};
 
 	/* Update Rx offload info */
@@ -623,14 +623,14 @@ dpaa_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} tx_offload_map[] = {
-			{DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
-			{DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
-			{DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
-			{DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
-			{DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
-			{DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
-			{DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+			{RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+			{RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+			{RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+			{RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+			{RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+			{RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
 	};
 
 	/* Update Tx offload info */
@@ -664,7 +664,7 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 			ret = dpaa_get_link_status(__fif->node_name, link);
 			if (ret)
 				return ret;
-			if (link->link_status == ETH_LINK_DOWN &&
+			if (link->link_status == RTE_ETH_LINK_DOWN &&
 			    wait_to_complete)
 				rte_delay_ms(CHECK_INTERVAL);
 			else
@@ -675,15 +675,15 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	}
 
 	if (ioctl_version < 2) {
-		link->link_duplex = ETH_LINK_FULL_DUPLEX;
-		link->link_autoneg = ETH_LINK_AUTONEG;
+		link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+		link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 		if (fif->mac_type == fman_mac_1g)
-			link->link_speed = ETH_SPEED_NUM_1G;
+			link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		else if (fif->mac_type == fman_mac_2_5g)
-			link->link_speed = ETH_SPEED_NUM_2_5G;
+			link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		else if (fif->mac_type == fman_mac_10g)
-			link->link_speed = ETH_SPEED_NUM_10G;
+			link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		else
 			DPAA_PMD_ERR("invalid link_speed: %s, %d",
 				     dpaa_intf->name, fif->mac_type);
@@ -962,7 +962,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (max_rx_pktlen <= buffsz) {
 		;
 	} else if (dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_SCATTER) {
+			RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
 			DPAA_PMD_ERR("Maximum Rx packet size %d too big to fit "
 				"MaxSGlist %d",
@@ -1268,7 +1268,7 @@ static int dpaa_link_down(struct rte_eth_dev *dev)
 	__fif = container_of(fif, struct __fman_if, __if);
 
 	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
-		dpaa_update_link_status(__fif->node_name, ETH_LINK_DOWN);
+		dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_DOWN);
 	else
 		return dpaa_eth_dev_stop(dev);
 	return 0;
@@ -1284,7 +1284,7 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
 	__fif = container_of(fif, struct __fman_if, __if);
 
 	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
-		dpaa_update_link_status(__fif->node_name, ETH_LINK_UP);
+		dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_UP);
 	else
 		dpaa_eth_dev_start(dev);
 	return 0;
@@ -1314,10 +1314,10 @@ dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	if (fc_conf->mode == RTE_FC_NONE) {
+	if (fc_conf->mode == RTE_ETH_FC_NONE) {
 		return 0;
-	} else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
-		 fc_conf->mode == RTE_FC_FULL) {
+	} else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE ||
+		 fc_conf->mode == RTE_ETH_FC_FULL) {
 		fman_if_set_fc_threshold(dev->process_private,
 					 fc_conf->high_water,
 					 fc_conf->low_water,
@@ -1361,11 +1361,11 @@ dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
 	}
 	ret = fman_if_get_fc_threshold(dev->process_private);
 	if (ret) {
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		fc_conf->pause_time =
 			fman_if_get_fc_quanta(dev->process_private);
 	} else {
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return 0;
@@ -1626,10 +1626,10 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf,
 	fc_conf = dpaa_intf->fc_conf;
 	ret = fman_if_get_fc_threshold(fman_intf);
 	if (ret) {
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		fc_conf->pause_time = fman_if_get_fc_quanta(fman_intf);
 	} else {
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return 0;
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b5728e09c29f..c868e9d5bd9b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -74,11 +74,11 @@
 #define DPAA_DEBUG_FQ_TX_ERROR   1
 
 #define DPAA_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_L2_PAYLOAD | \
-	ETH_RSS_IP | \
-	ETH_RSS_UDP | \
-	ETH_RSS_TCP | \
-	ETH_RSS_SCTP)
+	RTE_ETH_RSS_L2_PAYLOAD | \
+	RTE_ETH_RSS_IP | \
+	RTE_ETH_RSS_UDP | \
+	RTE_ETH_RSS_TCP | \
+	RTE_ETH_RSS_SCTP)
 
 #define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
 		PKT_TX_IP_CKSUM |                \
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index c5b5ec869519..1ccd03602790 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -394,7 +394,7 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 		if (req_dist_set % 2 != 0) {
 			dist_field = 1U << loop;
 			switch (dist_field) {
-			case ETH_RSS_L2_PAYLOAD:
+			case RTE_ETH_RSS_L2_PAYLOAD:
 
 				if (l2_configured)
 					break;
@@ -404,9 +404,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_ETH;
 				break;
 
-			case ETH_RSS_IPV4:
-			case ETH_RSS_FRAG_IPV4:
-			case ETH_RSS_NONFRAG_IPV4_OTHER:
+			case RTE_ETH_RSS_IPV4:
+			case RTE_ETH_RSS_FRAG_IPV4:
+			case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
 
 				if (ipv4_configured)
 					break;
@@ -415,10 +415,10 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_IPV4;
 				break;
 
-			case ETH_RSS_IPV6:
-			case ETH_RSS_FRAG_IPV6:
-			case ETH_RSS_NONFRAG_IPV6_OTHER:
-			case ETH_RSS_IPV6_EX:
+			case RTE_ETH_RSS_IPV6:
+			case RTE_ETH_RSS_FRAG_IPV6:
+			case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+			case RTE_ETH_RSS_IPV6_EX:
 
 				if (ipv6_configured)
 					break;
@@ -427,9 +427,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_IPV6;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_TCP:
-			case ETH_RSS_NONFRAG_IPV6_TCP:
-			case ETH_RSS_IPV6_TCP_EX:
+			case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+			case RTE_ETH_RSS_IPV6_TCP_EX:
 
 				if (tcp_configured)
 					break;
@@ -438,9 +438,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_TCP;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_UDP:
-			case ETH_RSS_NONFRAG_IPV6_UDP:
-			case ETH_RSS_IPV6_UDP_EX:
+			case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+			case RTE_ETH_RSS_IPV6_UDP_EX:
 
 				if (udp_configured)
 					break;
@@ -449,8 +449,8 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_UDP;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_SCTP:
-			case ETH_RSS_NONFRAG_IPV6_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
 
 				if (sctp_configured)
 					break;
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 08f49af7685d..3170694841df 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -220,9 +220,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
 		if (req_dist_set % 2 != 0) {
 			dist_field = 1ULL << loop;
 			switch (dist_field) {
-			case ETH_RSS_L2_PAYLOAD:
-			case ETH_RSS_ETH:
-
+			case RTE_ETH_RSS_L2_PAYLOAD:
+			case RTE_ETH_RSS_ETH:
 				if (l2_configured)
 					break;
 				l2_configured = 1;
@@ -238,7 +237,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_PPPOE:
+			case RTE_ETH_RSS_PPPOE:
 				if (pppoe_configured)
 					break;
 				kg_cfg->extracts[i].extract.from_hdr.prot =
@@ -252,7 +251,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_ESP:
+			case RTE_ETH_RSS_ESP:
 				if (esp_configured)
 					break;
 				esp_configured = 1;
@@ -268,7 +267,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_AH:
+			case RTE_ETH_RSS_AH:
 				if (ah_configured)
 					break;
 				ah_configured = 1;
@@ -284,8 +283,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_C_VLAN:
-			case ETH_RSS_S_VLAN:
+			case RTE_ETH_RSS_C_VLAN:
+			case RTE_ETH_RSS_S_VLAN:
 				if (vlan_configured)
 					break;
 				vlan_configured = 1;
@@ -301,7 +300,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_MPLS:
+			case RTE_ETH_RSS_MPLS:
 
 				if (mpls_configured)
 					break;
@@ -338,13 +337,13 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_IPV4:
-			case ETH_RSS_FRAG_IPV4:
-			case ETH_RSS_NONFRAG_IPV4_OTHER:
-			case ETH_RSS_IPV6:
-			case ETH_RSS_FRAG_IPV6:
-			case ETH_RSS_NONFRAG_IPV6_OTHER:
-			case ETH_RSS_IPV6_EX:
+			case RTE_ETH_RSS_IPV4:
+			case RTE_ETH_RSS_FRAG_IPV4:
+			case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
+			case RTE_ETH_RSS_IPV6:
+			case RTE_ETH_RSS_FRAG_IPV6:
+			case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+			case RTE_ETH_RSS_IPV6_EX:
 
 				if (l3_configured)
 					break;
@@ -382,12 +381,12 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 			break;
 
-			case ETH_RSS_NONFRAG_IPV4_TCP:
-			case ETH_RSS_NONFRAG_IPV6_TCP:
-			case ETH_RSS_NONFRAG_IPV4_UDP:
-			case ETH_RSS_NONFRAG_IPV6_UDP:
-			case ETH_RSS_IPV6_TCP_EX:
-			case ETH_RSS_IPV6_UDP_EX:
+			case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+			case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+			case RTE_ETH_RSS_IPV6_TCP_EX:
+			case RTE_ETH_RSS_IPV6_UDP_EX:
 
 				if (l4_configured)
 					break;
@@ -414,8 +413,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_SCTP:
-			case ETH_RSS_NONFRAG_IPV6_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
 
 				if (sctp_configured)
 					break;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index a0270e78520e..59e728577f53 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -38,33 +38,33 @@
 
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
-		DEV_RX_OFFLOAD_CHECKSUM |
-		DEV_RX_OFFLOAD_SCTP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_CHECKSUM |
+		RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 /* Rx offloads which cannot be disabled */
 static uint64_t dev_rx_offloads_nodis =
-		DEV_RX_OFFLOAD_RSS_HASH |
-		DEV_RX_OFFLOAD_SCATTER;
+		RTE_ETH_RX_OFFLOAD_RSS_HASH |
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 /* Supported Tx offloads */
 static uint64_t dev_tx_offloads_sup =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_MT_LOCKFREE |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 /* Tx offloads which cannot be disabled */
 static uint64_t dev_tx_offloads_nodis =
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 /* enable timestamp in mbuf */
 bool dpaa2_enable_ts[RTE_MAX_ETHPORTS];
@@ -142,7 +142,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* VLAN Filter not avaialble */
 		if (!priv->max_vlan_filters) {
 			DPAA2_PMD_INFO("VLAN filter not available");
@@ -150,7 +150,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		}
 
 		if (dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_VLAN_FILTER)
+			RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ret = dpni_enable_vlan_filter(dpni, CMD_PRI_LOW,
 						      priv->token, true);
 		else
@@ -251,13 +251,13 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 					dev_rx_offloads_nodis;
 	dev_info->tx_offload_capa = dev_tx_offloads_sup |
 					dev_tx_offloads_nodis;
-	dev_info->speed_capa = ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_2_5G |
-			ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_2_5G |
+			RTE_ETH_LINK_SPEED_10G;
 
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
-	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	dev_info->flow_type_rss_offloads = DPAA2_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxportconf.burst_size = dpaa2_dqrr_size;
@@ -270,10 +270,10 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->default_rxportconf.ring_size = DPAA2_RX_DEFAULT_NBDESC;
 
 	if (dpaa2_svr_family == SVR_LX2160A) {
-		dev_info->speed_capa |= ETH_LINK_SPEED_25G |
-				ETH_LINK_SPEED_40G |
-				ETH_LINK_SPEED_50G |
-				ETH_LINK_SPEED_100G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G |
+				RTE_ETH_LINK_SPEED_40G |
+				RTE_ETH_LINK_SPEED_50G |
+				RTE_ETH_LINK_SPEED_100G;
 	}
 
 	return 0;
@@ -291,15 +291,15 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} rx_offload_map[] = {
-			{DEV_RX_OFFLOAD_CHECKSUM, " Checksum,"},
-			{DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
-			{DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
-			{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
-			{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
-			{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
-			{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
-			{DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
+			{RTE_ETH_RX_OFFLOAD_CHECKSUM, " Checksum,"},
+			{RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+			{RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
+			{RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
+			{RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
+			{RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+			{RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"},
+			{RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"}
 	};
 
 	/* Update Rx offload info */
@@ -326,15 +326,15 @@ dpaa2_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} tx_offload_map[] = {
-			{DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
-			{DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
-			{DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
-			{DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
-			{DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
-			{DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
-			{DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
-			{DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+			{RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+			{RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+			{RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+			{RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+			{RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+			{RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+			{RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
 	};
 
 	/* Update Tx offload info */
@@ -573,7 +573,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		return -1;
 	}
 
-	if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (eth_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) {
 			ret = dpaa2_setup_flow_dist(dev,
 					eth_conf->rx_adv_conf.rss_conf.rss_hf,
@@ -587,12 +587,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		rx_l3_csum_offload = true;
 
-	if ((rx_offloads & DEV_RX_OFFLOAD_UDP_CKSUM) ||
-		(rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) ||
-		(rx_offloads & DEV_RX_OFFLOAD_SCTP_CKSUM))
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) ||
+		(rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) ||
+		(rx_offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM))
 		rx_l4_csum_offload = true;
 
 	ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -610,7 +610,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 	}
 
 #if !defined(RTE_LIBRTE_IEEE1588)
-	if (rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 #endif
 	{
 		ret = rte_mbuf_dyn_rx_timestamp_register(
@@ -623,12 +623,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		dpaa2_enable_ts[dev->data->port_id] = true;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		tx_l3_csum_offload = true;
 
-	if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ||
-		(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
-		(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ||
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM))
 		tx_l4_csum_offload = true;
 
 	ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -660,8 +660,8 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
-		dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+		dpaa2_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
 
 	dpaa2_tm_init(dev);
 
@@ -1856,7 +1856,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 			DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
 			return -1;
 		}
-		if (state.up == ETH_LINK_DOWN &&
+		if (state.up == RTE_ETH_LINK_DOWN &&
 		    wait_to_complete)
 			rte_delay_ms(CHECK_INTERVAL);
 		else
@@ -1868,9 +1868,9 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 	link.link_speed = state.rate;
 
 	if (state.options & DPNI_LINK_OPT_HALF_DUPLEX)
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	else
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	ret = rte_eth_linkstatus_set(dev, &link);
 	if (ret == -1)
@@ -2031,9 +2031,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *	No TX side flow control (send Pause frame disabled)
 		 */
 		if (!(state.options & DPNI_LINK_OPT_ASYM_PAUSE))
-			fc_conf->mode = RTE_FC_FULL;
+			fc_conf->mode = RTE_ETH_FC_FULL;
 		else
-			fc_conf->mode = RTE_FC_RX_PAUSE;
+			fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	} else {
 		/* DPNI_LINK_OPT_PAUSE not set
 		 *  if ASYM_PAUSE set,
@@ -2043,9 +2043,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *	Flow control disabled
 		 */
 		if (state.options & DPNI_LINK_OPT_ASYM_PAUSE)
-			fc_conf->mode = RTE_FC_TX_PAUSE;
+			fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		else
-			fc_conf->mode = RTE_FC_NONE;
+			fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return ret;
@@ -2089,14 +2089,14 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	/* update cfg with fc_conf */
 	switch (fc_conf->mode) {
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		/* Full flow control;
 		 * OPT_PAUSE set, ASYM_PAUSE not set
 		 */
 		cfg.options |= DPNI_LINK_OPT_PAUSE;
 		cfg.options &= ~DPNI_LINK_OPT_ASYM_PAUSE;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		/* Enable RX flow control
 		 * OPT_PAUSE not set;
 		 * ASYM_PAUSE set;
@@ -2104,7 +2104,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
 		cfg.options &= ~DPNI_LINK_OPT_PAUSE;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		/* Enable TX Flow control
 		 * OPT_PAUSE set
 		 * ASYM_PAUSE set
@@ -2112,7 +2112,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		cfg.options |= DPNI_LINK_OPT_PAUSE;
 		cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
 		break;
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		/* Disable Flow control
 		 * OPT_PAUSE not set
 		 * ASYM_PAUSE not set
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index fdc62ec30d22..c5e9267bf04d 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -65,17 +65,17 @@
 #define DPAA2_TX_CONF_ENABLE	0x08
 
 #define DPAA2_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_L2_PAYLOAD | \
-	ETH_RSS_IP | \
-	ETH_RSS_UDP | \
-	ETH_RSS_TCP | \
-	ETH_RSS_SCTP | \
-	ETH_RSS_MPLS | \
-	ETH_RSS_C_VLAN | \
-	ETH_RSS_S_VLAN | \
-	ETH_RSS_ESP | \
-	ETH_RSS_AH | \
-	ETH_RSS_PPPOE)
+	RTE_ETH_RSS_L2_PAYLOAD | \
+	RTE_ETH_RSS_IP | \
+	RTE_ETH_RSS_UDP | \
+	RTE_ETH_RSS_TCP | \
+	RTE_ETH_RSS_SCTP | \
+	RTE_ETH_RSS_MPLS | \
+	RTE_ETH_RSS_C_VLAN | \
+	RTE_ETH_RSS_S_VLAN | \
+	RTE_ETH_RSS_ESP | \
+	RTE_ETH_RSS_AH | \
+	RTE_ETH_RSS_PPPOE)
 
 /* LX2 FRC Parsed values (Little Endian) */
 #define DPAA2_PKT_TYPE_ETHER		0x0060
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f40369e2c3f9..7c77243b5d1a 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -773,7 +773,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 #endif
 
 		if (eth_data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_VLAN_STRIP)
+				RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			rte_vlan_strip(bufs[num_rx]);
 
 		dq_storage++;
@@ -987,7 +987,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 							eth_data->port_id);
 
 		if (eth_data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_VLAN_STRIP) {
+				RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 			rte_vlan_strip(bufs[num_rx]);
 		}
 
@@ -1230,7 +1230,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 					if (unlikely(((*bufs)->ol_flags
 						& PKT_TX_VLAN_PKT) ||
 						(eth_data->dev_conf.txmode.offloads
-						& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+						& RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
 						ret = rte_vlan_insert(bufs);
 						if (ret)
 							goto send_n_return;
@@ -1273,7 +1273,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 			if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN_PKT) ||
 				(eth_data->dev_conf.txmode.offloads
-				& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+				& RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
 				int ret = rte_vlan_insert(bufs);
 				if (ret)
 					goto send_n_return;
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 7d5d6377859a..a548ae2ccb2c 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -82,15 +82,15 @@
 #define E1000_FTQF_QUEUE_ENABLE          0x00000100
 
 #define IGB_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 /*
  * The overhead from MTU to max frame size.
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 73152dec6ed1..9da477e59def 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -597,8 +597,8 @@ eth_em_start(struct rte_eth_dev *dev)
 
 	e1000_clear_hw_cntrs_base_generic(hw);
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
-			ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+			RTE_ETH_VLAN_EXTEND_MASK;
 	ret = eth_em_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Unable to update vlan offload");
@@ -611,39 +611,39 @@ eth_em_start(struct rte_eth_dev *dev)
 
 	/* Setup link speed and duplex */
 	speeds = &dev->data->dev_conf.link_speeds;
-	if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
 		hw->mac.autoneg = 1;
 	} else {
 		num_speeds = 0;
-		autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+		autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 		/* Reset */
 		hw->phy.autoneg_advertised = 0;
 
-		if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+		if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
 			num_speeds = -1;
 			goto error_invalid_config;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_1G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_1G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
 			num_speeds++;
 		}
@@ -1102,9 +1102,9 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.nb_mtu_seg_max = EM_TX_MAX_MTU_SEG,
 	};
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G;
 
 	/* Preferred queue parameters */
 	dev_info->default_rxportconf.nb_queues = 1;
@@ -1162,17 +1162,17 @@ eth_em_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		uint16_t duplex, speed;
 		hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
 		link.link_duplex = (duplex == FULL_DUPLEX) ?
-				ETH_LINK_FULL_DUPLEX :
-				ETH_LINK_HALF_DUPLEX;
+				RTE_ETH_LINK_FULL_DUPLEX :
+				RTE_ETH_LINK_HALF_DUPLEX;
 		link.link_speed = speed;
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 		link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 	} else {
-		link.link_speed = ETH_SPEED_NUM_NONE;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_status = ETH_LINK_DOWN;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -1424,15 +1424,15 @@ eth_em_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if(mask & ETH_VLAN_STRIP_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			em_vlan_hw_strip_enable(dev);
 		else
 			em_vlan_hw_strip_disable(dev);
 	}
 
-	if(mask & ETH_VLAN_FILTER_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			em_vlan_hw_filter_enable(dev);
 		else
 			em_vlan_hw_filter_disable(dev);
@@ -1601,7 +1601,7 @@ eth_em_interrupt_action(struct rte_eth_dev *dev,
 	if (link.link_status) {
 		PMD_INIT_LOG(INFO, " Port %d: Link Up - speed %u Mbps - %s",
 			     dev->data->port_id, link.link_speed,
-			     link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			     link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 			     "full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down", dev->data->port_id);
@@ -1683,13 +1683,13 @@ eth_em_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		rx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 344149c19147..648b04154c5b 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -93,7 +93,7 @@ struct em_rx_queue {
 	struct em_rx_entry *sw_ring;   /**< address of RX software ring. */
 	struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
 	struct rte_mbuf *pkt_last_seg;  /**< Last segment of current packet. */
-	uint64_t	    offloads;   /**< Offloads of DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads;   /**< Offloads of RTE_ETH_RX_OFFLOAD_* */
 	uint16_t            nb_rx_desc; /**< number of RX descriptors. */
 	uint16_t            rx_tail;    /**< current value of RDT register. */
 	uint16_t            nb_rx_hold; /**< number of held free RX desc. */
@@ -173,7 +173,7 @@ struct em_tx_queue {
 	uint8_t                wthresh;  /**< Write-back threshold register. */
 	struct em_ctx_info ctx_cache;
 	/**< Hardware context history.*/
-	uint64_t	       offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+	uint64_t	       offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
@@ -1171,11 +1171,11 @@ em_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 
 	RTE_SET_USED(dev);
 	tx_offload_capa =
-		DEV_TX_OFFLOAD_MULTI_SEGS  |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	return tx_offload_capa;
 }
@@ -1369,13 +1369,13 @@ em_get_rx_port_offloads_capa(void)
 	uint64_t rx_offload_capa;
 
 	rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP  |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_IPV4_CKSUM  |
-		DEV_RX_OFFLOAD_UDP_CKSUM   |
-		DEV_RX_OFFLOAD_TCP_CKSUM   |
-		DEV_RX_OFFLOAD_KEEP_CRC    |
-		DEV_RX_OFFLOAD_SCATTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	return rx_offload_capa;
 }
@@ -1469,7 +1469,7 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
 	rxq->queue_id = queue_idx;
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1788,7 +1788,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 *  call to configure
 		 */
-		if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -1831,7 +1831,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (!dev->data->scattered_rx)
 			PMD_INIT_LOG(DEBUG, "forcing scatter mode");
 		dev->rx_pkt_burst = eth_em_recv_scattered_pkts;
@@ -1844,7 +1844,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 	 */
 	rxcsum = E1000_READ_REG(hw, E1000_RXCSUM);
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= E1000_RXCSUM_IPOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_IPOFL;
@@ -1870,7 +1870,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 	}
 
 	/* Setup the Receive Control Register. */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
 	else
 		rctl |= E1000_RCTL_SECRC; /* Strip Ethernet CRC. */
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index dbe811a1ad2f..ae3bc4a9c201 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -1073,21 +1073,21 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
 	uint16_t nb_rx_q = dev->data->nb_rx_queues;
 	uint16_t nb_tx_q = dev->data->nb_tx_queues;
 
-	if ((rx_mq_mode & ETH_MQ_RX_DCB_FLAG) ||
-	    tx_mq_mode == ETH_MQ_TX_DCB ||
-	    tx_mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	if ((rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) ||
+	    tx_mq_mode == RTE_ETH_MQ_TX_DCB ||
+	    tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		PMD_INIT_LOG(ERR, "DCB mode is not supported.");
 		return -EINVAL;
 	}
 	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
 		/* Check multi-queue mode.
-		 * To no break software we accept ETH_MQ_RX_NONE as this might
+		 * To no break software we accept RTE_ETH_MQ_RX_NONE as this might
 		 * be used to turn off VLAN filter.
 		 */
 
-		if (rx_mq_mode == ETH_MQ_RX_NONE ||
-		    rx_mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+		if (rx_mq_mode == RTE_ETH_MQ_RX_NONE ||
+		    rx_mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
 			RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
 		} else {
 			/* Only support one queue on VFs.
@@ -1099,12 +1099,12 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
 			return -EINVAL;
 		}
 		/* TX mode is not used here, so mode might be ignored.*/
-		if (tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+		if (tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
 			/* SRIOV only works in VMDq enable mode */
 			PMD_INIT_LOG(WARNING, "SRIOV is active,"
 					" TX mode %d is not supported. "
 					" Driver will behave as %d mode.",
-					tx_mq_mode, ETH_MQ_TX_VMDQ_ONLY);
+					tx_mq_mode, RTE_ETH_MQ_TX_VMDQ_ONLY);
 		}
 
 		/* check valid queue number */
@@ -1117,17 +1117,17 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
 		/* To no break software that set invalid mode, only display
 		 * warning if invalid mode is used.
 		 */
-		if (rx_mq_mode != ETH_MQ_RX_NONE &&
-		    rx_mq_mode != ETH_MQ_RX_VMDQ_ONLY &&
-		    rx_mq_mode != ETH_MQ_RX_RSS) {
+		if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+		    rx_mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY &&
+		    rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
 			/* RSS together with VMDq not supported*/
 			PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
 				     rx_mq_mode);
 			return -EINVAL;
 		}
 
-		if (tx_mq_mode != ETH_MQ_TX_NONE &&
-		    tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+		if (tx_mq_mode != RTE_ETH_MQ_TX_NONE &&
+		    tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
 			PMD_INIT_LOG(WARNING, "TX mode %d is not supported."
 					" Due to txmode is meaningless in this"
 					" driver, just ignore.",
@@ -1146,8 +1146,8 @@ eth_igb_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multipe queue mode checking */
 	ret  = igb_check_mq_mode(dev);
@@ -1287,8 +1287,8 @@ eth_igb_start(struct rte_eth_dev *dev)
 	/*
 	 * VLAN Offload Settings
 	 */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
-			ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+			RTE_ETH_VLAN_EXTEND_MASK;
 	ret = eth_igb_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Unable to set vlan offload");
@@ -1296,7 +1296,7 @@ eth_igb_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
 		/* Enable VLAN filter since VMDq always use VLAN filter */
 		igb_vmdq_vlan_hw_filter_enable(dev);
 	}
@@ -1310,39 +1310,39 @@ eth_igb_start(struct rte_eth_dev *dev)
 
 	/* Setup link speed and duplex */
 	speeds = &dev->data->dev_conf.link_speeds;
-	if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
 		hw->mac.autoneg = 1;
 	} else {
 		num_speeds = 0;
-		autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+		autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 		/* Reset */
 		hw->phy.autoneg_advertised = 0;
 
-		if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+		if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
 			num_speeds = -1;
 			goto error_invalid_config;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_1G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_1G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
 			num_speeds++;
 		}
@@ -2185,21 +2185,21 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	case e1000_82576:
 		dev_info->max_rx_queues = 16;
 		dev_info->max_tx_queues = 16;
-		dev_info->max_vmdq_pools = ETH_8_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
 		dev_info->vmdq_queue_num = 16;
 		break;
 
 	case e1000_82580:
 		dev_info->max_rx_queues = 8;
 		dev_info->max_tx_queues = 8;
-		dev_info->max_vmdq_pools = ETH_8_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
 		dev_info->vmdq_queue_num = 8;
 		break;
 
 	case e1000_i350:
 		dev_info->max_rx_queues = 8;
 		dev_info->max_tx_queues = 8;
-		dev_info->max_vmdq_pools = ETH_8_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
 		dev_info->vmdq_queue_num = 8;
 		break;
 
@@ -2225,7 +2225,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		return -EINVAL;
 	}
 	dev_info->hash_key_size = IGB_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = IGB_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -2251,9 +2251,9 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->rx_desc_lim = rx_desc_lim;
 	dev_info->tx_desc_lim = tx_desc_lim;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G;
 
 	dev_info->max_mtu = dev_info->max_rx_pktlen - E1000_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -2296,12 +2296,12 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_rx_bufsize = 256; /* See BSIZE field of RCTL register. */
 	dev_info->max_rx_pktlen  = 0x3FFF; /* See RLPML register. */
 	dev_info->max_mac_addrs = hw->mac.rar_entry_count;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-				DEV_TX_OFFLOAD_IPV4_CKSUM  |
-				DEV_TX_OFFLOAD_UDP_CKSUM   |
-				DEV_TX_OFFLOAD_TCP_CKSUM   |
-				DEV_TX_OFFLOAD_SCTP_CKSUM  |
-				DEV_TX_OFFLOAD_TCP_TSO;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	switch (hw->mac.type) {
 	case e1000_vfadapt:
 		dev_info->max_rx_queues = 2;
@@ -2402,17 +2402,17 @@ eth_igb_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		uint16_t duplex, speed;
 		hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
 		link.link_duplex = (duplex == FULL_DUPLEX) ?
-				ETH_LINK_FULL_DUPLEX :
-				ETH_LINK_HALF_DUPLEX;
+				RTE_ETH_LINK_FULL_DUPLEX :
+				RTE_ETH_LINK_HALF_DUPLEX;
 		link.link_speed = speed;
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 		link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 	} else if (!link_check) {
 		link.link_speed = 0;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_status = ETH_LINK_DOWN;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -2588,7 +2588,7 @@ eth_igb_vlan_tpid_set(struct rte_eth_dev *dev,
 	qinq &= E1000_CTRL_EXT_EXT_VLAN;
 
 	/* only outer TPID of double VLAN can be configured*/
-	if (qinq && vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (qinq && vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		reg = E1000_READ_REG(hw, E1000_VET);
 		reg = (reg & (~E1000_VET_VET_EXT)) |
 			((uint32_t)tpid << E1000_VET_VET_EXT_SHIFT);
@@ -2703,22 +2703,22 @@ eth_igb_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if(mask & ETH_VLAN_STRIP_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			igb_vlan_hw_strip_enable(dev);
 		else
 			igb_vlan_hw_strip_disable(dev);
 	}
 
-	if(mask & ETH_VLAN_FILTER_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			igb_vlan_hw_filter_enable(dev);
 		else
 			igb_vlan_hw_filter_disable(dev);
 	}
 
-	if(mask & ETH_VLAN_EXTEND_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			igb_vlan_hw_extend_enable(dev);
 		else
 			igb_vlan_hw_extend_disable(dev);
@@ -2870,7 +2870,7 @@ eth_igb_interrupt_action(struct rte_eth_dev *dev,
 				     " Port %d: Link Up - speed %u Mbps - %s",
 				     dev->data->port_id,
 				     (unsigned)link.link_speed,
-				     link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+				     link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 				     "full-duplex" : "half-duplex");
 		} else {
 			PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3024,13 +3024,13 @@ eth_igb_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		rx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -3099,18 +3099,18 @@ eth_igb_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 * on configuration
 		 */
 		switch (fc_conf->mode) {
-		case RTE_FC_NONE:
+		case RTE_ETH_FC_NONE:
 			ctrl &= ~E1000_CTRL_RFCE & ~E1000_CTRL_TFCE;
 			break;
-		case RTE_FC_RX_PAUSE:
+		case RTE_ETH_FC_RX_PAUSE:
 			ctrl |= E1000_CTRL_RFCE;
 			ctrl &= ~E1000_CTRL_TFCE;
 			break;
-		case RTE_FC_TX_PAUSE:
+		case RTE_ETH_FC_TX_PAUSE:
 			ctrl |= E1000_CTRL_TFCE;
 			ctrl &= ~E1000_CTRL_RFCE;
 			break;
-		case RTE_FC_FULL:
+		case RTE_ETH_FC_FULL:
 			ctrl |= E1000_CTRL_RFCE | E1000_CTRL_TFCE;
 			break;
 		default:
@@ -3258,22 +3258,22 @@ igbvf_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
 		     dev->data->port_id);
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/*
 	 * VF has no ability to enable/disable HW CRC
 	 * Keep the persistent behavior the same as Host PF
 	 */
 #ifndef RTE_LIBRTE_E1000_PF_DISABLE_STRIP_CRC
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
-		conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #else
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
 		PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #endif
 
@@ -3571,16 +3571,16 @@ eth_igb_rss_reta_update(struct rte_eth_dev *dev,
 	uint16_t idx, shift;
 	struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += IGB_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IGB_4_BIT_MASK);
 		if (!mask)
@@ -3612,16 +3612,16 @@ eth_igb_rss_reta_query(struct rte_eth_dev *dev,
 	uint16_t idx, shift;
 	struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += IGB_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IGB_4_BIT_MASK);
 		if (!mask)
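
(For context, not part of the patch: the RETA indexing that the hunks above
migrate to RTE_ETH_RETA_GROUP_SIZE looks the same on the application side.
A minimal sketch, assuming a configured port `port_id` with `nb_rxq` Rx
queues and a device RETA of `reta_size` entries; all names here are
illustrative.)

    #include <string.h>
    #include <rte_ethdev.h>

    /* Spread the indirection table round-robin across nb_rxq queues. */
    static int
    fill_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_rxq)
    {
            struct rte_eth_rss_reta_entry64 reta[reta_size / RTE_ETH_RETA_GROUP_SIZE];
            uint16_t i, idx, shift;

            memset(reta, 0, sizeof(reta));
            for (i = 0; i < reta_size; i++) {
                    idx = i / RTE_ETH_RETA_GROUP_SIZE;   /* which 64-entry group */
                    shift = i % RTE_ETH_RETA_GROUP_SIZE; /* slot inside the group */
                    reta[idx].mask |= 1ULL << shift;     /* mark slot as valid */
                    reta[idx].reta[shift] = i % nb_rxq;
            }
            return rte_eth_dev_rss_reta_update(port_id, reta, reta_size);
    }
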
diff --git a/drivers/net/e1000/igb_pf.c b/drivers/net/e1000/igb_pf.c
index 2ce74dd5a9a5..fe355ef6b3b5 100644
--- a/drivers/net/e1000/igb_pf.c
+++ b/drivers/net/e1000/igb_pf.c
@@ -88,7 +88,7 @@ void igb_pf_host_init(struct rte_eth_dev *eth_dev)
 	if (*vfinfo == NULL)
 		rte_panic("Cannot allocate memory for private VF data\n");
 
-	RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_8_POOLS;
+	RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_8_POOLS;
 	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
 	RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
 	RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
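
(Side note: the RTE_FC_* to RTE_ETH_FC_* rename in the
eth_igb_flow_ctrl_{get,set} hunks earlier maps one-to-one onto the public
flow-control API. A minimal sketch, assuming a started port `port_id`;
the function name is illustrative.)

    #include <string.h>
    #include <rte_ethdev.h>

    static int
    enable_full_flow_ctrl(uint16_t port_id)
    {
            struct rte_eth_fc_conf fc_conf;
            int ret;

            memset(&fc_conf, 0, sizeof(fc_conf));
            ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
            if (ret != 0)
                    return ret;
            fc_conf.mode = RTE_ETH_FC_FULL; /* pause frames both directions */
            return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
    }
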
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index a1d5eecc14a1..bcce2fc726d8 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -111,7 +111,7 @@ struct igb_rx_queue {
 	uint8_t             crc_len;    /**< 0 if CRC stripped, 4 otherwise. */
 	uint8_t             drop_en;  /**< If not 0, set SRRCTL.Drop_En. */
 	uint32_t            flags;      /**< RX flags. */
-	uint64_t	    offloads;   /**< offloads of DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads;   /**< offloads of RTE_ETH_RX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
@@ -186,7 +186,7 @@ struct igb_tx_queue {
 	/**< Start context position for transmit queue. */
 	struct igb_advctx_info ctx_cache[IGB_CTX_NUM];
 	/**< Hardware context history.*/
-	uint64_t	       offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+	uint64_t	       offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
@@ -1459,13 +1459,13 @@ igb_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 	uint64_t tx_offload_capa;
 
 	RTE_SET_USED(dev);
-	tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-			  DEV_TX_OFFLOAD_IPV4_CKSUM  |
-			  DEV_TX_OFFLOAD_UDP_CKSUM   |
-			  DEV_TX_OFFLOAD_TCP_CKSUM   |
-			  DEV_TX_OFFLOAD_SCTP_CKSUM  |
-			  DEV_TX_OFFLOAD_TCP_TSO     |
-			  DEV_TX_OFFLOAD_MULTI_SEGS;
+	tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+			  RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+			  RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+			  RTE_ETH_TX_OFFLOAD_TCP_TSO     |
+			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return tx_offload_capa;
 }
@@ -1640,19 +1640,19 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
 
 	hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP  |
-			  DEV_RX_OFFLOAD_VLAN_FILTER |
-			  DEV_RX_OFFLOAD_IPV4_CKSUM  |
-			  DEV_RX_OFFLOAD_UDP_CKSUM   |
-			  DEV_RX_OFFLOAD_TCP_CKSUM   |
-			  DEV_RX_OFFLOAD_KEEP_CRC    |
-			  DEV_RX_OFFLOAD_SCATTER     |
-			  DEV_RX_OFFLOAD_RSS_HASH;
+	rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+			  RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+			  RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+			  RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+			  RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+			  RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+			  RTE_ETH_RX_OFFLOAD_SCATTER     |
+			  RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (hw->mac.type == e1000_i350 ||
 	    hw->mac.type == e1000_i210 ||
 	    hw->mac.type == e1000_i211)
-		rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+		rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
 	return rx_offload_capa;
 }
@@ -1733,7 +1733,7 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
 		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1950,23 +1950,23 @@ igb_hw_rss_hash_set(struct e1000_hw *hw, struct rte_eth_rss_conf *rss_conf)
 	/* Set configured hashing protocols in MRQC register */
 	rss_hf = rss_conf->rss_hf;
 	mrqc = E1000_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV4_TCP;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6;
-	if (rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_EX)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP;
-	if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV4_UDP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP;
-	if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP_EX;
 	E1000_WRITE_REG(hw, E1000_MRQC, mrqc);
 }
@@ -2032,23 +2032,23 @@ int eth_igb_rss_hash_conf_get(struct rte_eth_dev *dev,
 	}
 	rss_hf = 0;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_EX)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP_EX)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP_EX)
-		rss_hf |= ETH_RSS_IPV6_UDP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
 	rss_conf->rss_hf = rss_hf;
 	return 0;
 }
@@ -2170,15 +2170,15 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 			E1000_VMOLR_ROPE | E1000_VMOLR_BAM |
 			E1000_VMOLR_MPME);
 
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_UNTAG)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_UNTAG)
 			vmolr |= E1000_VMOLR_AUPE;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_MC)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
 			vmolr |= E1000_VMOLR_ROMPE;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_UC)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 			vmolr |= E1000_VMOLR_ROPE;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_BROADCAST)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 			vmolr |= E1000_VMOLR_BAM;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_MULTICAST)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 			vmolr |= E1000_VMOLR_MPME;
 
 		E1000_WRITE_REG(hw, E1000_VMOLR(i), vmolr);
@@ -2214,9 +2214,9 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 	/* VLVF: set up filters for vlan tags as configured */
 	for (i = 0; i < cfg->nb_pool_maps; i++) {
 		/* set vlan id in VF register and set the valid bit */
-		E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE | \
-                        (cfg->pool_map[i].vlan_id & ETH_VLAN_ID_MAX) | \
-			((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT ) & \
+		E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE |
+			(cfg->pool_map[i].vlan_id & RTE_ETH_VLAN_ID_MAX) |
+			((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT) &
 			E1000_VLVF_POOLSEL_MASK)));
 	}
 
@@ -2268,7 +2268,7 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	uint32_t mrqc;
 
-	if (RTE_ETH_DEV_SRIOV(dev).active == ETH_8_POOLS) {
+	if (RTE_ETH_DEV_SRIOV(dev).active == RTE_ETH_8_POOLS) {
 		/*
 		 * SRIOV active scheme
 		 * FIXME if support RSS together with VMDq & SRIOV
@@ -2282,14 +2282,14 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * SRIOV inactive scheme
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-			case ETH_MQ_RX_RSS:
+			case RTE_ETH_MQ_RX_RSS:
 				igb_rss_configure(dev);
 				break;
-			case ETH_MQ_RX_VMDQ_ONLY:
+			case RTE_ETH_MQ_RX_VMDQ_ONLY:
 				/*Configure general VMDQ only RX parameters*/
 				igb_vmdq_rx_hw_configure(dev);
 				break;
-			case ETH_MQ_RX_NONE:
+			case RTE_ETH_MQ_RX_NONE:
 				/* if mq_mode is none, disable rss mode.*/
 			default:
 				igb_rss_disable(dev);
@@ -2338,7 +2338,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		 * Set maximum packet length by default, and might be updated
 		 * together with enabling/disabling dual VLAN.
 		 */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			max_len += VLAN_TAG_SIZE;
 
 		E1000_WRITE_REG(hw, E1000_RLPML, max_len);
@@ -2374,7 +2374,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 *  call to configure
 		 */
-		if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -2444,7 +2444,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		E1000_WRITE_REG(hw, E1000_RXDCTL(rxq->reg_idx), rxdctl);
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (!dev->data->scattered_rx)
 			PMD_INIT_LOG(DEBUG, "forcing scatter mode");
 		dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
@@ -2488,16 +2488,16 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 	rxcsum |= E1000_RXCSUM_PCSD;
 
 	/* Enable both L3/L4 rx checksum offload */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		rxcsum |= E1000_RXCSUM_IPOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_IPOFL;
 	if (rxmode->offloads &
-		(DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+		(RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		rxcsum |= E1000_RXCSUM_TUOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_TUOFL;
-	if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= E1000_RXCSUM_CRCOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_CRCOFL;
@@ -2505,7 +2505,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 	E1000_WRITE_REG(hw, E1000_RXCSUM, rxcsum);
 
 	/* Setup the Receive Control Register. */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
 
 		/* clear STRCRC bit in all queues */
@@ -2545,7 +2545,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		(hw->mac.mc_filter_type << E1000_RCTL_MO_SHIFT);
 
 	/* Make sure VLAN Filters are off. */
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_VMDQ_ONLY)
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY)
 		rctl &= ~E1000_RCTL_VFE;
 	/* Don't store bad packets. */
 	rctl &= ~E1000_RCTL_SBP;
@@ -2743,7 +2743,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
 		E1000_WRITE_REG(hw, E1000_RXDCTL(i), rxdctl);
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (!dev->data->scattered_rx)
 			PMD_INIT_LOG(DEBUG, "forcing scatter mode");
 		dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
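
(On the application side, the renamed RSS flags are requested at configure
time, before the driver translates them into MRQC bits as above. A minimal
sketch, assuming 4 Rx/Tx queues on `port_id`; values are illustrative.)

    #include <rte_ethdev.h>

    struct rte_eth_conf port_conf = {
            .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
            .rx_adv_conf = {
                    .rss_conf = {
                            .rss_key = NULL, /* keep the driver default key */
                            .rss_hf = RTE_ETH_RSS_IPV4 |
                                      RTE_ETH_RSS_NONFRAG_IPV4_TCP |
                                      RTE_ETH_RSS_NONFRAG_IPV4_UDP,
                    },
            },
    };
    int ret = rte_eth_dev_configure(port_id, 4, 4, &port_conf);

ethdev is expected to reject rss_hf bits outside
dev_info.flow_type_rss_offloads, so applications normally mask the request
against that capability first.
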
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index f3b17d70c9a4..4d2601d15a57 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -117,10 +117,10 @@ static const struct ena_stats ena_stats_rx_strings[] = {
 #define ENA_STATS_ARRAY_TX	ARRAY_SIZE(ena_stats_tx_strings)
 #define ENA_STATS_ARRAY_RX	ARRAY_SIZE(ena_stats_rx_strings)
 
-#define QUEUE_OFFLOADS (DEV_TX_OFFLOAD_TCP_CKSUM |\
-			DEV_TX_OFFLOAD_UDP_CKSUM |\
-			DEV_TX_OFFLOAD_IPV4_CKSUM |\
-			DEV_TX_OFFLOAD_TCP_TSO)
+#define QUEUE_OFFLOADS (RTE_ETH_TX_OFFLOAD_TCP_CKSUM |\
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |\
+			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |\
+			RTE_ETH_TX_OFFLOAD_TCP_TSO)
 #define MBUF_OFFLOADS (PKT_TX_L4_MASK |\
 		       PKT_TX_IP_CKSUM |\
 		       PKT_TX_TCP_SEG)
@@ -332,7 +332,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 	    (queue_offloads & QUEUE_OFFLOADS)) {
 		/* check if TSO is required */
 		if ((mbuf->ol_flags & PKT_TX_TCP_SEG) &&
-		    (queue_offloads & DEV_TX_OFFLOAD_TCP_TSO)) {
+		    (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
 			ena_tx_ctx->tso_enable = true;
 
 			ena_meta->l4_hdr_len = GET_L4_HDR_LEN(mbuf);
@@ -340,7 +340,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 
 		/* check if L3 checksum is needed */
 		if ((mbuf->ol_flags & PKT_TX_IP_CKSUM) &&
-		    (queue_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM))
+		    (queue_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM))
 			ena_tx_ctx->l3_csum_enable = true;
 
 		if (mbuf->ol_flags & PKT_TX_IPV6) {
@@ -357,12 +357,12 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 
 		/* check if L4 checksum is needed */
 		if (((mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) &&
-		    (queue_offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) {
+		    (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_TCP;
 			ena_tx_ctx->l4_csum_enable = true;
 		} else if (((mbuf->ol_flags & PKT_TX_L4_MASK) ==
 				PKT_TX_UDP_CKSUM) &&
-				(queue_offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+				(queue_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UDP;
 			ena_tx_ctx->l4_csum_enable = true;
 		} else {
@@ -643,9 +643,9 @@ static int ena_link_update(struct rte_eth_dev *dev,
 	struct rte_eth_link *link = &dev->data->dev_link;
 	struct ena_adapter *adapter = dev->data->dev_private;
 
-	link->link_status = adapter->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
-	link->link_speed = ETH_SPEED_NUM_NONE;
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_status = adapter->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
+	link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	return 0;
 }
@@ -923,7 +923,7 @@ static int ena_start(struct rte_eth_dev *dev)
 	if (rc)
 		goto err_start_tx;
 
-	if (adapter->edev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (adapter->edev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		rc = ena_rss_configure(adapter);
 		if (rc)
 			goto err_rss_init;
@@ -2004,9 +2004,9 @@ static int ena_dev_configure(struct rte_eth_dev *dev)
 
 	adapter->state = ENA_ADAPTER_STATE_CONFIG;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
-	dev->data->dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+	dev->data->dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	/* Scattered Rx cannot be turned off in the HW, so this capability must
 	 * be forced.
@@ -2067,17 +2067,17 @@ static uint64_t ena_get_rx_port_offloads(struct ena_adapter *adapter)
 	uint64_t port_offloads = 0;
 
 	if (adapter->offloads.rx_offloads & ENA_L3_IPV4_CSUM)
-		port_offloads |= DEV_RX_OFFLOAD_IPV4_CKSUM;
+		port_offloads |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 
 	if (adapter->offloads.rx_offloads &
 	    (ENA_L4_IPV4_CSUM | ENA_L4_IPV6_CSUM))
 		port_offloads |=
-			DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM;
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 	if (adapter->offloads.rx_offloads & ENA_RX_RSS_HASH)
-		port_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+		port_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
-	port_offloads |= DEV_RX_OFFLOAD_SCATTER;
+	port_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	return port_offloads;
 }
@@ -2087,17 +2087,17 @@ static uint64_t ena_get_tx_port_offloads(struct ena_adapter *adapter)
 	uint64_t port_offloads = 0;
 
 	if (adapter->offloads.tx_offloads & ENA_IPV4_TSO)
-		port_offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+		port_offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (adapter->offloads.tx_offloads & ENA_L3_IPV4_CSUM)
-		port_offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+		port_offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 	if (adapter->offloads.tx_offloads &
 	    (ENA_L4_IPV4_CSUM_PARTIAL | ENA_L4_IPV4_CSUM |
 	     ENA_L4_IPV6_CSUM | ENA_L4_IPV6_CSUM_PARTIAL))
 		port_offloads |=
-			DEV_TX_OFFLOAD_UDP_CKSUM | DEV_TX_OFFLOAD_TCP_CKSUM;
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
-	port_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	port_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return port_offloads;
 }
@@ -2130,14 +2130,14 @@ static int ena_infos_get(struct rte_eth_dev *dev,
 	ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
 
 	dev_info->speed_capa =
-			ETH_LINK_SPEED_1G   |
-			ETH_LINK_SPEED_2_5G |
-			ETH_LINK_SPEED_5G   |
-			ETH_LINK_SPEED_10G  |
-			ETH_LINK_SPEED_25G  |
-			ETH_LINK_SPEED_40G  |
-			ETH_LINK_SPEED_50G  |
-			ETH_LINK_SPEED_100G;
+			RTE_ETH_LINK_SPEED_1G   |
+			RTE_ETH_LINK_SPEED_2_5G |
+			RTE_ETH_LINK_SPEED_5G   |
+			RTE_ETH_LINK_SPEED_10G  |
+			RTE_ETH_LINK_SPEED_25G  |
+			RTE_ETH_LINK_SPEED_40G  |
+			RTE_ETH_LINK_SPEED_50G  |
+			RTE_ETH_LINK_SPEED_100G;
 
 	/* Inform framework about available features */
 	dev_info->rx_offload_capa = ena_get_rx_port_offloads(adapter);
@@ -2303,7 +2303,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	}
 #endif
 
-	fill_hash = rx_ring->offloads & DEV_RX_OFFLOAD_RSS_HASH;
+	fill_hash = rx_ring->offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	descs_in_use = rx_ring->ring_size -
 		ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1;
@@ -2416,11 +2416,11 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 #ifdef RTE_LIBRTE_ETHDEV_DEBUG
 		/* Check if requested offload is also enabled for the queue */
 		if ((ol_flags & PKT_TX_IP_CKSUM &&
-		     !(tx_ring->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)) ||
+		     !(tx_ring->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)) ||
 		    (l4_csum_flag == PKT_TX_TCP_CKSUM &&
-		     !(tx_ring->offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) ||
+		     !(tx_ring->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) ||
 		    (l4_csum_flag == PKT_TX_UDP_CKSUM &&
-		     !(tx_ring->offloads & DEV_TX_OFFLOAD_UDP_CKSUM))) {
+		     !(tx_ring->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM))) {
 			PMD_TX_LOG(DEBUG,
 				"mbuf[%" PRIu32 "]: requested offloads: %" PRIu16 " are not enabled for the queue[%u]\n",
 				i, m->nb_segs, tx_ring->id);
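
(The RTE_ETH_LINK_* values that ena_link_update() fills in above come
straight back to applications from the link query API. A minimal sketch;
`port_id` is assumed configured.)

    #include <stdio.h>
    #include <rte_ethdev.h>

    struct rte_eth_link link;

    if (rte_eth_link_get_nowait(port_id, &link) == 0 &&
        link.link_status == RTE_ETH_LINK_UP)
            printf("port %u: up, %u Mbps, %s\n", port_id, link.link_speed,
                   link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
                   "full-duplex" : "half-duplex");
    else
            printf("port %u: down\n", port_id);
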
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 4f4142ed12d0..865e1241e0ce 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -58,8 +58,8 @@
 
 #define ENA_HASH_KEY_SIZE		40
 
-#define ENA_ALL_RSS_HF (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
-			ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_UDP)
+#define ENA_ALL_RSS_HF (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define ENA_IO_TXQ_IDX(q)		(2 * (q))
 #define ENA_IO_RXQ_IDX(q)		(2 * (q) + 1)
diff --git a/drivers/net/ena/ena_rss.c b/drivers/net/ena/ena_rss.c
index 152098410fa2..be4007e3f3fe 100644
--- a/drivers/net/ena/ena_rss.c
+++ b/drivers/net/ena/ena_rss.c
@@ -76,7 +76,7 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
 	if (reta_size == 0 || reta_conf == NULL)
 		return -EINVAL;
 
-	if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		PMD_DRV_LOG(ERR,
 			"RSS was not configured for the PMD\n");
 		return -ENOTSUP;
@@ -93,8 +93,8 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
 		/* Each reta_conf is for 64 entries.
 		 * To support 128 we use 2 conf of 64.
 		 */
-		conf_idx = i / RTE_RETA_GROUP_SIZE;
-		idx = i % RTE_RETA_GROUP_SIZE;
+		conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		idx = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (TEST_BIT(reta_conf[conf_idx].mask, idx)) {
 			entry_value =
 				ENA_IO_RXQ_IDX(reta_conf[conf_idx].reta[idx]);
@@ -139,7 +139,7 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
 	if (reta_size == 0 || reta_conf == NULL)
 		return -EINVAL;
 
-	if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		PMD_DRV_LOG(ERR,
 			"RSS was not configured for the PMD\n");
 		return -ENOTSUP;
@@ -154,8 +154,8 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0 ; i < reta_size ; i++) {
-		reta_conf_idx = i / RTE_RETA_GROUP_SIZE;
-		reta_idx = i % RTE_RETA_GROUP_SIZE;
+		reta_conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (TEST_BIT(reta_conf[reta_conf_idx].mask, reta_idx))
 			reta_conf[reta_conf_idx].reta[reta_idx] =
 				ENA_IO_RXQ_IDX_REV(indirect_table[i]);
@@ -199,34 +199,34 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
 	/* Convert proto to ETH flag */
 	switch (proto) {
 	case ENA_ADMIN_RSS_TCP4:
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		break;
 	case ENA_ADMIN_RSS_UDP4:
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 		break;
 	case ENA_ADMIN_RSS_TCP6:
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 		break;
 	case ENA_ADMIN_RSS_UDP6:
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 		break;
 	case ENA_ADMIN_RSS_IP4:
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 		break;
 	case ENA_ADMIN_RSS_IP6:
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 		break;
 	case ENA_ADMIN_RSS_IP4_FRAG:
-		rss_hf |= ETH_RSS_FRAG_IPV4;
+		rss_hf |= RTE_ETH_RSS_FRAG_IPV4;
 		break;
 	case ENA_ADMIN_RSS_NOT_IP:
-		rss_hf |= ETH_RSS_L2_PAYLOAD;
+		rss_hf |= RTE_ETH_RSS_L2_PAYLOAD;
 		break;
 	case ENA_ADMIN_RSS_TCP6_EX:
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 		break;
 	case ENA_ADMIN_RSS_IP6_EX:
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 		break;
 	default:
 		break;
@@ -235,10 +235,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
 	/* Check if only DA or SA is being used for L3. */
 	switch (fields & ENA_HF_RSS_ALL_L3) {
 	case ENA_ADMIN_RSS_L3_SA:
-		rss_hf |= ETH_RSS_L3_SRC_ONLY;
+		rss_hf |= RTE_ETH_RSS_L3_SRC_ONLY;
 		break;
 	case ENA_ADMIN_RSS_L3_DA:
-		rss_hf |= ETH_RSS_L3_DST_ONLY;
+		rss_hf |= RTE_ETH_RSS_L3_DST_ONLY;
 		break;
 	default:
 		break;
@@ -247,10 +247,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
 	/* Check if only DA or SA is being used for L4. */
 	switch (fields & ENA_HF_RSS_ALL_L4) {
 	case ENA_ADMIN_RSS_L4_SP:
-		rss_hf |= ETH_RSS_L4_SRC_ONLY;
+		rss_hf |= RTE_ETH_RSS_L4_SRC_ONLY;
 		break;
 	case ENA_ADMIN_RSS_L4_DP:
-		rss_hf |= ETH_RSS_L4_DST_ONLY;
+		rss_hf |= RTE_ETH_RSS_L4_DST_ONLY;
 		break;
 	default:
 		break;
@@ -268,11 +268,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
 	fields_mask = ENA_ADMIN_RSS_L2_DA | ENA_ADMIN_RSS_L2_SA;
 
 	/* Determine which fields of L3 should be used. */
-	switch (rss_hf & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) {
-	case ETH_RSS_L3_DST_ONLY:
+	switch (rss_hf & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) {
+	case RTE_ETH_RSS_L3_DST_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L3_DA;
 		break;
-	case ETH_RSS_L3_SRC_ONLY:
+	case RTE_ETH_RSS_L3_SRC_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L3_SA;
 		break;
 	default:
@@ -284,11 +284,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
 	}
 
 	/* Determine which fields of L4 should be used. */
-	switch (rss_hf & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) {
-	case ETH_RSS_L4_DST_ONLY:
+	switch (rss_hf & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) {
+	case RTE_ETH_RSS_L4_DST_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L4_DP;
 		break;
-	case ETH_RSS_L4_SRC_ONLY:
+	case RTE_ETH_RSS_L4_SRC_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L4_SP;
 		break;
 	default:
@@ -334,43 +334,43 @@ static int ena_set_hash_fields(struct ena_com_dev *ena_dev, uint64_t rss_hf)
 	int rc, i;
 
 	/* Turn on appropriate fields for each requested packet type */
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) != 0)
 		selected_fields[ENA_ADMIN_RSS_TCP4].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP4, rss_hf);
 
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) != 0)
 		selected_fields[ENA_ADMIN_RSS_UDP4].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP4, rss_hf);
 
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) != 0)
 		selected_fields[ENA_ADMIN_RSS_TCP6].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6, rss_hf);
 
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) != 0)
 		selected_fields[ENA_ADMIN_RSS_UDP6].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP6, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV4) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV4) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP4].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV6) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV6) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP6].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6, rss_hf);
 
-	if ((rss_hf & ETH_RSS_FRAG_IPV4) != 0)
+	if ((rss_hf & RTE_ETH_RSS_FRAG_IPV4) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP4_FRAG].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4_FRAG, rss_hf);
 
-	if ((rss_hf & ETH_RSS_L2_PAYLOAD) != 0)
+	if ((rss_hf & RTE_ETH_RSS_L2_PAYLOAD) != 0)
 		selected_fields[ENA_ADMIN_RSS_NOT_IP].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_NOT_IP, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV6_TCP_EX) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) != 0)
 		selected_fields[ENA_ADMIN_RSS_TCP6_EX].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6_EX, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV6_EX) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV6_EX) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP6_EX].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6_EX, rss_hf);
 
@@ -541,7 +541,7 @@ int ena_rss_hash_conf_get(struct rte_eth_dev *dev,
 	uint16_t admin_hf;
 	static bool warn_once;
 
-	if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		PMD_DRV_LOG(ERR, "RSS was not configured for the PMD\n");
 		return -ENOTSUP;
 	}
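
(The ena_rss.c hunks above translate the RTE_ETH_RSS_L3/L4_{SRC,DST}_ONLY
modifiers into admin hash fields; applications set them through the runtime
hash-update call. A minimal sketch, illustrative only.)

    #include <rte_ethdev.h>

    /* Hash IPv4/TCP flows on source address and destination port only. */
    struct rte_eth_rss_conf rss_conf = {
            .rss_key = NULL,
            .rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
                      RTE_ETH_RSS_L3_SRC_ONLY |
                      RTE_ETH_RSS_L4_DST_ONLY,
    };
    int ret = rte_eth_dev_rss_hash_update(port_id, &rss_conf);
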
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index 1b567f01eae0..7cdb8ce463ed 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -100,27 +100,27 @@ enetc_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 	status = enetc_port_rd(enetc_hw, ENETC_PM0_STATUS);
 
 	if (status & ENETC_LINK_MODE)
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	else
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 
 	if (status & ENETC_LINK_STATUS)
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 	else
-		link.link_status = ETH_LINK_DOWN;
+		link.link_status = RTE_ETH_LINK_DOWN;
 
 	switch (status & ENETC_LINK_SPEED_MASK) {
 	case ENETC_LINK_SPEED_1G:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case ENETC_LINK_SPEED_100M:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	default:
 	case ENETC_LINK_SPEED_10M:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -207,10 +207,10 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
 	dev_info->max_tx_queues = MAX_TX_RINGS;
 	dev_info->max_rx_pktlen = ENETC_MAC_MAXFRM_SIZE;
 	dev_info->rx_offload_capa =
-		(DEV_RX_OFFLOAD_IPV4_CKSUM |
-		 DEV_RX_OFFLOAD_UDP_CKSUM |
-		 DEV_RX_OFFLOAD_TCP_CKSUM |
-		 DEV_RX_OFFLOAD_KEEP_CRC);
+		(RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		 RTE_ETH_RX_OFFLOAD_KEEP_CRC);
 
 	return 0;
 }
@@ -463,7 +463,7 @@ enetc_rx_queue_setup(struct rte_eth_dev *dev,
 			       RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 
-	rx_ring->crc_len = (uint8_t)((rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+	rx_ring->crc_len = (uint8_t)((rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
 				     RTE_ETHER_CRC_LEN : 0);
 
 	return 0;
@@ -705,7 +705,7 @@ enetc_dev_configure(struct rte_eth_dev *dev)
 	enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
 	enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		int config;
 
 		config = enetc_port_rd(enetc_hw, ENETC_PM0_CMD_CFG);
@@ -713,10 +713,10 @@ enetc_dev_configure(struct rte_eth_dev *dev)
 		enetc_port_wr(enetc_hw, ENETC_PM0_CMD_CFG, config);
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		checksum &= ~L3_CKSUM;
 
-	if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM))
+	if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
 		checksum &= ~L4_CKSUM;
 
 	enetc_port_wr(enetc_hw, ENETC_PAR_PORT_CFG, checksum);
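
(enetc advertises its RTE_ETH_RX_OFFLOAD_* set through dev_info; the usual
application pattern is to intersect the wanted offloads with that
capability mask before configuring. A minimal sketch with illustrative
names.)

    #include <rte_ethdev.h>

    struct rte_eth_dev_info dev_info;
    struct rte_eth_conf port_conf = { 0 };
    uint64_t wanted = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
                      RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
                      RTE_ETH_RX_OFFLOAD_TCP_CKSUM;

    if (rte_eth_dev_info_get(port_id, &dev_info) == 0)
            /* request only what the port can actually do */
            port_conf.rxmode.offloads = wanted & dev_info.rx_offload_capa;
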
diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index 47bfdac2cfdd..d5493c98345d 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -178,7 +178,7 @@ struct enic {
 	 */
 	uint8_t rss_hash_type; /* NIC_CFG_RSS_HASH_TYPE flags */
 	uint8_t rss_enable;
-	uint64_t rss_hf; /* ETH_RSS flags */
+	uint64_t rss_hf; /* RTE_ETH_RSS flags */
 	union vnic_rss_key rss_key;
 	union vnic_rss_cpu rss_cpu;
 
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8df7332bc5e0..c8bdaf1a8e79 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -38,30 +38,30 @@ static const struct vic_speed_capa {
 	uint16_t sub_devid;
 	uint32_t capa;
 } vic_speed_capa_map[] = {
-	{ 0x0043, ETH_LINK_SPEED_10G }, /* VIC */
-	{ 0x0047, ETH_LINK_SPEED_10G }, /* P81E PCIe */
-	{ 0x0048, ETH_LINK_SPEED_10G }, /* M81KR Mezz */
-	{ 0x004f, ETH_LINK_SPEED_10G }, /* 1280 Mezz */
-	{ 0x0084, ETH_LINK_SPEED_10G }, /* 1240 MLOM */
-	{ 0x0085, ETH_LINK_SPEED_10G }, /* 1225 PCIe */
-	{ 0x00cd, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1285 PCIe */
-	{ 0x00ce, ETH_LINK_SPEED_10G }, /* 1225T PCIe */
-	{ 0x012a, ETH_LINK_SPEED_40G }, /* M4308 */
-	{ 0x012c, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1340 MLOM */
-	{ 0x012e, ETH_LINK_SPEED_10G }, /* 1227 PCIe */
-	{ 0x0137, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1380 Mezz */
-	{ 0x014d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1385 PCIe */
-	{ 0x015d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1387 MLOM */
-	{ 0x0215, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
-		  ETH_LINK_SPEED_40G }, /* 1440 Mezz */
-	{ 0x0216, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
-		  ETH_LINK_SPEED_40G }, /* 1480 MLOM */
-	{ 0x0217, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1455 PCIe */
-	{ 0x0218, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1457 MLOM */
-	{ 0x0219, ETH_LINK_SPEED_40G }, /* 1485 PCIe */
-	{ 0x021a, ETH_LINK_SPEED_40G }, /* 1487 MLOM */
-	{ 0x024a, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1495 PCIe */
-	{ 0x024b, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1497 MLOM */
+	{ 0x0043, RTE_ETH_LINK_SPEED_10G }, /* VIC */
+	{ 0x0047, RTE_ETH_LINK_SPEED_10G }, /* P81E PCIe */
+	{ 0x0048, RTE_ETH_LINK_SPEED_10G }, /* M81KR Mezz */
+	{ 0x004f, RTE_ETH_LINK_SPEED_10G }, /* 1280 Mezz */
+	{ 0x0084, RTE_ETH_LINK_SPEED_10G }, /* 1240 MLOM */
+	{ 0x0085, RTE_ETH_LINK_SPEED_10G }, /* 1225 PCIe */
+	{ 0x00cd, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1285 PCIe */
+	{ 0x00ce, RTE_ETH_LINK_SPEED_10G }, /* 1225T PCIe */
+	{ 0x012a, RTE_ETH_LINK_SPEED_40G }, /* M4308 */
+	{ 0x012c, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1340 MLOM */
+	{ 0x012e, RTE_ETH_LINK_SPEED_10G }, /* 1227 PCIe */
+	{ 0x0137, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1380 Mezz */
+	{ 0x014d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1385 PCIe */
+	{ 0x015d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1387 MLOM */
+	{ 0x0215, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+		  RTE_ETH_LINK_SPEED_40G }, /* 1440 Mezz */
+	{ 0x0216, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+		  RTE_ETH_LINK_SPEED_40G }, /* 1480 MLOM */
+	{ 0x0217, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1455 PCIe */
+	{ 0x0218, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1457 MLOM */
+	{ 0x0219, RTE_ETH_LINK_SPEED_40G }, /* 1485 PCIe */
+	{ 0x021a, RTE_ETH_LINK_SPEED_40G }, /* 1487 MLOM */
+	{ 0x024a, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1495 PCIe */
+	{ 0x024b, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1497 MLOM */
 	{ 0, 0 }, /* End marker */
 };
 
@@ -297,8 +297,8 @@ static int enicpmd_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 	ENICPMD_FUNC_TRACE();
 
 	offloads = eth_dev->data->dev_conf.rxmode.offloads;
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			enic->ig_vlan_strip_en = 1;
 		else
 			enic->ig_vlan_strip_en = 0;
@@ -323,17 +323,17 @@ static int enicpmd_dev_configure(struct rte_eth_dev *eth_dev)
 		return ret;
 	}
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	enic->mc_count = 0;
 	enic->hw_ip_checksum = !!(eth_dev->data->dev_conf.rxmode.offloads &
-				  DEV_RX_OFFLOAD_CHECKSUM);
+				  RTE_ETH_RX_OFFLOAD_CHECKSUM);
 	/* All vlan offload masks to apply the current settings */
-	mask = ETH_VLAN_STRIP_MASK |
-		ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK |
+		RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	ret = enicpmd_vlan_offload_set(eth_dev, mask);
 	if (ret) {
 		dev_err(enic, "Failed to configure VLAN offloads\n");
@@ -435,14 +435,14 @@ static uint32_t speed_capa_from_pci_id(struct rte_eth_dev *eth_dev)
 	}
 	/* 1300 and later models are at least 40G */
 	if (id >= 0x0100)
-		return ETH_LINK_SPEED_40G;
+		return RTE_ETH_LINK_SPEED_40G;
 	/* VFs have subsystem id 0, check device id */
 	if (id == 0) {
 		/* Newer VF implies at least 40G model */
 		if (pdev->id.device_id == PCI_DEVICE_ID_CISCO_VIC_ENET_SN)
-			return ETH_LINK_SPEED_40G;
+			return RTE_ETH_LINK_SPEED_40G;
 	}
-	return ETH_LINK_SPEED_10G;
+	return RTE_ETH_LINK_SPEED_10G;
 }
 
 static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
@@ -774,8 +774,8 @@ static int enicpmd_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = enic_sop_rq_idx_to_rte_idx(
 				enic->rss_cpu.cpu[i / 4].b[i % 4]);
@@ -806,8 +806,8 @@ static int enicpmd_dev_rss_reta_update(struct rte_eth_dev *dev,
 	 */
 	rss_cpu = enic->rss_cpu;
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			rss_cpu.cpu[i / 4].b[i % 4] =
 				enic_rte_rq_idx_to_sop_idx(
@@ -883,7 +883,7 @@ static void enicpmd_dev_rxq_info_get(struct rte_eth_dev *dev,
 	 */
 	conf->offloads = enic->rx_offload_capa;
 	if (!enic->ig_vlan_strip_en)
-		conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	/* rx_thresh and other fields are not applicable for enic */
 }
 
@@ -969,8 +969,8 @@ static int enicpmd_dev_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
 static int udp_tunnel_common_check(struct enic *enic,
 				   struct rte_eth_udp_tunnel *tnl)
 {
-	if (tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN &&
-	    tnl->prot_type != RTE_TUNNEL_TYPE_GENEVE)
+	if (tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN &&
+	    tnl->prot_type != RTE_ETH_TUNNEL_TYPE_GENEVE)
 		return -ENOTSUP;
 	if (!enic->overlay_offload) {
 		ENICPMD_LOG(DEBUG, " overlay offload is not supported\n");
@@ -1010,7 +1010,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
 	ret = udp_tunnel_common_check(enic, tnl);
 	if (ret)
 		return ret;
-	vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+	vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
 	if (vxlan)
 		port = enic->vxlan_port;
 	else
@@ -1039,7 +1039,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
 	ret = udp_tunnel_common_check(enic, tnl);
 	if (ret)
 		return ret;
-	vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+	vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
 	if (vxlan)
 		port = enic->vxlan_port;
 	else
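
(The RTE_ETH_TUNNEL_TYPE_* values checked by udp_tunnel_common_check()
above arrive from the generic UDP tunnel API. A minimal sketch registering
the IANA VXLAN port; illustrative only.)

    #include <rte_ethdev.h>

    struct rte_eth_udp_tunnel tunnel = {
            .udp_port = 4789, /* IANA-assigned VXLAN port */
            .prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN,
    };
    int ret = rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
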
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index dfc7f5d1f94f..21b1fffb14f0 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -430,7 +430,7 @@ int enic_link_update(struct rte_eth_dev *eth_dev)
 
 	memset(&link, 0, sizeof(link));
 	link.link_status = enic_get_link_status(enic);
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_speed = vnic_dev_port_speed(enic->vdev);
 
 	return rte_eth_linkstatus_set(eth_dev, &link);
@@ -597,7 +597,7 @@ int enic_enable(struct enic *enic)
 	}
 
 	eth_dev->data->dev_link.link_speed = vnic_dev_port_speed(enic->vdev);
-	eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	/* vnic notification of link status has already been turned on in
 	 * enic_dev_init() which is called during probe time.  Here we are
@@ -638,11 +638,11 @@ int enic_enable(struct enic *enic)
 	 * and vlan insertion are supported.
 	 */
 	simple_tx_offloads = enic->tx_offload_capa &
-		(DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		 DEV_TX_OFFLOAD_VLAN_INSERT |
-		 DEV_TX_OFFLOAD_IPV4_CKSUM |
-		 DEV_TX_OFFLOAD_UDP_CKSUM |
-		 DEV_TX_OFFLOAD_TCP_CKSUM);
+		(RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		 RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		 RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
 	if ((eth_dev->data->dev_conf.txmode.offloads &
 	     ~simple_tx_offloads) == 0) {
 		ENICPMD_LOG(DEBUG, " use the simple tx handler");
@@ -858,7 +858,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
 	max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
 
 	if (enic->rte_dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_SCATTER) {
+	    RTE_ETH_RX_OFFLOAD_SCATTER) {
 		dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
 		/* ceil((max pkt len)/mbuf_size) */
 		mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
@@ -1385,15 +1385,15 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
 	rss_hash_type = 0;
 	rss_hf = rss_conf->rss_hf & enic->flow_type_rss_offloads;
 	if (enic->rq_count > 1 &&
-	    (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) &&
+	    (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) &&
 	    rss_hf != 0) {
 		rss_enable = 1;
-		if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			      ETH_RSS_NONFRAG_IPV4_OTHER))
+		if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			      RTE_ETH_RSS_NONFRAG_IPV4_OTHER))
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV4;
 			if (enic->udp_rss_weak) {
 				/*
@@ -1404,12 +1404,12 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
 				rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
 			}
 		}
-		if (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_IPV6_EX |
-			      ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER))
+		if (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_IPV6_EX |
+			      RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER))
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV6;
-		if (rss_hf & (ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX))
+		if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX))
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
-		if (rss_hf & (ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX)) {
+		if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX)) {
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV6;
 			if (enic->udp_rss_weak)
 				rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
@@ -1745,9 +1745,9 @@ enic_enable_overlay_offload(struct enic *enic)
 		return -EINVAL;
 	}
 	enic->tx_offload_capa |=
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		(enic->geneve ? DEV_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
-		(enic->vxlan ? DEV_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		(enic->geneve ? RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
+		(enic->vxlan ? RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
 	enic->tx_offload_mask |=
 		PKT_TX_OUTER_IPV6 |
 		PKT_TX_OUTER_IPV4 |
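
(The RTE_ETH_VLAN_*_MASK values used throughout these hunks are the
driver-facing change masks; applications toggle the same offloads with the
corresponding *_OFFLOAD enable bits. A minimal sketch turning on VLAN
stripping at runtime, illustrative only.)

    #include <rte_ethdev.h>

    int ret = -1;
    int flags = rte_eth_dev_get_vlan_offload(port_id); /* negative on error */

    if (flags >= 0)
            ret = rte_eth_dev_set_vlan_offload(port_id,
                            flags | RTE_ETH_VLAN_STRIP_OFFLOAD);
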
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index c5777772a09e..918a9e170ff6 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -147,31 +147,31 @@ int enic_get_vnic_config(struct enic *enic)
 		 * IPV4 hash type handles both non-frag and frag packet types.
 		 * TCP/UDP is controlled via a separate flag below.
 		 */
-		enic->flow_type_rss_offloads |= ETH_RSS_IPV4 |
-			ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV4 |
+			RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER;
 	if (ENIC_SETTING(enic, RSSHASH_TCPIPV4))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_TCP;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (ENIC_SETTING(enic, RSSHASH_IPV6))
 		/*
 		 * The VIC adapter can perform RSS on IPv6 packets with and
 		 * without extension headers. An IPv6 "fragment" is an IPv6
 		 * packet with the fragment extension header.
 		 */
-		enic->flow_type_rss_offloads |= ETH_RSS_IPV6 |
-			ETH_RSS_IPV6_EX | ETH_RSS_FRAG_IPV6 |
-			ETH_RSS_NONFRAG_IPV6_OTHER;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV6 |
+			RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_FRAG_IPV6 |
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER;
 	if (ENIC_SETTING(enic, RSSHASH_TCPIPV6))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_TCP |
-			ETH_RSS_IPV6_TCP_EX;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			RTE_ETH_RSS_IPV6_TCP_EX;
 	if (enic->udp_rss_weak)
 		enic->flow_type_rss_offloads |=
-			ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
-			ETH_RSS_IPV6_UDP_EX;
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			RTE_ETH_RSS_IPV6_UDP_EX;
 	if (ENIC_SETTING(enic, RSSHASH_UDPIPV4))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_UDP;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (ENIC_SETTING(enic, RSSHASH_UDPIPV6))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_UDP |
-			ETH_RSS_IPV6_UDP_EX;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			RTE_ETH_RSS_IPV6_UDP_EX;
 
 	/* Zero offloads if RSS is not enabled */
 	if (!ENIC_SETTING(enic, RSS))
@@ -201,19 +201,19 @@ int enic_get_vnic_config(struct enic *enic)
 	enic->tx_queue_offload_capa = 0;
 	enic->tx_offload_capa =
 		enic->tx_queue_offload_capa |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	enic->rx_offload_capa =
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	enic->tx_offload_mask =
 		PKT_TX_IPV6 |
 		PKT_TX_IPV4 |
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index b87c036e6014..82d595b1d1a0 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -17,10 +17,10 @@
 
 const char pmd_failsafe_driver_name[] = FAILSAFE_DRIVER_NAME;
 static const struct rte_eth_link eth_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_UP,
-	.link_autoneg = ETH_LINK_AUTONEG,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_UP,
+	.link_autoneg = RTE_ETH_LINK_AUTONEG,
 };
 
 static int
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 602c04033c18..5f4810051dac 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -326,7 +326,7 @@ int failsafe_rx_intr_install_subdevice(struct sub_device *sdev)
 	int qid;
 	struct rte_eth_dev *fsdev;
 	struct rxq **rxq;
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 				&ETH(sdev)->data->dev_conf.intr_conf;
 
 	fsdev = fs_dev(sdev);
@@ -519,7 +519,7 @@ int
 failsafe_rx_intr_install(struct rte_eth_dev *dev)
 {
 	struct fs_priv *priv = PRIV(dev);
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 			&priv->data->dev_conf.intr_conf;
 
 	if (intr_conf->rxq == 0 || dev->intr_handle != NULL)
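
(The struct rename to rte_eth_intr_conf does not change the field names an
application uses. A minimal sketch enabling per-queue Rx interrupts;
illustrative only.)

    #include <rte_ethdev.h>

    struct rte_eth_conf port_conf = {
            .intr_conf = {
                    .rxq = 1, /* request per-Rx-queue interrupts */
            },
    };

    /* ... after rte_eth_dev_configure()/rte_eth_dev_start(): arm queue 0 */
    int ret = rte_eth_dev_rx_intr_enable(port_id, 0);
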
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 29de39910c6e..a3a8a1c82e3a 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1172,51 +1172,51 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
 	 * configuring a sub-device.
 	 */
 	infos->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_LRO |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_MACSEC_STRIP |
-		DEV_RX_OFFLOAD_HEADER_SPLIT |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_TIMESTAMP |
-		DEV_RX_OFFLOAD_SECURITY |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_LRO |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+		RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+		RTE_ETH_RX_OFFLOAD_SECURITY |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	infos->rx_queue_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_LRO |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_MACSEC_STRIP |
-		DEV_RX_OFFLOAD_HEADER_SPLIT |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_TIMESTAMP |
-		DEV_RX_OFFLOAD_SECURITY |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_LRO |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+		RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+		RTE_ETH_RX_OFFLOAD_SECURITY |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	infos->tx_offload_capa =
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	infos->flow_type_rss_offloads =
-		ETH_RSS_IP |
-		ETH_RSS_UDP |
-		ETH_RSS_TCP;
+		RTE_ETH_RSS_IP |
+		RTE_ETH_RSS_UDP |
+		RTE_ETH_RSS_TCP;
 	infos->dev_capa =
 		RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 		RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 17c73c4dc5ae..b7522a47a80b 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -177,7 +177,7 @@ struct fm10k_rx_queue {
 	uint8_t drop_en;
 	uint8_t rx_deferred_start; /* don't start this queue in dev start. */
 	uint16_t rx_ftag_en; /* indicates FTAG RX supported */
-	uint64_t offloads; /* offloads of DEV_RX_OFFLOAD_* */
+	uint64_t offloads; /* offloads of RTE_ETH_RX_OFFLOAD_* */
 };
 
 /*
@@ -209,7 +209,7 @@ struct fm10k_tx_queue {
 	uint16_t next_rs; /* Next pos to set RS flag */
 	uint16_t next_dd; /* Next pos to check DD flag */
 	volatile uint32_t *tail_ptr;
-	uint64_t offloads; /* Offloads of DEV_TX_OFFLOAD_* */
+	uint64_t offloads; /* Offloads of RTE_ETH_TX_OFFLOAD_* */
 	uint16_t nb_desc;
 	uint16_t port_id;
 	uint8_t tx_deferred_start; /** don't start this queue in dev start. */
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 66f4a5c6df2c..d256334bfde9 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -413,12 +413,12 @@ fm10k_check_mq_mode(struct rte_eth_dev *dev)
 
 	vmdq_conf = &dev->data->dev_conf.rx_adv_conf.vmdq_rx_conf;
 
-	if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		PMD_INIT_LOG(ERR, "DCB mode is not supported.");
 		return -EINVAL;
 	}
 
-	if (!(rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+	if (!(rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
 		return 0;
 
 	if (hw->mac.type == fm10k_mac_vf) {
@@ -449,8 +449,8 @@ fm10k_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multiple queue mode checking */
 	ret  = fm10k_check_mq_mode(dev);
@@ -510,7 +510,7 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
 		0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
 	};
 
-	if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_RSS ||
+	if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_RSS ||
 		dev_conf->rx_adv_conf.rss_conf.rss_hf == 0) {
 		FM10K_WRITE_REG(hw, FM10K_MRQC(0), 0);
 		return;
@@ -547,15 +547,15 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
 	 */
 	hf = dev_conf->rx_adv_conf.rss_conf.rss_hf;
 	mrqc = 0;
-	mrqc |= (hf & ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
 
 	if (mrqc == 0) {
 		PMD_INIT_LOG(ERR, "Specified RSS mode 0x%"PRIx64"is not"
@@ -602,7 +602,7 @@ fm10k_dev_mq_rx_configure(struct rte_eth_dev *dev)
 	if (hw->mac.type != fm10k_mac_pf)
 		return;
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
 		nb_queue_pools = vmdq_conf->nb_queue_pools;
 
 	/* no pool number change, no need to update logic port and VLAN/MAC */
@@ -759,7 +759,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
 		/* It adds dual VLAN length for supporting dual VLAN */
 		if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
 				2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
-			rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
+			rxq->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 			uint32_t reg;
 			dev->data->scattered_rx = 1;
 			reg = FM10K_READ_REG(hw, FM10K_SRRCTL(i));
@@ -1145,7 +1145,7 @@ fm10k_dev_start(struct rte_eth_dev *dev)
 	}
 
 	/* Update default vlan when not in VMDQ mode */
-	if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+	if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
 		fm10k_vlan_filter_set(dev, hw->mac.default_vid, true);
 
 	fm10k_link_update(dev, 0);
@@ -1222,11 +1222,11 @@ fm10k_link_update(struct rte_eth_dev *dev,
 		FM10K_DEV_PRIVATE_TO_INFO(dev->data->dev_private);
 	PMD_INIT_FUNC_TRACE();
 
-	dev->data->dev_link.link_speed  = ETH_SPEED_NUM_50G;
-	dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	dev->data->dev_link.link_speed  = RTE_ETH_SPEED_NUM_50G;
+	dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	dev->data->dev_link.link_status =
-		dev_info->sm_down ? ETH_LINK_DOWN : ETH_LINK_UP;
-	dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
+		dev_info->sm_down ? RTE_ETH_LINK_DOWN : RTE_ETH_LINK_UP;
+	dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
 
 	return 0;
 }
@@ -1378,7 +1378,7 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 	dev_info->max_vfs            = pdev->max_vfs;
 	dev_info->vmdq_pool_base     = 0;
 	dev_info->vmdq_queue_base    = 0;
-	dev_info->max_vmdq_pools     = ETH_32_POOLS;
+	dev_info->max_vmdq_pools     = RTE_ETH_32_POOLS;
 	dev_info->vmdq_queue_num     = FM10K_MAX_QUEUES_PF;
 	dev_info->rx_queue_offload_capa = fm10k_get_rx_queue_offloads_capa(dev);
 	dev_info->rx_offload_capa = fm10k_get_rx_port_offloads_capa(dev) |
@@ -1389,15 +1389,15 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 
 	dev_info->hash_key_size = FM10K_RSSRK_SIZE * sizeof(uint32_t);
 	dev_info->reta_size = FM10K_MAX_RSS_INDICES;
-	dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
-					ETH_RSS_IPV6 |
-					ETH_RSS_IPV6_EX |
-					ETH_RSS_NONFRAG_IPV4_TCP |
-					ETH_RSS_NONFRAG_IPV6_TCP |
-					ETH_RSS_IPV6_TCP_EX |
-					ETH_RSS_NONFRAG_IPV4_UDP |
-					ETH_RSS_NONFRAG_IPV6_UDP |
-					ETH_RSS_IPV6_UDP_EX;
+	dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+					RTE_ETH_RSS_IPV6 |
+					RTE_ETH_RSS_IPV6_EX |
+					RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+					RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+					RTE_ETH_RSS_IPV6_TCP_EX |
+					RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+					RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+					RTE_ETH_RSS_IPV6_UDP_EX;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -1435,9 +1435,9 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 		.nb_mtu_seg_max = FM10K_TX_MAX_MTU_SEG,
 	};
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G |
-			ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
-			ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G |
+			RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+			RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -1509,7 +1509,7 @@ fm10k_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 		return -EINVAL;
 	}
 
-	if (vlan_id > ETH_VLAN_ID_MAX) {
+	if (vlan_id > RTE_ETH_VLAN_ID_MAX) {
 		PMD_INIT_LOG(ERR, "Invalid vlan_id: must be < 4096");
 		return -EINVAL;
 	}
@@ -1767,20 +1767,20 @@ static uint64_t fm10k_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
 
-	return (uint64_t)(DEV_RX_OFFLOAD_SCATTER);
+	return (uint64_t)(RTE_ETH_RX_OFFLOAD_SCATTER);
 }
 
 static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
 
-	return  (uint64_t)(DEV_RX_OFFLOAD_VLAN_STRIP  |
-			   DEV_RX_OFFLOAD_VLAN_FILTER |
-			   DEV_RX_OFFLOAD_IPV4_CKSUM  |
-			   DEV_RX_OFFLOAD_UDP_CKSUM   |
-			   DEV_RX_OFFLOAD_TCP_CKSUM   |
-			   DEV_RX_OFFLOAD_HEADER_SPLIT |
-			   DEV_RX_OFFLOAD_RSS_HASH);
+	return  (uint64_t)(RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+			   RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+			   RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+			   RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+			   RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+			   RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+			   RTE_ETH_RX_OFFLOAD_RSS_HASH);
 }
 
 static int
@@ -1965,12 +1965,12 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
 
-	return (uint64_t)(DEV_TX_OFFLOAD_VLAN_INSERT |
-			  DEV_TX_OFFLOAD_MULTI_SEGS  |
-			  DEV_TX_OFFLOAD_IPV4_CKSUM  |
-			  DEV_TX_OFFLOAD_UDP_CKSUM   |
-			  DEV_TX_OFFLOAD_TCP_CKSUM   |
-			  DEV_TX_OFFLOAD_TCP_TSO);
+	return (uint64_t)(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+			  RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+			  RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_TCP_TSO);
 }
 
 static int
@@ -2111,8 +2111,8 @@ fm10k_reta_update(struct rte_eth_dev *dev,
 	 * 128-entries in 32 registers
 	 */
 	for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				BIT_MASK_PER_UINT32);
 		if (mask == 0)
@@ -2160,8 +2160,8 @@ fm10k_reta_query(struct rte_eth_dev *dev,
 	 * 128-entries in 32 registers
 	 */
 	for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				BIT_MASK_PER_UINT32);
 		if (mask == 0)
@@ -2198,15 +2198,15 @@ fm10k_rss_hash_update(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	mrqc = 0;
-	mrqc |= (hf & ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
 
 	/* If the mapping doesn't fit any supported, return */
 	if (mrqc == 0)
@@ -2243,15 +2243,15 @@ fm10k_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	mrqc = FM10K_READ_REG(hw, FM10K_MRQC(0));
 	hf = 0;
-	hf |= (mrqc & FM10K_MRQC_IPV4)     ? ETH_RSS_IPV4              : 0;
-	hf |= (mrqc & FM10K_MRQC_IPV6)     ? ETH_RSS_IPV6              : 0;
-	hf |= (mrqc & FM10K_MRQC_IPV6)     ? ETH_RSS_IPV6_EX           : 0;
-	hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? ETH_RSS_NONFRAG_IPV4_TCP  : 0;
-	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_NONFRAG_IPV6_TCP  : 0;
-	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP_EX       : 0;
-	hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? ETH_RSS_NONFRAG_IPV4_UDP  : 0;
-	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_NONFRAG_IPV6_UDP  : 0;
-	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP_EX       : 0;
+	hf |= (mrqc & FM10K_MRQC_IPV4)     ? RTE_ETH_RSS_IPV4              : 0;
+	hf |= (mrqc & FM10K_MRQC_IPV6)     ? RTE_ETH_RSS_IPV6              : 0;
+	hf |= (mrqc & FM10K_MRQC_IPV6)     ? RTE_ETH_RSS_IPV6_EX           : 0;
+	hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_TCP  : 0;
+	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_TCP  : 0;
+	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_IPV6_TCP_EX       : 0;
+	hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_UDP  : 0;
+	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_UDP  : 0;
+	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_IPV6_UDP_EX       : 0;
 
 	rss_conf->rss_hf = hf;
 
@@ -2606,7 +2606,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
 
 			/* first clear the internal SW recording structure */
 			if (!(dev->data->dev_conf.rxmode.mq_mode &
-						ETH_MQ_RX_VMDQ_FLAG))
+						RTE_ETH_MQ_RX_VMDQ_FLAG))
 				fm10k_vlan_filter_set(dev, hw->mac.default_vid,
 					false);
 
@@ -2622,7 +2622,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
 					MAIN_VSI_POOL_NUMBER);
 
 			if (!(dev->data->dev_conf.rxmode.mq_mode &
-						ETH_MQ_RX_VMDQ_FLAG))
+						RTE_ETH_MQ_RX_VMDQ_FLAG))
 				fm10k_vlan_filter_set(dev, hw->mac.default_vid,
 					true);
 
diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c
index 83af01dc2da6..50973a662c67 100644
--- a/drivers/net/fm10k/fm10k_rxtx_vec.c
+++ b/drivers/net/fm10k/fm10k_rxtx_vec.c
@@ -208,11 +208,11 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
 {
 #ifndef RTE_LIBRTE_IEEE1588
 	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 
 #ifndef RTE_FM10K_RX_OLFLAGS_ENABLE
 	/* without rx ol_flags, no VP flag report */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 		return -1;
 #endif
 
@@ -221,7 +221,7 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
 		return -1;
 
 	/* no header split support */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
 		return -1;
 
 	return 0;
diff --git a/drivers/net/hinic/base/hinic_pmd_hwdev.c b/drivers/net/hinic/base/hinic_pmd_hwdev.c
index cb9cf6efa287..80f9eb5c3031 100644
--- a/drivers/net/hinic/base/hinic_pmd_hwdev.c
+++ b/drivers/net/hinic/base/hinic_pmd_hwdev.c
@@ -1320,28 +1320,28 @@ hinic_cable_status_event(u8 cmd, void *buf_in, __rte_unused u16 in_size,
 static int hinic_link_event_process(struct hinic_hwdev *hwdev,
 				    struct rte_eth_dev *eth_dev, u8 status)
 {
-	uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
-					ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
-					ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
-					ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+	uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+					RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+					RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+					RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
 	struct nic_port_info port_info;
 	struct rte_eth_link link;
 	int rc = HINIC_OK;
 
 	if (!status) {
-		link.link_status = ETH_LINK_DOWN;
+		link.link_status = RTE_ETH_LINK_DOWN;
 		link.link_speed = 0;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	} else {
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 
 		memset(&port_info, 0, sizeof(port_info));
 		rc = hinic_get_port_info(hwdev, &port_info);
 		if (rc) {
-			link.link_speed = ETH_SPEED_NUM_NONE;
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
-			link.link_autoneg = ETH_LINK_FIXED;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+			link.link_autoneg = RTE_ETH_LINK_FIXED;
 		} else {
 			link.link_speed = port_speed[port_info.speed %
 						LINK_SPEED_MAX];
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c2374ebb6759..4cd5a85d5f8d 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -311,8 +311,8 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* mtu size is 256~9600 */
 	if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
@@ -338,7 +338,7 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
 
 	/* init vlan offload */
 	err = hinic_vlan_offload_set(dev,
-				ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+				RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Initialize vlan filter and strip failed");
 		(void)hinic_config_mq_mode(dev, FALSE);
@@ -696,15 +696,15 @@ static void hinic_get_speed_capa(struct rte_eth_dev *dev, uint32_t *speed_capa)
 	} else {
 		*speed_capa = 0;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_1G))
-			*speed_capa |= ETH_LINK_SPEED_1G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_1G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_10G))
-			*speed_capa |= ETH_LINK_SPEED_10G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_10G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_25G))
-			*speed_capa |= ETH_LINK_SPEED_25G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_25G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_40G))
-			*speed_capa |= ETH_LINK_SPEED_40G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_40G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_100G))
-			*speed_capa |= ETH_LINK_SPEED_100G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	}
 }
 
@@ -732,24 +732,24 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 
 	hinic_get_speed_capa(dev, &info->speed_capa);
 	info->rx_queue_offload_capa = 0;
-	info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
-				DEV_RX_OFFLOAD_IPV4_CKSUM |
-				DEV_RX_OFFLOAD_UDP_CKSUM |
-				DEV_RX_OFFLOAD_TCP_CKSUM |
-				DEV_RX_OFFLOAD_VLAN_FILTER |
-				DEV_RX_OFFLOAD_SCATTER |
-				DEV_RX_OFFLOAD_TCP_LRO |
-				DEV_RX_OFFLOAD_RSS_HASH;
+	info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+				RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				RTE_ETH_RX_OFFLOAD_SCATTER |
+				RTE_ETH_RX_OFFLOAD_TCP_LRO |
+				RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	info->tx_queue_offload_capa = 0;
-	info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-				DEV_TX_OFFLOAD_IPV4_CKSUM |
-				DEV_TX_OFFLOAD_UDP_CKSUM |
-				DEV_TX_OFFLOAD_TCP_CKSUM |
-				DEV_TX_OFFLOAD_SCTP_CKSUM |
-				DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				DEV_TX_OFFLOAD_TCP_TSO |
-				DEV_TX_OFFLOAD_MULTI_SEGS;
+	info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	info->hash_key_size = HINIC_RSS_KEY_SIZE;
 	info->reta_size = HINIC_RSS_INDIR_SIZE;
@@ -846,20 +846,20 @@ static int hinic_priv_get_dev_link_status(struct hinic_nic_dev *nic_dev,
 	u8 port_link_status = 0;
 	struct nic_port_info port_link_info;
 	struct hinic_hwdev *nic_hwdev = nic_dev->hwdev;
-	uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
-					ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
-					ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
-					ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+	uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+					RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+					RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+					RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
 
 	rc = hinic_get_link_status(nic_hwdev, &port_link_status);
 	if (rc)
 		return rc;
 
 	if (!port_link_status) {
-		link->link_status = ETH_LINK_DOWN;
+		link->link_status = RTE_ETH_LINK_DOWN;
 		link->link_speed = 0;
-		link->link_duplex = ETH_LINK_HALF_DUPLEX;
-		link->link_autoneg = ETH_LINK_FIXED;
+		link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link->link_autoneg = RTE_ETH_LINK_FIXED;
 		return HINIC_OK;
 	}
 
@@ -901,8 +901,8 @@ static int hinic_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		/* Get link status information from hardware */
 		rc = hinic_priv_get_dev_link_status(nic_dev, &link);
 		if (rc != HINIC_OK) {
-			link.link_speed = ETH_SPEED_NUM_NONE;
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR, "Get link status failed");
 			goto out;
 		}
@@ -1650,8 +1650,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	int err;
 
 	/* Enable or disable VLAN filter */
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) ?
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) ?
 			TRUE : FALSE;
 		err = hinic_config_vlan_filter(nic_dev->hwdev, on);
 		if (err == HINIC_MGMT_CMD_UNSUPPORTED) {
@@ -1672,8 +1672,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	}
 
 	/* Enable or disable VLAN stripping */
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) ?
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) ?
 			TRUE : FALSE;
 		err = hinic_set_rx_vlan_offload(nic_dev->hwdev, on);
 		if (err) {
@@ -1859,13 +1859,13 @@ static int hinic_flow_ctrl_get(struct rte_eth_dev *dev,
 	fc_conf->autoneg = nic_pause.auto_neg;
 
 	if (nic_pause.tx_pause && nic_pause.rx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (nic_pause.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else if (nic_pause.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -1879,14 +1879,14 @@ static int hinic_flow_ctrl_set(struct rte_eth_dev *dev,
 
 	nic_pause.auto_neg = fc_conf->autoneg;
 
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-		(fc_conf->mode & RTE_FC_TX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+		(fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
 		nic_pause.tx_pause = true;
 	else
 		nic_pause.tx_pause = false;
 
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-		(fc_conf->mode & RTE_FC_RX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+		(fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
 		nic_pause.rx_pause = true;
 	else
 		nic_pause.rx_pause = false;
@@ -1930,7 +1930,7 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
 	struct nic_rss_type rss_type = {0};
 	int err = 0;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		PMD_DRV_LOG(WARNING, "RSS is not enabled");
 		return HINIC_OK;
 	}
@@ -1951,14 +1951,14 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
 		}
 	}
 
-	rss_type.ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
-	rss_type.tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
-	rss_type.ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
-	rss_type.ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
-	rss_type.tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
-	rss_type.tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
-	rss_type.udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
-	rss_type.udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+	rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+	rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+	rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+	rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+	rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+	rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+	rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+	rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
 
 	err = hinic_set_rss_type(nic_dev->hwdev, tmpl_idx, rss_type);
 	if (err) {
@@ -1994,7 +1994,7 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
 	struct nic_rss_type rss_type = {0};
 	int err;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		PMD_DRV_LOG(WARNING, "RSS is not enabled");
 		return HINIC_ERROR;
 	}
@@ -2015,15 +2015,15 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
 
 	rss_conf->rss_hf = 0;
 	rss_conf->rss_hf |=  rss_type.ipv4 ?
-		(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4) : 0;
-	rss_conf->rss_hf |=  rss_type.tcp_ipv4 ? ETH_RSS_NONFRAG_IPV4_TCP : 0;
+		(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4) : 0;
+	rss_conf->rss_hf |=  rss_type.tcp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_TCP : 0;
 	rss_conf->rss_hf |=  rss_type.ipv6 ?
-		(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6) : 0;
-	rss_conf->rss_hf |=  rss_type.ipv6_ext ? ETH_RSS_IPV6_EX : 0;
-	rss_conf->rss_hf |=  rss_type.tcp_ipv6 ? ETH_RSS_NONFRAG_IPV6_TCP : 0;
-	rss_conf->rss_hf |=  rss_type.tcp_ipv6_ext ? ETH_RSS_IPV6_TCP_EX : 0;
-	rss_conf->rss_hf |=  rss_type.udp_ipv4 ? ETH_RSS_NONFRAG_IPV4_UDP : 0;
-	rss_conf->rss_hf |=  rss_type.udp_ipv6 ? ETH_RSS_NONFRAG_IPV6_UDP : 0;
+		(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6) : 0;
+	rss_conf->rss_hf |=  rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0;
+	rss_conf->rss_hf |=  rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0;
+	rss_conf->rss_hf |=  rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0;
+	rss_conf->rss_hf |=  rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0;
+	rss_conf->rss_hf |=  rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0;
 
 	return HINIC_OK;
 }
@@ -2053,7 +2053,7 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
 	u16 i = 0;
 	u16 idx, shift;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG))
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG))
 		return HINIC_OK;
 
 	if (reta_size != NIC_RSS_INDIR_SIZE) {
@@ -2067,8 +2067,8 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
 
 	/* update rss indir_tbl */
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (reta_conf[idx].reta[shift] >= nic_dev->num_rq) {
 			PMD_DRV_LOG(ERR, "Invalid reta entry, indirtbl[%d]: %d "
@@ -2133,8 +2133,8 @@ static int hinic_rss_indirtbl_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = (uint16_t)indirtbl[i];
 	}
diff --git a/drivers/net/hinic/hinic_pmd_rx.c b/drivers/net/hinic/hinic_pmd_rx.c
index 842399cc4cd8..d347afe9a6a9 100644
--- a/drivers/net/hinic/hinic_pmd_rx.c
+++ b/drivers/net/hinic/hinic_pmd_rx.c
@@ -504,14 +504,14 @@ static void hinic_fill_rss_type(struct nic_rss_type *rss_type,
 {
 	u64 rss_hf = rss_conf->rss_hf;
 
-	rss_type->ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
-	rss_type->tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
-	rss_type->ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
-	rss_type->ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
-	rss_type->tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
-	rss_type->tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
-	rss_type->udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
-	rss_type->udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+	rss_type->ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+	rss_type->tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+	rss_type->ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+	rss_type->ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+	rss_type->tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+	rss_type->tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+	rss_type->udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+	rss_type->udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
 }
 
 static void hinic_fillout_indir_tbl(struct hinic_nic_dev *nic_dev, u32 *indir)
@@ -588,8 +588,8 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
 {
 	int err, i;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
-		nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
+		nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
 		nic_dev->num_rss = 0;
 		if (nic_dev->num_rq > 1) {
 			/* get rss template id */
@@ -599,7 +599,7 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
 				PMD_DRV_LOG(WARNING, "Alloc rss template failed");
 				return err;
 			}
-			nic_dev->flags |= ETH_MQ_RX_RSS_FLAG;
+			nic_dev->flags |= RTE_ETH_MQ_RX_RSS_FLAG;
 			for (i = 0; i < nic_dev->num_rq; i++)
 				hinic_add_rq_to_rx_queue_list(nic_dev, i);
 		}
@@ -610,12 +610,12 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
 
 static void hinic_destroy_num_qps(struct hinic_nic_dev *nic_dev)
 {
-	if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+	if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (hinic_rss_template_free(nic_dev->hwdev,
 					    nic_dev->rss_tmpl_idx))
 			PMD_DRV_LOG(WARNING, "Free rss template failed");
 
-		nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+		nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
 	}
 }
 
@@ -641,7 +641,7 @@ int hinic_config_mq_mode(struct rte_eth_dev *dev, bool on)
 	int ret = 0;
 
 	switch (dev_conf->rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		ret = hinic_config_mq_rx_rss(nic_dev, on);
 		break;
 	default:
@@ -662,7 +662,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
 	int lro_wqe_num;
 	int buf_size;
 
-	if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+	if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (rss_conf.rss_hf == 0) {
 			rss_conf.rss_hf = HINIC_RSS_OFFLOAD_ALL;
 		} else if ((rss_conf.rss_hf & HINIC_RSS_OFFLOAD_ALL) == 0) {
@@ -678,7 +678,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
 	}
 
 	/* Enable both L3/L4 rx checksum offload */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		nic_dev->rx_csum_en = HINIC_RX_CSUM_OFFLOAD_EN;
 
 	err = hinic_set_rx_csum_offload(nic_dev->hwdev,
@@ -687,7 +687,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
 		goto rx_csum_ofl_err;
 
 	/* config lro */
-	lro_en = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ?
+	lro_en = dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ?
 			true : false;
 	max_lro_size = dev->data->dev_conf.rxmode.max_lro_pkt_size;
 	buf_size = nic_dev->hwdev->nic_io->rq_buf_size;
@@ -726,7 +726,7 @@ void hinic_rx_remove_configure(struct rte_eth_dev *dev)
 {
 	struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
 
-	if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+	if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
 		hinic_rss_deinit(nic_dev);
 		hinic_destroy_num_qps(nic_dev);
 	}
diff --git a/drivers/net/hinic/hinic_pmd_rx.h b/drivers/net/hinic/hinic_pmd_rx.h
index 8a45f2d9fc50..5c303398b635 100644
--- a/drivers/net/hinic/hinic_pmd_rx.h
+++ b/drivers/net/hinic/hinic_pmd_rx.h
@@ -8,17 +8,17 @@
 #define HINIC_DEFAULT_RX_FREE_THRESH	32
 
 #define HINIC_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 |\
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 |\
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 enum rq_completion_fmt {
 	RQ_COMPLETE_SGE = 1
diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
index 8753c340e790..3d0159d78778 100644
--- a/drivers/net/hns3/hns3_dcb.c
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -1536,7 +1536,7 @@ hns3_dcb_hw_configure(struct hns3_adapter *hns)
 		return ret;
 	}
 
-	if (hw->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (hw->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		dcb_rx_conf = &hw->data->dev_conf.rx_adv_conf.dcb_rx_conf;
 		if (dcb_rx_conf->nb_tcs == 0)
 			hw->dcb_info.pfc_en = 1; /* tc0 only */
@@ -1693,7 +1693,7 @@ hns3_update_queue_map_configure(struct hns3_adapter *hns)
 	uint16_t nb_tx_q = hw->data->nb_tx_queues;
 	int ret;
 
-	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		return 0;
 
 	ret = hns3_dcb_update_tc_queue_mapping(hw, nb_rx_q, nb_tx_q);
@@ -1713,22 +1713,22 @@ static void
 hns3_get_fc_mode(struct hns3_hw *hw, enum rte_eth_fc_mode mode)
 {
 	switch (mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		hw->requested_fc_mode = HNS3_FC_NONE;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		hw->requested_fc_mode = HNS3_FC_RX_PAUSE;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		hw->requested_fc_mode = HNS3_FC_TX_PAUSE;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		hw->requested_fc_mode = HNS3_FC_FULL;
 		break;
 	default:
 		hw->requested_fc_mode = HNS3_FC_NONE;
 		hns3_warn(hw, "fc_mode(%u) exceeds member scope and is "
-			  "configured to RTE_FC_NONE", mode);
+			  "configured to RTE_ETH_FC_NONE", mode);
 		break;
 	}
 }
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 6b89bcef97ba..9881659cebfc 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -60,29 +60,29 @@ enum hns3_evt_cause {
 };
 
 static const struct rte_eth_fec_capa speed_fec_capa_tbl[] = {
-	{ ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
 
-	{ ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
 
-	{ ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
 
-	{ ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
 
-	{ ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
 
-	{ ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(RS) }
 };
@@ -500,8 +500,8 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
 	struct hns3_cmd_desc desc;
 	int ret;
 
-	if ((vlan_type != ETH_VLAN_TYPE_INNER &&
-	     vlan_type != ETH_VLAN_TYPE_OUTER)) {
+	if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+	     vlan_type != RTE_ETH_VLAN_TYPE_OUTER)) {
 		hns3_err(hw, "Unsupported vlan type, vlan_type =%d", vlan_type);
 		return -EINVAL;
 	}
@@ -514,10 +514,10 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
 	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MAC_VLAN_TYPE_ID, false);
 	rx_req = (struct hns3_rx_vlan_type_cfg_cmd *)desc.data;
 
-	if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
 		rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
-	} else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+	} else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
 		rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
 		rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
 		rx_req->in_fst_vlan_type = rte_cpu_to_le_16(tpid);
@@ -725,11 +725,11 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	rte_spinlock_lock(&hw->lock);
 	rxmode = &dev->data->dev_conf.rxmode;
 	tmp_mask = (unsigned int)mask;
-	if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* ignore vlan filter configuration during promiscuous mode */
 		if (!dev->data->promiscuous) {
 			/* Enable or disable VLAN filter */
-			enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER ?
+			enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ?
 				 true : false;
 
 			ret = hns3_enable_vlan_filter(hns, enable);
@@ -742,9 +742,9 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		}
 	}
 
-	if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP ?
+		enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ?
 		    true : false;
 
 		ret = hns3_en_hw_strip_rxvtag(hns, enable);
@@ -1118,7 +1118,7 @@ hns3_init_vlan_config(struct hns3_adapter *hns)
 		return ret;
 	}
 
-	ret = hns3_vlan_tpid_configure(hns, ETH_VLAN_TYPE_INNER,
+	ret = hns3_vlan_tpid_configure(hns, RTE_ETH_VLAN_TYPE_INNER,
 				       RTE_ETHER_TYPE_VLAN);
 	if (ret) {
 		hns3_err(hw, "tpid set fail in pf, ret =%d", ret);
@@ -1161,7 +1161,7 @@ hns3_restore_vlan_conf(struct hns3_adapter *hns)
 	if (!hw->data->promiscuous) {
 		/* restore vlan filter states */
 		offloads = hw->data->dev_conf.rxmode.offloads;
-		enable = offloads & DEV_RX_OFFLOAD_VLAN_FILTER ? true : false;
+		enable = offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ? true : false;
 		ret = hns3_enable_vlan_filter(hns, enable);
 		if (ret) {
 			hns3_err(hw, "failed to restore vlan rx filter conf, "
@@ -1204,7 +1204,7 @@ hns3_dev_configure_vlan(struct rte_eth_dev *dev)
 			  txmode->hw_vlan_reject_untagged);
 
 	/* Apply vlan offload setting */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
 	ret = hns3_vlan_offload_set(dev, mask);
 	if (ret) {
 		hns3_err(hw, "dev config rx vlan offload failed, ret = %d",
@@ -2213,9 +2213,9 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
 	int max_tc = 0;
 	int i;
 
-	if ((rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG) ||
-	    (tx_mq_mode == ETH_MQ_TX_VMDQ_DCB ||
-	     tx_mq_mode == ETH_MQ_TX_VMDQ_ONLY)) {
+	if ((rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) ||
+	    (tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB ||
+	     tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)) {
 		hns3_err(hw, "VMDQ is not supported, rx_mq_mode = %d, tx_mq_mode = %d.",
 			 rx_mq_mode, tx_mq_mode);
 		return -EOPNOTSUPP;
@@ -2223,7 +2223,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
 
 	dcb_rx_conf = &dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
 	dcb_tx_conf = &dev->data->dev_conf.tx_adv_conf.dcb_tx_conf;
-	if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		if (dcb_rx_conf->nb_tcs > pf->tc_max) {
 			hns3_err(hw, "nb_tcs(%u) > max_tc(%u) driver supported.",
 				 dcb_rx_conf->nb_tcs, pf->tc_max);
@@ -2232,7 +2232,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
 
 		if (!(dcb_rx_conf->nb_tcs == HNS3_4_TCS ||
 		      dcb_rx_conf->nb_tcs == HNS3_8_TCS)) {
-			hns3_err(hw, "on ETH_MQ_RX_DCB_RSS mode, "
+			hns3_err(hw, "on RTE_ETH_MQ_RX_DCB_RSS mode, "
 				 "nb_tcs(%d) != %d or %d in rx direction.",
 				 dcb_rx_conf->nb_tcs, HNS3_4_TCS, HNS3_8_TCS);
 			return -EINVAL;
@@ -2400,11 +2400,11 @@ hns3_check_link_speed(struct hns3_hw *hw, uint32_t link_speeds)
 	 * configure link_speeds (default 0), which means auto-negotiation.
 	 * In this case, it should return success.
 	 */
-	if (link_speeds == ETH_LINK_SPEED_AUTONEG &&
+	if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG &&
 	    hw->mac.support_autoneg == 0)
 		return 0;
 
-	if (link_speeds != ETH_LINK_SPEED_AUTONEG) {
+	if (link_speeds != RTE_ETH_LINK_SPEED_AUTONEG) {
 		ret = hns3_check_port_speed(hw, link_speeds);
 		if (ret)
 			return ret;
@@ -2464,15 +2464,15 @@ hns3_dev_configure(struct rte_eth_dev *dev)
 	if (ret)
 		goto cfg_err;
 
-	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		ret = hns3_setup_dcb(dev);
 		if (ret)
 			goto cfg_err;
 	}
 
 	/* When RSS is not configured, redirect the packet queue 0 */
-	if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 		rss_conf = conf->rx_adv_conf.rss_conf;
 		hw->rss_dis_flag = false;
 		ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -2493,7 +2493,7 @@ hns3_dev_configure(struct rte_eth_dev *dev)
 		goto cfg_err;
 
 	/* config hardware GRO */
-	gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+	gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
 	ret = hns3_config_gro(hw, gro_en);
 	if (ret)
 		goto cfg_err;
@@ -2600,15 +2600,15 @@ hns3_get_copper_port_speed_capa(uint32_t supported_speed)
 	uint32_t speed_capa = 0;
 
 	if (supported_speed & HNS3_PHY_LINK_SPEED_10M_HD_BIT)
-		speed_capa |= ETH_LINK_SPEED_10M_HD;
+		speed_capa |= RTE_ETH_LINK_SPEED_10M_HD;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_10M_BIT)
-		speed_capa |= ETH_LINK_SPEED_10M;
+		speed_capa |= RTE_ETH_LINK_SPEED_10M;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_100M_HD_BIT)
-		speed_capa |= ETH_LINK_SPEED_100M_HD;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_100M_BIT)
-		speed_capa |= ETH_LINK_SPEED_100M;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_1000M_BIT)
-		speed_capa |= ETH_LINK_SPEED_1G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G;
 
 	return speed_capa;
 }
@@ -2619,19 +2619,19 @@ hns3_get_firber_port_speed_capa(uint32_t supported_speed)
 	uint32_t speed_capa = 0;
 
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_1G_BIT)
-		speed_capa |= ETH_LINK_SPEED_1G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_10G_BIT)
-		speed_capa |= ETH_LINK_SPEED_10G;
+		speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_25G_BIT)
-		speed_capa |= ETH_LINK_SPEED_25G;
+		speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_40G_BIT)
-		speed_capa |= ETH_LINK_SPEED_40G;
+		speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_50G_BIT)
-		speed_capa |= ETH_LINK_SPEED_50G;
+		speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_100G_BIT)
-		speed_capa |= ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_200G_BIT)
-		speed_capa |= ETH_LINK_SPEED_200G;
+		speed_capa |= RTE_ETH_LINK_SPEED_200G;
 
 	return speed_capa;
 }
@@ -2650,7 +2650,7 @@ hns3_get_speed_capa(struct hns3_hw *hw)
 			hns3_get_firber_port_speed_capa(mac->supported_speed);
 
 	if (mac->support_autoneg == 0)
-		speed_capa |= ETH_LINK_SPEED_FIXED;
+		speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
 
 	return speed_capa;
 }
@@ -2676,40 +2676,40 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 	info->max_mac_addrs = HNS3_UC_MACADDR_NUM;
 	info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
 	info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
-	info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_TCP_CKSUM |
-				 DEV_RX_OFFLOAD_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_SCTP_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_KEEP_CRC |
-				 DEV_RX_OFFLOAD_SCATTER |
-				 DEV_RX_OFFLOAD_VLAN_STRIP |
-				 DEV_RX_OFFLOAD_VLAN_FILTER |
-				 DEV_RX_OFFLOAD_RSS_HASH |
-				 DEV_RX_OFFLOAD_TCP_LRO);
-	info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_TCP_CKSUM |
-				 DEV_TX_OFFLOAD_UDP_CKSUM |
-				 DEV_TX_OFFLOAD_SCTP_CKSUM |
-				 DEV_TX_OFFLOAD_MULTI_SEGS |
-				 DEV_TX_OFFLOAD_TCP_TSO |
-				 DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				 DEV_TX_OFFLOAD_GRE_TNL_TSO |
-				 DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-				 DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+	info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+				 RTE_ETH_RX_OFFLOAD_SCATTER |
+				 RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				 RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				 RTE_ETH_RX_OFFLOAD_RSS_HASH |
+				 RTE_ETH_RX_OFFLOAD_TCP_LRO);
+	info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				 RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				 RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
 				 hns3_txvlan_cap_get(hw));
 
 	if (hns3_dev_get_support(hw, OUTER_UDP_CKSUM))
-		info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+		info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 
 	if (hns3_dev_get_support(hw, INDEP_TXRX))
 		info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 				 RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
 
 	if (hns3_dev_get_support(hw, PTP))
-		info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+		info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	info->rx_desc_lim = (struct rte_eth_desc_lim) {
 		.nb_max = HNS3_MAX_RING_DESC,
@@ -2793,7 +2793,7 @@ hns3_update_port_link_info(struct rte_eth_dev *eth_dev)
 
 	ret = hns3_update_link_info(eth_dev);
 	if (ret)
-		hw->mac.link_status = ETH_LINK_DOWN;
+		hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	return ret;
 }
@@ -2806,29 +2806,29 @@ hns3_setup_linkstatus(struct rte_eth_dev *eth_dev,
 	struct hns3_mac *mac = &hw->mac;
 
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_10M:
-	case ETH_SPEED_NUM_100M:
-	case ETH_SPEED_NUM_1G:
-	case ETH_SPEED_NUM_10G:
-	case ETH_SPEED_NUM_25G:
-	case ETH_SPEED_NUM_40G:
-	case ETH_SPEED_NUM_50G:
-	case ETH_SPEED_NUM_100G:
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_200G:
 		if (mac->link_status)
 			new_link->link_speed = mac->link_speed;
 		break;
 	default:
 		if (mac->link_status)
-			new_link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+			new_link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 	}
 
 	if (!mac->link_status)
-		new_link->link_speed = ETH_SPEED_NUM_NONE;
+		new_link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	new_link->link_duplex = mac->link_duplex;
-	new_link->link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+	new_link->link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 	new_link->link_autoneg = mac->link_autoneg;
 }
 
@@ -2848,8 +2848,8 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
 	if (eth_dev->data->dev_started == 0) {
 		new_link.link_autoneg = mac->link_autoneg;
 		new_link.link_duplex = mac->link_duplex;
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
-		new_link.link_status = ETH_LINK_DOWN;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		new_link.link_status = RTE_ETH_LINK_DOWN;
 		goto out;
 	}
 
@@ -2861,7 +2861,7 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
 			break;
 		}
 
-		if (!wait_to_complete || mac->link_status == ETH_LINK_UP)
+		if (!wait_to_complete || mac->link_status == RTE_ETH_LINK_UP)
 			break;
 
 		rte_delay_ms(HNS3_LINK_CHECK_INTERVAL);
@@ -3207,31 +3207,31 @@ hns3_parse_speed(int speed_cmd, uint32_t *speed)
 {
 	switch (speed_cmd) {
 	case HNS3_CFG_SPEED_10M:
-		*speed = ETH_SPEED_NUM_10M;
+		*speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case HNS3_CFG_SPEED_100M:
-		*speed = ETH_SPEED_NUM_100M;
+		*speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case HNS3_CFG_SPEED_1G:
-		*speed = ETH_SPEED_NUM_1G;
+		*speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case HNS3_CFG_SPEED_10G:
-		*speed = ETH_SPEED_NUM_10G;
+		*speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case HNS3_CFG_SPEED_25G:
-		*speed = ETH_SPEED_NUM_25G;
+		*speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case HNS3_CFG_SPEED_40G:
-		*speed = ETH_SPEED_NUM_40G;
+		*speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case HNS3_CFG_SPEED_50G:
-		*speed = ETH_SPEED_NUM_50G;
+		*speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case HNS3_CFG_SPEED_100G:
-		*speed = ETH_SPEED_NUM_100G;
+		*speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	case HNS3_CFG_SPEED_200G:
-		*speed = ETH_SPEED_NUM_200G;
+		*speed = RTE_ETH_SPEED_NUM_200G;
 		break;
 	default:
 		return -EINVAL;
@@ -3559,39 +3559,39 @@ hns3_cfg_mac_speed_dup_hw(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
 	hns3_set_bit(req->speed_dup, HNS3_CFG_DUPLEX_B, !!duplex ? 1 : 0);
 
 	switch (speed) {
-	case ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_10M:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10M);
 		break;
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100M);
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_1G);
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10G);
 		break;
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_25G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_25G);
 		break;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_40G);
 		break;
-	case ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_50G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_50G);
 		break;
-	case ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_100G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100G);
 		break;
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_200G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_200G);
 		break;
@@ -4254,14 +4254,14 @@ hns3_mac_init(struct hns3_hw *hw)
 	int ret;
 
 	pf->support_sfp_query = true;
-	mac->link_duplex = ETH_LINK_FULL_DUPLEX;
+	mac->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	ret = hns3_cfg_mac_speed_dup_hw(hw, mac->link_speed, mac->link_duplex);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Config mac speed dup fail ret = %d", ret);
 		return ret;
 	}
 
-	mac->link_status = ETH_LINK_DOWN;
+	mac->link_status = RTE_ETH_LINK_DOWN;
 
 	return hns3_config_mtu(hw, pf->mps);
 }
@@ -4511,7 +4511,7 @@ hns3_dev_promiscuous_enable(struct rte_eth_dev *dev)
 	 * all packets coming in the receiving direction.
 	 */
 	offloads = dev->data->dev_conf.rxmode.offloads;
-	if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		ret = hns3_enable_vlan_filter(hns, false);
 		if (ret) {
 			hns3_err(hw, "failed to enable promiscuous mode due to "
@@ -4552,7 +4552,7 @@ hns3_dev_promiscuous_disable(struct rte_eth_dev *dev)
 	}
 	/* when promiscuous mode was disabled, restore the vlan filter status */
 	offloads = dev->data->dev_conf.rxmode.offloads;
-	if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		ret = hns3_enable_vlan_filter(hns, true);
 		if (ret) {
 			hns3_err(hw, "failed to disable promiscuous mode due to"
@@ -4672,8 +4672,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
 		mac_info->supported_speed =
 					rte_le_to_cpu_32(resp->supported_speed);
 		mac_info->support_autoneg = resp->autoneg_ability;
-		mac_info->link_autoneg = (resp->autoneg == 0) ? ETH_LINK_FIXED
-					: ETH_LINK_AUTONEG;
+		mac_info->link_autoneg = (resp->autoneg == 0) ? RTE_ETH_LINK_FIXED
+					: RTE_ETH_LINK_AUTONEG;
 	} else {
 		mac_info->query_type = HNS3_DEFAULT_QUERY;
 	}
@@ -4684,8 +4684,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
 static uint8_t
 hns3_check_speed_dup(uint8_t duplex, uint32_t speed)
 {
-	if (!(speed == ETH_SPEED_NUM_10M || speed == ETH_SPEED_NUM_100M))
-		duplex = ETH_LINK_FULL_DUPLEX;
+	if (!(speed == RTE_ETH_SPEED_NUM_10M || speed == RTE_ETH_SPEED_NUM_100M))
+		duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	return duplex;
 }
@@ -4735,7 +4735,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
 		return ret;
 
 	/* Do nothing if no SFP */
-	if (mac_info.link_speed == ETH_SPEED_NUM_NONE)
+	if (mac_info.link_speed == RTE_ETH_SPEED_NUM_NONE)
 		return 0;
 
 	/*
@@ -4762,7 +4762,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
 
 	/* Config full duplex for SFP */
 	return hns3_cfg_mac_speed_dup(hw, mac_info.link_speed,
-				      ETH_LINK_FULL_DUPLEX);
+				      RTE_ETH_LINK_FULL_DUPLEX);
 }
 
 static void
@@ -4881,10 +4881,10 @@ hns3_cfg_mac_mode(struct hns3_hw *hw, bool enable)
 	hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_B, val);
 
 	/*
-	 * If DEV_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
+	 * If RTE_ETH_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
 	 * when receiving frames. Otherwise, CRC will be stripped.
 	 */
-	if (hw->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (hw->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, 0);
 	else
 		hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, val);
@@ -4912,7 +4912,7 @@ hns3_get_mac_link_status(struct hns3_hw *hw)
 	ret = hns3_cmd_send(hw, &desc, 1);
 	if (ret) {
 		hns3_err(hw, "get link status cmd failed %d", ret);
-		return ETH_LINK_DOWN;
+		return RTE_ETH_LINK_DOWN;
 	}
 
 	req = (struct hns3_link_status_cmd *)desc.data;
@@ -5094,19 +5094,19 @@ hns3_set_firber_default_support_speed(struct hns3_hw *hw)
 	struct hns3_mac *mac = &hw->mac;
 
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		return HNS3_FIBER_LINK_SPEED_1G_BIT;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		return HNS3_FIBER_LINK_SPEED_10G_BIT;
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_25G:
 		return HNS3_FIBER_LINK_SPEED_25G_BIT;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		return HNS3_FIBER_LINK_SPEED_40G_BIT;
-	case ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_50G:
 		return HNS3_FIBER_LINK_SPEED_50G_BIT;
-	case ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_100G:
 		return HNS3_FIBER_LINK_SPEED_100G_BIT;
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_200G:
 		return HNS3_FIBER_LINK_SPEED_200G_BIT;
 	default:
 		hns3_warn(hw, "invalid speed %u Mbps.", mac->link_speed);
@@ -5344,20 +5344,20 @@ hns3_convert_link_speeds2bitmap_copper(uint32_t link_speeds)
 {
 	uint32_t speed_bit;
 
-	switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
-	case ETH_LINK_SPEED_10M:
+	switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+	case RTE_ETH_LINK_SPEED_10M:
 		speed_bit = HNS3_PHY_LINK_SPEED_10M_BIT;
 		break;
-	case ETH_LINK_SPEED_10M_HD:
+	case RTE_ETH_LINK_SPEED_10M_HD:
 		speed_bit = HNS3_PHY_LINK_SPEED_10M_HD_BIT;
 		break;
-	case ETH_LINK_SPEED_100M:
+	case RTE_ETH_LINK_SPEED_100M:
 		speed_bit = HNS3_PHY_LINK_SPEED_100M_BIT;
 		break;
-	case ETH_LINK_SPEED_100M_HD:
+	case RTE_ETH_LINK_SPEED_100M_HD:
 		speed_bit = HNS3_PHY_LINK_SPEED_100M_HD_BIT;
 		break;
-	case ETH_LINK_SPEED_1G:
+	case RTE_ETH_LINK_SPEED_1G:
 		speed_bit = HNS3_PHY_LINK_SPEED_1000M_BIT;
 		break;
 	default:
@@ -5373,26 +5373,26 @@ hns3_convert_link_speeds2bitmap_fiber(uint32_t link_speeds)
 {
 	uint32_t speed_bit;
 
-	switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
-	case ETH_LINK_SPEED_1G:
+	switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+	case RTE_ETH_LINK_SPEED_1G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_1G_BIT;
 		break;
-	case ETH_LINK_SPEED_10G:
+	case RTE_ETH_LINK_SPEED_10G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_10G_BIT;
 		break;
-	case ETH_LINK_SPEED_25G:
+	case RTE_ETH_LINK_SPEED_25G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_25G_BIT;
 		break;
-	case ETH_LINK_SPEED_40G:
+	case RTE_ETH_LINK_SPEED_40G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_40G_BIT;
 		break;
-	case ETH_LINK_SPEED_50G:
+	case RTE_ETH_LINK_SPEED_50G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_50G_BIT;
 		break;
-	case ETH_LINK_SPEED_100G:
+	case RTE_ETH_LINK_SPEED_100G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_100G_BIT;
 		break;
-	case ETH_LINK_SPEED_200G:
+	case RTE_ETH_LINK_SPEED_200G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_200G_BIT;
 		break;
 	default:
@@ -5427,28 +5427,28 @@ hns3_check_port_speed(struct hns3_hw *hw, uint32_t link_speeds)
 static inline uint32_t
 hns3_get_link_speed(uint32_t link_speeds)
 {
-	uint32_t speed = ETH_SPEED_NUM_NONE;
-
-	if (link_speeds & ETH_LINK_SPEED_10M ||
-	    link_speeds & ETH_LINK_SPEED_10M_HD)
-		speed = ETH_SPEED_NUM_10M;
-	if (link_speeds & ETH_LINK_SPEED_100M ||
-	    link_speeds & ETH_LINK_SPEED_100M_HD)
-		speed = ETH_SPEED_NUM_100M;
-	if (link_speeds & ETH_LINK_SPEED_1G)
-		speed = ETH_SPEED_NUM_1G;
-	if (link_speeds & ETH_LINK_SPEED_10G)
-		speed = ETH_SPEED_NUM_10G;
-	if (link_speeds & ETH_LINK_SPEED_25G)
-		speed = ETH_SPEED_NUM_25G;
-	if (link_speeds & ETH_LINK_SPEED_40G)
-		speed = ETH_SPEED_NUM_40G;
-	if (link_speeds & ETH_LINK_SPEED_50G)
-		speed = ETH_SPEED_NUM_50G;
-	if (link_speeds & ETH_LINK_SPEED_100G)
-		speed = ETH_SPEED_NUM_100G;
-	if (link_speeds & ETH_LINK_SPEED_200G)
-		speed = ETH_SPEED_NUM_200G;
+	uint32_t speed = RTE_ETH_SPEED_NUM_NONE;
+
+	if (link_speeds & RTE_ETH_LINK_SPEED_10M ||
+	    link_speeds & RTE_ETH_LINK_SPEED_10M_HD)
+		speed = RTE_ETH_SPEED_NUM_10M;
+	if (link_speeds & RTE_ETH_LINK_SPEED_100M ||
+	    link_speeds & RTE_ETH_LINK_SPEED_100M_HD)
+		speed = RTE_ETH_SPEED_NUM_100M;
+	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
+		speed = RTE_ETH_SPEED_NUM_1G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
+		speed = RTE_ETH_SPEED_NUM_10G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
+		speed = RTE_ETH_SPEED_NUM_25G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
+		speed = RTE_ETH_SPEED_NUM_40G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_50G)
+		speed = RTE_ETH_SPEED_NUM_50G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_100G)
+		speed = RTE_ETH_SPEED_NUM_100G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_200G)
+		speed = RTE_ETH_SPEED_NUM_200G;
 
 	return speed;
 }
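
Side note for reviewers (not part of the patch): RTE_ETH_LINK_SPEED_* are
capability bitmap bits, while RTE_ETH_SPEED_NUM_* are plain Mbps values;
the hunk above folds the requested bitmap down to a single number. A toy
sketch of the same idea:

	/* Illustrative only: bitmap bits vs. numeric speeds. */
	static uint32_t highest_speed(uint32_t link_speeds)
	{
		if (link_speeds & RTE_ETH_LINK_SPEED_25G)
			return RTE_ETH_SPEED_NUM_25G;	/* 25000 Mbps */
		if (link_speeds & RTE_ETH_LINK_SPEED_10G)
			return RTE_ETH_SPEED_NUM_10G;	/* 10000 Mbps */
		return RTE_ETH_SPEED_NUM_NONE;
	}
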
@@ -5456,11 +5456,11 @@ hns3_get_link_speed(uint32_t link_speeds)
 static uint8_t
 hns3_get_link_duplex(uint32_t link_speeds)
 {
-	if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
-	    (link_speeds & ETH_LINK_SPEED_100M_HD))
-		return ETH_LINK_HALF_DUPLEX;
+	if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+	    (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+		return RTE_ETH_LINK_HALF_DUPLEX;
 	else
-		return ETH_LINK_FULL_DUPLEX;
+		return RTE_ETH_LINK_FULL_DUPLEX;
 }
 
 static int
@@ -5594,9 +5594,9 @@ hns3_apply_link_speed(struct hns3_hw *hw)
 	struct hns3_set_link_speed_cfg cfg;
 
 	memset(&cfg, 0, sizeof(struct hns3_set_link_speed_cfg));
-	cfg.autoneg = (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) ?
-			ETH_LINK_AUTONEG : ETH_LINK_FIXED;
-	if (cfg.autoneg != ETH_LINK_AUTONEG) {
+	cfg.autoneg = (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) ?
+			RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
+	if (cfg.autoneg != RTE_ETH_LINK_AUTONEG) {
 		cfg.speed = hns3_get_link_speed(conf->link_speeds);
 		cfg.duplex = hns3_get_link_duplex(conf->link_speeds);
 	}
@@ -5869,7 +5869,7 @@ hns3_do_stop(struct hns3_adapter *hns)
 	ret = hns3_cfg_mac_mode(hw, false);
 	if (ret)
 		return ret;
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED) == 0) {
 		hns3_configure_all_mac_addr(hns, true);
@@ -6080,17 +6080,17 @@ hns3_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	current_mode = hns3_get_current_fc_mode(dev);
 	switch (current_mode) {
 	case HNS3_FC_FULL:
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	case HNS3_FC_TX_PAUSE:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case HNS3_FC_RX_PAUSE:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case HNS3_FC_NONE:
 	default:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		break;
 	}
 
@@ -6236,7 +6236,7 @@ hns3_get_dcb_info(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info)
 	int i;
 
 	rte_spinlock_lock(&hw->lock);
-	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = pf->local_max_tc;
 	else
 		dcb_info->nb_tcs = 1;
@@ -6536,7 +6536,7 @@ hns3_stop_service(struct hns3_adapter *hns)
 	struct rte_eth_dev *eth_dev;
 
 	eth_dev = &rte_eth_devices[hw->data->port_id];
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 	if (hw->adapter_state == HNS3_NIC_STARTED) {
 		rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
 		hns3_update_linkstatus_and_event(hw, false);
@@ -6826,7 +6826,7 @@ get_current_fec_auto_state(struct hns3_hw *hw, uint8_t *state)
	 * on devices with a link speed
 	 * below 10 Gbps.
 	 */
-	if (hw->mac.link_speed < ETH_SPEED_NUM_10G) {
+	if (hw->mac.link_speed < RTE_ETH_SPEED_NUM_10G) {
 		*state = 0;
 		return 0;
 	}
@@ -6858,7 +6858,7 @@ hns3_fec_get_internal(struct hns3_hw *hw, uint32_t *fec_capa)
 	 * configured FEC mode is returned.
 	 * If link is up, current FEC mode is returned.
 	 */
-	if (hw->mac.link_status == ETH_LINK_DOWN) {
+	if (hw->mac.link_status == RTE_ETH_LINK_DOWN) {
 		ret = get_current_fec_auto_state(hw, &auto_state);
 		if (ret)
 			return ret;
@@ -6957,12 +6957,12 @@ get_current_speed_fec_cap(struct hns3_hw *hw, struct rte_eth_fec_capa *fec_capa)
 	uint32_t cur_capa;
 
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		cur_capa = fec_capa[1].capa;
 		break;
-	case ETH_SPEED_NUM_25G:
-	case ETH_SPEED_NUM_100G:
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_200G:
 		cur_capa = fec_capa[0].capa;
 		break;
 	default:
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index fa08fadc9497..eb3470535363 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -190,10 +190,10 @@ struct hns3_mac {
 	uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
 	uint8_t media_type;
 	uint8_t phy_addr;
-	uint8_t link_duplex  : 1; /* ETH_LINK_[HALF/FULL]_DUPLEX */
-	uint8_t link_autoneg : 1; /* ETH_LINK_[AUTONEG/FIXED] */
-	uint8_t link_status  : 1; /* ETH_LINK_[DOWN/UP] */
-	uint32_t link_speed;      /* ETH_SPEED_NUM_ */
+	uint8_t link_duplex  : 1; /* RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+	uint8_t link_autoneg : 1; /* RTE_ETH_LINK_[AUTONEG/FIXED] */
+	uint8_t link_status  : 1; /* RTE_ETH_LINK_[DOWN/UP] */
+	uint32_t link_speed;      /* RTE_ETH_SPEED_NUM_ */
 	/*
 	 * Some firmware versions support only the SFP speed query. In addition
 	 * to the SFP speed query, some firmware supports the query of the speed
@@ -1079,9 +1079,9 @@ static inline uint64_t
 hns3_txvlan_cap_get(struct hns3_hw *hw)
 {
 	if (hw->port_base_vlan_cfg.state)
-		return DEV_TX_OFFLOAD_VLAN_INSERT;
+		return RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	else
-		return DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT;
+		return RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
 }
 
 #endif /* _HNS3_ETHDEV_H_ */
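
Aside, not part of the patch: the renamed offload macros stay plain bit
masks on the 64-bit offloads words, so application-side tests keep the
same shape with the new prefix. A minimal sketch:

	#include <stdio.h>
	#include <rte_ethdev.h>

	static void show_rx_offloads(uint64_t offloads)
	{
		/* Bit values are unchanged; only the names gained RTE_. */
		if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
			printf("VLAN filtering requested\n");
		if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
			printf("CRC kept on received frames\n");
	}
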
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 8e5df05aa285..c0c1f1c4c107 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -807,15 +807,15 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
 	}
 
 	hw->adapter_state = HNS3_NIC_CONFIGURING;
-	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		hns3_err(hw, "setting link speed/duplex not supported");
 		ret = -EINVAL;
 		goto cfg_err;
 	}
 
 	/* When RSS is not configured, redirect the packet queue 0 */
-	if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 		hw->rss_dis_flag = false;
 		rss_conf = conf->rx_adv_conf.rss_conf;
 		ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -832,7 +832,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
 		goto cfg_err;
 
 	/* config hardware GRO */
-	gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+	gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
 	ret = hns3_config_gro(hw, gro_en);
 	if (ret)
 		goto cfg_err;
@@ -935,32 +935,32 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 	info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
 	info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
 
-	info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_TCP_CKSUM |
-				 DEV_RX_OFFLOAD_SCTP_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_SCATTER |
-				 DEV_RX_OFFLOAD_VLAN_STRIP |
-				 DEV_RX_OFFLOAD_VLAN_FILTER |
-				 DEV_RX_OFFLOAD_RSS_HASH |
-				 DEV_RX_OFFLOAD_TCP_LRO);
-	info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_TCP_CKSUM |
-				 DEV_TX_OFFLOAD_UDP_CKSUM |
-				 DEV_TX_OFFLOAD_SCTP_CKSUM |
-				 DEV_TX_OFFLOAD_MULTI_SEGS |
-				 DEV_TX_OFFLOAD_TCP_TSO |
-				 DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				 DEV_TX_OFFLOAD_GRE_TNL_TSO |
-				 DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-				 DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+	info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_SCATTER |
+				 RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				 RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				 RTE_ETH_RX_OFFLOAD_RSS_HASH |
+				 RTE_ETH_RX_OFFLOAD_TCP_LRO);
+	info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				 RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				 RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
 				 hns3_txvlan_cap_get(hw));
 
 	if (hns3_dev_get_support(hw, OUTER_UDP_CKSUM))
-		info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+		info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 
 	if (hns3_dev_get_support(hw, INDEP_TXRX))
 		info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -1640,10 +1640,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	tmp_mask = (unsigned int)mask;
 
-	if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
 		rte_spinlock_lock(&hw->lock);
 		/* Enable or disable VLAN filter */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ret = hns3vf_en_vlan_filter(hw, true);
 		else
 			ret = hns3vf_en_vlan_filter(hw, false);
@@ -1653,10 +1653,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	}
 
 	/* Vlan stripping setting */
-	if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
 		rte_spinlock_lock(&hw->lock);
 		/* Enable or disable VLAN stripping */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			ret = hns3vf_en_hw_strip_rxvtag(hw, true);
 		else
 			ret = hns3vf_en_hw_strip_rxvtag(hw, false);
@@ -1724,7 +1724,7 @@ hns3vf_restore_vlan_conf(struct hns3_adapter *hns)
 	int ret;
 
 	dev_conf = &hw->data->dev_conf;
-	en = dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP ? true
+	en = dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ? true
 								   : false;
 	ret = hns3vf_en_hw_strip_rxvtag(hw, en);
 	if (ret)
@@ -1749,8 +1749,8 @@ hns3vf_dev_configure_vlan(struct rte_eth_dev *dev)
 	}
 
 	/* Apply vlan offload setting */
-	ret = hns3vf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK |
-					ETH_VLAN_FILTER_MASK);
+	ret = hns3vf_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK |
+					RTE_ETH_VLAN_FILTER_MASK);
 	if (ret)
 		hns3_err(hw, "dev config vlan offload failed, ret = %d.", ret);
 
@@ -2059,7 +2059,7 @@ hns3vf_do_stop(struct hns3_adapter *hns)
 	struct hns3_hw *hw = &hns->hw;
 	int ret;
 
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	/*
 	 * The "hns3vf_do_stop" function will also be called by .stop_service to
@@ -2218,31 +2218,31 @@ hns3vf_dev_link_update(struct rte_eth_dev *eth_dev,
 
 	memset(&new_link, 0, sizeof(new_link));
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_10M:
-	case ETH_SPEED_NUM_100M:
-	case ETH_SPEED_NUM_1G:
-	case ETH_SPEED_NUM_10G:
-	case ETH_SPEED_NUM_25G:
-	case ETH_SPEED_NUM_40G:
-	case ETH_SPEED_NUM_50G:
-	case ETH_SPEED_NUM_100G:
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_200G:
 		if (mac->link_status)
 			new_link.link_speed = mac->link_speed;
 		break;
 	default:
 		if (mac->link_status)
-			new_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+			new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 	}
 
 	if (!mac->link_status)
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	new_link.link_duplex = mac->link_duplex;
-	new_link.link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+	new_link.link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg =
-	    !(eth_dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED);
+	    !(eth_dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(eth_dev, &new_link);
 }
@@ -2570,11 +2570,11 @@ hns3vf_stop_service(struct hns3_adapter *hns)
 		 * Make sure call update link status before hns3vf_stop_poll_job
 		 * because update link status depend on polling job exist.
 		 */
-		hns3vf_update_link_status(hw, ETH_LINK_DOWN, hw->mac.link_speed,
+		hns3vf_update_link_status(hw, RTE_ETH_LINK_DOWN, hw->mac.link_speed,
 					  hw->mac.link_duplex);
 		hns3vf_stop_poll_job(eth_dev);
 	}
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	hns3_set_rxtx_function(eth_dev);
 	rte_wmb();
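
For reference, the VF link-reporting pattern converted above boils down
to the following shape (an illustrative sketch, not the driver code; the
inputs are assumed to come from the MAC state):

	#include <stdbool.h>
	#include <rte_ethdev.h>

	static void fill_link(struct rte_eth_link *link, bool up,
			      uint32_t speed_mbps, bool autoneg)
	{
		link->link_status  = up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
		link->link_speed   = up ? speed_mbps : RTE_ETH_SPEED_NUM_NONE;
		link->link_duplex  = RTE_ETH_LINK_FULL_DUPLEX;
		link->link_autoneg = autoneg ? RTE_ETH_LINK_AUTONEG
					     : RTE_ETH_LINK_FIXED;
	}
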
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 38a2ee58a651..da6918fddda3 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1298,10 +1298,10 @@ hns3_rss_input_tuple_supported(struct hns3_hw *hw,
	 * Kunpeng930 and later Kunpeng series support using src/dst port
	 * fields in the RSS hash for the IPv6 SCTP packet type.
 	 */
-	if (rss->types & (ETH_RSS_L4_DST_ONLY | ETH_RSS_L4_SRC_ONLY) &&
-	    (rss->types & ETH_RSS_IP ||
+	if (rss->types & (RTE_ETH_RSS_L4_DST_ONLY | RTE_ETH_RSS_L4_SRC_ONLY) &&
+	    (rss->types & RTE_ETH_RSS_IP ||
 	    (!hw->rss_info.ipv6_sctp_offload_supported &&
-	    rss->types & ETH_RSS_NONFRAG_IPV6_SCTP)))
+	    rss->types & RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
 		return false;
 
 	return true;
diff --git a/drivers/net/hns3/hns3_ptp.c b/drivers/net/hns3/hns3_ptp.c
index 5dfe68cc4dbd..9a829d7011ad 100644
--- a/drivers/net/hns3/hns3_ptp.c
+++ b/drivers/net/hns3/hns3_ptp.c
@@ -21,7 +21,7 @@ hns3_mbuf_dyn_rx_timestamp_register(struct rte_eth_dev *dev,
 	struct hns3_hw *hw = &hns->hw;
 	int ret;
 
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		return 0;
 
 	ret = rte_mbuf_dyn_rx_timestamp_register
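
Side note: when RTE_ETH_RX_OFFLOAD_TIMESTAMP is requested, the call above
registers the shared mbuf dynamic field/flag for Rx timestamps. The same
registration from an application looks roughly like this (a sketch; the
static names are placeholders):

	#include <rte_mbuf_dyn.h>

	static int ts_dynfield_offset = -1;
	static uint64_t ts_rx_dynflag;

	static int register_rx_timestamp(void)
	{
		/* Allocates (or looks up) the Rx timestamp dynfield/dynflag. */
		return rte_mbuf_dyn_rx_timestamp_register(&ts_dynfield_offset,
							  &ts_rx_dynflag);
	}
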
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index 3a81e90e0911..85495bbe89d9 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -76,69 +76,69 @@ static const struct {
 	uint64_t rss_types;
 	uint64_t rss_field;
 } hns3_set_tuple_table[] = {
-	{ ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
-	{ ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
-	{ ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) },
-	{ ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) },
 };
 
@@ -146,44 +146,44 @@ static const struct {
 	uint64_t rss_types;
 	uint64_t rss_field;
 } hns3_set_rss_types[] = {
-	{ ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
+	{ RTE_ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_VER) },
-	{ ETH_RSS_NONFRAG_IPV4_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
-	{ ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
+	{ RTE_ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) |
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_VER) },
-	{ ETH_RSS_NONFRAG_IPV6_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }
 };
@@ -365,10 +365,10 @@ hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw,
 	 * When user does not specify the following types or a combination of
 	 * the following types, it enables all fields for the supported RSS
	 * types. The types in question are:
-	 * - ETH_RSS_L3_SRC_ONLY
-	 * - ETH_RSS_L3_DST_ONLY
-	 * - ETH_RSS_L4_SRC_ONLY
-	 * - ETH_RSS_L4_DST_ONLY
+	 * - RTE_ETH_RSS_L3_SRC_ONLY
+	 * - RTE_ETH_RSS_L3_DST_ONLY
+	 * - RTE_ETH_RSS_L4_SRC_ONLY
+	 * - RTE_ETH_RSS_L4_DST_ONLY
 	 */
 	if (fields_count == 0) {
 		for (i = 0; i < RTE_DIM(hns3_set_rss_types); i++) {
@@ -520,8 +520,8 @@ hns3_dev_rss_reta_update(struct rte_eth_dev *dev,
 	memcpy(indirection_tbl, rss_cfg->rss_indirection_tbl,
 	       sizeof(rss_cfg->rss_indirection_tbl));
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].reta[shift] >= hw->alloc_rss_size) {
 			rte_spinlock_unlock(&hw->lock);
 			hns3_err(hw, "queue id(%u) set to redirection table "
@@ -572,8 +572,8 @@ hns3_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 	rte_spinlock_lock(&hw->lock);
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] =
 						rss_cfg->rss_indirection_tbl[i];
@@ -692,7 +692,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 	}
 
 	/* When RSS is off, redirect the packet queue 0 */
-	if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) == 0)
+	if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0)
 		hns3_rss_uninit(hns);
 
 	/* Configure RSS hash algorithm and hash key offset */
@@ -709,7 +709,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 	 * When RSS is off, it doesn't need to configure rss redirection table
 	 * to hardware.
 	 */
-	if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+	if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		ret = hns3_set_rss_indir_table(hw, rss_cfg->rss_indirection_tbl,
 					       hw->rss_ind_tbl_size);
 		if (ret)
@@ -723,7 +723,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 	return ret;
 
 rss_indir_table_uninit:
-	if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+	if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		ret1 = hns3_rss_reset_indir_table(hw);
 		if (ret1 != 0)
 			return ret;
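
The idx/shift arithmetic in the RETA hunks above splits the flat table
index into a 64-entry group (RTE_ETH_RETA_GROUP_SIZE) and a slot inside
it. Applications use the very same arithmetic; a minimal sketch assuming
a 512-entry table spread round-robin over nb_queues:

	#include <string.h>
	#include <rte_ethdev.h>

	static int setup_reta_512(uint16_t port_id, uint16_t nb_queues)
	{
		struct rte_eth_rss_reta_entry64 reta[RTE_ETH_RSS_RETA_SIZE_512 /
						     RTE_ETH_RETA_GROUP_SIZE];
		uint16_t i;

		memset(reta, 0, sizeof(reta));
		for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_512; i++) {
			uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;   /* group */
			uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE; /* slot */

			reta[idx].mask |= 1ULL << shift;  /* mark entry valid */
			reta[idx].reta[shift] = i % nb_queues;
		}
		return rte_eth_dev_rss_reta_update(port_id, reta,
						   RTE_ETH_RSS_RETA_SIZE_512);
	}
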
diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h
index 996083b88b25..6f153a1b7bfb 100644
--- a/drivers/net/hns3/hns3_rss.h
+++ b/drivers/net/hns3/hns3_rss.h
@@ -8,20 +8,20 @@
 #include <rte_flow.h>
 
 #define HNS3_ETH_RSS_SUPPORT ( \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L3_SRC_ONLY | \
-	ETH_RSS_L3_DST_ONLY | \
-	ETH_RSS_L4_SRC_ONLY | \
-	ETH_RSS_L4_DST_ONLY)
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L3_SRC_ONLY | \
+	RTE_ETH_RSS_L3_DST_ONLY | \
+	RTE_ETH_RSS_L4_SRC_ONLY | \
+	RTE_ETH_RSS_L4_DST_ONLY)
 
 #define HNS3_RSS_IND_TBL_SIZE	512 /* The size of hash lookup table */
 #define HNS3_RSS_IND_TBL_SIZE_MAX 2048
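
HNS3_ETH_RSS_SUPPORT above is simply an OR of the renamed RTE_ETH_RSS_*
type bits; applications request a subset of them via rss_hf. A minimal
usage sketch (illustrative only):

	#include <rte_ethdev.h>

	static int request_ipv4_l4_rss(uint16_t port_id)
	{
		struct rte_eth_rss_conf rss_conf = {
			.rss_key = NULL,	/* keep the current key */
			.rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
				  RTE_ETH_RSS_NONFRAG_IPV4_UDP,
		};

		return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
	}
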
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 602548a4f25b..920ee8ceeab9 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1924,7 +1924,7 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	memset(&rxq->dfx_stats, 0, sizeof(struct hns3_rx_dfx_stats));
 
 	/* CRC len set here is used for amending packet length */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1969,7 +1969,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
 						 rxq->rx_buf_len);
 	}
 
-	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
+	if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
 	    dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
 		dev->data->scattered_rx = true;
 }
@@ -2845,7 +2845,7 @@ hns3_get_rx_function(struct rte_eth_dev *dev)
 	vec_allowed = vec_support && hns3_get_default_vec_support();
 	sve_allowed = vec_support && hns3_get_sve_support();
 	simple_allowed = !dev->data->scattered_rx &&
-			 (offloads & DEV_RX_OFFLOAD_TCP_LRO) == 0;
+			 (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) == 0;
 
 	if (hns->rx_func_hint == HNS3_IO_FUNC_HINT_VEC && vec_allowed)
 		return hns3_recv_pkts_vec;
@@ -3139,7 +3139,7 @@ hns3_restore_gro_conf(struct hns3_hw *hw)
 	int ret;
 
 	offloads = hw->data->dev_conf.rxmode.offloads;
-	gro_en = offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+	gro_en = offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
 	ret = hns3_config_gro(hw, gro_en);
 	if (ret)
 		hns3_err(hw, "restore hardware GRO to %s failed, ret = %d",
@@ -4291,7 +4291,7 @@ hns3_tx_check_simple_support(struct rte_eth_dev *dev)
 	if (hns3_dev_get_support(hw, PTP))
 		return false;
 
-	return (offloads == (offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE));
+	return (offloads == (offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE));
 }
 
 static bool
@@ -4303,16 +4303,16 @@ hns3_get_tx_prep_needed(struct rte_eth_dev *dev)
 	return true;
 #else
 #define HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK (\
-		DEV_TX_OFFLOAD_IPV4_CKSUM | \
-		DEV_TX_OFFLOAD_TCP_CKSUM | \
-		DEV_TX_OFFLOAD_UDP_CKSUM | \
-		DEV_TX_OFFLOAD_SCTP_CKSUM | \
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-		DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
-		DEV_TX_OFFLOAD_TCP_TSO | \
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-		DEV_TX_OFFLOAD_GRE_TNL_TSO | \
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO)
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)
 
 	uint64_t tx_offload = dev->data->dev_conf.txmode.offloads;
 	if (tx_offload & HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK)
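
The mask in the hunk above gates the Tx fast path: once any checksum or
TSO offload is enabled, the driver expects rte_eth_tx_prepare() to run
before rte_eth_tx_burst() so pseudo-headers get fixed up. A minimal usage
sketch (port/queue ids and the mbuf array are assumptions):

	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	static uint16_t send_prepared(uint16_t port, uint16_t queue,
				      struct rte_mbuf **pkts, uint16_t nb)
	{
		uint16_t nb_prep = rte_eth_tx_prepare(port, queue, pkts, nb);

		/* Only packets that passed preparation are handed to Tx. */
		return rte_eth_tx_burst(port, queue, pkts, nb_prep);
	}
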
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index c8229e9076b5..dfea5d5b4c2f 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -307,7 +307,7 @@ struct hns3_rx_queue {
 	uint16_t rx_rearm_start; /* index of BD that driver re-arming from */
 	uint16_t rx_rearm_nb;    /* number of remaining BDs to be re-armed */
 
-	/* 4 if DEV_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
+	/* 4 if RTE_ETH_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
 	uint8_t crc_len;
 
 	/*
diff --git a/drivers/net/hns3/hns3_rxtx_vec.c b/drivers/net/hns3/hns3_rxtx_vec.c
index ff434d2d33ed..455110361aac 100644
--- a/drivers/net/hns3/hns3_rxtx_vec.c
+++ b/drivers/net/hns3/hns3_rxtx_vec.c
@@ -22,8 +22,8 @@ hns3_tx_check_vec_support(struct rte_eth_dev *dev)
 	if (hns3_dev_get_support(hw, PTP))
 		return -ENOTSUP;
 
-	/* Only support DEV_TX_OFFLOAD_MBUF_FAST_FREE */
-	if (txmode->offloads != DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	/* Only support RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE */
+	if (txmode->offloads != RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		return -ENOTSUP;
 
 	return 0;
@@ -228,10 +228,10 @@ hns3_rxq_vec_check(struct hns3_rx_queue *rxq, void *arg)
 int
 hns3_rx_check_vec_support(struct rte_eth_dev *dev)
 {
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-	uint64_t offloads_mask = DEV_RX_OFFLOAD_TCP_LRO |
-				 DEV_RX_OFFLOAD_VLAN;
+	uint64_t offloads_mask = RTE_ETH_RX_OFFLOAD_TCP_LRO |
+				 RTE_ETH_RX_OFFLOAD_VLAN;
 
 	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if (hns3_dev_get_support(hw, PTP))
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 0a4db0891d4a..293df887bf7c 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1629,7 +1629,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
 
 	/* Set the global registers with default ether type value */
 	if (!pf->support_multi_driver) {
-		ret = i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+		ret = i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					 RTE_ETHER_TYPE_VLAN);
 		if (ret != I40E_SUCCESS) {
 			PMD_INIT_LOG(ERR,
@@ -1896,8 +1896,8 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 	ad->tx_simple_allowed = true;
 	ad->tx_vec_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Only legacy filter API needs the following fdir config. So when the
 	 * legacy filter API is deprecated, the following codes should also be
@@ -1931,13 +1931,13 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 	 *  number, which will be available after rx_queue_setup(). dev_start()
	 *  is therefore a good place for the RSS setup.
 	 */
-	if (mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+	if (mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) {
 		ret = i40e_vmdq_setup(dev);
 		if (ret)
 			goto err;
 	}
 
-	if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if (mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		ret = i40e_dcb_setup(dev);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "failed to configure DCB.");
@@ -2214,17 +2214,17 @@ i40e_parse_link_speeds(uint16_t link_speeds)
 {
 	uint8_t link_speed = I40E_LINK_SPEED_UNKNOWN;
 
-	if (link_speeds & ETH_LINK_SPEED_40G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
 		link_speed |= I40E_LINK_SPEED_40GB;
-	if (link_speeds & ETH_LINK_SPEED_25G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
 		link_speed |= I40E_LINK_SPEED_25GB;
-	if (link_speeds & ETH_LINK_SPEED_20G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_20G)
 		link_speed |= I40E_LINK_SPEED_20GB;
-	if (link_speeds & ETH_LINK_SPEED_10G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 		link_speed |= I40E_LINK_SPEED_10GB;
-	if (link_speeds & ETH_LINK_SPEED_1G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
 		link_speed |= I40E_LINK_SPEED_1GB;
-	if (link_speeds & ETH_LINK_SPEED_100M)
+	if (link_speeds & RTE_ETH_LINK_SPEED_100M)
 		link_speed |= I40E_LINK_SPEED_100MB;
 
 	return link_speed;
@@ -2332,13 +2332,13 @@ i40e_apply_link_speed(struct rte_eth_dev *dev)
 	abilities |= I40E_AQ_PHY_ENABLE_ATOMIC_LINK |
 		     I40E_AQ_PHY_LINK_ENABLED;
 
-	if (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
-		conf->link_speeds = ETH_LINK_SPEED_40G |
-				    ETH_LINK_SPEED_25G |
-				    ETH_LINK_SPEED_20G |
-				    ETH_LINK_SPEED_10G |
-				    ETH_LINK_SPEED_1G |
-				    ETH_LINK_SPEED_100M;
+	if (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
+		conf->link_speeds = RTE_ETH_LINK_SPEED_40G |
+				    RTE_ETH_LINK_SPEED_25G |
+				    RTE_ETH_LINK_SPEED_20G |
+				    RTE_ETH_LINK_SPEED_10G |
+				    RTE_ETH_LINK_SPEED_1G |
+				    RTE_ETH_LINK_SPEED_100M;
 
 		abilities |= I40E_AQ_PHY_AN_ENABLED;
 	} else {
@@ -2876,34 +2876,34 @@ update_link_reg(struct i40e_hw *hw, struct rte_eth_link *link)
 	/* Parse the link status */
 	switch (link_speed) {
 	case I40E_REG_SPEED_0:
-		link->link_speed = ETH_SPEED_NUM_100M;
+		link->link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case I40E_REG_SPEED_1:
-		link->link_speed = ETH_SPEED_NUM_1G;
+		link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case I40E_REG_SPEED_2:
 		if (hw->mac.type == I40E_MAC_X722)
-			link->link_speed = ETH_SPEED_NUM_2_5G;
+			link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		else
-			link->link_speed = ETH_SPEED_NUM_10G;
+			link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case I40E_REG_SPEED_3:
 		if (hw->mac.type == I40E_MAC_X722) {
-			link->link_speed = ETH_SPEED_NUM_5G;
+			link->link_speed = RTE_ETH_SPEED_NUM_5G;
 		} else {
 			reg_val = I40E_READ_REG(hw, I40E_PRTMAC_MACC);
 
 			if (reg_val & I40E_REG_MACC_25GB)
-				link->link_speed = ETH_SPEED_NUM_25G;
+				link->link_speed = RTE_ETH_SPEED_NUM_25G;
 			else
-				link->link_speed = ETH_SPEED_NUM_40G;
+				link->link_speed = RTE_ETH_SPEED_NUM_40G;
 		}
 		break;
 	case I40E_REG_SPEED_4:
 		if (hw->mac.type == I40E_MAC_X722)
-			link->link_speed = ETH_SPEED_NUM_10G;
+			link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		else
-			link->link_speed = ETH_SPEED_NUM_20G;
+			link->link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "Unknown link speed info %u", link_speed);
@@ -2930,8 +2930,8 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
 		status = i40e_aq_get_link_info(hw, enable_lse,
 						&link_status, NULL);
 		if (unlikely(status != I40E_SUCCESS)) {
-			link->link_speed = ETH_SPEED_NUM_NONE;
-			link->link_duplex = ETH_LINK_FULL_DUPLEX;
+			link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+			link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR, "Failed to get link info");
 			return;
 		}
@@ -2946,28 +2946,28 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
 	/* Parse the link status */
 	switch (link_status.link_speed) {
 	case I40E_LINK_SPEED_100MB:
-		link->link_speed = ETH_SPEED_NUM_100M;
+		link->link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case I40E_LINK_SPEED_1GB:
-		link->link_speed = ETH_SPEED_NUM_1G;
+		link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case I40E_LINK_SPEED_10GB:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case I40E_LINK_SPEED_20GB:
-		link->link_speed = ETH_SPEED_NUM_20G;
+		link->link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case I40E_LINK_SPEED_25GB:
-		link->link_speed = ETH_SPEED_NUM_25G;
+		link->link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case I40E_LINK_SPEED_40GB:
-		link->link_speed = ETH_SPEED_NUM_40G;
+		link->link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	default:
 		if (link->link_status)
-			link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+			link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		else
-			link->link_speed = ETH_SPEED_NUM_NONE;
+			link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 }
@@ -2984,9 +2984,9 @@ i40e_dev_link_update(struct rte_eth_dev *dev,
 	memset(&link, 0, sizeof(link));
 
 	/* i40e uses full duplex only */
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 
 	if (!wait_to_complete && !enable_lse)
 		update_link_reg(hw, &link);
@@ -3720,33 +3720,33 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_KEEP_CRC |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_RSS_HASH;
-
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
 		dev_info->tx_queue_offload_capa;
 	dev_info->dev_capa =
 		RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -3805,7 +3805,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	if (I40E_PHY_TYPE_SUPPORT_40G(hw->phy.phy_types)) {
 		/* For XL710 */
-		dev_info->speed_capa = ETH_LINK_SPEED_40G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_40G;
 		dev_info->default_rxportconf.nb_queues = 2;
 		dev_info->default_txportconf.nb_queues = 2;
 		if (dev->data->nb_rx_queues == 1)
@@ -3819,17 +3819,17 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	} else if (I40E_PHY_TYPE_SUPPORT_25G(hw->phy.phy_types)) {
 		/* For XXV710 */
-		dev_info->speed_capa = ETH_LINK_SPEED_25G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_25G;
 		dev_info->default_rxportconf.nb_queues = 1;
 		dev_info->default_txportconf.nb_queues = 1;
 		dev_info->default_rxportconf.ring_size = 256;
 		dev_info->default_txportconf.ring_size = 256;
 	} else {
 		/* For X710 */
-		dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
 		dev_info->default_rxportconf.nb_queues = 1;
 		dev_info->default_txportconf.nb_queues = 1;
-		if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_10G) {
+		if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_10G) {
 			dev_info->default_rxportconf.ring_size = 512;
 			dev_info->default_txportconf.ring_size = 256;
 		} else {
@@ -3868,7 +3868,7 @@ i40e_vlan_tpid_set_by_registers(struct rte_eth_dev *dev,
 	int ret;
 
 	if (qinq) {
-		if (vlan_type == ETH_VLAN_TYPE_OUTER)
+		if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
 			reg_id = 2;
 	}
 
@@ -3915,12 +3915,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
 	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	int qinq = dev->data->dev_conf.rxmode.offloads &
-		   DEV_RX_OFFLOAD_VLAN_EXTEND;
+		   RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	int ret = 0;
 
-	if ((vlan_type != ETH_VLAN_TYPE_INNER &&
-	     vlan_type != ETH_VLAN_TYPE_OUTER) ||
-	    (!qinq && vlan_type == ETH_VLAN_TYPE_INNER)) {
+	if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+	     vlan_type != RTE_ETH_VLAN_TYPE_OUTER) ||
+	    (!qinq && vlan_type == RTE_ETH_VLAN_TYPE_INNER)) {
 		PMD_DRV_LOG(ERR,
 			    "Unsupported vlan type.");
 		return -EINVAL;
@@ -3934,12 +3934,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
 	/* 802.1ad frames ability is added in NVM API 1.7*/
 	if (hw->flags & I40E_HW_FLAG_802_1AD_CAPABLE) {
 		if (qinq) {
-			if (vlan_type == ETH_VLAN_TYPE_OUTER)
+			if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
 				hw->first_tag = rte_cpu_to_le_16(tpid);
-			else if (vlan_type == ETH_VLAN_TYPE_INNER)
+			else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER)
 				hw->second_tag = rte_cpu_to_le_16(tpid);
 		} else {
-			if (vlan_type == ETH_VLAN_TYPE_OUTER)
+			if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
 				hw->second_tag = rte_cpu_to_le_16(tpid);
 		}
 		ret = i40e_aq_set_switch_config(hw, 0, 0, 0, NULL);
@@ -3998,37 +3998,37 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			i40e_vsi_config_vlan_filter(vsi, TRUE);
 		else
 			i40e_vsi_config_vlan_filter(vsi, FALSE);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			i40e_vsi_config_vlan_stripping(vsi, TRUE);
 		else
 			i40e_vsi_config_vlan_stripping(vsi, FALSE);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
 			i40e_vsi_config_double_vlan(vsi, TRUE);
 			/* Set global registers with default ethertype. */
-			i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+			i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					   RTE_ETHER_TYPE_VLAN);
-			i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+			i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
 					   RTE_ETHER_TYPE_VLAN);
 		}
 		else
 			i40e_vsi_config_double_vlan(vsi, FALSE);
 	}
 
-	if (mask & ETH_QINQ_STRIP_MASK) {
+	if (mask & RTE_ETH_QINQ_STRIP_MASK) {
 		/* Enable or disable outer VLAN stripping */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
 			i40e_vsi_config_outer_vlan_stripping(vsi, TRUE);
 		else
 			i40e_vsi_config_outer_vlan_stripping(vsi, FALSE);
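
For reviewers mapping the masks: the RTE_ETH_VLAN_*_MASK bits are what
ethdev hands to the PMD's vlan_offload_set() for settings that changed;
applications drive them through rte_eth_dev_set_vlan_offload(). A minimal
sketch (illustrative only):

	#include <rte_ethdev.h>

	static int enable_vlan_strip_and_filter(uint16_t port_id)
	{
		int offload = rte_eth_dev_get_vlan_offload(port_id);

		if (offload < 0)
			return offload;
		offload |= RTE_ETH_VLAN_STRIP_OFFLOAD |
			   RTE_ETH_VLAN_FILTER_OFFLOAD;
		/* ethdev derives the changed-bits mask seen by the PMD. */
		return rte_eth_dev_set_vlan_offload(port_id, offload);
	}
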
@@ -4111,17 +4111,17 @@ i40e_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	 /* Return current mode according to actual setting*/
 	switch (hw->fc.current_mode) {
 	case I40E_FC_FULL:
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	case I40E_FC_TX_PAUSE:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case I40E_FC_RX_PAUSE:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case I40E_FC_NONE:
 	default:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	};
 
 	return 0;
@@ -4137,10 +4137,10 @@ i40e_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	struct i40e_hw *hw;
 	struct i40e_pf *pf;
 	enum i40e_fc_mode rte_fcmode_2_i40e_fcmode[] = {
-		[RTE_FC_NONE] = I40E_FC_NONE,
-		[RTE_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
-		[RTE_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
-		[RTE_FC_FULL] = I40E_FC_FULL
+		[RTE_ETH_FC_NONE] = I40E_FC_NONE,
+		[RTE_ETH_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
+		[RTE_ETH_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
+		[RTE_ETH_FC_FULL] = I40E_FC_FULL
 	};
 
 	/* high_water field in the rte_eth_fc_conf using the kilobytes unit */
@@ -4287,7 +4287,7 @@ i40e_macaddr_add(struct rte_eth_dev *dev,
 	}
 
 	rte_memcpy(&mac_filter.mac_addr, mac_addr, RTE_ETHER_ADDR_LEN);
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 		mac_filter.filter_type = I40E_MACVLAN_PERFECT_MATCH;
 	else
 		mac_filter.filter_type = I40E_MAC_PERFECT_MATCH;
@@ -4440,7 +4440,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
 	int ret;
 
 	if (reta_size != lut_size ||
-		reta_size > ETH_RSS_RETA_SIZE_512) {
+		reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
 		PMD_DRV_LOG(ERR,
 			"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
 			reta_size, lut_size);
@@ -4456,8 +4456,8 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
 	if (ret)
 		goto out;
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			lut[i] = reta_conf[idx].reta[shift];
 	}
@@ -4483,7 +4483,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
 	int ret;
 
 	if (reta_size != lut_size ||
-		reta_size > ETH_RSS_RETA_SIZE_512) {
+		reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
 		PMD_DRV_LOG(ERR,
 			"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
 			reta_size, lut_size);
@@ -4500,8 +4500,8 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
 	if (ret)
 		goto out;
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = lut[i];
 	}
@@ -4818,7 +4818,7 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
 			pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
 				hw->func_caps.num_vsis - vsi_count);
 			pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
-				ETH_64_POOLS);
+				RTE_ETH_64_POOLS);
 			if (pf->max_nb_vmdq_vsi) {
 				pf->flags |= I40E_FLAG_VMDQ;
 				pf->vmdq_nb_qps = pf->vmdq_nb_qp_max;
@@ -6104,10 +6104,10 @@ i40e_dev_init_vlan(struct rte_eth_dev *dev)
 	int mask = 0;
 
 	/* Apply vlan offload setting */
-	mask = ETH_VLAN_STRIP_MASK |
-	       ETH_QINQ_STRIP_MASK |
-	       ETH_VLAN_FILTER_MASK |
-	       ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK |
+	       RTE_ETH_QINQ_STRIP_MASK |
+	       RTE_ETH_VLAN_FILTER_MASK |
+	       RTE_ETH_VLAN_EXTEND_MASK;
 	ret = i40e_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_DRV_LOG(INFO, "Failed to update vlan offload");
@@ -6236,9 +6236,9 @@ i40e_pf_setup(struct i40e_pf *pf)
 
 	/* Configure filter control */
 	memset(&settings, 0, sizeof(settings));
-	if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_128)
+	if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_128)
 		settings.hash_lut_size = I40E_HASH_LUT_SIZE_128;
-	else if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_512)
+	else if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_512)
 		settings.hash_lut_size = I40E_HASH_LUT_SIZE_512;
 	else {
 		PMD_DRV_LOG(ERR, "Hash lookup table size (%u) not supported",
@@ -7098,7 +7098,7 @@ i40e_find_vlan_filter(struct i40e_vsi *vsi,
 {
 	uint32_t vid_idx, vid_bit;
 
-	if (vlan_id > ETH_VLAN_ID_MAX)
+	if (vlan_id > RTE_ETH_VLAN_ID_MAX)
 		return 0;
 
 	vid_idx = I40E_VFTA_IDX(vlan_id);
@@ -7133,7 +7133,7 @@ i40e_set_vlan_filter(struct i40e_vsi *vsi,
 	struct i40e_aqc_add_remove_vlan_element_data vlan_data = {0};
 	int ret;
 
-	if (vlan_id > ETH_VLAN_ID_MAX)
+	if (vlan_id > RTE_ETH_VLAN_ID_MAX)
 		return;
 
 	i40e_store_vlan_filter(vsi, vlan_id, on);
@@ -7727,25 +7727,25 @@ static int
 i40e_dev_get_filter_type(uint16_t filter_type, uint16_t *flag)
 {
 	switch (filter_type) {
-	case RTE_TUNNEL_FILTER_IMAC_IVLAN:
+	case RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN;
 		break;
-	case RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID:
+	case RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID;
 		break;
-	case RTE_TUNNEL_FILTER_IMAC_TENID:
+	case RTE_ETH_TUNNEL_FILTER_IMAC_TENID:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID;
 		break;
-	case RTE_TUNNEL_FILTER_OMAC_TENID_IMAC:
+	case RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC;
 		break;
-	case ETH_TUNNEL_FILTER_IMAC:
+	case RTE_ETH_TUNNEL_FILTER_IMAC:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC;
 		break;
-	case ETH_TUNNEL_FILTER_OIP:
+	case RTE_ETH_TUNNEL_FILTER_OIP:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_OIP;
 		break;
-	case ETH_TUNNEL_FILTER_IIP:
+	case RTE_ETH_TUNNEL_FILTER_IIP:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IIP;
 		break;
 	default:
@@ -8711,16 +8711,16 @@ i40e_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
 					  I40E_AQC_TUNNEL_TYPE_VXLAN);
 		break;
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
 					  I40E_AQC_TUNNEL_TYPE_VXLAN_GPE);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -1;
 		break;
@@ -8746,12 +8746,12 @@ i40e_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		ret = i40e_del_vxlan_port(pf, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -1;
 		break;
@@ -8843,7 +8843,7 @@ int
 i40e_pf_reset_rss_reta(struct i40e_pf *pf)
 {
 	struct i40e_hw *hw = &pf->adapter->hw;
-	uint8_t lut[ETH_RSS_RETA_SIZE_512];
+	uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
 	uint32_t i;
 	int num;
 
@@ -8851,7 +8851,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
 	 * configured. It's necessary to calculate the actual PF
 	 * queues that are configured.
 	 */
-	if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+	if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
 		num = i40e_pf_calc_configured_queues_num(pf);
 	else
 		num = pf->dev_data->nb_rx_queues;
@@ -8930,7 +8930,7 @@ i40e_pf_config_rss(struct i40e_pf *pf)
 	rss_hf = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
 	mq_mode = pf->dev_data->dev_conf.rxmode.mq_mode;
 	if (!(rss_hf & pf->adapter->flow_types_mask) ||
-	    !(mq_mode & ETH_MQ_RX_RSS_FLAG))
+	    !(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 		return 0;
 
 	hw = I40E_PF_TO_HW(pf);
@@ -10267,16 +10267,16 @@ i40e_start_timecounters(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 
 	switch (link.link_speed) {
-	case ETH_SPEED_NUM_40G:
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_25G:
 		tsync_inc_l = I40E_PTP_40GB_INCVAL & 0xFFFFFFFF;
 		tsync_inc_h = I40E_PTP_40GB_INCVAL >> 32;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		tsync_inc_l = I40E_PTP_10GB_INCVAL & 0xFFFFFFFF;
 		tsync_inc_h = I40E_PTP_10GB_INCVAL >> 32;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		tsync_inc_l = I40E_PTP_1GB_INCVAL & 0xFFFFFFFF;
 		tsync_inc_h = I40E_PTP_1GB_INCVAL >> 32;
 		break;
@@ -10504,7 +10504,7 @@ i40e_parse_dcb_configure(struct rte_eth_dev *dev,
 	else
 		*tc_map = RTE_LEN2MASK(dcb_rx_conf->nb_tcs, uint8_t);
 
-	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		dcb_cfg->pfc.willing = 0;
 		dcb_cfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS;
 		dcb_cfg->pfc.pfcenable = *tc_map;
@@ -11012,7 +11012,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
 	uint16_t bsf, tc_mapping;
 	int i, j = 0;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = rte_bsf32(vsi->enabled_tc + 1);
 	else
 		dcb_info->nb_tcs = 1;
@@ -11060,7 +11060,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
 				dcb_info->tc_queue.tc_rxq[j][i].nb_queue;
 		}
 		j++;
-	} while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, ETH_MAX_VMDQ_POOL));
+	} while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, RTE_ETH_MAX_VMDQ_POOL));
 	return 0;
 }
 
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 1d57b9617e66..d8042abbd9be 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -147,17 +147,17 @@ enum i40e_flxpld_layer_idx {
 		       I40E_FLAG_RSS_AQ_CAPABLE)
 
 #define I40E_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD)
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L2_PAYLOAD)
 
 /* All bits of RSS hash enable for X722*/
 #define I40E_RSS_HENA_ALL_X722 ( \
@@ -1063,7 +1063,7 @@ struct i40e_rte_flow_rss_conf {
 	uint8_t key[(I40E_VFQF_HKEY_MAX_INDEX > I40E_PFQF_HKEY_MAX_INDEX ?
 		     I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
 		    sizeof(uint32_t)];		/**< Hash key. */
-	uint16_t queue[ETH_RSS_RETA_SIZE_512];	/**< Queues indices to use. */
+	uint16_t queue[RTE_ETH_RSS_RETA_SIZE_512];	/**< Queue indices to use. */
 
 	bool symmetric_enable;		/**< true, if enable symmetric */
 	uint64_t config_pctypes;	/**< All PCTYPES with the flow  */
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index e41a84f1d737..9acaa1875105 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2015,7 +2015,7 @@ i40e_get_outer_vlan(struct rte_eth_dev *dev)
 {
 	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	int qinq = dev->data->dev_conf.rxmode.offloads &
-		DEV_RX_OFFLOAD_VLAN_EXTEND;
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	uint64_t reg_r = 0;
 	uint16_t reg_id;
 	uint16_t tpid;
@@ -3601,13 +3601,13 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
 }
 
 static uint16_t i40e_supported_tunnel_filter_types[] = {
-	ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_TENID |
-	ETH_TUNNEL_FILTER_IVLAN,
-	ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_IVLAN,
-	ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_TENID,
-	ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_TENID |
-	ETH_TUNNEL_FILTER_IMAC,
-	ETH_TUNNEL_FILTER_IMAC,
+	RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID |
+	RTE_ETH_TUNNEL_FILTER_IVLAN,
+	RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
+	RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID,
+	RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_TENID |
+	RTE_ETH_TUNNEL_FILTER_IMAC,
+	RTE_ETH_TUNNEL_FILTER_IMAC,
 };
 
 static int
@@ -3697,12 +3697,12 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 					rte_memcpy(&filter->outer_mac,
 						   &eth_spec->dst,
 						   RTE_ETHER_ADDR_LEN);
-					filter_type |= ETH_TUNNEL_FILTER_OMAC;
+					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
 						   &eth_spec->dst,
 						   RTE_ETHER_ADDR_LEN);
-					filter_type |= ETH_TUNNEL_FILTER_IMAC;
+					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
 			}
 			break;
@@ -3724,7 +3724,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 					filter->inner_vlan =
 					      rte_be_to_cpu_16(vlan_spec->tci) &
 					      I40E_VLAN_TCI_MASK;
-				filter_type |= ETH_TUNNEL_FILTER_IVLAN;
+				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
@@ -3798,7 +3798,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 					   vxlan_spec->vni, 3);
 				filter->tenant_id =
 					rte_be_to_cpu_32(tenant_id_be);
-				filter_type |= ETH_TUNNEL_FILTER_TENID;
+				filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
 			}
 
 			vxlan_flag = 1;
@@ -3927,12 +3927,12 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 					rte_memcpy(&filter->outer_mac,
 						   &eth_spec->dst,
 						   RTE_ETHER_ADDR_LEN);
-					filter_type |= ETH_TUNNEL_FILTER_OMAC;
+					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
 						   &eth_spec->dst,
 						   RTE_ETHER_ADDR_LEN);
-					filter_type |= ETH_TUNNEL_FILTER_IMAC;
+					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
 			}
 
@@ -3955,7 +3955,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 					filter->inner_vlan =
 					      rte_be_to_cpu_16(vlan_spec->tci) &
 					      I40E_VLAN_TCI_MASK;
-				filter_type |= ETH_TUNNEL_FILTER_IVLAN;
+				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
@@ -4050,7 +4050,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 					   nvgre_spec->tni, 3);
 				filter->tenant_id =
 					rte_be_to_cpu_32(tenant_id_be);
-				filter_type |= ETH_TUNNEL_FILTER_TENID;
+				filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
 			}
 
 			nvgre_flag = 1;
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 5da3d187076e..8962e9d97aa7 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -105,47 +105,47 @@ struct i40e_hash_map_rss_inset {
 
 const struct i40e_hash_map_rss_inset i40e_hash_rss_inset[] = {
 	/* IPv4 */
-	{ ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
-	{ ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+	{ RTE_ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+	{ RTE_ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
 
-	{ ETH_RSS_NONFRAG_IPV4_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	  I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
 
-	{ ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
 
 	/* IPv6 */
-	{ ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
-	{ ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+	{ RTE_ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+	{ RTE_ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
 
-	{ ETH_RSS_NONFRAG_IPV6_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	  I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
 
-	{ ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
 
 	/* Port */
-	{ ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
+	{ RTE_ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
 	/* Ether */
-	{ ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
-	{ ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
+	{ RTE_ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
+	{ RTE_ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
 
 	/* VLAN */
-	{ ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
-	{ ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
+	{ RTE_ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
+	{ RTE_ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
 };
 
 #define I40E_HASH_VOID_NEXT_ALLOW	BIT_ULL(RTE_FLOW_ITEM_TYPE_ETH)
@@ -208,30 +208,30 @@ struct i40e_hash_match_pattern {
 #define I40E_HASH_MAP_CUS_PATTERN(pattern, rss_mask, cus_pctype) { \
 	pattern, rss_mask, true, cus_pctype }
 
-#define I40E_HASH_L2_RSS_MASK		(ETH_RSS_VLAN | ETH_RSS_ETH | \
-					ETH_RSS_L2_SRC_ONLY | \
-					ETH_RSS_L2_DST_ONLY)
+#define I40E_HASH_L2_RSS_MASK		(RTE_ETH_RSS_VLAN | RTE_ETH_RSS_ETH | \
+					RTE_ETH_RSS_L2_SRC_ONLY | \
+					RTE_ETH_RSS_L2_DST_ONLY)
 
 #define I40E_HASH_L23_RSS_MASK		(I40E_HASH_L2_RSS_MASK | \
-					ETH_RSS_L3_SRC_ONLY | \
-					ETH_RSS_L3_DST_ONLY)
+					RTE_ETH_RSS_L3_SRC_ONLY | \
+					RTE_ETH_RSS_L3_DST_ONLY)
 
-#define I40E_HASH_IPV4_L23_RSS_MASK	(ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
-#define I40E_HASH_IPV6_L23_RSS_MASK	(ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV4_L23_RSS_MASK	(RTE_ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV6_L23_RSS_MASK	(RTE_ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
 
 #define I40E_HASH_L234_RSS_MASK		(I40E_HASH_L23_RSS_MASK | \
-					ETH_RSS_PORT | ETH_RSS_L4_SRC_ONLY | \
-					ETH_RSS_L4_DST_ONLY)
+					RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY | \
+					RTE_ETH_RSS_L4_DST_ONLY)
 
-#define I40E_HASH_IPV4_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV4)
-#define I40E_HASH_IPV6_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV6)
+#define I40E_HASH_IPV4_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV4)
+#define I40E_HASH_IPV6_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV6)
 
-#define I40E_HASH_L4_TYPES		(ETH_RSS_NONFRAG_IPV4_TCP | \
-					ETH_RSS_NONFRAG_IPV4_UDP | \
-					ETH_RSS_NONFRAG_IPV4_SCTP | \
-					ETH_RSS_NONFRAG_IPV6_TCP | \
-					ETH_RSS_NONFRAG_IPV6_UDP | \
-					ETH_RSS_NONFRAG_IPV6_SCTP)
+#define I40E_HASH_L4_TYPES		(RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+					RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+					RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+					RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+					RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+					RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 /* Current supported patterns and RSS types.
  * All items that have the same pattern types are together.
@@ -239,72 +239,72 @@ struct i40e_hash_match_pattern {
 static const struct i40e_hash_match_pattern match_patterns[] = {
 	/* Ether */
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_ETH,
-			      ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
+			      RTE_ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
 			      I40E_FILTER_PCTYPE_L2_PAYLOAD),
 
 	/* IPv4 */
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
-			      ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
+			      RTE_ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_FRAG_IPV4),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
-			      ETH_RSS_NONFRAG_IPV4_OTHER |
+			      RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
 			      I40E_HASH_IPV4_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_OTHER),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_TCP,
-			      ETH_RSS_NONFRAG_IPV4_TCP |
+			      RTE_ETH_RSS_NONFRAG_IPV4_TCP |
 			      I40E_HASH_IPV4_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_TCP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_UDP,
-			      ETH_RSS_NONFRAG_IPV4_UDP |
+			      RTE_ETH_RSS_NONFRAG_IPV4_UDP |
 			      I40E_HASH_IPV4_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_UDP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_SCTP,
-			      ETH_RSS_NONFRAG_IPV4_SCTP |
+			      RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
 			      I40E_HASH_IPV4_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_SCTP),
 
 	/* IPv6 */
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
-			      ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
+			      RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_FRAG_IPV6),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
-			      ETH_RSS_NONFRAG_IPV6_OTHER |
+			      RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			      I40E_HASH_IPV6_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_OTHER),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_FRAG,
-			      ETH_RSS_FRAG_IPV6 | I40E_HASH_L23_RSS_MASK,
+			      RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_FRAG_IPV6),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_TCP,
-			      ETH_RSS_NONFRAG_IPV6_TCP |
+			      RTE_ETH_RSS_NONFRAG_IPV6_TCP |
 			      I40E_HASH_IPV6_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_TCP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_UDP,
-			      ETH_RSS_NONFRAG_IPV6_UDP |
+			      RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			      I40E_HASH_IPV6_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_UDP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_SCTP,
-			      ETH_RSS_NONFRAG_IPV6_SCTP |
+			      RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
 			      I40E_HASH_IPV6_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_SCTP),
 
 	/* ESP */
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_UDP_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_UDP_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
 
 	/* GTPC */
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPC,
@@ -319,27 +319,27 @@ static const struct i40e_hash_match_pattern match_patterns[] = {
 				  I40E_HASH_IPV4_L234_RSS_MASK,
 				  I40E_CUSTOMIZED_GTPU),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV4,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV6,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU,
 				  I40E_HASH_IPV6_L234_RSS_MASK,
 				  I40E_CUSTOMIZED_GTPU),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV4,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV6,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
 
 	/* L2TPV3 */
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_L2TPV3,
-				  ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
+				  RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_L2TPV3,
-				  ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
+				  RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
 
 	/* AH */
-	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, ETH_RSS_AH,
+	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, RTE_ETH_RSS_AH,
 				  I40E_CUSTOMIZED_AH_IPV4),
-	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, ETH_RSS_AH,
+	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, RTE_ETH_RSS_AH,
 				  I40E_CUSTOMIZED_AH_IPV6),
 };
 
@@ -575,29 +575,29 @@ i40e_hash_get_inset(uint64_t rss_types)
 	/* If SRC_ONLY and DST_ONLY of the same level are used simultaneously,
 	 * it is the same case as none of them are added.
 	 */
-	mask = rss_types & (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY);
-	if (mask == ETH_RSS_L2_SRC_ONLY)
+	mask = rss_types & (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY);
+	if (mask == RTE_ETH_RSS_L2_SRC_ONLY)
 		inset &= ~I40E_INSET_DMAC;
-	else if (mask == ETH_RSS_L2_DST_ONLY)
+	else if (mask == RTE_ETH_RSS_L2_DST_ONLY)
 		inset &= ~I40E_INSET_SMAC;
 
-	mask = rss_types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
-	if (mask == ETH_RSS_L3_SRC_ONLY)
+	mask = rss_types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
+	if (mask == RTE_ETH_RSS_L3_SRC_ONLY)
 		inset &= ~(I40E_INSET_IPV4_DST | I40E_INSET_IPV6_DST);
-	else if (mask == ETH_RSS_L3_DST_ONLY)
+	else if (mask == RTE_ETH_RSS_L3_DST_ONLY)
 		inset &= ~(I40E_INSET_IPV4_SRC | I40E_INSET_IPV6_SRC);
 
-	mask = rss_types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
-	if (mask == ETH_RSS_L4_SRC_ONLY)
+	mask = rss_types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
+	if (mask == RTE_ETH_RSS_L4_SRC_ONLY)
 		inset &= ~I40E_INSET_DST_PORT;
-	else if (mask == ETH_RSS_L4_DST_ONLY)
+	else if (mask == RTE_ETH_RSS_L4_DST_ONLY)
 		inset &= ~I40E_INSET_SRC_PORT;
 
 	if (rss_types & I40E_HASH_L4_TYPES) {
 		uint64_t l3_mask = rss_types &
-				   (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+				   (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
 		uint64_t l4_mask = rss_types &
-				   (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+				   (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
 
 		if (l3_mask && !l4_mask)
 			inset &= ~(I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT);
@@ -836,7 +836,7 @@ i40e_hash_config(struct i40e_pf *pf,
 
 	/* Update lookup table */
 	if (rss_info->queue_num > 0) {
-		uint8_t lut[ETH_RSS_RETA_SIZE_512];
+		uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
 		uint32_t i, j = 0;
 
 		for (i = 0; i < hw->func_caps.rss_table_size; i++) {
@@ -943,7 +943,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
 			    "RSS key is ignored when queues specified");
 
 	pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+	if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
 		max_queue = i40e_pf_calc_configured_queues_num(pf);
 	else
 		max_queue = pf->dev_data->nb_rx_queues;
@@ -1081,22 +1081,22 @@ i40e_hash_validate_rss_types(uint64_t rss_types)
 	uint64_t type, mask;
 
 	/* Validate L2 */
-	type = ETH_RSS_ETH & rss_types;
-	mask = (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY) & rss_types;
+	type = RTE_ETH_RSS_ETH & rss_types;
+	mask = (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY) & rss_types;
 	if (!type && mask)
 		return false;
 
 	/* Validate L3 */
-	type = (I40E_HASH_L4_TYPES | ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-	       ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_IPV6 |
-	       ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
-	mask = (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY) & rss_types;
+	type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+	       RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_IPV6 |
+	       RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
+	mask = (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY) & rss_types;
 	if (!type && mask)
 		return false;
 
 	/* Validate L4 */
-	type = (I40E_HASH_L4_TYPES | ETH_RSS_PORT) & rss_types;
-	mask = (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY) & rss_types;
+	type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_PORT) & rss_types;
+	mask = (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY) & rss_types;
 	if (!type && mask)
 		return false;
 
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index e2d8b2b5f7f1..ccb3924a5f68 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -1207,24 +1207,24 @@ i40e_notify_vf_link_status(struct rte_eth_dev *dev, struct i40e_pf_vf *vf)
 	event.event_data.link_event.link_status =
 		dev->data->dev_link.link_status;
 
-	/* need to convert the ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
+	/* need to convert the RTE_ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
 	switch (dev->data->dev_link.link_speed) {
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_100MB;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_1GB;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_10GB;
 		break;
-	case ETH_SPEED_NUM_20G:
+	case RTE_ETH_SPEED_NUM_20G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_20GB;
 		break;
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_25G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_25GB;
 		break;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_40GB;
 		break;
 	default:
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 554b1142c136..a13bb81115f4 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1329,7 +1329,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 	for (i = 0; i < tx_rs_thresh; i++)
 		rte_prefetch0((txep + i)->mbuf);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
 		if (k) {
 			for (j = 0; j != k; j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
 				for (i = 0; i < RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
@@ -1995,7 +1995,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->queue_id = queue_idx;
 	rxq->reg_idx = reg_idx;
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -2243,7 +2243,7 @@ i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
 	}
 	/* check simple tx conflict */
 	if (ad->tx_simple_allowed) {
-		if ((txq->offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
+		if ((txq->offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
 				txq->tx_rs_thresh < RTE_PMD_I40E_TX_MAX_BURST) {
 			PMD_DRV_LOG(ERR, "No-simple tx is required.");
 			return -EINVAL;
@@ -3417,7 +3417,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
 	/* Use a simple Tx queue if possible (only fast free is allowed) */
 	ad->tx_simple_allowed =
 		(txq->offloads ==
-		 (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+		 (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
 		 txq->tx_rs_thresh >= RTE_PMD_I40E_TX_MAX_BURST);
 	ad->tx_vec_allowed = (ad->tx_simple_allowed &&
 			txq->tx_rs_thresh <= RTE_I40E_TX_MAX_FREE_BUF_SZ);
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 2301e6301d7d..5e6eecc50116 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -120,7 +120,7 @@ struct i40e_rx_queue {
 	bool rx_deferred_start; /**< don't start this queue in dev start */
 	uint16_t rx_using_sse; /**<flag indicate the usage of vPMD for rx */
 	uint8_t dcb_tc;         /**< Traffic class of rx queue */
-	uint64_t offloads; /**< Rx offload flags of DEV_RX_OFFLOAD_* */
+	uint64_t offloads; /**< Rx offload flags of RTE_ETH_RX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
@@ -166,7 +166,7 @@ struct i40e_tx_queue {
 	bool q_set; /**< indicate if tx queue has been configured */
 	bool tx_deferred_start; /**< don't start this queue in dev start */
 	uint8_t dcb_tc;         /**< Traffic class of tx queue */
-	uint64_t offloads; /**< Tx offload flags of DEV_RX_OFFLOAD_* */
+	uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 4ffe030fcb64..7abc0821d119 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -900,7 +900,7 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->tx_next_dd - (n - 1);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		void **cache_objs;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index f52e3c567558..f9a7f4655050 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -100,7 +100,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 	  */
 	txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
 		for (i = 0; i < n; i++) {
 			free[i] = txep[i].mbuf;
 			txep[i].mbuf = NULL;
@@ -211,7 +211,7 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
 	struct i40e_adapter *ad =
 		I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 	struct i40e_rx_queue *rxq;
 	uint16_t desc, i;
 	bool first_queue;
@@ -221,11 +221,11 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
 		return -1;
 
 	 /* no header split support */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
 		return -1;
 
 	/* no QinQ support */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 		return -1;
 
 	/**
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 12d5a2e48a9b..663c46b91dc5 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -42,30 +42,30 @@ i40e_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX;
 	dev_info->hash_key_size = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
 		sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_64;
 	dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
 	dev_info->max_mac_addrs = I40E_NUM_MACADDR_MAX;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_FILTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_MULTI_SEGS  |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -385,19 +385,19 @@ i40e_vf_representor_vlan_offload_set(struct rte_eth_dev *ethdev, int mask)
 		return -EINVAL;
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* Enable or disable VLAN filtering offload */
 		if (ethdev->data->dev_conf.rxmode.offloads &
-		    DEV_RX_OFFLOAD_VLAN_FILTER)
+		    RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			return i40e_vsi_config_vlan_filter(vsi, TRUE);
 		else
 			return i40e_vsi_config_vlan_filter(vsi, FALSE);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping offload */
 		if (ethdev->data->dev_conf.rxmode.offloads &
-		    DEV_RX_OFFLOAD_VLAN_STRIP)
+		    RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			return i40e_vsi_config_vlan_stripping(vsi, TRUE);
 		else
 			return i40e_vsi_config_vlan_stripping(vsi, FALSE);
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 34bfa9af4734..12f541f53926 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -50,18 +50,18 @@
 	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
 
 #define IAVF_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 |         \
-	ETH_RSS_NONFRAG_IPV4_TCP |  \
-	ETH_RSS_NONFRAG_IPV4_UDP |  \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 |         \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP |  \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP |  \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
 
 #define IAVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
 #define IAVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 611f1f7722b0..df44df772e4e 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -266,53 +266,53 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
 	static const uint64_t map_hena_rss[] = {
 		/* IPv4 */
 		[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP] =
-				ETH_RSS_NONFRAG_IPV4_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP] =
-				ETH_RSS_NONFRAG_IPV4_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_UDP] =
-				ETH_RSS_NONFRAG_IPV4_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK] =
-				ETH_RSS_NONFRAG_IPV4_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP] =
-				ETH_RSS_NONFRAG_IPV4_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_SCTP] =
-				ETH_RSS_NONFRAG_IPV4_SCTP,
+				RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_OTHER] =
-				ETH_RSS_NONFRAG_IPV4_OTHER,
-		[IAVF_FILTER_PCTYPE_FRAG_IPV4] = ETH_RSS_FRAG_IPV4,
+				RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+		[IAVF_FILTER_PCTYPE_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
 
 		/* IPv6 */
 		[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP] =
-				ETH_RSS_NONFRAG_IPV6_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP] =
-				ETH_RSS_NONFRAG_IPV6_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_UDP] =
-				ETH_RSS_NONFRAG_IPV6_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK] =
-				ETH_RSS_NONFRAG_IPV6_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP] =
-				ETH_RSS_NONFRAG_IPV6_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_SCTP] =
-				ETH_RSS_NONFRAG_IPV6_SCTP,
+				RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_OTHER] =
-				ETH_RSS_NONFRAG_IPV6_OTHER,
-		[IAVF_FILTER_PCTYPE_FRAG_IPV6] = ETH_RSS_FRAG_IPV6,
+				RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+		[IAVF_FILTER_PCTYPE_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
 
 		/* L2 Payload */
-		[IAVF_FILTER_PCTYPE_L2_PAYLOAD] = ETH_RSS_L2_PAYLOAD
+		[IAVF_FILTER_PCTYPE_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
 	};
 
-	const uint64_t ipv4_rss = ETH_RSS_NONFRAG_IPV4_UDP |
-				  ETH_RSS_NONFRAG_IPV4_TCP |
-				  ETH_RSS_NONFRAG_IPV4_SCTP |
-				  ETH_RSS_NONFRAG_IPV4_OTHER |
-				  ETH_RSS_FRAG_IPV4;
+	const uint64_t ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+				  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+				  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+				  RTE_ETH_RSS_FRAG_IPV4;
 
-	const uint64_t ipv6_rss = ETH_RSS_NONFRAG_IPV6_UDP |
-				  ETH_RSS_NONFRAG_IPV6_TCP |
-				  ETH_RSS_NONFRAG_IPV6_SCTP |
-				  ETH_RSS_NONFRAG_IPV6_OTHER |
-				  ETH_RSS_FRAG_IPV6;
+	const uint64_t ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+				  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+				  RTE_ETH_RSS_FRAG_IPV6;
 
 	struct iavf_info *vf =  IAVF_DEV_PRIVATE_TO_VF(adapter);
 	uint64_t caps = 0, hena = 0, valid_rss_hf = 0;
@@ -331,13 +331,13 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
 	}
 
 	/**
-	 * ETH_RSS_IPV4 and ETH_RSS_IPV6 can be considered as 2
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
 	 * generalizations of all other IPv4 and IPv6 RSS types.
 	 */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		rss_hf |= ipv4_rss;
 
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		rss_hf |= ipv6_rss;
 
 	RTE_BUILD_BUG_ON(RTE_DIM(map_hena_rss) > sizeof(uint64_t) * CHAR_BIT);
@@ -363,10 +363,10 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
 	}
 
 	if (valid_rss_hf & ipv4_rss)
-		valid_rss_hf |= rss_hf & ETH_RSS_IPV4;
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
 
 	if (valid_rss_hf & ipv6_rss)
-		valid_rss_hf |= rss_hf & ETH_RSS_IPV6;
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
 
 	if (rss_hf & ~valid_rss_hf)
 		PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
@@ -467,7 +467,7 @@ iavf_dev_vlan_insert_set(struct rte_eth_dev *dev)
 		return 0;
 
 	enable = !!(dev->data->dev_conf.txmode.offloads &
-		    DEV_TX_OFFLOAD_VLAN_INSERT);
+		    RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
 	iavf_config_vlan_insert_v2(adapter, enable);
 
 	return 0;
@@ -479,10 +479,10 @@ iavf_dev_init_vlan(struct rte_eth_dev *dev)
 	int err;
 
 	err = iavf_dev_vlan_offload_set(dev,
-					ETH_VLAN_STRIP_MASK |
-					ETH_QINQ_STRIP_MASK |
-					ETH_VLAN_FILTER_MASK |
-					ETH_VLAN_EXTEND_MASK);
+					RTE_ETH_VLAN_STRIP_MASK |
+					RTE_ETH_QINQ_STRIP_MASK |
+					RTE_ETH_VLAN_FILTER_MASK |
+					RTE_ETH_VLAN_EXTEND_MASK);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Failed to update vlan offload");
 		return err;
@@ -512,8 +512,8 @@ iavf_dev_configure(struct rte_eth_dev *dev)
 	ad->rx_vec_allowed = true;
 	ad->tx_vec_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Large VF setting */
 	if (num_queue_pairs > IAVF_MAX_NUM_QUEUES_DFLT) {
@@ -611,7 +611,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
 	}
 
 	rxq->max_pkt_len = max_pkt_len;
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 	    rxq->max_pkt_len > buf_size) {
 		dev_data->scattered_rx = 1;
 	}
@@ -961,34 +961,34 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->flow_type_rss_offloads = IAVF_RSS_OFFLOAD_ALL;
 	dev_info->max_mac_addrs = IAVF_NUM_MACADDR_MAX;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_CRC)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_KEEP_CRC;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_free_thresh = IAVF_DEFAULT_RX_FREE_THRESH,
@@ -1048,42 +1048,42 @@ iavf_dev_link_update(struct rte_eth_dev *dev,
 	 */
 	switch (vf->link_speed) {
 	case 10:
-		new_link.link_speed = ETH_SPEED_NUM_10M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case 100:
-		new_link.link_speed = ETH_SPEED_NUM_100M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case 1000:
-		new_link.link_speed = ETH_SPEED_NUM_1G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case 10000:
-		new_link.link_speed = ETH_SPEED_NUM_10G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case 20000:
-		new_link.link_speed = ETH_SPEED_NUM_20G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case 25000:
-		new_link.link_speed = ETH_SPEED_NUM_25G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case 40000:
-		new_link.link_speed = ETH_SPEED_NUM_40G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case 50000:
-		new_link.link_speed = ETH_SPEED_NUM_50G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case 100000:
-		new_link.link_speed = ETH_SPEED_NUM_100G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	default:
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	new_link.link_status = vf->link_up ? ETH_LINK_UP :
-					     ETH_LINK_DOWN;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vf->link_up ? RTE_ETH_LINK_UP :
+					     RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(dev, &new_link);
 }
@@ -1231,14 +1231,14 @@ iavf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask)
 	bool enable;
 	int err;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
 
 		iavf_iterate_vlan_filters_v2(dev, enable);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 		err = iavf_config_vlan_strip_v2(adapter, enable);
 		/* If not support, the stripping is already disabled by PF */
@@ -1267,9 +1267,9 @@ iavf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		return -ENOTSUP;
 
 	/* Vlan stripping setting */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			err = iavf_enable_vlan_strip(adapter);
 		else
 			err = iavf_disable_vlan_strip(adapter);
@@ -1311,8 +1311,8 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
 	rte_memcpy(lut, vf->rss_lut, reta_size);
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			lut[i] = reta_conf[idx].reta[shift];
 	}
@@ -1348,8 +1348,8 @@ iavf_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = vf->rss_lut[i];
 	}
@@ -1556,7 +1556,7 @@ iavf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	ret = iavf_query_stats(adapter, &pstats);
 	if (ret == 0) {
 		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
-					 DEV_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
 					 RTE_ETHER_CRC_LEN;
 		iavf_update_stats(vsi, pstats);
 		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index 01724cd569dd..55d8a11da388 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -395,90 +395,90 @@ struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tcp_tmplt = {
 /* rss type super set */
 
 /* IPv4 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV4	(ETH_RSS_ETH | ETH_RSS_IPV4 | \
-					 ETH_RSS_FRAG_IPV4 | \
-					 ETH_RSS_IPV4_CHKSUM)
+#define IAVF_RSS_TYPE_OUTER_IPV4	(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_FRAG_IPV4 | \
+					 RTE_ETH_RSS_IPV4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV4_UDP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV4_TCP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV4_SCTP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 /* IPv6 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV6	(ETH_RSS_ETH | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_OUTER_IPV6	(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
 #define IAVF_RSS_TYPE_OUTER_IPV6_FRAG	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_FRAG_IPV6)
+					 RTE_ETH_RSS_FRAG_IPV6)
 #define IAVF_RSS_TYPE_OUTER_IPV6_UDP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV6_TCP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV6_SCTP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 /* VLAN IPV4 */
 #define IAVF_RSS_TYPE_VLAN_IPV4		(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV4_UDP	(IAVF_RSS_TYPE_OUTER_IPV4_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV4_TCP	(IAVF_RSS_TYPE_OUTER_IPV4_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV4_SCTP	(IAVF_RSS_TYPE_OUTER_IPV4_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 /* VLAN IPv6 */
 #define IAVF_RSS_TYPE_VLAN_IPV6		(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_FRAG	(IAVF_RSS_TYPE_OUTER_IPV6_FRAG | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_UDP	(IAVF_RSS_TYPE_OUTER_IPV6_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_TCP	(IAVF_RSS_TYPE_OUTER_IPV6_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_SCTP	(IAVF_RSS_TYPE_OUTER_IPV6_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 /* IPv4 inner */
-#define IAVF_RSS_TYPE_INNER_IPV4	ETH_RSS_IPV4
-#define IAVF_RSS_TYPE_INNER_IPV4_UDP	(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV4_TCP	(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV4_SCTP	(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV4	RTE_ETH_RSS_IPV4
+#define IAVF_RSS_TYPE_INNER_IPV4_UDP	(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV4_TCP	(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV4_SCTP	(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 /* IPv6 inner */
-#define IAVF_RSS_TYPE_INNER_IPV6	ETH_RSS_IPV6
-#define IAVF_RSS_TYPE_INNER_IPV6_UDP	(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV6_TCP	(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV6_SCTP	(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV6	RTE_ETH_RSS_IPV6
+#define IAVF_RSS_TYPE_INNER_IPV6_UDP	(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV6_TCP	(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV6_SCTP	(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 /* GTPU IPv4 */
 #define IAVF_RSS_TYPE_GTPU_IPV4		(IAVF_RSS_TYPE_INNER_IPV4 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV4_UDP	(IAVF_RSS_TYPE_INNER_IPV4_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV4_TCP	(IAVF_RSS_TYPE_INNER_IPV4_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 /* GTPU IPv6 */
 #define IAVF_RSS_TYPE_GTPU_IPV6		(IAVF_RSS_TYPE_INNER_IPV6 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV6_UDP	(IAVF_RSS_TYPE_INNER_IPV6_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV6_TCP	(IAVF_RSS_TYPE_INNER_IPV6_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 /* ESP, AH, L2TPV3 and PFCP */
-#define IAVF_RSS_TYPE_IPV4_ESP		(ETH_RSS_ESP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV4_AH		(ETH_RSS_AH | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_ESP		(ETH_RSS_ESP | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV6_AH		(ETH_RSS_AH | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV4_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV6_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
 
 /**
  * Supported pattern for hash.
@@ -496,7 +496,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_vlan_ipv4_udp,		IAVF_RSS_TYPE_VLAN_IPV4_UDP,	&outer_ipv4_udp_tmplt},
 	{iavf_pattern_eth_vlan_ipv4_tcp,		IAVF_RSS_TYPE_VLAN_IPV4_TCP,	&outer_ipv4_tcp_tmplt},
 	{iavf_pattern_eth_vlan_ipv4_sctp,		IAVF_RSS_TYPE_VLAN_IPV4_SCTP,	&outer_ipv4_sctp_tmplt},
-	{iavf_pattern_eth_ipv4_gtpu,			ETH_RSS_IPV4,			&outer_ipv4_udp_tmplt},
+	{iavf_pattern_eth_ipv4_gtpu,			RTE_ETH_RSS_IPV4,			&outer_ipv4_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv4,		IAVF_RSS_TYPE_GTPU_IPV4,	&inner_ipv4_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv4_udp,		IAVF_RSS_TYPE_GTPU_IPV4_UDP,	&inner_ipv4_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv4_tcp,		IAVF_RSS_TYPE_GTPU_IPV4_TCP,	&inner_ipv4_tcp_tmplt},
@@ -538,9 +538,9 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_ipv4_ah,			IAVF_RSS_TYPE_IPV4_AH,		&ipv4_ah_tmplt},
 	{iavf_pattern_eth_ipv4_l2tpv3,			IAVF_RSS_TYPE_IPV4_L2TPV3,	&ipv4_l2tpv3_tmplt},
 	{iavf_pattern_eth_ipv4_pfcp,			IAVF_RSS_TYPE_IPV4_PFCP,	&ipv4_pfcp_tmplt},
-	{iavf_pattern_eth_ipv4_gtpc,			ETH_RSS_IPV4,			&ipv4_udp_gtpc_tmplt},
-	{iavf_pattern_eth_ecpri,			ETH_RSS_ECPRI,			&eth_ecpri_tmplt},
-	{iavf_pattern_eth_ipv4_ecpri,			ETH_RSS_ECPRI,			&ipv4_ecpri_tmplt},
+	{iavf_pattern_eth_ipv4_gtpc,			RTE_ETH_RSS_IPV4,			&ipv4_udp_gtpc_tmplt},
+	{iavf_pattern_eth_ecpri,			RTE_ETH_RSS_ECPRI,			&eth_ecpri_tmplt},
+	{iavf_pattern_eth_ipv4_ecpri,			RTE_ETH_RSS_ECPRI,			&ipv4_ecpri_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv4,		IAVF_RSS_TYPE_INNER_IPV4,	&inner_ipv4_tmplt},
 	{iavf_pattern_eth_ipv6_gre_ipv4,		IAVF_RSS_TYPE_INNER_IPV4, &inner_ipv4_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv4_tcp,	IAVF_RSS_TYPE_INNER_IPV4_TCP, &inner_ipv4_tcp_tmplt},
@@ -565,7 +565,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_vlan_ipv6_udp,		IAVF_RSS_TYPE_VLAN_IPV6_UDP,	&outer_ipv6_udp_tmplt},
 	{iavf_pattern_eth_vlan_ipv6_tcp,		IAVF_RSS_TYPE_VLAN_IPV6_TCP,	&outer_ipv6_tcp_tmplt},
 	{iavf_pattern_eth_vlan_ipv6_sctp,		IAVF_RSS_TYPE_VLAN_IPV6_SCTP,	&outer_ipv6_sctp_tmplt},
-	{iavf_pattern_eth_ipv6_gtpu,			ETH_RSS_IPV6,			&outer_ipv6_udp_tmplt},
+	{iavf_pattern_eth_ipv6_gtpu,			RTE_ETH_RSS_IPV6,			&outer_ipv6_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv6,		IAVF_RSS_TYPE_GTPU_IPV6,	&inner_ipv6_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv6_udp,		IAVF_RSS_TYPE_GTPU_IPV6_UDP,	&inner_ipv6_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv6_tcp,		IAVF_RSS_TYPE_GTPU_IPV6_TCP,	&inner_ipv6_tcp_tmplt},
@@ -607,7 +607,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_ipv6_ah,			IAVF_RSS_TYPE_IPV6_AH,		&ipv6_ah_tmplt},
 	{iavf_pattern_eth_ipv6_l2tpv3,			IAVF_RSS_TYPE_IPV6_L2TPV3,	&ipv6_l2tpv3_tmplt},
 	{iavf_pattern_eth_ipv6_pfcp,			IAVF_RSS_TYPE_IPV6_PFCP,	&ipv6_pfcp_tmplt},
-	{iavf_pattern_eth_ipv6_gtpc,			ETH_RSS_IPV6,			&ipv6_udp_gtpc_tmplt},
+	{iavf_pattern_eth_ipv6_gtpc,			RTE_ETH_RSS_IPV6,			&ipv6_udp_gtpc_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv6,		IAVF_RSS_TYPE_INNER_IPV6,	&inner_ipv6_tmplt},
 	{iavf_pattern_eth_ipv6_gre_ipv6,		IAVF_RSS_TYPE_INNER_IPV6, &inner_ipv6_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv6_tcp,	IAVF_RSS_TYPE_INNER_IPV6_TCP, &inner_ipv6_tcp_tmplt},
@@ -648,52 +648,52 @@ iavf_rss_hash_set(struct iavf_adapter *ad, uint64_t rss_hf, bool add)
 	struct virtchnl_rss_cfg rss_cfg;
 
 #define IAVF_RSS_HF_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 	rss_cfg.rss_algorithm = VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC;
-	if (rss_hf & ETH_RSS_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_IPV4) {
 		rss_cfg.proto_hdrs = inner_ipv4_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		rss_cfg.proto_hdrs = inner_ipv4_udp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		rss_cfg.proto_hdrs = inner_ipv4_tcp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
 		rss_cfg.proto_hdrs = inner_ipv4_sctp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_IPV6) {
 		rss_cfg.proto_hdrs = inner_ipv6_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
 		rss_cfg.proto_hdrs = inner_ipv6_udp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 		rss_cfg.proto_hdrs = inner_ipv6_tcp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
 		rss_cfg.proto_hdrs = inner_ipv6_sctp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
@@ -855,28 +855,28 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 		hdr = &proto_hdrs->proto_hdr[i];
 		switch (hdr->type) {
 		case VIRTCHNL_PROTO_HDR_ETH:
-			if (!(rss_type & ETH_RSS_ETH))
+			if (!(rss_type & RTE_ETH_RSS_ETH))
 				hdr->field_selector = 0;
-			else if (rss_type & ETH_RSS_L2_SRC_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
 				REFINE_PROTO_FLD(DEL, ETH_DST);
-			else if (rss_type & ETH_RSS_L2_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
 				REFINE_PROTO_FLD(DEL, ETH_SRC);
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV4:
 			if (rss_type &
-			    (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			     ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV4_SCTP)) {
-				if (rss_type & ETH_RSS_FRAG_IPV4) {
+			    (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			     RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
 					iavf_hash_add_fragment_hdr(proto_hdrs, i + 1);
-				} else if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+				} else if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV4_DST);
-				} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+				} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV4_SRC);
 				} else if (rss_type &
-					   (ETH_RSS_L4_SRC_ONLY |
-					    ETH_RSS_L4_DST_ONLY)) {
+					   (RTE_ETH_RSS_L4_SRC_ONLY |
+					    RTE_ETH_RSS_L4_DST_ONLY)) {
 					REFINE_PROTO_FLD(DEL, IPV4_DST);
 					REFINE_PROTO_FLD(DEL, IPV4_SRC);
 				}
@@ -884,39 +884,39 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_IPV4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, IPV4_CHKSUM);
 
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV4_FRAG:
 			if (rss_type &
-			    (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			     ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV4_SCTP)) {
-				if (rss_type & ETH_RSS_FRAG_IPV4)
+			    (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			     RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_FRAG_IPV4)
 					REFINE_PROTO_FLD(ADD, IPV4_FRAG_PKID);
 			} else {
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_IPV4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, IPV4_CHKSUM);
 
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV6:
 			if (rss_type &
-			    (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-			     ETH_RSS_NONFRAG_IPV6_UDP |
-			     ETH_RSS_NONFRAG_IPV6_TCP |
-			     ETH_RSS_NONFRAG_IPV6_SCTP)) {
-				if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			    (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+			     RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV6_DST);
-				} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+				} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV6_SRC);
 				} else if (rss_type &
-					   (ETH_RSS_L4_SRC_ONLY |
-					    ETH_RSS_L4_DST_ONLY)) {
+					   (RTE_ETH_RSS_L4_SRC_ONLY |
+					    RTE_ETH_RSS_L4_DST_ONLY)) {
 					REFINE_PROTO_FLD(DEL, IPV6_DST);
 					REFINE_PROTO_FLD(DEL, IPV6_SRC);
 				}
@@ -933,7 +933,7 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			}
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG:
-			if (rss_type & ETH_RSS_FRAG_IPV6)
+			if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
 				REFINE_PROTO_FLD(ADD, IPV6_EH_FRAG_PKID);
 			else
 				hdr->field_selector = 0;
@@ -941,87 +941,87 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			break;
 		case VIRTCHNL_PROTO_HDR_UDP:
 			if (rss_type &
-			    (ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV6_UDP)) {
-				if (rss_type & ETH_RSS_L4_SRC_ONLY)
+			    (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+				if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 					REFINE_PROTO_FLD(DEL, UDP_DST_PORT);
-				else if (rss_type & ETH_RSS_L4_DST_ONLY)
+				else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 					REFINE_PROTO_FLD(DEL, UDP_SRC_PORT);
 				else if (rss_type &
-					 (ETH_RSS_L3_SRC_ONLY |
-					  ETH_RSS_L3_DST_ONLY))
+					 (RTE_ETH_RSS_L3_SRC_ONLY |
+					  RTE_ETH_RSS_L3_DST_ONLY))
 					hdr->field_selector = 0;
 			} else {
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_L4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, UDP_CHKSUM);
 			break;
 		case VIRTCHNL_PROTO_HDR_TCP:
 			if (rss_type &
-			    (ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV6_TCP)) {
-				if (rss_type & ETH_RSS_L4_SRC_ONLY)
+			    (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+				if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 					REFINE_PROTO_FLD(DEL, TCP_DST_PORT);
-				else if (rss_type & ETH_RSS_L4_DST_ONLY)
+				else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 					REFINE_PROTO_FLD(DEL, TCP_SRC_PORT);
 				else if (rss_type &
-					 (ETH_RSS_L3_SRC_ONLY |
-					  ETH_RSS_L3_DST_ONLY))
+					 (RTE_ETH_RSS_L3_SRC_ONLY |
+					  RTE_ETH_RSS_L3_DST_ONLY))
 					hdr->field_selector = 0;
 			} else {
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_L4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, TCP_CHKSUM);
 			break;
 		case VIRTCHNL_PROTO_HDR_SCTP:
 			if (rss_type &
-			    (ETH_RSS_NONFRAG_IPV4_SCTP |
-			     ETH_RSS_NONFRAG_IPV6_SCTP)) {
-				if (rss_type & ETH_RSS_L4_SRC_ONLY)
+			    (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 					REFINE_PROTO_FLD(DEL, SCTP_DST_PORT);
-				else if (rss_type & ETH_RSS_L4_DST_ONLY)
+				else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 					REFINE_PROTO_FLD(DEL, SCTP_SRC_PORT);
 				else if (rss_type &
-					 (ETH_RSS_L3_SRC_ONLY |
-					  ETH_RSS_L3_DST_ONLY))
+					 (RTE_ETH_RSS_L3_SRC_ONLY |
+					  RTE_ETH_RSS_L3_DST_ONLY))
 					hdr->field_selector = 0;
 			} else {
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_L4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, SCTP_CHKSUM);
 			break;
 		case VIRTCHNL_PROTO_HDR_S_VLAN:
-			if (!(rss_type & ETH_RSS_S_VLAN))
+			if (!(rss_type & RTE_ETH_RSS_S_VLAN))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_C_VLAN:
-			if (!(rss_type & ETH_RSS_C_VLAN))
+			if (!(rss_type & RTE_ETH_RSS_C_VLAN))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_L2TPV3:
-			if (!(rss_type & ETH_RSS_L2TPV3))
+			if (!(rss_type & RTE_ETH_RSS_L2TPV3))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_ESP:
-			if (!(rss_type & ETH_RSS_ESP))
+			if (!(rss_type & RTE_ETH_RSS_ESP))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_AH:
-			if (!(rss_type & ETH_RSS_AH))
+			if (!(rss_type & RTE_ETH_RSS_AH))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_PFCP:
-			if (!(rss_type & ETH_RSS_PFCP))
+			if (!(rss_type & RTE_ETH_RSS_PFCP))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_ECPRI:
-			if (!(rss_type & ETH_RSS_ECPRI))
+			if (!(rss_type & RTE_ETH_RSS_ECPRI))
 				hdr->field_selector = 0;
 			break;
 		default:
@@ -1038,7 +1038,7 @@ iavf_refine_proto_hdrs_gtpu(struct virtchnl_proto_hdrs *proto_hdrs,
 	struct virtchnl_proto_hdr *hdr;
 	int i;
 
-	if (!(rss_type & ETH_RSS_GTPU))
+	if (!(rss_type & RTE_ETH_RSS_GTPU))
 		return;
 
 	for (i = 0; i < proto_hdrs->count; i++) {
@@ -1163,10 +1163,10 @@ static void iavf_refine_proto_hdrs(struct virtchnl_proto_hdrs *proto_hdrs,
 }
 
 static uint64_t invalid_rss_comb[] = {
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	RTE_ETH_RSS_L3_PRE32 | RTE_ETH_RSS_L3_PRE40 |
 	RTE_ETH_RSS_L3_PRE48 | RTE_ETH_RSS_L3_PRE56 |
 	RTE_ETH_RSS_L3_PRE96
@@ -1177,27 +1177,27 @@ struct rss_attr_type {
 	uint64_t type;
 };
 
-#define VALID_RSS_IPV4_L4	(ETH_RSS_NONFRAG_IPV4_UDP	| \
-				 ETH_RSS_NONFRAG_IPV4_TCP	| \
-				 ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4	(RTE_ETH_RSS_NONFRAG_IPV4_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
-#define VALID_RSS_IPV6_L4	(ETH_RSS_NONFRAG_IPV6_UDP	| \
-				 ETH_RSS_NONFRAG_IPV6_TCP	| \
-				 ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4	(RTE_ETH_RSS_NONFRAG_IPV6_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
-#define VALID_RSS_IPV4		(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4		(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
 				 VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6		(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6		(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
 				 VALID_RSS_IPV6_L4)
 #define VALID_RSS_L3		(VALID_RSS_IPV4 | VALID_RSS_IPV6)
 #define VALID_RSS_L4		(VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
 
-#define VALID_RSS_ATTR		(ETH_RSS_L3_SRC_ONLY	| \
-				 ETH_RSS_L3_DST_ONLY	| \
-				 ETH_RSS_L4_SRC_ONLY	| \
-				 ETH_RSS_L4_DST_ONLY	| \
-				 ETH_RSS_L2_SRC_ONLY	| \
-				 ETH_RSS_L2_DST_ONLY	| \
+#define VALID_RSS_ATTR		(RTE_ETH_RSS_L3_SRC_ONLY	| \
+				 RTE_ETH_RSS_L3_DST_ONLY	| \
+				 RTE_ETH_RSS_L4_SRC_ONLY	| \
+				 RTE_ETH_RSS_L4_DST_ONLY	| \
+				 RTE_ETH_RSS_L2_SRC_ONLY	| \
+				 RTE_ETH_RSS_L2_DST_ONLY	| \
 				 RTE_ETH_RSS_L3_PRE64)
 
 #define INVALID_RSS_ATTR	(RTE_ETH_RSS_L3_PRE32	| \
@@ -1207,9 +1207,9 @@ struct rss_attr_type {
 				 RTE_ETH_RSS_L3_PRE96)
 
 static struct rss_attr_type rss_attr_to_valid_type[] = {
-	{ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY,	ETH_RSS_ETH},
-	{ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
-	{ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
+	{RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY,	RTE_ETH_RSS_ETH},
+	{RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
+	{RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
 	/* current ipv6 prefix only supports prefix 64 bits*/
 	{RTE_ETH_RSS_L3_PRE64,				VALID_RSS_IPV6},
 	{INVALID_RSS_ATTR,				0}
@@ -1226,15 +1226,15 @@ iavf_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
 	 * hash function.
 	 */
 	if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
-		if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
-		    ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+		if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+		    RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
 			return true;
 
 		if (!(rss_type &
-		   (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
-		    ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
-		    ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
-		    ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+		   (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+		    RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
 			return true;
 	}
 
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 88bbd40c1027..ac4db117f5cd 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -617,7 +617,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rxq->vsi = vsi;
 	rxq->offloads = offloads;
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index f4ae2fd6e123..2d7f6b1b2dca 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -24,22 +24,22 @@
 #define IAVF_VPMD_TX_MAX_FREE_BUF 64
 
 #define IAVF_TX_NO_VECTOR_FLAGS (				 \
-		DEV_TX_OFFLOAD_MULTI_SEGS |		 \
-		DEV_TX_OFFLOAD_TCP_TSO)
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |		 \
+		RTE_ETH_TX_OFFLOAD_TCP_TSO)
 
 #define IAVF_TX_VECTOR_OFFLOAD (				 \
-		DEV_TX_OFFLOAD_VLAN_INSERT |		 \
-		DEV_TX_OFFLOAD_QINQ_INSERT |		 \
-		DEV_TX_OFFLOAD_IPV4_CKSUM |		 \
-		DEV_TX_OFFLOAD_SCTP_CKSUM |		 \
-		DEV_TX_OFFLOAD_UDP_CKSUM |		 \
-		DEV_TX_OFFLOAD_TCP_CKSUM)
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |		 \
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |		 \
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |		 \
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |		 \
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |		 \
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 
 #define IAVF_RX_VECTOR_OFFLOAD (				 \
-		DEV_RX_OFFLOAD_CHECKSUM |		 \
-		DEV_RX_OFFLOAD_SCTP_CKSUM |		 \
-		DEV_RX_OFFLOAD_VLAN |		 \
-		DEV_RX_OFFLOAD_RSS_HASH)
+		RTE_ETH_RX_OFFLOAD_CHECKSUM |		 \
+		RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |		 \
+		RTE_ETH_RX_OFFLOAD_VLAN |		 \
+		RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define IAVF_VECTOR_PATH 0
 #define IAVF_VECTOR_OFFLOAD_PATH 1
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 72a4fcab04a5..b47c51b8ebe4 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -906,7 +906,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 		 * needs to load 2nd 16B of each desc for RSS hash parsing,
 		 * will cause performance drop to get into this context.
 		 */
-		if (offloads & DEV_RX_OFFLOAD_RSS_HASH ||
+		if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH ||
 		    rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh7 =
@@ -958,7 +958,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 					(_mm256_castsi128_si256(raw_desc_bh0),
 					raw_desc_bh1, 1);
 
-			if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+			if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 				/**
 				 * to shift the 32b RSS hash value to the
 				 * highest 32b of each 128b before mask
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 12375d3d80bd..b8f2f69f12fc 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1141,7 +1141,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 			 * needs to load 2nd 16B of each desc for RSS hash parsing,
 			 * will cause performance drop to get into this context.
 			 */
-			if (offloads & DEV_RX_OFFLOAD_RSS_HASH ||
+			if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH ||
 			    rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
@@ -1193,7 +1193,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 						(_mm256_castsi128_si256(raw_desc_bh0),
 						 raw_desc_bh1, 1);
 
-				if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+				if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 					/**
 					 * to shift the 32b RSS hash value to the
 					 * highest 32b of each 128b before mask
@@ -1721,7 +1721,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->next_dd - (n - 1);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
 								rte_lcore_id());
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index edb54991e298..1de43b9b8ee2 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -819,7 +819,7 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
 		 * needs to load 2nd 16B of each desc for RSS hash parsing,
 		 * will cause performance drop to get into this context.
 		 */
-		if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh3 =
 				_mm_load_si128
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index c9c01a14e349..7b7df5eebb6d 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -835,7 +835,7 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
 		PMD_DRV_LOG(DEBUG, "RSS is not supported");
 		return -ENOTSUP;
 	}
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
 		/* set all lut items to default queue */
 		memset(hw->rss_lut, 0, hw->vf_res->rss_lut_size);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index ebd8ca57ef5f..1cda2db00e56 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -95,7 +95,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
 	}
 
 	rxq->max_pkt_len = max_pkt_len;
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 	    (rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size) {
 		dev_data->scattered_rx = 1;
 	}
@@ -582,7 +582,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -644,7 +644,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
 	}
 
 	ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	ad->pf.adapter_stopped = 1;
 	hw->tm_conf.committed = false;
 
@@ -660,8 +660,8 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
 	ad->rx_bulk_alloc_allowed = true;
 	ad->tx_simple_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	return 0;
 }
@@ -683,27 +683,27 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -933,42 +933,42 @@ ice_dcf_link_update(struct rte_eth_dev *dev,
 	 */
 	switch (hw->link_speed) {
 	case 10:
-		new_link.link_speed = ETH_SPEED_NUM_10M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case 100:
-		new_link.link_speed = ETH_SPEED_NUM_100M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case 1000:
-		new_link.link_speed = ETH_SPEED_NUM_1G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case 10000:
-		new_link.link_speed = ETH_SPEED_NUM_10G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case 20000:
-		new_link.link_speed = ETH_SPEED_NUM_20G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case 25000:
-		new_link.link_speed = ETH_SPEED_NUM_25G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case 40000:
-		new_link.link_speed = ETH_SPEED_NUM_40G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case 50000:
-		new_link.link_speed = ETH_SPEED_NUM_50G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case 100000:
-		new_link.link_speed = ETH_SPEED_NUM_100G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	default:
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	new_link.link_status = hw->link_up ? ETH_LINK_UP :
-					     ETH_LINK_DOWN;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = hw->link_up ? RTE_ETH_LINK_UP :
+					     RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(dev, &new_link);
 }
@@ -987,11 +987,11 @@ ice_dcf_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ice_create_tunnel(parent_hw, TNL_VXLAN,
 					udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_ECPRI:
+	case RTE_ETH_TUNNEL_TYPE_ECPRI:
 		ret = ice_create_tunnel(parent_hw, TNL_ECPRI,
 					udp_tunnel->udp_port);
 		break;
@@ -1018,8 +1018,8 @@ ice_dcf_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
-	case RTE_TUNNEL_TYPE_ECPRI:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_ECPRI:
 		ret = ice_destroy_tunnel(parent_hw, udp_tunnel->udp_port, 0);
 		break;
 	default:
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 44fb38dbe7b1..b9fcfc80ad9b 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -37,7 +37,7 @@ ice_dcf_vf_repr_dev_configure(struct rte_eth_dev *dev)
 static int
 ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -45,7 +45,7 @@ ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
 static int
 ice_dcf_vf_repr_dev_stop(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -143,28 +143,28 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -246,9 +246,9 @@ ice_dcf_vf_repr_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		return -ENOTSUP;
 
 	/* Vlan stripping setting */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		bool enable = !!(dev_conf->rxmode.offloads &
-				 DEV_RX_OFFLOAD_VLAN_STRIP);
+				 RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 		if (enable && repr->outer_vlan_info.port_vlan_ena) {
 			PMD_DRV_LOG(ERR,
@@ -345,7 +345,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
 	if (!ice_dcf_vlan_offload_ena(repr))
 		return -ENOTSUP;
 
-	if (vlan_type != ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
 		PMD_DRV_LOG(ERR,
 			    "Can accelerate only outer VLAN in QinQ\n");
 		return -EINVAL;
@@ -375,7 +375,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
 
 	if (repr->outer_vlan_info.stripping_ena) {
 		err = ice_dcf_vf_repr_vlan_offload_set(dev,
-						       ETH_VLAN_STRIP_MASK);
+						       RTE_ETH_VLAN_STRIP_MASK);
 		if (err) {
 			PMD_DRV_LOG(ERR,
 				    "Failed to reset VLAN stripping : %d\n",
@@ -449,7 +449,7 @@ ice_dcf_vf_repr_init_vlan(struct rte_eth_dev *vf_rep_eth_dev)
 	int err;
 
 	err = ice_dcf_vf_repr_vlan_offload_set(vf_rep_eth_dev,
-					       ETH_VLAN_STRIP_MASK);
+					       RTE_ETH_VLAN_STRIP_MASK);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Failed to set VLAN offload");
 		return err;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index edbc74632711..6a6637a15af7 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1487,9 +1487,9 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
 	TAILQ_INIT(&vsi->mac_list);
 	TAILQ_INIT(&vsi->vlan_list);
 
-	/* Be sync with ETH_RSS_RETA_SIZE_x maximum value definition */
+	/* Be sync with RTE_ETH_RSS_RETA_SIZE_x maximum value definition */
 	pf->hash_lut_size = hw->func_caps.common_cap.rss_table_size >
-			ETH_RSS_RETA_SIZE_512 ? ETH_RSS_RETA_SIZE_512 :
+			RTE_ETH_RSS_RETA_SIZE_512 ? RTE_ETH_RSS_RETA_SIZE_512 :
 			hw->func_caps.common_cap.rss_table_size;
 	pf->flags |= ICE_FLAG_RSS_AQ_CAPABLE;
 
@@ -2993,14 +2993,14 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	int ret;
 
 #define ICE_RSS_HF_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 	ret = ice_rem_vsi_rss_cfg(hw, vsi->idx);
 	if (ret)
@@ -3010,7 +3010,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	cfg.symm = 0;
 	cfg.hdr_type = ICE_RSS_OUTER_HEADERS;
 	/* Configure RSS for IPv4 with src/dst addr as input set */
-	if (rss_hf & ETH_RSS_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_IPV4) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV4;
 		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -3020,7 +3020,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for IPv6 with src/dst addr as input set */
-	if (rss_hf & ETH_RSS_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_IPV6) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV6;
 		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -3030,7 +3030,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for udp4 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -3041,7 +3041,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for udp6 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -3052,7 +3052,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for tcp4 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -3063,7 +3063,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for tcp6 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -3074,7 +3074,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for sctp4 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_SCTP_IPV4;
@@ -3085,7 +3085,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for sctp6 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_SCTP_IPV6;
@@ -3095,7 +3095,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_IPV4) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV4;
@@ -3105,7 +3105,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_IPV6) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV6;
@@ -3115,7 +3115,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
 				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -3125,7 +3125,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
 				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -3135,7 +3135,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
 				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -3145,7 +3145,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
 				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -3288,8 +3288,8 @@ ice_dev_configure(struct rte_eth_dev *dev)
 	ad->rx_bulk_alloc_allowed = true;
 	ad->tx_simple_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (dev->data->nb_rx_queues) {
 		ret = ice_init_rss(pf);
@@ -3569,8 +3569,8 @@ ice_dev_start(struct rte_eth_dev *dev)
 	ice_set_rx_function(dev);
 	ice_set_tx_function(dev);
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-			ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+			RTE_ETH_VLAN_EXTEND_MASK;
 	ret = ice_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
@@ -3682,40 +3682,40 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_KEEP_CRC |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_FILTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->flow_type_rss_offloads = 0;
 
 	if (!is_safe_mode) {
 		dev_info->rx_offload_capa |=
-			DEV_RX_OFFLOAD_IPV4_CKSUM |
-			DEV_RX_OFFLOAD_UDP_CKSUM |
-			DEV_RX_OFFLOAD_TCP_CKSUM |
-			DEV_RX_OFFLOAD_QINQ_STRIP |
-			DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-			DEV_RX_OFFLOAD_VLAN_EXTEND |
-			DEV_RX_OFFLOAD_RSS_HASH |
-			DEV_RX_OFFLOAD_TIMESTAMP;
+			RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+			RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+			RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+			RTE_ETH_RX_OFFLOAD_RSS_HASH |
+			RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 		dev_info->tx_offload_capa |=
-			DEV_TX_OFFLOAD_QINQ_INSERT |
-			DEV_TX_OFFLOAD_IPV4_CKSUM |
-			DEV_TX_OFFLOAD_UDP_CKSUM |
-			DEV_TX_OFFLOAD_TCP_CKSUM |
-			DEV_TX_OFFLOAD_SCTP_CKSUM |
-			DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-			DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+			RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+			RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 		dev_info->flow_type_rss_offloads |= ICE_RSS_OFFLOAD_ALL;
 	}
 
 	dev_info->rx_queue_offload_capa = 0;
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->reta_size = pf->hash_lut_size;
 	dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
@@ -3754,24 +3754,24 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.nb_align = ICE_ALIGN_RING_DESC,
 	};
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M |
-			       ETH_LINK_SPEED_100M |
-			       ETH_LINK_SPEED_1G |
-			       ETH_LINK_SPEED_2_5G |
-			       ETH_LINK_SPEED_5G |
-			       ETH_LINK_SPEED_10G |
-			       ETH_LINK_SPEED_20G |
-			       ETH_LINK_SPEED_25G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			       RTE_ETH_LINK_SPEED_100M |
+			       RTE_ETH_LINK_SPEED_1G |
+			       RTE_ETH_LINK_SPEED_2_5G |
+			       RTE_ETH_LINK_SPEED_5G |
+			       RTE_ETH_LINK_SPEED_10G |
+			       RTE_ETH_LINK_SPEED_20G |
+			       RTE_ETH_LINK_SPEED_25G;
 
 	phy_type_low = hw->port_info->phy.phy_type_low;
 	phy_type_high = hw->port_info->phy.phy_type_high;
 
 	if (ICE_PHY_TYPE_SUPPORT_50G(phy_type_low))
-		dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
 
 	if (ICE_PHY_TYPE_SUPPORT_100G_LOW(phy_type_low) ||
 			ICE_PHY_TYPE_SUPPORT_100G_HIGH(phy_type_high))
-		dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
 
 	dev_info->nb_rx_queues = dev->data->nb_rx_queues;
 	dev_info->nb_tx_queues = dev->data->nb_tx_queues;
@@ -3836,8 +3836,8 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		status = ice_aq_get_link_info(hw->port_info, enable_lse,
 					      &link_status, NULL);
 		if (status != ICE_SUCCESS) {
-			link.link_speed = ETH_SPEED_NUM_100M;
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			link.link_speed = RTE_ETH_SPEED_NUM_100M;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR, "Failed to get link info");
 			goto out;
 		}
@@ -3853,55 +3853,55 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		goto out;
 
 	/* Full-duplex operation at all supported speeds */
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	/* Parse the link status */
 	switch (link_status.link_speed) {
 	case ICE_AQ_LINK_SPEED_10MB:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case ICE_AQ_LINK_SPEED_100MB:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case ICE_AQ_LINK_SPEED_1000MB:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case ICE_AQ_LINK_SPEED_2500MB:
-		link.link_speed = ETH_SPEED_NUM_2_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	case ICE_AQ_LINK_SPEED_5GB:
-		link.link_speed = ETH_SPEED_NUM_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 	case ICE_AQ_LINK_SPEED_10GB:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case ICE_AQ_LINK_SPEED_20GB:
-		link.link_speed = ETH_SPEED_NUM_20G;
+		link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case ICE_AQ_LINK_SPEED_25GB:
-		link.link_speed = ETH_SPEED_NUM_25G;
+		link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case ICE_AQ_LINK_SPEED_40GB:
-		link.link_speed = ETH_SPEED_NUM_40G;
+		link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case ICE_AQ_LINK_SPEED_50GB:
-		link.link_speed = ETH_SPEED_NUM_50G;
+		link.link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case ICE_AQ_LINK_SPEED_100GB:
-		link.link_speed = ETH_SPEED_NUM_100G;
+		link.link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	case ICE_AQ_LINK_SPEED_UNKNOWN:
 		PMD_DRV_LOG(ERR, "Unknown link speed");
-		link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "None link speed");
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			      ETH_LINK_SPEED_FIXED);
+			      RTE_ETH_LINK_SPEED_FIXED);
 
 out:
 	ice_atomic_write_link_status(dev, &link);
@@ -4377,15 +4377,15 @@ ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ice_vsi_config_vlan_filter(vsi, true);
 		else
 			ice_vsi_config_vlan_filter(vsi, false);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			ice_vsi_config_vlan_stripping(vsi, true);
 		else
 			ice_vsi_config_vlan_stripping(vsi, false);
@@ -4500,8 +4500,8 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
 		goto out;
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			lut[i] = reta_conf[idx].reta[shift];
 	}
@@ -4550,8 +4550,8 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
 		goto out;
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = lut[i];
 	}
@@ -5460,7 +5460,7 @@ ice_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ice_create_tunnel(hw, TNL_VXLAN, udp_tunnel->udp_port);
 		break;
 	default:
@@ -5484,7 +5484,7 @@ ice_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ice_destroy_tunnel(hw, udp_tunnel->udp_port, 0);
 		break;
 	default:
@@ -5505,7 +5505,7 @@ ice_timesync_enable(struct rte_eth_dev *dev)
 	int ret;
 
 	if (dev->data->dev_started && !(dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_TIMESTAMP)) {
+	    RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
 		PMD_DRV_LOG(ERR, "Rx timestamp offload not configured");
 		return -1;
 	}
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 1cd3753ccc5f..599e0028f7e8 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -117,19 +117,19 @@
 		       ICE_FLAG_VF_MAC_BY_PF)
 
 #define ICE_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L2_PAYLOAD)
 
 /**
  * The overhead from MTU to max frame size.
diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c
index 20a3204fab7e..35eff8b17d28 100644
--- a/drivers/net/ice/ice_hash.c
+++ b/drivers/net/ice/ice_hash.c
@@ -39,27 +39,27 @@
 #define ICE_IPV4_PROT		BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_PROT)
 #define ICE_IPV6_PROT		BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PROT)
 
-#define VALID_RSS_IPV4_L4	(ETH_RSS_NONFRAG_IPV4_UDP	| \
-				 ETH_RSS_NONFRAG_IPV4_TCP	| \
-				 ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4	(RTE_ETH_RSS_NONFRAG_IPV4_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
-#define VALID_RSS_IPV6_L4	(ETH_RSS_NONFRAG_IPV6_UDP	| \
-				 ETH_RSS_NONFRAG_IPV6_TCP	| \
-				 ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4	(RTE_ETH_RSS_NONFRAG_IPV6_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
-#define VALID_RSS_IPV4		(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4		(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
 				 VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6		(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6		(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
 				 VALID_RSS_IPV6_L4)
 #define VALID_RSS_L3		(VALID_RSS_IPV4 | VALID_RSS_IPV6)
 #define VALID_RSS_L4		(VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
 
-#define VALID_RSS_ATTR		(ETH_RSS_L3_SRC_ONLY	| \
-				 ETH_RSS_L3_DST_ONLY	| \
-				 ETH_RSS_L4_SRC_ONLY	| \
-				 ETH_RSS_L4_DST_ONLY	| \
-				 ETH_RSS_L2_SRC_ONLY	| \
-				 ETH_RSS_L2_DST_ONLY	| \
+#define VALID_RSS_ATTR		(RTE_ETH_RSS_L3_SRC_ONLY	| \
+				 RTE_ETH_RSS_L3_DST_ONLY	| \
+				 RTE_ETH_RSS_L4_SRC_ONLY	| \
+				 RTE_ETH_RSS_L4_DST_ONLY	| \
+				 RTE_ETH_RSS_L2_SRC_ONLY	| \
+				 RTE_ETH_RSS_L2_DST_ONLY	| \
 				 RTE_ETH_RSS_L3_PRE32	| \
 				 RTE_ETH_RSS_L3_PRE48	| \
 				 RTE_ETH_RSS_L3_PRE64)
@@ -373,87 +373,87 @@ struct ice_rss_hash_cfg eth_tmplt = {
 };
 
 /* IPv4 */
-#define ICE_RSS_TYPE_ETH_IPV4		(ETH_RSS_ETH | ETH_RSS_IPV4 | \
-					 ETH_RSS_FRAG_IPV4 | \
-					 ETH_RSS_IPV4_CHKSUM)
+#define ICE_RSS_TYPE_ETH_IPV4		(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_FRAG_IPV4 | \
+					 RTE_ETH_RSS_IPV4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV4_UDP	(ICE_RSS_TYPE_ETH_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV4_TCP	(ICE_RSS_TYPE_ETH_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV4_SCTP	(ICE_RSS_TYPE_ETH_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP | \
-					 ETH_RSS_L4_CHKSUM)
-#define ICE_RSS_TYPE_IPV4		ETH_RSS_IPV4
-#define ICE_RSS_TYPE_IPV4_UDP		(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP)
-#define ICE_RSS_TYPE_IPV4_TCP		(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP)
-#define ICE_RSS_TYPE_IPV4_SCTP		(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP)
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
+#define ICE_RSS_TYPE_IPV4		RTE_ETH_RSS_IPV4
+#define ICE_RSS_TYPE_IPV4_UDP		(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define ICE_RSS_TYPE_IPV4_TCP		(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define ICE_RSS_TYPE_IPV4_SCTP		(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
 /* IPv6 */
-#define ICE_RSS_TYPE_ETH_IPV6		(ETH_RSS_ETH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_ETH_IPV6_FRAG	(ETH_RSS_ETH | ETH_RSS_IPV6 | \
-					 ETH_RSS_FRAG_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6		(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6_FRAG	(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_FRAG_IPV6)
 #define ICE_RSS_TYPE_ETH_IPV6_UDP	(ICE_RSS_TYPE_ETH_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV6_TCP	(ICE_RSS_TYPE_ETH_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV6_SCTP	(ICE_RSS_TYPE_ETH_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP | \
-					 ETH_RSS_L4_CHKSUM)
-#define ICE_RSS_TYPE_IPV6		ETH_RSS_IPV6
-#define ICE_RSS_TYPE_IPV6_UDP		(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP)
-#define ICE_RSS_TYPE_IPV6_TCP		(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP)
-#define ICE_RSS_TYPE_IPV6_SCTP		(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP)
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
+#define ICE_RSS_TYPE_IPV6		RTE_ETH_RSS_IPV6
+#define ICE_RSS_TYPE_IPV6_UDP		(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define ICE_RSS_TYPE_IPV6_TCP		(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define ICE_RSS_TYPE_IPV6_SCTP		(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 /* VLAN IPV4 */
 #define ICE_RSS_TYPE_VLAN_IPV4		(ICE_RSS_TYPE_IPV4 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
-					 ETH_RSS_FRAG_IPV4)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+					 RTE_ETH_RSS_FRAG_IPV4)
 #define ICE_RSS_TYPE_VLAN_IPV4_UDP	(ICE_RSS_TYPE_IPV4_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV4_TCP	(ICE_RSS_TYPE_IPV4_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV4_SCTP	(ICE_RSS_TYPE_IPV4_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 /* VLAN IPv6 */
 #define ICE_RSS_TYPE_VLAN_IPV6		(ICE_RSS_TYPE_IPV6 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV6_FRAG	(ICE_RSS_TYPE_IPV6 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
-					 ETH_RSS_FRAG_IPV6)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+					 RTE_ETH_RSS_FRAG_IPV6)
 #define ICE_RSS_TYPE_VLAN_IPV6_UDP	(ICE_RSS_TYPE_IPV6_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV6_TCP	(ICE_RSS_TYPE_IPV6_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV6_SCTP	(ICE_RSS_TYPE_IPV6_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 
 /* GTPU IPv4 */
 #define ICE_RSS_TYPE_GTPU_IPV4		(ICE_RSS_TYPE_IPV4 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV4_UDP	(ICE_RSS_TYPE_IPV4_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV4_TCP	(ICE_RSS_TYPE_IPV4_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 /* GTPU IPv6 */
 #define ICE_RSS_TYPE_GTPU_IPV6		(ICE_RSS_TYPE_IPV6 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV6_UDP	(ICE_RSS_TYPE_IPV6_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV6_TCP	(ICE_RSS_TYPE_IPV6_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 
 /* PPPOE */
-#define ICE_RSS_TYPE_PPPOE		(ETH_RSS_ETH | ETH_RSS_PPPOE)
+#define ICE_RSS_TYPE_PPPOE		(RTE_ETH_RSS_ETH | RTE_ETH_RSS_PPPOE)
 
 /* PPPOE IPv4 */
 #define ICE_RSS_TYPE_PPPOE_IPV4		(ICE_RSS_TYPE_IPV4 | \
@@ -472,17 +472,17 @@ struct ice_rss_hash_cfg eth_tmplt = {
 					 ICE_RSS_TYPE_PPPOE)
 
 /* ESP, AH, L2TPV3 and PFCP */
-#define ICE_RSS_TYPE_IPV4_ESP		(ETH_RSS_ESP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_ESP		(ETH_RSS_ESP | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_AH		(ETH_RSS_AH | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_AH		(ETH_RSS_AH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
 
 /* MAC */
-#define ICE_RSS_TYPE_ETH		ETH_RSS_ETH
+#define ICE_RSS_TYPE_ETH		RTE_ETH_RSS_ETH
 
 /**
  * Supported pattern for hash.
@@ -647,86 +647,86 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 	uint64_t *hash_flds = &hash_cfg->hash_flds;
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH) {
-		if (!(rss_type & ETH_RSS_ETH))
+		if (!(rss_type & RTE_ETH_RSS_ETH))
 			*hash_flds &= ~ICE_FLOW_HASH_ETH;
-		if (rss_type & ETH_RSS_L2_SRC_ONLY)
+		if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
 			*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_DA));
-		else if (rss_type & ETH_RSS_L2_DST_ONLY)
+		else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
 			*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_SA));
 		*addl_hdrs &= ~ICE_FLOW_SEG_HDR_ETH;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH_NON_IP) {
-		if (rss_type & ETH_RSS_ETH)
+		if (rss_type & RTE_ETH_RSS_ETH)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_TYPE);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_VLAN) {
-		if (rss_type & ETH_RSS_C_VLAN)
+		if (rss_type & RTE_ETH_RSS_C_VLAN)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_C_VLAN);
-		else if (rss_type & ETH_RSS_S_VLAN)
+		else if (rss_type & RTE_ETH_RSS_S_VLAN)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_S_VLAN);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_PPPOE) {
-		if (!(rss_type & ETH_RSS_PPPOE))
+		if (!(rss_type & RTE_ETH_RSS_PPPOE))
 			*hash_flds &= ~ICE_FLOW_HASH_PPPOE_SESS_ID;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV4) {
 		if (rss_type &
-		   (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-		    ETH_RSS_NONFRAG_IPV4_UDP |
-		    ETH_RSS_NONFRAG_IPV4_TCP |
-		    ETH_RSS_NONFRAG_IPV4_SCTP)) {
-			if (rss_type & ETH_RSS_FRAG_IPV4) {
+		   (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+		    RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+			if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
 				*addl_hdrs |= ICE_FLOW_SEG_HDR_IPV_FRAG;
 				*addl_hdrs &= ~(ICE_FLOW_SEG_HDR_IPV_OTHER);
 				*hash_flds |=
 					BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_ID);
 			}
-			if (rss_type & ETH_RSS_L3_SRC_ONLY)
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_DA));
-			else if (rss_type & ETH_RSS_L3_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_SA));
 			else if (rss_type &
-				(ETH_RSS_L4_SRC_ONLY |
-				ETH_RSS_L4_DST_ONLY))
+				(RTE_ETH_RSS_L4_SRC_ONLY |
+				RTE_ETH_RSS_L4_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_IPV4;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_IPV4;
 		}
 
-		if (rss_type & ETH_RSS_IPV4_CHKSUM)
+		if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_CHKSUM);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV6) {
 		if (rss_type &
-		   (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-		    ETH_RSS_NONFRAG_IPV6_UDP |
-		    ETH_RSS_NONFRAG_IPV6_TCP |
-		    ETH_RSS_NONFRAG_IPV6_SCTP)) {
-			if (rss_type & ETH_RSS_FRAG_IPV6)
+		   (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+		    RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+			if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
 				*hash_flds |=
 					BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_ID);
-			if (rss_type & ETH_RSS_L3_SRC_ONLY)
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
-			else if (rss_type & ETH_RSS_L3_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 			else if (rss_type &
-				(ETH_RSS_L4_SRC_ONLY |
-				ETH_RSS_L4_DST_ONLY))
+				(RTE_ETH_RSS_L4_SRC_ONLY |
+				RTE_ETH_RSS_L4_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_IPV6;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_IPV6;
 		}
 
 		if (rss_type & RTE_ETH_RSS_L3_PRE32) {
-			if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_SA));
-			} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+			} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_DA));
 			} else {
@@ -735,10 +735,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 			}
 		}
 		if (rss_type & RTE_ETH_RSS_L3_PRE48) {
-			if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_SA));
-			} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+			} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_DA));
 			} else {
@@ -747,10 +747,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 			}
 		}
 		if (rss_type & RTE_ETH_RSS_L3_PRE64) {
-			if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_SA));
-			} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+			} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_DA));
 			} else {
@@ -762,81 +762,81 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_UDP) {
 		if (rss_type &
-		   (ETH_RSS_NONFRAG_IPV4_UDP |
-		    ETH_RSS_NONFRAG_IPV6_UDP)) {
-			if (rss_type & ETH_RSS_L4_SRC_ONLY)
+		   (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+			if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_DST_PORT));
-			else if (rss_type & ETH_RSS_L4_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_SRC_PORT));
 			else if (rss_type &
-				(ETH_RSS_L3_SRC_ONLY |
-				  ETH_RSS_L3_DST_ONLY))
+				(RTE_ETH_RSS_L3_SRC_ONLY |
+				  RTE_ETH_RSS_L3_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
 		}
 
-		if (rss_type & ETH_RSS_L4_CHKSUM)
+		if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_CHKSUM);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_TCP) {
 		if (rss_type &
-		   (ETH_RSS_NONFRAG_IPV4_TCP |
-		    ETH_RSS_NONFRAG_IPV6_TCP)) {
-			if (rss_type & ETH_RSS_L4_SRC_ONLY)
+		   (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+			if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_DST_PORT));
-			else if (rss_type & ETH_RSS_L4_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_SRC_PORT));
 			else if (rss_type &
-				(ETH_RSS_L3_SRC_ONLY |
-				  ETH_RSS_L3_DST_ONLY))
+				(RTE_ETH_RSS_L3_SRC_ONLY |
+				  RTE_ETH_RSS_L3_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
 		}
 
-		if (rss_type & ETH_RSS_L4_CHKSUM)
+		if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_CHKSUM);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_SCTP) {
 		if (rss_type &
-		   (ETH_RSS_NONFRAG_IPV4_SCTP |
-		    ETH_RSS_NONFRAG_IPV6_SCTP)) {
-			if (rss_type & ETH_RSS_L4_SRC_ONLY)
+		   (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+			if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_DST_PORT));
-			else if (rss_type & ETH_RSS_L4_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_SRC_PORT));
 			else if (rss_type &
-				(ETH_RSS_L3_SRC_ONLY |
-				  ETH_RSS_L3_DST_ONLY))
+				(RTE_ETH_RSS_L3_SRC_ONLY |
+				  RTE_ETH_RSS_L3_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
 		}
 
-		if (rss_type & ETH_RSS_L4_CHKSUM)
+		if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_CHKSUM);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_L2TPV3) {
-		if (!(rss_type & ETH_RSS_L2TPV3))
+		if (!(rss_type & RTE_ETH_RSS_L2TPV3))
 			*hash_flds &= ~ICE_FLOW_HASH_L2TPV3_SESS_ID;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_ESP) {
-		if (!(rss_type & ETH_RSS_ESP))
+		if (!(rss_type & RTE_ETH_RSS_ESP))
 			*hash_flds &= ~ICE_FLOW_HASH_ESP_SPI;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_AH) {
-		if (!(rss_type & ETH_RSS_AH))
+		if (!(rss_type & RTE_ETH_RSS_AH))
 			*hash_flds &= ~ICE_FLOW_HASH_AH_SPI;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_PFCP_SESSION) {
-		if (!(rss_type & ETH_RSS_PFCP))
+		if (!(rss_type & RTE_ETH_RSS_PFCP))
 			*hash_flds &= ~ICE_FLOW_HASH_PFCP_SEID;
 	}
 }
@@ -870,7 +870,7 @@ ice_refine_hash_cfg_gtpu(struct ice_rss_hash_cfg *hash_cfg,
 	uint64_t *hash_flds = &hash_cfg->hash_flds;
 
 	/* update hash field for gtpu eh/gtpu dwn/gtpu up. */
-	if (!(rss_type & ETH_RSS_GTPU))
+	if (!(rss_type & RTE_ETH_RSS_GTPU))
 		return;
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_GTPU_DWN)
@@ -892,10 +892,10 @@ static void ice_refine_hash_cfg(struct ice_rss_hash_cfg *hash_cfg,
 }
 
 static uint64_t invalid_rss_comb[] = {
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	RTE_ETH_RSS_L3_PRE40 |
 	RTE_ETH_RSS_L3_PRE56 |
 	RTE_ETH_RSS_L3_PRE96
@@ -907,9 +907,9 @@ struct rss_attr_type {
 };
 
 static struct rss_attr_type rss_attr_to_valid_type[] = {
-	{ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY,	ETH_RSS_ETH},
-	{ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
-	{ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
+	{RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY,	RTE_ETH_RSS_ETH},
+	{RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
+	{RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
 	/* currently only IPv6 prefixes of up to 64 bits are supported */
 	{RTE_ETH_RSS_L3_PRE32,				VALID_RSS_IPV6},
 	{RTE_ETH_RSS_L3_PRE48,				VALID_RSS_IPV6},
@@ -928,16 +928,16 @@ ice_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
 	 * hash function.
 	 */
 	if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
-		if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
-		    ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+		if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+		    RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
 			return true;
 
 		if (!(rss_type &
-		   (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
-		    ETH_RSS_FRAG_IPV4 | ETH_RSS_FRAG_IPV6 |
-		    ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
-		    ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
-		    ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+		   (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+		    RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_FRAG_IPV6 |
+		    RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
 			return true;
 	}
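
The flag renames above are mechanical and the application-facing call
sequence is unchanged. For reference, a minimal sketch of selecting RSS
hash types under the new names (illustrative only: the helper name and
port_id are placeholders, assuming headers that carry the RTE_ETH_RSS_*
definitions):

#include <rte_ethdev.h>

static int
enable_tcp_udp_rss(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rss_conf rss_conf = {
		.rss_key = NULL,	/* keep the current hash key */
		.rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
			  RTE_ETH_RSS_NONFRAG_IPV4_UDP |
			  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
			  RTE_ETH_RSS_NONFRAG_IPV6_UDP,
	};
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/* Only request hash types the PMD advertises. */
	rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;

	return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
}

Masking the request against flow_type_rss_offloads mirrors, on the
application side, the per-type validation done in the ice code above.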
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index ff362c21d9f5..8406240d7209 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -303,7 +303,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
 		}
 	}
 
-	if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 		/* Register mbuf field and flag for Rx timestamp */
 		err = rte_mbuf_dyn_rx_timestamp_register(
 				&ice_timestamp_dynfield_offset,
@@ -367,7 +367,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
 	regval |= (0x03 << QRXFLXP_CNTXT_RXDID_PRIO_S) &
 		QRXFLXP_CNTXT_RXDID_PRIO_M;
 
-	if (ad->ptp_ena || rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (ad->ptp_ena || rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 		regval |= QRXFLXP_CNTXT_TS_M;
 
 	ICE_WRITE_REG(hw, QRXFLXP_CNTXT(rxq->reg_idx), regval);
@@ -1117,7 +1117,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
 
 	rxq->reg_idx = vsi->base_queue + queue_idx;
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1624,7 +1624,7 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq)
 			ice_rxd_to_vlan_tci(mb, &rxdp[j]);
 			rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
-			if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+			if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 				ts_ns = ice_tstamp_convert_32b_64b(hw,
 					rte_le_to_cpu_32(rxdp[j].wb.flex_ts.ts_high));
 				if (ice_timestamp_dynflag > 0) {
@@ -1942,7 +1942,7 @@ ice_recv_scattered_pkts(void *rx_queue,
 		rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
 		pkt_flags = ice_rxd_error_to_pkt_flags(rx_stat_err0);
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
-		if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 			ts_ns = ice_tstamp_convert_32b_64b(hw,
 				rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high));
 			if (ice_timestamp_dynflag > 0) {
@@ -2373,7 +2373,7 @@ ice_recv_pkts(void *rx_queue,
 		rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
 		pkt_flags = ice_rxd_error_to_pkt_flags(rx_stat_err0);
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
-		if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 			ts_ns = ice_tstamp_convert_32b_64b(hw,
 				rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high));
 			if (ice_timestamp_dynflag > 0) {
@@ -2889,7 +2889,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
 	for (i = 0; i < txq->tx_rs_thresh; i++)
 		rte_prefetch0((txep + i)->mbuf);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
 		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
 			rte_mempool_put(txep->mbuf->pool, txep->mbuf);
 			txep->mbuf = NULL;
@@ -3365,7 +3365,7 @@ ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq)
 	/* Use a simple Tx queue if possible (only fast free is allowed) */
 	ad->tx_simple_allowed =
 		(txq->offloads ==
-		(txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+		(txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
 		txq->tx_rs_thresh >= ICE_TX_MAX_BURST);
 
 	if (ad->tx_simple_allowed)
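
ice only takes the Rx timestamp path when RTE_ETH_RX_OFFLOAD_TIMESTAMP
is set, so an application would normally gate the request on the
advertised capability before configuring the port. A minimal sketch
(illustrative; the helper name and port_id are placeholders):

#include <rte_ethdev.h>

static int
maybe_request_rx_timestamp(uint16_t port_id, struct rte_eth_conf *conf)
{
	struct rte_eth_dev_info dev_info;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;

	return 0;	/* conf is passed to rte_eth_dev_configure() later */
}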
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 490693bff218..86955539bea8 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -474,7 +474,7 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			 * will cause performance drop to get into this context.
 			 */
 			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-					DEV_RX_OFFLOAD_RSS_HASH) {
+					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
 					_mm_load_si128
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 7efe7b50a206..af23f6a34e58 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -585,7 +585,7 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
 			 * will cause performance drop to get into this context.
 			 */
 			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-					DEV_RX_OFFLOAD_RSS_HASH) {
+					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
 					_mm_load_si128
@@ -995,7 +995,7 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->tx_next_dd - (n - 1);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		void **cache_objs;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index f0f99265857e..b1d975b31a5a 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -248,23 +248,23 @@ ice_rxq_vec_setup_default(struct ice_rx_queue *rxq)
 }
 
 #define ICE_TX_NO_VECTOR_FLAGS (			\
-		DEV_TX_OFFLOAD_MULTI_SEGS |		\
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |	\
-		DEV_TX_OFFLOAD_TCP_TSO)
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |		\
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |	\
+		RTE_ETH_TX_OFFLOAD_TCP_TSO)
 
 #define ICE_TX_VECTOR_OFFLOAD (				\
-		DEV_TX_OFFLOAD_VLAN_INSERT |		\
-		DEV_TX_OFFLOAD_QINQ_INSERT |		\
-		DEV_TX_OFFLOAD_IPV4_CKSUM |		\
-		DEV_TX_OFFLOAD_SCTP_CKSUM |		\
-		DEV_TX_OFFLOAD_UDP_CKSUM |		\
-		DEV_TX_OFFLOAD_TCP_CKSUM)
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |		\
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |		\
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 
 #define ICE_RX_VECTOR_OFFLOAD (				\
-		DEV_RX_OFFLOAD_CHECKSUM |		\
-		DEV_RX_OFFLOAD_SCTP_CKSUM |		\
-		DEV_RX_OFFLOAD_VLAN |			\
-		DEV_RX_OFFLOAD_RSS_HASH)
+		RTE_ETH_RX_OFFLOAD_CHECKSUM |		\
+		RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |		\
+		RTE_ETH_RX_OFFLOAD_VLAN |			\
+		RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define ICE_VECTOR_PATH		0
 #define ICE_VECTOR_OFFLOAD_PATH	1
@@ -287,7 +287,7 @@ ice_rx_vec_queue_default(struct ice_rx_queue *rxq)
 	if (rxq->proto_xtr != PROTO_XTR_NONE)
 		return -1;
 
-	if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 		return -1;
 
 	if (rxq->offloads & ICE_RX_VECTOR_OFFLOAD)
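
ICE_TX_NO_VECTOR_FLAGS and ICE_TX_VECTOR_OFFLOAD above are what steer a
queue between the scalar and vector Tx paths. An application that wants
to stay on the vector path therefore limits the Tx offloads it requests;
a sketch (illustrative, not part of the driver):

#include <rte_ethdev.h>

static void
tx_conf_for_ice_vector_path(struct rte_eth_conf *conf)
{
	/* Flags within ICE_TX_VECTOR_OFFLOAD keep the vector path;
	 * anything in ICE_TX_NO_VECTOR_FLAGS (e.g.
	 * RTE_ETH_TX_OFFLOAD_TCP_TSO) forces the scalar path. */
	conf->txmode.offloads = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
				RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
				RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
				RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
}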
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 653bd28b417c..117494131f32 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -479,7 +479,7 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 		 * will cause performance drop to get into this context.
 		 */
 		if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_RSS_HASH) {
+				RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh3 =
 				_mm_load_si128
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 2a1ed90b641b..7ce80a442b35 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -307,8 +307,8 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (rx_mq_mode != ETH_MQ_RX_NONE &&
-		rx_mq_mode != ETH_MQ_RX_RSS) {
+	if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+		rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
 		/* RSS together with VMDq not supported*/
 		PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
 				rx_mq_mode);
@@ -318,7 +318,7 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
 	/* To not break software that sets an invalid mode, only display
 	 * a warning if an invalid mode is used.
 	 */
-	if (tx_mq_mode != ETH_MQ_TX_NONE)
+	if (tx_mq_mode != RTE_ETH_MQ_TX_NONE)
 		PMD_INIT_LOG(WARNING,
 			"TX mode %d is not supported. It is meaningless in this driver, so it is ignored",
 			tx_mq_mode);
@@ -334,8 +334,8 @@ eth_igc_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	ret  = igc_check_mq_mode(dev);
 	if (ret != 0)
@@ -473,12 +473,12 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		uint16_t duplex, speed;
 		hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
 		link.link_duplex = (duplex == FULL_DUPLEX) ?
-				ETH_LINK_FULL_DUPLEX :
-				ETH_LINK_HALF_DUPLEX;
+				RTE_ETH_LINK_FULL_DUPLEX :
+				RTE_ETH_LINK_HALF_DUPLEX;
 		link.link_speed = speed;
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 		link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 		if (speed == SPEED_2500) {
 			uint32_t tipg = IGC_READ_REG(hw, IGC_TIPG);
@@ -490,9 +490,9 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		}
 	} else {
 		link.link_speed = 0;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_status = ETH_LINK_DOWN;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -525,7 +525,7 @@ eth_igc_interrupt_action(struct rte_eth_dev *dev)
 				" Port %d: Link Up - speed %u Mbps - %s",
 				dev->data->port_id,
 				(unsigned int)link.link_speed,
-				link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+				link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 				"full-duplex" : "half-duplex");
 		else
 			PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -972,18 +972,18 @@ eth_igc_start(struct rte_eth_dev *dev)
 
 	/* VLAN Offload Settings */
 	eth_igc_vlan_offload_set(dev,
-		ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK);
+		RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK);
 
 	/* Setup link speed and duplex */
 	speeds = &dev->data->dev_conf.link_speeds;
-	if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		hw->phy.autoneg_advertised = IGC_ALL_SPEED_DUPLEX_2500;
 		hw->mac.autoneg = 1;
 	} else {
 		int num_speeds = 0;
 
-		if (*speeds & ETH_LINK_SPEED_FIXED) {
+		if (*speeds & RTE_ETH_LINK_SPEED_FIXED) {
 			PMD_DRV_LOG(ERR,
 				    "Force speed mode currently not supported");
 			igc_dev_clear_queues(dev);
@@ -993,33 +993,33 @@ eth_igc_start(struct rte_eth_dev *dev)
 		hw->phy.autoneg_advertised = 0;
 		hw->mac.autoneg = 1;
 
-		if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G)) {
+		if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G)) {
 			num_speeds = -1;
 			goto error_invalid_config;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_1G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_1G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_2_5G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_2_5G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_2500_FULL;
 			num_speeds++;
 		}
@@ -1482,14 +1482,14 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = hw->mac.rar_entry_count;
 	dev_info->rx_offload_capa = IGC_RX_OFFLOAD_ALL;
 	dev_info->tx_offload_capa = IGC_TX_OFFLOAD_ALL;
-	dev_info->rx_queue_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->rx_queue_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	dev_info->max_rx_queues = IGC_QUEUE_PAIRS_NUM;
 	dev_info->max_tx_queues = IGC_QUEUE_PAIRS_NUM;
 	dev_info->max_vmdq_pools = 0;
 
 	dev_info->hash_key_size = IGC_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = IGC_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1515,9 +1515,9 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->rx_desc_lim = rx_desc_lim;
 	dev_info->tx_desc_lim = tx_desc_lim;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G;
 
 	dev_info->max_mtu = dev_info->max_rx_pktlen - IGC_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -2141,13 +2141,13 @@ eth_igc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		rx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -2179,16 +2179,16 @@ eth_igc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		hw->fc.requested_mode = igc_fc_none;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		hw->fc.requested_mode = igc_fc_rx_pause;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		hw->fc.requested_mode = igc_fc_tx_pause;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		hw->fc.requested_mode = igc_fc_full;
 		break;
 	default:
@@ -2234,29 +2234,29 @@ eth_igc_rss_reta_update(struct rte_eth_dev *dev,
 	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
 	uint16_t i;
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR,
 			"The size of the configured RSS redirection table (%d) doesn't match the number the hardware can support (%d)",
-			reta_size, ETH_RSS_RETA_SIZE_128);
+			reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
-	RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+	RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
 
 	/* set redirection table */
-	for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
 		union igc_rss_reta_reg reta, reg;
 		uint16_t idx, shift;
 		uint8_t j, mask;
 
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				IGC_RSS_RDT_REG_SIZE_MASK);
 
 		/* if no need to update the register */
 		if (!mask ||
-		    shift > (RTE_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
+		    shift > (RTE_ETH_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
 			continue;
 
 		/* check the mask to see whether the register value must be read first */
@@ -2290,29 +2290,29 @@ eth_igc_rss_reta_query(struct rte_eth_dev *dev,
 	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
 	uint16_t i;
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR,
 			"The size of the configured RSS redirection table (%d) doesn't match the number the hardware can support (%d)",
-			reta_size, ETH_RSS_RETA_SIZE_128);
+			reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
-	RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+	RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
 
 	/* read redirection table */
-	for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
 		union igc_rss_reta_reg reta;
 		uint16_t idx, shift;
 		uint8_t j, mask;
 
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				IGC_RSS_RDT_REG_SIZE_MASK);
 
 		/* if no need to read register */
 		if (!mask ||
-		    shift > (RTE_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
+		    shift > (RTE_ETH_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
 			continue;
 
 		/* read register and get the queue index */
@@ -2369,23 +2369,23 @@ eth_igc_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	rss_hf = 0;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_EX)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP_EX)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP_EX)
-		rss_hf |= ETH_RSS_IPV6_UDP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
 
 	rss_conf->rss_hf |= rss_hf;
 	return 0;
@@ -2514,22 +2514,22 @@ eth_igc_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			igc_vlan_hw_strip_enable(dev);
 		else
 			igc_vlan_hw_strip_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			igc_vlan_hw_filter_enable(dev);
 		else
 			igc_vlan_hw_filter_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			return igc_vlan_hw_extend_enable(dev);
 		else
 			return igc_vlan_hw_extend_disable(dev);
@@ -2547,7 +2547,7 @@ eth_igc_vlan_tpid_set(struct rte_eth_dev *dev,
 	uint32_t reg_val;
 
 	/* only outer TPID of double VLAN can be configured*/
-	if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		reg_val = IGC_READ_REG(hw, IGC_VET);
 		reg_val = (reg_val & (~IGC_VET_EXT)) |
 			((uint32_t)tpid << IGC_VET_EXT_SHIFT);
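
eth_igc_rss_reta_update() above consumes the standard
rte_eth_rss_reta_entry64 layout, now indexed with
RTE_ETH_RETA_GROUP_SIZE. A sketch of the application side, spreading a
128-entry table across nb_queues (illustrative; the helper name and
parameters are placeholders):

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

static int
spread_reta_128(uint16_t port_id, uint16_t nb_queues)
{
	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_128 /
						  RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++) {
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

		reta_conf[idx].mask |= UINT64_C(1) << shift;
		reta_conf[idx].reta[shift] = i % nb_queues;
	}

	return rte_eth_dev_rss_reta_update(port_id, reta_conf,
					   RTE_ETH_RSS_RETA_SIZE_128);
}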
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index 5e6c2ff30157..f56cad79e939 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -66,37 +66,37 @@ extern "C" {
 #define IGC_TX_MAX_MTU_SEG	UINT8_MAX
 
 #define IGC_RX_OFFLOAD_ALL	(    \
-	DEV_RX_OFFLOAD_VLAN_STRIP  | \
-	DEV_RX_OFFLOAD_VLAN_FILTER | \
-	DEV_RX_OFFLOAD_VLAN_EXTEND | \
-	DEV_RX_OFFLOAD_IPV4_CKSUM  | \
-	DEV_RX_OFFLOAD_UDP_CKSUM   | \
-	DEV_RX_OFFLOAD_TCP_CKSUM   | \
-	DEV_RX_OFFLOAD_SCTP_CKSUM  | \
-	DEV_RX_OFFLOAD_KEEP_CRC    | \
-	DEV_RX_OFFLOAD_SCATTER     | \
-	DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RX_OFFLOAD_VLAN_STRIP  | \
+	RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+	RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+	RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  | \
+	RTE_ETH_RX_OFFLOAD_UDP_CKSUM   | \
+	RTE_ETH_RX_OFFLOAD_TCP_CKSUM   | \
+	RTE_ETH_RX_OFFLOAD_SCTP_CKSUM  | \
+	RTE_ETH_RX_OFFLOAD_KEEP_CRC    | \
+	RTE_ETH_RX_OFFLOAD_SCATTER     | \
+	RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define IGC_TX_OFFLOAD_ALL	(    \
-	DEV_TX_OFFLOAD_VLAN_INSERT | \
-	DEV_TX_OFFLOAD_IPV4_CKSUM  | \
-	DEV_TX_OFFLOAD_UDP_CKSUM   | \
-	DEV_TX_OFFLOAD_TCP_CKSUM   | \
-	DEV_TX_OFFLOAD_SCTP_CKSUM  | \
-	DEV_TX_OFFLOAD_TCP_TSO     | \
-	DEV_TX_OFFLOAD_UDP_TSO	   | \
-	DEV_TX_OFFLOAD_MULTI_SEGS)
+	RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  | \
+	RTE_ETH_TX_OFFLOAD_UDP_CKSUM   | \
+	RTE_ETH_TX_OFFLOAD_TCP_CKSUM   | \
+	RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  | \
+	RTE_ETH_TX_OFFLOAD_TCP_TSO     | \
+	RTE_ETH_TX_OFFLOAD_UDP_TSO	   | \
+	RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define IGC_RSS_OFFLOAD_ALL	(    \
-	ETH_RSS_IPV4               | \
-	ETH_RSS_NONFRAG_IPV4_TCP   | \
-	ETH_RSS_NONFRAG_IPV4_UDP   | \
-	ETH_RSS_IPV6               | \
-	ETH_RSS_NONFRAG_IPV6_TCP   | \
-	ETH_RSS_NONFRAG_IPV6_UDP   | \
-	ETH_RSS_IPV6_EX            | \
-	ETH_RSS_IPV6_TCP_EX        | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4               | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP   | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP   | \
+	RTE_ETH_RSS_IPV6               | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP   | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP   | \
+	RTE_ETH_RSS_IPV6_EX            | \
+	RTE_ETH_RSS_IPV6_TCP_EX        | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define IGC_MAX_ETQF_FILTERS		3	/* etqf(3) is used for 1588 */
 #define IGC_ETQF_FILTER_1588		3
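
The RTE_ETH_LINK_* values filled in by eth_igc_link_update() earlier in
this patch are read back by applications through the generic ethdev API;
a minimal sketch (illustrative):

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_link_status(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;

	printf("port %u: %s, %u Mbps, %s\n", port_id,
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
	       link.link_speed,
	       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
			"full-duplex" : "half-duplex");
}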
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 56132e8c6cd6..1d34ae2e1b15 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -127,7 +127,7 @@ struct igc_rx_queue {
 	uint8_t             crc_len;    /**< 0 if CRC stripped, 4 otherwise. */
 	uint8_t             drop_en;	/**< If not 0, set SRRCTL.Drop_En. */
 	uint32_t            flags;      /**< RX flags. */
-	uint64_t	    offloads;   /**< offloads of DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads;   /**< offloads of RTE_ETH_RX_OFFLOAD_* */
 };
 
 /** Offload features */
@@ -209,7 +209,7 @@ struct igc_tx_queue {
 	/**< Start context position for transmit queue. */
 	struct igc_advctx_info ctx_cache[IGC_CTX_NUM];
 	/**< Hardware context history.*/
-	uint64_t	       offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+	uint64_t	       offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
 };
 
 static inline uint64_t
@@ -847,23 +847,23 @@ igc_hw_rss_hash_set(struct igc_hw *hw, struct rte_eth_rss_conf *rss_conf)
 	/* Set configured hashing protocols in MRQC register */
 	rss_hf = rss_conf->rss_hf;
 	mrqc = IGC_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV4_TCP;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6;
-	if (rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_EX)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP;
-	if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV4_UDP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP;
-	if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP_EX;
 	IGC_WRITE_REG(hw, IGC_MRQC, mrqc);
 }
@@ -1037,10 +1037,10 @@ igc_dev_mq_rx_configure(struct rte_eth_dev *dev)
 	}
 
 	switch (dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		igc_rss_configure(dev);
 		break;
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 		/*
 		 * configure RSS register for following,
 		 * then disable the RSS logic
@@ -1111,7 +1111,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 * call to configure
 		 */
-		rxq->crc_len = (offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+		rxq->crc_len = (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
 				RTE_ETHER_CRC_LEN : 0;
 
 		bus_addr = rxq->rx_ring_phys_addr;
@@ -1177,7 +1177,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 		IGC_WRITE_REG(hw, IGC_RXDCTL(rxq->reg_idx), rxdctl);
 	}
 
-	if (offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		dev->data->scattered_rx = 1;
 
 	if (dev->data->scattered_rx) {
@@ -1221,20 +1221,20 @@ igc_rx_init(struct rte_eth_dev *dev)
 	rxcsum |= IGC_RXCSUM_PCSD;
 
 	/* Enable both L3/L4 rx checksum offload */
-	if (offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		rxcsum |= IGC_RXCSUM_IPOFL;
 	else
 		rxcsum &= ~IGC_RXCSUM_IPOFL;
 
 	if (offloads &
-		(DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+		(RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
 		rxcsum |= IGC_RXCSUM_TUOFL;
-		offloads |= DEV_RX_OFFLOAD_SCTP_CKSUM;
+		offloads |= RTE_ETH_RX_OFFLOAD_SCTP_CKSUM;
 	} else {
 		rxcsum &= ~IGC_RXCSUM_TUOFL;
 	}
 
-	if (offloads & DEV_RX_OFFLOAD_SCTP_CKSUM)
+	if (offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM)
 		rxcsum |= IGC_RXCSUM_CRCOFL;
 	else
 		rxcsum &= ~IGC_RXCSUM_CRCOFL;
@@ -1242,7 +1242,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 	IGC_WRITE_REG(hw, IGC_RXCSUM, rxcsum);
 
 	/* Setup the Receive Control Register. */
-	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rctl &= ~IGC_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
 	else
 		rctl |= IGC_RCTL_SECRC; /* Strip Ethernet CRC. */
@@ -1279,12 +1279,12 @@ igc_rx_init(struct rte_eth_dev *dev)
 		IGC_WRITE_REG(hw, IGC_RDT(rxq->reg_idx), rxq->nb_rx_desc - 1);
 
 		dvmolr = IGC_READ_REG(hw, IGC_DVMOLR(rxq->reg_idx));
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			dvmolr |= IGC_DVMOLR_STRVLAN;
 		else
 			dvmolr &= ~IGC_DVMOLR_STRVLAN;
 
-		if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			dvmolr &= ~IGC_DVMOLR_STRCRC;
 		else
 			dvmolr |= IGC_DVMOLR_STRCRC;
@@ -2253,10 +2253,10 @@ eth_igc_vlan_strip_queue_set(struct rte_eth_dev *dev,
 	reg_val = IGC_READ_REG(hw, IGC_DVMOLR(rx_queue_id));
 	if (on) {
 		reg_val |= IGC_DVMOLR_STRVLAN;
-		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
 		reg_val &= ~(IGC_DVMOLR_STRVLAN | IGC_DVMOLR_HIDVLAN);
-		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	IGC_WRITE_REG(hw, IGC_DVMOLR(rx_queue_id), reg_val);
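
eth_igc_vlan_strip_queue_set() above is reached through the generic
per-queue ethdev call, so toggling stripping on one Rx queue from an
application is a single call (illustrative sketch):

#include <rte_ethdev.h>

static int
set_rx_queue_vlan_strip(uint16_t port_id, uint16_t queue_id, int on)
{
	/* In drivers like igc this updates both the DVMOLR register and
	 * the queue's RTE_ETH_RX_OFFLOAD_VLAN_STRIP flag. */
	return rte_eth_dev_set_vlan_strip_on_queue(port_id, queue_id, on);
}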
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index f94a1fed0a38..c688c3735c06 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -280,37 +280,37 @@ ionic_dev_link_update(struct rte_eth_dev *eth_dev,
 	memset(&link, 0, sizeof(link));
 
 	if (adapter->idev.port_info->config.an_enable) {
-		link.link_autoneg = ETH_LINK_AUTONEG;
+		link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	}
 
 	if (!adapter->link_up ||
 	    !(lif->state & IONIC_LIF_F_UP)) {
 		/* Interface is down */
-		link.link_status = ETH_LINK_DOWN;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	} else {
 		/* Interface is up */
-		link.link_status = ETH_LINK_UP;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_status = RTE_ETH_LINK_UP;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		switch (adapter->link_speed) {
 		case  10000:
-			link.link_speed = ETH_SPEED_NUM_10G;
+			link.link_speed = RTE_ETH_SPEED_NUM_10G;
 			break;
 		case  25000:
-			link.link_speed = ETH_SPEED_NUM_25G;
+			link.link_speed = RTE_ETH_SPEED_NUM_25G;
 			break;
 		case  40000:
-			link.link_speed = ETH_SPEED_NUM_40G;
+			link.link_speed = RTE_ETH_SPEED_NUM_40G;
 			break;
 		case  50000:
-			link.link_speed = ETH_SPEED_NUM_50G;
+			link.link_speed = RTE_ETH_SPEED_NUM_50G;
 			break;
 		case 100000:
-			link.link_speed = ETH_SPEED_NUM_100G;
+			link.link_speed = RTE_ETH_SPEED_NUM_100G;
 			break;
 		default:
-			link.link_speed = ETH_SPEED_NUM_NONE;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 			break;
 		}
 	}
@@ -387,17 +387,17 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->flow_type_rss_offloads = IONIC_ETH_RSS_OFFLOAD_ALL;
 
 	dev_info->speed_capa =
-		ETH_LINK_SPEED_10G |
-		ETH_LINK_SPEED_25G |
-		ETH_LINK_SPEED_40G |
-		ETH_LINK_SPEED_50G |
-		ETH_LINK_SPEED_100G;
+		RTE_ETH_LINK_SPEED_10G |
+		RTE_ETH_LINK_SPEED_25G |
+		RTE_ETH_LINK_SPEED_40G |
+		RTE_ETH_LINK_SPEED_50G |
+		RTE_ETH_LINK_SPEED_100G;
 
 	/*
 	 * Per-queue capabilities
 	 * RTE does not support disabling a feature on a queue if it is
 	 * enabled globally on the device. Thus the driver does not advertise
-	 * capabilities like DEV_TX_OFFLOAD_IPV4_CKSUM as per-queue even
+	 * capabilities like RTE_ETH_TX_OFFLOAD_IPV4_CKSUM as per-queue even
 	 * though the driver would be otherwise capable of disabling it on
 	 * a per-queue basis.
 	 */
@@ -411,24 +411,24 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
 	 */
 
 	dev_info->rx_offload_capa = dev_info->rx_queue_offload_capa |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_RSS_HASH |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH |
 		0;
 
 	dev_info->tx_offload_capa = dev_info->tx_queue_offload_capa |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
 		0;
 
 	dev_info->rx_desc_lim = rx_desc_lim;
@@ -463,9 +463,9 @@ ionic_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 		fc_conf->autoneg = 0;
 
 		if (idev->port_info->config.pause_type)
-			fc_conf->mode = RTE_FC_FULL;
+			fc_conf->mode = RTE_ETH_FC_FULL;
 		else
-			fc_conf->mode = RTE_FC_NONE;
+			fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return 0;
@@ -487,14 +487,14 @@ ionic_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		pause_type = IONIC_PORT_PAUSE_TYPE_NONE;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		pause_type = IONIC_PORT_PAUSE_TYPE_LINK;
 		break;
-	case RTE_FC_RX_PAUSE:
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		return -ENOTSUP;
 	}
 
@@ -545,12 +545,12 @@ ionic_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	num = tbl_sz / RTE_RETA_GROUP_SIZE;
+	num = tbl_sz / RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < num; i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if (reta_conf[i].mask & ((uint64_t)1 << j)) {
-				index = (i * RTE_RETA_GROUP_SIZE) + j;
+				index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
 				lif->rss_ind_tbl[index] = reta_conf[i].reta[j];
 			}
 		}
@@ -585,12 +585,12 @@ ionic_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	num = reta_size / RTE_RETA_GROUP_SIZE;
+	num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < num; i++) {
 		memcpy(reta_conf->reta,
-			&lif->rss_ind_tbl[i * RTE_RETA_GROUP_SIZE],
-			RTE_RETA_GROUP_SIZE);
+			&lif->rss_ind_tbl[i * RTE_ETH_RETA_GROUP_SIZE],
+			RTE_ETH_RETA_GROUP_SIZE);
 		reta_conf++;
 	}
 
@@ -618,17 +618,17 @@ ionic_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
 			IONIC_RSS_HASH_KEY_SIZE);
 
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 	rss_conf->rss_hf = rss_hf;
 
@@ -660,17 +660,17 @@ ionic_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
 		if (!lif->rss_ind_tbl)
 			return -EINVAL;
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV4)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
 			rss_types |= IONIC_RSS_TYPE_IPV4;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			rss_types |= IONIC_RSS_TYPE_IPV4_TCP;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 			rss_types |= IONIC_RSS_TYPE_IPV4_UDP;
-		if (rss_conf->rss_hf & ETH_RSS_IPV6)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
 			rss_types |= IONIC_RSS_TYPE_IPV6;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 			rss_types |= IONIC_RSS_TYPE_IPV6_TCP;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 			rss_types |= IONIC_RSS_TYPE_IPV6_UDP;
 
 		ionic_lif_rss_config(lif, rss_types, key, NULL);
@@ -842,15 +842,15 @@ ionic_dev_configure(struct rte_eth_dev *eth_dev)
 static inline uint32_t
 ionic_parse_link_speeds(uint16_t link_speeds)
 {
-	if (link_speeds & ETH_LINK_SPEED_100G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_100G)
 		return 100000;
-	else if (link_speeds & ETH_LINK_SPEED_50G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_50G)
 		return 50000;
-	else if (link_speeds & ETH_LINK_SPEED_40G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_40G)
 		return 40000;
-	else if (link_speeds & ETH_LINK_SPEED_25G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_25G)
 		return 25000;
-	else if (link_speeds & ETH_LINK_SPEED_10G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 		return 10000;
 	else
 		return 0;
@@ -874,12 +874,12 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
 	IONIC_PRINT_CALL();
 
 	allowed_speeds =
-		ETH_LINK_SPEED_FIXED |
-		ETH_LINK_SPEED_10G |
-		ETH_LINK_SPEED_25G |
-		ETH_LINK_SPEED_40G |
-		ETH_LINK_SPEED_50G |
-		ETH_LINK_SPEED_100G;
+		RTE_ETH_LINK_SPEED_FIXED |
+		RTE_ETH_LINK_SPEED_10G |
+		RTE_ETH_LINK_SPEED_25G |
+		RTE_ETH_LINK_SPEED_40G |
+		RTE_ETH_LINK_SPEED_50G |
+		RTE_ETH_LINK_SPEED_100G;
 
 	if (dev_conf->link_speeds & ~allowed_speeds) {
 		IONIC_PRINT(ERR, "Invalid link setting");
@@ -896,7 +896,7 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
 	}
 
 	/* Configure link */
-	an_enable = (dev_conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+	an_enable = (dev_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 	ionic_dev_cmd_port_autoneg(idev, an_enable);
 	err = ionic_dev_cmd_wait_check(idev, IONIC_DEVCMD_TIMEOUT);
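
As the ionic hunks above show, RTE_ETH_FC_FULL maps to
IONIC_PORT_PAUSE_TYPE_LINK while the Rx-only and Tx-only modes are
rejected with -ENOTSUP. Application-side sketch (illustrative; the
helper name is a placeholder):

#include <rte_ethdev.h>

static int
enable_link_pause(uint16_t port_id)
{
	struct rte_eth_fc_conf fc_conf;
	int ret;

	ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
	if (ret != 0)
		return ret;

	fc_conf.mode = RTE_ETH_FC_FULL;
	return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}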
diff --git a/drivers/net/ionic/ionic_ethdev.h b/drivers/net/ionic/ionic_ethdev.h
index 6cbcd0f825a3..652f28c97d57 100644
--- a/drivers/net/ionic/ionic_ethdev.h
+++ b/drivers/net/ionic/ionic_ethdev.h
@@ -8,12 +8,12 @@
 #include <rte_ethdev.h>
 
 #define IONIC_ETH_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define IONIC_ETH_DEV_TO_LIF(eth_dev) ((struct ionic_lif *) \
 	(eth_dev)->data->dev_private)
diff --git a/drivers/net/ionic/ionic_lif.c b/drivers/net/ionic/ionic_lif.c
index a1f9ce2d81cb..5e8fdf3893ad 100644
--- a/drivers/net/ionic/ionic_lif.c
+++ b/drivers/net/ionic/ionic_lif.c
@@ -1688,12 +1688,12 @@ ionic_lif_configure_vlan_offload(struct ionic_lif *lif, int mask)
 
 	/*
 	 * IONIC_ETH_HW_VLAN_RX_FILTER cannot be turned off, so
-	 * set DEV_RX_OFFLOAD_VLAN_FILTER and ignore ETH_VLAN_FILTER_MASK
+	 * set RTE_ETH_RX_OFFLOAD_VLAN_FILTER and ignore RTE_ETH_VLAN_FILTER_MASK
 	 */
-	rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			lif->features |= IONIC_ETH_HW_VLAN_RX_STRIP;
 		else
 			lif->features &= ~IONIC_ETH_HW_VLAN_RX_STRIP;
@@ -1733,19 +1733,19 @@ ionic_lif_configure(struct ionic_lif *lif)
 	/*
 	 * NB: While it is true that RSS_HASH is always enabled on ionic,
 	 *     setting this flag unconditionally causes problems in DTS.
-	 * rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	 * rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	 */
 
 	/* RX per-port */
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM ||
-	    rxmode->offloads & DEV_RX_OFFLOAD_UDP_CKSUM ||
-	    rxmode->offloads & DEV_RX_OFFLOAD_TCP_CKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM ||
+	    rxmode->offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM ||
+	    rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
 		lif->features |= IONIC_ETH_HW_RX_CSUM;
 	else
 		lif->features &= ~IONIC_ETH_HW_RX_CSUM;
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		lif->features |= IONIC_ETH_HW_RX_SG;
 		lif->eth_dev->data->scattered_rx = 1;
 	} else {
@@ -1754,30 +1754,30 @@ ionic_lif_configure(struct ionic_lif *lif)
 	}
 
 	/* Covers VLAN_STRIP */
-	ionic_lif_configure_vlan_offload(lif, ETH_VLAN_STRIP_MASK);
+	ionic_lif_configure_vlan_offload(lif, RTE_ETH_VLAN_STRIP_MASK);
 
 	/* TX per-port */
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		lif->features |= IONIC_ETH_HW_TX_CSUM;
 	else
 		lif->features &= ~IONIC_ETH_HW_TX_CSUM;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		lif->features |= IONIC_ETH_HW_VLAN_TX_TAG;
 	else
 		lif->features &= ~IONIC_ETH_HW_VLAN_TX_TAG;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		lif->features |= IONIC_ETH_HW_TX_SG;
 	else
 		lif->features &= ~IONIC_ETH_HW_TX_SG;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		lif->features |= IONIC_ETH_HW_TSO;
 		lif->features |= IONIC_ETH_HW_TSO_IPV6;
 		lif->features |= IONIC_ETH_HW_TSO_ECN;
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index 4d16a39c6b6d..e3df7c56debe 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -203,11 +203,11 @@ ionic_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id,
 		txq->flags |= IONIC_QCQ_F_DEFERRED;
 
 	/* Convert the offload flags into queue flags */
-	if (offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+	if (offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		txq->flags |= IONIC_QCQ_F_CSUM_L3;
-	if (offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+	if (offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 		txq->flags |= IONIC_QCQ_F_CSUM_TCP;
-	if (offloads & DEV_TX_OFFLOAD_UDP_CKSUM)
+	if (offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)
 		txq->flags |= IONIC_QCQ_F_CSUM_UDP;
 
 	eth_dev->data->tx_queues[tx_queue_id] = txq;
@@ -743,11 +743,11 @@ ionic_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 
 	/*
 	 * Note: the interface does not currently support
-	 * DEV_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
+	 * RTE_ETH_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
 	 * when the adapter will be able to keep the CRC and subtract
 	 * it from the length for all received packets:
 	 * if (eth_dev->data->dev_conf.rxmode.offloads &
-	 *     DEV_RX_OFFLOAD_KEEP_CRC)
+	 *     RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 	 *   rxq->crc_len = ETHER_CRC_LEN;
 	 */
 
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 063a9c6a6f7f..17088585757f 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -50,11 +50,11 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->speed_capa =
 		(hw->retimer.mac_type ==
 			IFPGA_RAWDEV_RETIMER_MAC_TYPE_10GE_XFI) ?
-		ETH_LINK_SPEED_10G :
+		RTE_ETH_LINK_SPEED_10G :
 		((hw->retimer.mac_type ==
 			IFPGA_RAWDEV_RETIMER_MAC_TYPE_25GE_25GAUI) ?
-		ETH_LINK_SPEED_25G :
-		ETH_LINK_SPEED_AUTONEG);
+		RTE_ETH_LINK_SPEED_25G :
+		RTE_ETH_LINK_SPEED_AUTONEG);
 
 	dev_info->max_rx_queues  = 1;
 	dev_info->max_tx_queues  = 1;
@@ -67,30 +67,30 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
 	};
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_VLAN_FILTER;
-
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
 		dev_info->tx_queue_offload_capa;
 
 	dev_info->dev_capa =
@@ -2399,10 +2399,10 @@ ipn3ke_update_link(struct rte_rawdev *rawdev,
 				(uint64_t *)&link_speed);
 	switch (link_speed) {
 	case IFPGA_RAWDEV_LINK_SPEED_10GB:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case IFPGA_RAWDEV_LINK_SPEED_25GB:
-		link->link_speed = ETH_SPEED_NUM_25G;
+		link->link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	default:
 		IPN3KE_AFU_PMD_ERR("Unknown link speed info %u", link_speed);
@@ -2460,9 +2460,9 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,
 
 	memset(&link, 0, sizeof(link));
 
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_autoneg = !(ethdev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	rawdev = hw->rawdev;
 	ipn3ke_update_link(rawdev, rpst->port_id, &link);
@@ -2518,9 +2518,9 @@ ipn3ke_rpst_link_check(struct ipn3ke_rpst *rpst)
 
 	memset(&link, 0, sizeof(link));
 
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_autoneg = !(rpst->ethdev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	rawdev = hw->rawdev;
 	ipn3ke_update_link(rawdev, rpst->port_id, &link);
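
The drivers above all derive autonegotiation and speed settings from
dev_conf.link_speeds using the renamed RTE_ETH_LINK_SPEED_* flags. A
sketch of requesting a fixed 25G link from the application side
(illustrative; port_id and queue counts are placeholders, and not every
PMD accepts a fixed speed, as the igc hunk earlier shows):

#include <string.h>
#include <rte_ethdev.h>

static int
configure_fixed_25g(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.link_speeds = RTE_ETH_LINK_SPEED_FIXED |
			   RTE_ETH_LINK_SPEED_25G;

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}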
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 46c95425adfb..7fd2c539e002 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1857,7 +1857,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 	qinq &= IXGBE_DMATXCTL_GDV;
 
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
+	case RTE_ETH_VLAN_TYPE_INNER:
 		if (qinq) {
 			reg = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
 			reg = (reg & (~IXGBE_VLNCTRL_VET)) | (uint32_t)tpid;
@@ -1872,7 +1872,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 				    " by single VLAN");
 		}
 		break;
-	case ETH_VLAN_TYPE_OUTER:
+	case RTE_ETH_VLAN_TYPE_OUTER:
 		if (qinq) {
 			/* Only the high 16-bits is valid */
 			IXGBE_WRITE_REG(hw, IXGBE_EXVET, (uint32_t)tpid <<
@@ -1959,10 +1959,10 @@ ixgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 
 	if (on) {
 		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
-		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
 		rxq->vlan_flags = PKT_RX_VLAN;
-		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
 
@@ -2083,7 +2083,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	if (hw->mac.type == ixgbe_mac_82598EB) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 			ctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
 			ctrl |= IXGBE_VLNCTRL_VME;
 			IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, ctrl);
@@ -2100,7 +2100,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
 			ctrl = IXGBE_READ_REG(hw, IXGBE_RXDCTL(rxq->reg_idx));
-			if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+			if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 				ctrl |= IXGBE_RXDCTL_VME;
 				on = TRUE;
 			} else {
@@ -2122,17 +2122,17 @@ ixgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	struct ixgbe_rx_queue *rxq;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		rxmode = &dev->data->dev_conf.rxmode;
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 		else
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 	}
 }
@@ -2143,19 +2143,18 @@ ixgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	rxmode = &dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK)
 		ixgbe_vlan_hw_strip_config(dev);
-	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ixgbe_vlan_hw_filter_enable(dev);
 		else
 			ixgbe_vlan_hw_filter_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			ixgbe_vlan_hw_extend_enable(dev);
 		else
 			ixgbe_vlan_hw_extend_disable(dev);
@@ -2194,10 +2193,10 @@ ixgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
 	switch (nb_rx_q) {
 	case 1:
 	case 2:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
 		break;
 	case 4:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
 		break;
 	default:
 		return -EINVAL;
@@ -2221,18 +2220,18 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
 		/* check multi-queue mode */
 		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
 			break;
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
 			/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
 			PMD_INIT_LOG(ERR, "SRIOV active,"
 					" unsupported mq_mode rx %d.",
 					dev_conf->rxmode.mq_mode);
 			return -EINVAL;
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
 			if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
 				if (ixgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
 					PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -2242,12 +2241,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 					return -EINVAL;
 				}
 			break;
-		case ETH_MQ_RX_VMDQ_ONLY:
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_NONE:
 			/* if nothing mq mode configure, use default scheme */
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
 			break;
-		default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+		default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB*/
 			/* SRIOV only works in VMDq enable mode */
 			PMD_INIT_LOG(ERR, "SRIOV is active,"
 					" wrong mq_mode rx %d.",
@@ -2256,12 +2255,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 		}
 
 		switch (dev_conf->txmode.mq_mode) {
-		case ETH_MQ_TX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+		case RTE_ETH_MQ_TX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+			dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
 			break;
-		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
+		default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
+			dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_ONLY;
 			break;
 		}
 
@@ -2276,13 +2275,13 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 			return -EINVAL;
 		}
 	} else {
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
 			PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
 					  " not supported.");
 			return -EINVAL;
 		}
 		/* check configuration for vmdq+dcb mode */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_conf *conf;
 
 			if (nb_rx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2291,15 +2290,15 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools must be %d or %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_tx_conf *conf;
 
 			if (nb_tx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2308,39 +2307,39 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools != %d and"
 						" nb_queue_pools != %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
 
 		/* For DCB mode check our configuration before we go further */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
 			const struct rte_eth_dcb_rx_conf *conf;
 
 			conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
 
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 			const struct rte_eth_dcb_tx_conf *conf;
 
 			conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
@@ -2349,7 +2348,7 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 		 * When DCB/VT is off, maximum number of queues changes,
 		 * except for 82598EB, which remains constant.
 		 */
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
 				hw->mac.type != ixgbe_mac_82598EB) {
 			if (nb_tx_q > IXGBE_NONE_MODE_TX_NB_QUEUES) {
 				PMD_INIT_LOG(ERR,
@@ -2373,8 +2372,8 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multiple queue mode checking */
 	ret  = ixgbe_check_mq_mode(dev);
@@ -2619,15 +2618,15 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 		goto error;
 	}
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = ixgbe_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
 		goto error;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
 		/* Enable vlan filtering for VMDq */
 		ixgbe_vmdq_vlan_hw_filter_enable(dev);
 	}
@@ -2704,17 +2703,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 	case ixgbe_mac_X550:
 	case ixgbe_mac_X550EM_x:
 	case ixgbe_mac_X550EM_a:
-		allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_2_5G |  ETH_LINK_SPEED_5G |
-			ETH_LINK_SPEED_10G;
+		allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_2_5G |  RTE_ETH_LINK_SPEED_5G |
+			RTE_ETH_LINK_SPEED_10G;
 		if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
 				hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
-			allowed_speeds = ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+			allowed_speeds = RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
 		break;
 	default:
-		allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_10G;
+		allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G;
 	}
 
 	link_speeds = &dev->data->dev_conf.link_speeds;
@@ -2728,7 +2727,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 	}
 
 	speed = 0x0;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		switch (hw->mac.type) {
 		case ixgbe_mac_82598EB:
 			speed = IXGBE_LINK_SPEED_82598_AUTONEG;
@@ -2746,17 +2745,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 			speed = IXGBE_LINK_SPEED_82599_AUTONEG;
 		}
 	} else {
-		if (*link_speeds & ETH_LINK_SPEED_10G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
 			speed |= IXGBE_LINK_SPEED_10GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
 			speed |= IXGBE_LINK_SPEED_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_2_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
 			speed |= IXGBE_LINK_SPEED_2_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_1G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed |= IXGBE_LINK_SPEED_1GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_100M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed |= IXGBE_LINK_SPEED_100_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_10M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
 			speed |= IXGBE_LINK_SPEED_10_FULL;
 	}
 
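
[Editor's note] On the application side, the RTE_ETH_LINK_SPEED_* bits
decoded above are supplied via dev_conf.link_speeds. A minimal sketch
restricting autonegotiation to 1G/10G, assuming the rest of dev_conf is
filled in elsewhere:

    #include <rte_ethdev.h>

    /* Limit the advertised speeds before configuring the port. */
    static int
    configure_speeds(uint16_t port_id, struct rte_eth_conf *dev_conf,
        uint16_t nb_rxq, uint16_t nb_txq)
    {
        dev_conf->link_speeds = RTE_ETH_LINK_SPEED_1G |
            RTE_ETH_LINK_SPEED_10G;
        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, dev_conf);
    }
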
@@ -3832,7 +3831,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		 * When DCB/VT is off, maximum number of queues changes,
 		 * except for 82598EB, which remains constant.
 		 */
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
 				hw->mac.type != ixgbe_mac_82598EB)
 			dev_info->max_tx_queues = IXGBE_NONE_MODE_TX_NB_QUEUES;
 	}
@@ -3842,9 +3841,9 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
 	if (hw->mac.type == ixgbe_mac_82598EB)
-		dev_info->max_vmdq_pools = ETH_16_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	else
-		dev_info->max_vmdq_pools = ETH_64_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->max_mtu =  dev_info->max_rx_pktlen - IXGBE_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 	dev_info->vmdq_queue_num = dev_info->max_rx_queues;
@@ -3883,21 +3882,21 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->reta_size = ixgbe_reta_size_get(hw->mac.type);
 	dev_info->flow_type_rss_offloads = IXGBE_RSS_OFFLOAD_ALL;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
 	if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
 			hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
-		dev_info->speed_capa = ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
 
 	if (hw->mac.type == ixgbe_mac_X540 ||
 	    hw->mac.type == ixgbe_mac_X540_vf ||
 	    hw->mac.type == ixgbe_mac_X550 ||
 	    hw->mac.type == ixgbe_mac_X550_vf) {
-		dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
 	}
 	if (hw->mac.type == ixgbe_mac_X550) {
-		dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
-		dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
 	}
 
 	/* Driver-preferred Rx/Tx parameters */
@@ -3966,9 +3965,9 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
 	if (hw->mac.type == ixgbe_mac_82598EB)
-		dev_info->max_vmdq_pools = ETH_16_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	else
-		dev_info->max_vmdq_pools = ETH_64_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->rx_queue_offload_capa = ixgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (ixgbe_get_rx_port_offloads(dev) |
 				     dev_info->rx_queue_offload_capa);
@@ -4211,11 +4210,11 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	u32 esdp_reg;
 
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 
 	hw->mac.get_link_status = true;
 
@@ -4237,8 +4236,8 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
 		diag = ixgbe_check_link(hw, &link_speed, &link_up, wait);
 
 	if (diag != 0) {
-		link.link_speed = ETH_SPEED_NUM_100M;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
@@ -4274,37 +4273,37 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (link_speed) {
 	default:
 	case IXGBE_LINK_SPEED_UNKNOWN:
-		link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 
 	case IXGBE_LINK_SPEED_10_FULL:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 
 	case IXGBE_LINK_SPEED_100_FULL:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	case IXGBE_LINK_SPEED_1GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case IXGBE_LINK_SPEED_2_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_2_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 
 	case IXGBE_LINK_SPEED_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 
 	case IXGBE_LINK_SPEED_10GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	}
 
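
[Editor's note] On the reading side, the renamed link constants surface
through rte_eth_link_get_nowait(). A minimal sketch, assuming the port has
been started:

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Print the negotiated link state with the renamed constants. */
    static void
    print_link(uint16_t port_id)
    {
        struct rte_eth_link link;

        if (rte_eth_link_get_nowait(port_id, &link) < 0)
            return;
        printf("port %u: %s, %u Mbps, %s-duplex\n", port_id,
            link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
            link.link_speed,
            link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
            "full" : "half");
    }
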
@@ -4521,7 +4520,7 @@ ixgbe_dev_link_status_print(struct rte_eth_dev *dev)
 		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -4740,13 +4739,13 @@ ixgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		tx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
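
[Editor's note] The RTE_ETH_FC_* modes reported here mirror what an
application sets through the flow-control API. A minimal sketch forcing
full pause, assuming the link partner supports 802.3x pause frames:

    #include <rte_ethdev.h>

    /* Force full (Rx+Tx) pause-frame flow control. */
    static int
    force_fc_full(uint16_t port_id)
    {
        struct rte_eth_fc_conf fc_conf;
        int ret;

        ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
        if (ret != 0)
            return ret;
        fc_conf.mode = RTE_ETH_FC_FULL;
        return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
    }
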
@@ -5044,8 +5043,8 @@ ixgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i += IXGBE_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IXGBE_4_BIT_MASK);
 		if (!mask)
@@ -5092,8 +5091,8 @@ ixgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i += IXGBE_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IXGBE_4_BIT_MASK);
 		if (!mask)
@@ -5255,22 +5254,22 @@ ixgbevf_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
 		     dev->data->port_id);
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/*
 	 * VF has no ability to enable/disable HW CRC
 	 * Keep the persistent behavior the same as Host PF
 	 */
 #ifndef RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
-		conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #else
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
 		PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #endif
 
@@ -5330,8 +5329,8 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
 	ixgbevf_set_vfta_all(dev, 1);
 
 	/* Set HW strip */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = ixgbevf_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -5568,10 +5567,10 @@ ixgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	int on = 0;
 
 	/* VF function only supports hw strip feature, others are not supported */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
-			on = !!(rxq->offloads &	DEV_RX_OFFLOAD_VLAN_STRIP);
+			on = !!(rxq->offloads &	RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 			ixgbevf_vlan_strip_queue_set(dev, i, on);
 		}
 	}
@@ -5702,12 +5701,12 @@ ixgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
 		return -ENOTSUP;
 
 	if (on) {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = ~0;
 			IXGBE_WRITE_REG(hw, IXGBE_UTA(i), ~0);
 		}
 	} else {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = 0;
 			IXGBE_WRITE_REG(hw, IXGBE_UTA(i), 0);
 		}
@@ -5721,15 +5720,15 @@ ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
 {
 	uint32_t new_val = orig_val;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
 		new_val |= IXGBE_VMOLR_AUPE;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
 		new_val |= IXGBE_VMOLR_ROMPE;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 		new_val |= IXGBE_VMOLR_ROPE;
-	if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 		new_val |= IXGBE_VMOLR_BAM;
-	if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 		new_val |= IXGBE_VMOLR_MPE;
 
 	return new_val;
@@ -6724,15 +6723,15 @@ ixgbe_start_timecounters(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 
 	switch (link.link_speed) {
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		incval = IXGBE_INCVAL_100;
 		shift = IXGBE_INCVAL_SHIFT_100;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		incval = IXGBE_INCVAL_1GB;
 		shift = IXGBE_INCVAL_SHIFT_1GB;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 	default:
 		incval = IXGBE_INCVAL_10GB;
 		shift = IXGBE_INCVAL_SHIFT_10GB;
@@ -7143,16 +7142,16 @@ ixgbe_reta_size_get(enum ixgbe_mac_type mac_type) {
 	case ixgbe_mac_X550:
 	case ixgbe_mac_X550EM_x:
 	case ixgbe_mac_X550EM_a:
-		return ETH_RSS_RETA_SIZE_512;
+		return RTE_ETH_RSS_RETA_SIZE_512;
 	case ixgbe_mac_X550_vf:
 	case ixgbe_mac_X550EM_x_vf:
 	case ixgbe_mac_X550EM_a_vf:
-		return ETH_RSS_RETA_SIZE_64;
+		return RTE_ETH_RSS_RETA_SIZE_64;
 	case ixgbe_mac_X540_vf:
 	case ixgbe_mac_82599_vf:
 		return 0;
 	default:
-		return ETH_RSS_RETA_SIZE_128;
+		return RTE_ETH_RSS_RETA_SIZE_128;
 	}
 }
 
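
[Editor's note] The same idx/shift arithmetic on RTE_ETH_RETA_GROUP_SIZE
appears on the application side when building a redirection table. A
minimal sketch spreading the RETA round-robin over the Rx queues; the
reta_conf array is sized for the largest ixgbe table (an assumption):

    #include <errno.h>
    #include <stdint.h>
    #include <string.h>
    #include <rte_ethdev.h>

    /* Spread the RSS redirection table round-robin over nb_queues. */
    static int
    spread_reta(uint16_t port_id, uint16_t nb_queues)
    {
        struct rte_eth_rss_reta_entry64
            reta_conf[RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE];
        struct rte_eth_dev_info dev_info;
        uint16_t i, idx, shift;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
            return ret;
        if (nb_queues == 0 ||
            dev_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512)
            return -EINVAL;
        memset(reta_conf, 0, sizeof(reta_conf));
        for (i = 0; i < dev_info.reta_size; i++) {
            idx = i / RTE_ETH_RETA_GROUP_SIZE;
            shift = i % RTE_ETH_RETA_GROUP_SIZE;
            reta_conf[idx].mask |= UINT64_C(1) << shift;
            reta_conf[idx].reta[shift] = i % nb_queues;
        }
        return rte_eth_dev_rss_reta_update(port_id, reta_conf,
                dev_info.reta_size);
    }
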
@@ -7162,10 +7161,10 @@ ixgbe_reta_reg_get(enum ixgbe_mac_type mac_type, uint16_t reta_idx) {
 	case ixgbe_mac_X550:
 	case ixgbe_mac_X550EM_x:
 	case ixgbe_mac_X550EM_a:
-		if (reta_idx < ETH_RSS_RETA_SIZE_128)
+		if (reta_idx < RTE_ETH_RSS_RETA_SIZE_128)
 			return IXGBE_RETA(reta_idx >> 2);
 		else
-			return IXGBE_ERETA((reta_idx - ETH_RSS_RETA_SIZE_128) >> 2);
+			return IXGBE_ERETA((reta_idx - RTE_ETH_RSS_RETA_SIZE_128) >> 2);
 	case ixgbe_mac_X550_vf:
 	case ixgbe_mac_X550EM_x_vf:
 	case ixgbe_mac_X550EM_a_vf:
@@ -7221,7 +7220,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	uint8_t nb_tcs;
 	uint8_t i, j;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
 	else
 		dcb_info->nb_tcs = 1;
@@ -7232,7 +7231,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	if (dcb_config->vt_mode) { /* vt is enabled*/
 		struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
 		if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
 			for (j = 0; j < nb_tcs; j++) {
@@ -7256,9 +7255,9 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	} else { /* vt is disabled*/
 		struct rte_eth_dcb_rx_conf *rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
-		if (dcb_info->nb_tcs == ETH_4_TCS) {
+		if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7271,7 +7270,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 			dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
 			dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
 			dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
-		} else if (dcb_info->nb_tcs == ETH_8_TCS) {
+		} else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7524,7 +7523,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = ixgbe_e_tag_filter_add(dev, l2_tunnel);
 		break;
 	default:
@@ -7556,7 +7555,7 @@ ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 		return ret;
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = ixgbe_e_tag_filter_del(dev, l2_tunnel);
 		break;
 	default:
@@ -7653,12 +7652,12 @@ ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ixgbe_add_vxlan_port(hw, udp_tunnel->udp_port);
 		break;
 
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -EINVAL;
 		break;
@@ -7690,11 +7689,11 @@ ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ixgbe_del_vxlan_port(hw, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -EINVAL;
 		break;
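
[Editor's note] The renamed tunnel types are passed in from
rte_eth_dev_udp_tunnel_port_add(). A minimal sketch registering the IANA
VXLAN port (4789), assuming the NIC supports VXLAN port recognition:

    #include <rte_ethdev.h>

    /* Register the IANA VXLAN port with the renamed tunnel type. */
    static int
    add_vxlan_port(uint16_t port_id)
    {
        struct rte_eth_udp_tunnel tunnel = {
            .udp_port = 4789,
            .prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN,
        };

        return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
    }
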
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 950fb2d2450c..876b670f2682 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -114,15 +114,15 @@
 #define IXGBE_FDIR_NVGRE_TUNNEL_TYPE    0x0
 
 #define IXGBE_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define IXGBE_VF_IRQ_ENABLE_MASK        3          /* vf irq enable mask */
 #define IXGBE_VF_MAXMSIVECTOR           1
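
[Editor's note] IXGBE_RSS_OFFLOAD_ALL above is the driver-side capability
set; applications request the matching RTE_ETH_RSS_* bits through
rss_conf. A minimal sketch using the coarse IP/TCP/UDP hash groups (an
illustrative choice, not the full ixgbe set; bits outside the port's
flow_type_rss_offloads are rejected at configure time):

    #include <stddef.h>
    #include <rte_ethdev.h>

    /* Enable RSS with the coarse IP/TCP/UDP hash groups. */
    static int
    configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
        struct rte_eth_conf conf = {
            .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
            .rx_adv_conf.rss_conf = {
                .rss_key = NULL,    /* keep the default key */
                .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP |
                    RTE_ETH_RSS_UDP,
            },
        };

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
    }
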
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 27a49bbce5e7..7894047829a8 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -90,9 +90,9 @@ static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
 static uint32_t ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
 				 uint32_t key);
 static uint32_t atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc);
+		enum rte_eth_fdir_pballoc_type pballoc);
 static uint32_t atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc);
+		enum rte_eth_fdir_pballoc_type pballoc);
 static int fdir_write_perfect_filter_82599(struct ixgbe_hw *hw,
 			union ixgbe_atr_input *input, uint8_t queue,
 			uint32_t fdircmd, uint32_t fdirhash,
@@ -163,20 +163,20 @@ fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl)
  * flexbytes matching field, and drop queue (only for perfect matching mode).
  */
 static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf, uint32_t *fdirctrl)
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf, uint32_t *fdirctrl)
 {
 	*fdirctrl = 0;
 
 	switch (conf->pballoc) {
-	case RTE_FDIR_PBALLOC_64K:
+	case RTE_ETH_FDIR_PBALLOC_64K:
 		/* 8k - 1 signature filters */
 		*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_64K;
 		break;
-	case RTE_FDIR_PBALLOC_128K:
+	case RTE_ETH_FDIR_PBALLOC_128K:
 		/* 16k - 1 signature filters */
 		*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_128K;
 		break;
-	case RTE_FDIR_PBALLOC_256K:
+	case RTE_ETH_FDIR_PBALLOC_256K:
 		/* 32k - 1 signature filters */
 		*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_256K;
 		break;
@@ -807,13 +807,13 @@ ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
 
 static uint32_t
 atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		return ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				PERFECT_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		return ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				PERFECT_BUCKET_128KB_HASH_MASK;
@@ -850,15 +850,15 @@ ixgbe_fdir_check_cmd_complete(struct ixgbe_hw *hw, uint32_t *fdircmd)
  */
 static uint32_t
 atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
 	uint32_t bucket_hash, sig_hash;
 
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		bucket_hash = ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				SIG_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		bucket_hash = ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				SIG_BUCKET_128KB_HASH_MASK;
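
[Editor's note] The renamed pballoc enum is consumed from
dev_conf.fdir_conf, which is a legacy configuration path. A minimal
sketch requesting perfect-match mode; drop queue 127 is assumed here as
the customary ixgbe drop queue:

    #include <rte_ethdev.h>

    /* Request perfect-match flow director in dev_conf (legacy path). */
    static void
    set_fdir_conf(struct rte_eth_conf *conf)
    {
        conf->fdir_conf.mode = RTE_FDIR_MODE_PERFECT;
        conf->fdir_conf.pballoc = RTE_ETH_FDIR_PBALLOC_64K;
        conf->fdir_conf.drop_queue = 127;   /* assumed drop queue */
    }
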
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 27322ab9038a..bdc9d4796c02 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -1259,7 +1259,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
 		return -rte_errno;
 	}
 
-	filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+	filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
 	/**
 	 * grp and e_cid_base are bit fields and only use 14 bits.
 	 * e-tag id is taken as little endian by HW.
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
index e45c5501e6bf..944c9f23809e 100644
--- a/drivers/net/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -392,7 +392,7 @@ ixgbe_crypto_create_session(void *device,
 	aead_xform = &conf->crypto_xform->aead;
 
 	if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 			ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -400,7 +400,7 @@ ixgbe_crypto_create_session(void *device,
 			return -ENOTSUP;
 		}
 	} else {
-		if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+		if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 			ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -633,11 +633,11 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	tx_offloads = dev->data->dev_conf.txmode.offloads;
 
 	/* sanity checks */
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
 		return -1;
 	}
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
 		return -1;
 	}
@@ -657,7 +657,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
 	IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
 		reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
 		if (reg != 0) {
@@ -665,7 +665,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 			return -1;
 		}
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 		IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL,
 				IXGBE_SECTXCTRL_STORE_FORWARD);
 		reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
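
[Editor's note] The Rx/Tx SECURITY offload flags checked here must be
requested by the application before ixgbe_crypto_enable_ipsec() can
succeed. A minimal sketch, which also clears the LRO and KEEP_CRC bits
the sanity checks above reject:

    #include <rte_ethdev.h>

    /* Request inline IPsec processing in both directions. */
    static void
    request_inline_ipsec(struct rte_eth_conf *conf)
    {
        conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
        conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
        /* ixgbe rejects LRO and KEEP_CRC in combination with IPsec. */
        conf->rxmode.offloads &= ~(RTE_ETH_RX_OFFLOAD_TCP_LRO |
            RTE_ETH_RX_OFFLOAD_KEEP_CRC);
    }
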
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 295e5a39b245..9f1bd0a62ba4 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -104,15 +104,15 @@ int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
 	memset(uta_info, 0, sizeof(struct ixgbe_uta_info));
 	hw->mac.mc_filter_type = 0;
 
-	if (vf_num >= ETH_32_POOLS) {
+	if (vf_num >= RTE_ETH_32_POOLS) {
 		nb_queue = 2;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
-	} else if (vf_num >= ETH_16_POOLS) {
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+	} else if (vf_num >= RTE_ETH_16_POOLS) {
 		nb_queue = 4;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
 	} else {
 		nb_queue = 8;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
 	}
 
 	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -263,15 +263,15 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
 	gpie |= IXGBE_GPIE_MSIX_MODE | IXGBE_GPIE_PBA_SUPPORT;
 
 	switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		gcr_ext |= IXGBE_GCR_EXT_VT_MODE_64;
 		gpie |= IXGBE_GPIE_VTMODE_64;
 		break;
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		gcr_ext |= IXGBE_GCR_EXT_VT_MODE_32;
 		gpie |= IXGBE_GPIE_VTMODE_32;
 		break;
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		gcr_ext |= IXGBE_GCR_EXT_VT_MODE_16;
 		gpie |= IXGBE_GPIE_VTMODE_16;
 		break;
@@ -674,29 +674,29 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
 	/* Notify VF of number of DCB traffic classes */
 	eth_conf = &dev->data->dev_conf;
 	switch (eth_conf->txmode.mq_mode) {
-	case ETH_MQ_TX_NONE:
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_NONE:
+	case RTE_ETH_MQ_TX_DCB:
 		PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
 			", but its tx mode = %d\n", vf,
 			eth_conf->txmode.mq_mode);
 		return -1;
 
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
 		switch (vmdq_dcb_tx_conf->nb_queue_pools) {
-		case ETH_16_POOLS:
-			num_tcs = ETH_8_TCS;
+		case RTE_ETH_16_POOLS:
+			num_tcs = RTE_ETH_8_TCS;
 			break;
-		case ETH_32_POOLS:
-			num_tcs = ETH_4_TCS;
+		case RTE_ETH_32_POOLS:
+			num_tcs = RTE_ETH_4_TCS;
 			break;
 		default:
 			return -1;
 		}
 		break;
 
-	/* ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
-	case ETH_MQ_TX_VMDQ_ONLY:
+	/* RTE_ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
+	case RTE_ETH_MQ_TX_VMDQ_ONLY:
 		hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 		vmvir = IXGBE_READ_REG(hw, IXGBE_VMVIR(vf));
 		vlana = vmvir & IXGBE_VMVIR_VLANA_MASK;
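
[Editor's note] For clarity, the vf_num to pool-mode mapping used in
ixgbe_pf_host_init() boils down to the following, restated here as a
standalone sketch (2, 4 or 8 queues per pool):

    #include <rte_ethdev.h>

    /* Derive the VMDq pool layout from the VF count. */
    static enum rte_eth_nb_pools
    pools_for_vf_num(uint16_t vf_num, uint16_t *nb_queue)
    {
        if (vf_num >= RTE_ETH_32_POOLS) {
            *nb_queue = 2;
            return RTE_ETH_64_POOLS;
        }
        if (vf_num >= RTE_ETH_16_POOLS) {
            *nb_queue = 4;
            return RTE_ETH_32_POOLS;
        }
        *nb_queue = 8;
        return RTE_ETH_16_POOLS;
    }
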
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index a51450fe5b82..aa3a406c204d 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2592,26 +2592,26 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM   |
-		DEV_TX_OFFLOAD_SCTP_CKSUM  |
-		DEV_TX_OFFLOAD_TCP_TSO     |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO     |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	if (hw->mac.type == ixgbe_mac_82599EB ||
 	    hw->mac.type == ixgbe_mac_X540)
-		tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 
 	if (hw->mac.type == ixgbe_mac_X550 ||
 	    hw->mac.type == ixgbe_mac_X550EM_x ||
 	    hw->mac.type == ixgbe_mac_X550EM_a)
-		tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
 #endif
 	return tx_offload_capa;
 }
@@ -2780,7 +2780,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_deferred_start = tx_conf->tx_deferred_start;
 #ifdef RTE_LIB_SECURITY
 	txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SECURITY);
+			RTE_ETH_TX_OFFLOAD_SECURITY);
 #endif
 
 	/*
@@ -3021,7 +3021,7 @@ ixgbe_get_rx_queue_offloads(struct rte_eth_dev *dev)
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (hw->mac.type != ixgbe_mac_82598EB)
-		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	return offloads;
 }
@@ -3032,19 +3032,19 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 	uint64_t offloads;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	offloads = DEV_RX_OFFLOAD_IPV4_CKSUM  |
-		   DEV_RX_OFFLOAD_UDP_CKSUM   |
-		   DEV_RX_OFFLOAD_TCP_CKSUM   |
-		   DEV_RX_OFFLOAD_KEEP_CRC    |
-		   DEV_RX_OFFLOAD_VLAN_FILTER |
-		   DEV_RX_OFFLOAD_SCATTER |
-		   DEV_RX_OFFLOAD_RSS_HASH;
+	offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+		   RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+		   RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		   RTE_ETH_RX_OFFLOAD_SCATTER |
+		   RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (hw->mac.type == ixgbe_mac_82598EB)
-		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	if (ixgbe_is_vf(dev) == 0)
-		offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
 	/*
 	 * RSC is only supported by 82599 and x540 PF devices in a non-SR-IOV
@@ -3054,20 +3054,20 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 	     hw->mac.type == ixgbe_mac_X540 ||
 	     hw->mac.type == ixgbe_mac_X550) &&
 	    !RTE_ETH_DEV_SRIOV(dev).active)
-		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+		offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 
 	if (hw->mac.type == ixgbe_mac_82599EB ||
 	    hw->mac.type == ixgbe_mac_X540)
-		offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
 
 	if (hw->mac.type == ixgbe_mac_X550 ||
 	    hw->mac.type == ixgbe_mac_X550EM_x ||
 	    hw->mac.type == ixgbe_mac_X550EM_a)
-		offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		offloads |= DEV_RX_OFFLOAD_SECURITY;
+		offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
 #endif
 
 	return offloads;
@@ -3122,7 +3122,7 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
 		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -3507,23 +3507,23 @@ ixgbe_hw_rss_hash_set(struct ixgbe_hw *hw, struct rte_eth_rss_conf *rss_conf)
 	/* Set configured hashing protocols in MRQC register */
 	rss_hf = rss_conf->rss_hf;
 	mrqc = IXGBE_MRQC_RSSEN; /* Enable RSS */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_TCP;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6;
-	if (rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_EX)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_TCP;
-	if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_UDP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_UDP;
-	if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP;
 	IXGBE_WRITE_REG(hw, mrqc_reg, mrqc);
 }
@@ -3605,23 +3605,23 @@ ixgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	}
 	rss_hf = 0;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP)
-		rss_hf |= ETH_RSS_IPV6_UDP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
 	rss_conf->rss_hf = rss_hf;
 	return 0;
 }
@@ -3697,12 +3697,12 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
 	num_pools = cfg->nb_queue_pools;
 	/* Check we have a valid number of pools */
-	if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+	if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
 		ixgbe_rss_disable(dev);
 		return;
 	}
 	/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
-	nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+	nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
 
 	/*
 	 * RXPBSIZE
@@ -3727,7 +3727,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
 	}
 	/* zero alloc all unused TCs */
-	for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		uint32_t rxpbsize = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(i));
 
 		rxpbsize &= (~(0x3FF << IXGBE_RXPBSIZE_SHIFT));
@@ -3736,7 +3736,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	}
 
 	/* MRQC: enable vmdq and dcb */
-	mrqc = (num_pools == ETH_16_POOLS) ?
+	mrqc = (num_pools == RTE_ETH_16_POOLS) ?
 		IXGBE_MRQC_VMDQRT8TCEN : IXGBE_MRQC_VMDQRT4TCEN;
 	IXGBE_WRITE_REG(hw, IXGBE_MRQC, mrqc);
 
@@ -3752,7 +3752,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 
 	/* RTRUP2TC: mapping user priorities to traffic classes (TCs) */
 	queue_mapping = 0;
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 		/*
 		 * mapping is done with 3 bits per priority,
 		 * so shift by i*3 each time
@@ -3776,7 +3776,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 
 	/* VFRE: pool enabling for receive - 16 or 32 */
 	IXGBE_WRITE_REG(hw, IXGBE_VFRE(0),
-			num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+			num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	/*
 	 * MPSAR - allow pools to read specific mac addresses
@@ -3858,7 +3858,7 @@ ixgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
 	if (hw->mac.type != ixgbe_mac_82598EB)
 		/*PF VF Transmit Enable*/
 		IXGBE_WRITE_REG(hw, IXGBE_VFTE(0),
-			vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+			vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	/*Configure general DCB TX parameters*/
 	ixgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3874,12 +3874,12 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
-	if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3889,7 +3889,7 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3907,12 +3907,12 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
-	if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3922,7 +3922,7 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3949,7 +3949,7 @@ ixgbe_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3976,7 +3976,7 @@ ixgbe_dcb_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -4145,7 +4145,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
 
 	switch (dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_VMDQ_DCB:
+	case RTE_ETH_MQ_RX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		if (hw->mac.type != ixgbe_mac_82598EB) {
 			config_dcb_rx = DCB_RX_CONFIG;
@@ -4158,8 +4158,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			ixgbe_vmdq_dcb_configure(dev);
 		}
 		break;
-	case ETH_MQ_RX_DCB:
-	case ETH_MQ_RX_DCB_RSS:
+	case RTE_ETH_MQ_RX_DCB:
+	case RTE_ETH_MQ_RX_DCB_RSS:
 		dcb_config->vt_mode = false;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -4172,7 +4172,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		break;
 	}
 	switch (dev->data->dev_conf.txmode.mq_mode) {
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/* get DCB and VT TX configuration parameters
@@ -4183,7 +4183,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		ixgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
 		break;
 
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_DCB:
 		dcb_config->vt_mode = false;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/*get DCB TX configuration parameters from rte_eth_conf*/
@@ -4199,15 +4199,15 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	nb_tcs = dcb_config->num_tcs.pfc_tcs;
 	/* Unpack map */
 	ixgbe_dcb_unpack_map_cee(dcb_config, IXGBE_DCB_RX_CONFIG, map);
-	if (nb_tcs == ETH_4_TCS) {
+	if (nb_tcs == RTE_ETH_4_TCS) {
 		/* Avoid un-configured priority mapping to TC0 */
 		uint8_t j = 4;
 		uint8_t mask = 0xFF;
 
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
 			mask = (uint8_t)(mask & (~(1 << map[i])));
 		for (i = 0; mask && (i < IXGBE_DCB_MAX_TRAFFIC_CLASS); i++) {
-			if ((mask & 0x1) && (j < ETH_DCB_NUM_USER_PRIORITIES))
+			if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
 				map[j++] = i;
 			mask >>= 1;
 		}
@@ -4257,9 +4257,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
 		}
 		/* zero alloc all unused TCs */
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
-		}
 	}
 	if (config_dcb_tx) {
 		/* Only support an equally distributed
@@ -4273,7 +4272,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), txpbthresh);
 		}
 		/* Clear unused TCs, if any, to zero buffer size*/
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), 0);
 			IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), 0);
 		}
@@ -4309,7 +4308,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	ixgbe_dcb_config_tc_stats_82599(hw, dcb_config);
 
 	/* Check if the PFC is supported */
-	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
 		for (i = 0; i < nb_tcs; i++) {
 			/*
@@ -4323,7 +4322,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			tc->pfc = ixgbe_dcb_pfc_enabled;
 		}
 		ixgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
-		if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+		if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
 			pfc_en &= 0x0F;
 		ret = ixgbe_dcb_config_pfc(hw, pfc_en, map);
 	}
@@ -4344,12 +4343,12 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	/* check support mq_mode for DCB */
-	if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
-	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
-	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS))
+	if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
 		return;
 
-	if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+	if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
 		return;
 
 	/** Configure DCB hardware **/
@@ -4405,7 +4404,7 @@ ixgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 
 	/* VFRE: pool enabling for receive - 64 */
 	IXGBE_WRITE_REG(hw, IXGBE_VFRE(0), UINT32_MAX);
-	if (num_pools == ETH_64_POOLS)
+	if (num_pools == RTE_ETH_64_POOLS)
 		IXGBE_WRITE_REG(hw, IXGBE_VFRE(1), UINT32_MAX);
 
 	/*
@@ -4526,11 +4525,11 @@ ixgbe_config_vf_rss(struct rte_eth_dev *dev)
 	mrqc = IXGBE_READ_REG(hw, IXGBE_MRQC);
 	mrqc &= ~IXGBE_MRQC_MRQE_MASK;
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		mrqc |= IXGBE_MRQC_VMDQRSS64EN;
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		mrqc |= IXGBE_MRQC_VMDQRSS32EN;
 		break;
 
@@ -4551,17 +4550,17 @@ ixgbe_config_vf_default(struct rte_eth_dev *dev)
 		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		IXGBE_WRITE_REG(hw, IXGBE_MRQC,
 			IXGBE_MRQC_VMDQEN);
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		IXGBE_WRITE_REG(hw, IXGBE_MRQC,
 			IXGBE_MRQC_VMDQRT4TCEN);
 		break;
 
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		IXGBE_WRITE_REG(hw, IXGBE_MRQC,
 			IXGBE_MRQC_VMDQRT8TCEN);
 		break;
@@ -4588,21 +4587,21 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * any DCB/RSS w/o VMDq multi-queue setting
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_DCB_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			ixgbe_rss_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
 			ixgbe_vmdq_dcb_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
 			ixgbe_vmdq_rx_hw_configure(dev);
 			break;
 
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_NONE:
 		default:
 			/* if mq_mode is none, disable rss mode.*/
 			ixgbe_rss_disable(dev);
@@ -4613,18 +4612,18 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * Support RSS together with SRIOV.
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			ixgbe_config_vf_rss(dev);
 			break;
-		case ETH_MQ_RX_VMDQ_DCB:
-		case ETH_MQ_RX_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_DCB:
 		/* In SRIOV, the configuration is the same as VMDq case */
 			ixgbe_vmdq_dcb_configure(dev);
 			break;
 		/* DCB/RSS together with SRIOV is not supported */
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
-		case ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
 			PMD_INIT_LOG(ERR,
 				"Could not support DCB/RSS with VMDq & SRIOV");
 			return -1;
@@ -4658,7 +4657,7 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV inactive scheme
 		 * any DCB w/o VMDq multi-queue setting
 		 */
-		if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+		if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
 			ixgbe_vmdq_tx_hw_configure(hw);
 		else {
 			mtqc = IXGBE_MTQC_64Q_1PB;
@@ -4671,13 +4670,13 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV active scheme
 		 * FIXME if support DCB together with VMDq & SRIOV
 		 */
-		case ETH_64_POOLS:
+		case RTE_ETH_64_POOLS:
 			mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_64VF;
 			break;
-		case ETH_32_POOLS:
+		case RTE_ETH_32_POOLS:
 			mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_32VF;
 			break;
-		case ETH_16_POOLS:
+		case RTE_ETH_16_POOLS:
 			mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_RT_ENA |
 				IXGBE_MTQC_8TC_8TQ;
 			break;
@@ -4885,7 +4884,7 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
 		rxq->rx_using_sse = rx_using_sse;
 #ifdef RTE_LIB_SECURITY
 		rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_SECURITY);
+				RTE_ETH_RX_OFFLOAD_SECURITY);
 #endif
 	}
 }
@@ -4913,10 +4912,10 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* Sanity check */
 	dev->dev_ops->dev_infos_get(dev, &dev_info);
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		rsc_capable = true;
 
-	if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
 				   "support it");
 		return -EINVAL;
@@ -4924,8 +4923,8 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* RSC global configuration (chapter 4.6.7.2.1 of 82599 Spec) */
 
-	if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
-	     (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+	     (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		/*
 		 * According to chapter 4.6.7.2.1 of the Spec Rev.
 		 * 3.0, RSC configuration requires HW CRC stripping being
@@ -4939,7 +4938,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* RFCTL configuration  */
 	rfctl = IXGBE_READ_REG(hw, IXGBE_RFCTL);
-	if ((rsc_capable) && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if ((rsc_capable) && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		rfctl &= ~IXGBE_RFCTL_RSC_DIS;
 	else
 		rfctl |= IXGBE_RFCTL_RSC_DIS;
@@ -4948,7 +4947,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 	IXGBE_WRITE_REG(hw, IXGBE_RFCTL, rfctl);
 
 	/* If LRO hasn't been requested - we are done here. */
-	if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		return 0;
 
 	/* Set RDRXCTL.RSCACKC bit */
@@ -5070,7 +5069,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Configure CRC stripping, if any.
 	 */
 	hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		hlreg0 &= ~IXGBE_HLREG0_RXCRCSTRP;
 	else
 		hlreg0 |= IXGBE_HLREG0_RXCRCSTRP;
@@ -5107,7 +5106,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first .
 	 */
-	rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
@@ -5116,7 +5115,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 * call to configure.
 		 */
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -5158,11 +5157,11 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 		/* It adds dual VLAN length for supporting dual VLAN */
 		if (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
 			dev->data->scattered_rx = 1;
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		dev->data->scattered_rx = 1;
 
 	/*
@@ -5177,7 +5176,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 */
 	rxcsum = IXGBE_READ_REG(hw, IXGBE_RXCSUM);
 	rxcsum |= IXGBE_RXCSUM_PCSD;
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= IXGBE_RXCSUM_IPPCSE;
 	else
 		rxcsum &= ~IXGBE_RXCSUM_IPPCSE;
@@ -5187,7 +5186,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	if (hw->mac.type == ixgbe_mac_82599EB ||
 	    hw->mac.type == ixgbe_mac_X540) {
 		rdrxctl = IXGBE_READ_REG(hw, IXGBE_RDRXCTL);
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rdrxctl &= ~IXGBE_RDRXCTL_CRCSTRIP;
 		else
 			rdrxctl |= IXGBE_RDRXCTL_CRCSTRIP;
@@ -5393,9 +5392,9 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
 
 #ifdef RTE_LIB_SECURITY
 	if ((dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_SECURITY) ||
+			RTE_ETH_RX_OFFLOAD_SECURITY) ||
 		(dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SECURITY)) {
+			RTE_ETH_TX_OFFLOAD_SECURITY)) {
 		ret = ixgbe_crypto_enable_ipsec(dev);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR,
@@ -5681,7 +5680,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first.
 	 */
-	rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
@@ -5730,7 +5729,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 		buf_size = (uint16_t) ((srrctl & IXGBE_SRRCTL_BSIZEPKT_MASK) <<
 				       IXGBE_SRRCTL_BSIZEPKT_SHIFT);
 
-		if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
 		    /* It adds dual VLAN length for supporting dual VLAN */
 		    (frame_size + 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
 			if (!dev->data->scattered_rx)
@@ -5738,8 +5737,8 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 			dev->data->scattered_rx = 1;
 		}
 
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	/* Set RQPL for VF RSS according to max Rx queue */
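
For reference, the renamed flags are drop-in replacements: application code that
configures these offloads changes only the spelling. A minimal port-setup sketch
with the new names (hypothetical helper, error handling trimmed, assumes the port
supports KEEP_CRC):

#include <string.h>
#include <rte_ethdev.h>

static int
configure_port(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
	conf.rxmode.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM |
			       RTE_ETH_RX_OFFLOAD_KEEP_CRC;
	conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;
	/* Some PMDs also fold RTE_ETH_RX_OFFLOAD_RSS_HASH in when the RSS
	 * flag is set in mq_mode (see e.g. the liquidio hunk below). */
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}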
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index a1764f2b08af..668a5b9814f6 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -133,7 +133,7 @@ struct ixgbe_rx_queue {
 	uint8_t             rx_udp_csum_zero_err;
 	/** flags to set in mbuf when a vlan is detected. */
 	uint64_t            vlan_flags;
-	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
 	/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
 	struct rte_mbuf fake_mbuf;
 	/** hold packets to return to application */
@@ -227,7 +227,7 @@ struct ixgbe_tx_queue {
 	uint8_t             pthresh;       /**< Prefetch threshold register. */
 	uint8_t             hthresh;       /**< Host threshold register. */
 	uint8_t             wthresh;       /**< Write-back threshold reg. */
-	uint64_t offloads; /**< Tx offload flags of DEV_TX_OFFLOAD_* */
+	uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 	uint32_t            ctx_curr;      /**< Hardware context states. */
 	/** Hardware context0 history. */
 	struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 005e60668a8b..cd34d4098785 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -277,7 +277,7 @@ static inline int
 ixgbe_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
 {
 #ifndef RTE_LIBRTE_IEEE1588
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 
 	/* no fdir support */
 	if (fconf->mode != RTE_FDIR_MODE_NONE)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index ae03ea6e9db3..ac8976062fa7 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -119,14 +119,14 @@ ixgbe_tc_nb_get(struct rte_eth_dev *dev)
 	uint8_t nb_tcs = 0;
 
 	eth_conf = &dev->data->dev_conf;
-	if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+	if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 		nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
-	} else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	} else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
-		    ETH_32_POOLS)
-			nb_tcs = ETH_4_TCS;
+		    RTE_ETH_32_POOLS)
+			nb_tcs = RTE_ETH_4_TCS;
 		else
-			nb_tcs = ETH_8_TCS;
+			nb_tcs = RTE_ETH_8_TCS;
 	} else {
 		nb_tcs = 1;
 	}
@@ -375,10 +375,10 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 	if (vf_num) {
 		/* no DCB */
 		if (nb_tcs == 1) {
-			if (vf_num >= ETH_32_POOLS) {
+			if (vf_num >= RTE_ETH_32_POOLS) {
 				*nb = 2;
 				*base = vf_num * 2;
-			} else if (vf_num >= ETH_16_POOLS) {
+			} else if (vf_num >= RTE_ETH_16_POOLS) {
 				*nb = 4;
 				*base = vf_num * 4;
 			} else {
@@ -392,7 +392,7 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 		}
 	} else {
 		/* VT off */
-		if (nb_tcs == ETH_8_TCS) {
+		if (nb_tcs == RTE_ETH_8_TCS) {
 			switch (tc_node_no) {
 			case 0:
 				*base = 0;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index 9fa75984fb31..bd528ff346c7 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -58,20 +58,20 @@ ixgbe_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	/**< Maximum number of MAC addresses. */
 
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |	DEV_RX_OFFLOAD_UDP_CKSUM  |
-		DEV_RX_OFFLOAD_TCP_CKSUM;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	RTE_ETH_RX_OFFLOAD_UDP_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 	/**< Device RX offload capabilities. */
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	/**< Device TX offload capabilities. */
 
 	dev_info->speed_capa =
 		representor->pf_ethdev->data->dev_link.link_speed;
-	/**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+	/**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
 
 	dev_info->switch_info.name =
 		representor->pf_ethdev->device->name;
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.c b/drivers/net/ixgbe/rte_pmd_ixgbe.c
index cf089cd9aee5..9729f8575f53 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.c
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.c
@@ -303,10 +303,10 @@ rte_pmd_ixgbe_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
 	 */
 	if (hw->mac.type == ixgbe_mac_82598EB)
 		queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
-				  ETH_16_POOLS;
+				  RTE_ETH_16_POOLS;
 	else
 		queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
-				  ETH_64_POOLS;
+				  RTE_ETH_64_POOLS;
 
 	for (q = 0; q < queues_per_pool; q++)
 		(*dev->dev_ops->vlan_strip_queue_set)(dev,
@@ -736,14 +736,14 @@ rte_pmd_ixgbe_set_tc_bw_alloc(uint16_t port,
 	bw_conf = IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
 	eth_conf = &dev->data->dev_conf;
 
-	if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+	if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 		nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
-	} else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	} else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
-		    ETH_32_POOLS)
-			nb_tcs = ETH_4_TCS;
+		    RTE_ETH_32_POOLS)
+			nb_tcs = RTE_ETH_4_TCS;
 		else
-			nb_tcs = ETH_8_TCS;
+			nb_tcs = RTE_ETH_8_TCS;
 	} else {
 		nb_tcs = 1;
 	}
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.h b/drivers/net/ixgbe/rte_pmd_ixgbe.h
index 90fc8160b1f8..eef6f6661c74 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.h
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.h
@@ -285,8 +285,8 @@ int rte_pmd_ixgbe_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an,
 * @param rx_mask
 *    The RX mode mask, which is one or more of accepting Untagged Packets,
 *    packets that match the PFUTA table, Broadcast and Multicast Promiscuous.
-*    ETH_VMDQ_ACCEPT_UNTAG,ETH_VMDQ_ACCEPT_HASH_UC,
-*    ETH_VMDQ_ACCEPT_BROADCAST and ETH_VMDQ_ACCEPT_MULTICAST will be used
+*    RTE_ETH_VMDQ_ACCEPT_UNTAG, RTE_ETH_VMDQ_ACCEPT_HASH_UC,
+*    RTE_ETH_VMDQ_ACCEPT_BROADCAST and RTE_ETH_VMDQ_ACCEPT_MULTICAST will be used
 *    in rx_mode.
 * @param on
 *    1 - Enable a VF RX mode.
diff --git a/drivers/net/kni/rte_eth_kni.c b/drivers/net/kni/rte_eth_kni.c
index cb9f7c8e8200..c428caf44189 100644
--- a/drivers/net/kni/rte_eth_kni.c
+++ b/drivers/net/kni/rte_eth_kni.c
@@ -61,10 +61,10 @@ struct pmd_internals {
 };
 
 static const struct rte_eth_link pmd_link = {
-		.link_speed = ETH_SPEED_NUM_10G,
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_status = ETH_LINK_DOWN,
-		.link_autoneg = ETH_LINK_FIXED,
+		.link_speed = RTE_ETH_SPEED_NUM_10G,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_status = RTE_ETH_LINK_DOWN,
+		.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 static int is_kni_initialized;
 
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 0fc3f0ab66a9..90ffe31b9fda 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -384,15 +384,15 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
 	case PCI_SUBSYS_DEV_ID_CN2360_210SVPN3:
 	case PCI_SUBSYS_DEV_ID_CN2350_210SVPT:
 	case PCI_SUBSYS_DEV_ID_CN2360_210SVPT:
-		devinfo->speed_capa = ETH_LINK_SPEED_10G;
+		devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
 		break;
 	/* CN23xx 25G cards */
 	case PCI_SUBSYS_DEV_ID_CN2350_225:
 	case PCI_SUBSYS_DEV_ID_CN2360_225:
-		devinfo->speed_capa = ETH_LINK_SPEED_25G;
+		devinfo->speed_capa = RTE_ETH_LINK_SPEED_25G;
 		break;
 	default:
-		devinfo->speed_capa = ETH_LINK_SPEED_10G;
+		devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
 		lio_dev_err(lio_dev,
 			    "Unknown CN23XX subsystem device id. Setting 10G as default link speed.\n");
 		return -EINVAL;
@@ -406,27 +406,27 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
 
 	devinfo->max_mac_addrs = 1;
 
-	devinfo->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM		|
-				    DEV_RX_OFFLOAD_UDP_CKSUM		|
-				    DEV_RX_OFFLOAD_TCP_CKSUM		|
-				    DEV_RX_OFFLOAD_VLAN_STRIP		|
-				    DEV_RX_OFFLOAD_RSS_HASH);
-	devinfo->tx_offload_capa = (DEV_TX_OFFLOAD_IPV4_CKSUM		|
-				    DEV_TX_OFFLOAD_UDP_CKSUM		|
-				    DEV_TX_OFFLOAD_TCP_CKSUM		|
-				    DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM);
+	devinfo->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM		|
+				    RTE_ETH_RX_OFFLOAD_UDP_CKSUM		|
+				    RTE_ETH_RX_OFFLOAD_TCP_CKSUM		|
+				    RTE_ETH_RX_OFFLOAD_VLAN_STRIP		|
+				    RTE_ETH_RX_OFFLOAD_RSS_HASH);
+	devinfo->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
+				    RTE_ETH_TX_OFFLOAD_UDP_CKSUM		|
+				    RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
+				    RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM);
 
 	devinfo->rx_desc_lim = lio_rx_desc_lim;
 	devinfo->tx_desc_lim = lio_tx_desc_lim;
 
 	devinfo->reta_size = LIO_RSS_MAX_TABLE_SZ;
 	devinfo->hash_key_size = LIO_RSS_MAX_KEY_SZ;
-	devinfo->flow_type_rss_offloads = (ETH_RSS_IPV4			|
-					   ETH_RSS_NONFRAG_IPV4_TCP	|
-					   ETH_RSS_IPV6			|
-					   ETH_RSS_NONFRAG_IPV6_TCP	|
-					   ETH_RSS_IPV6_EX		|
-					   ETH_RSS_IPV6_TCP_EX);
+	devinfo->flow_type_rss_offloads = (RTE_ETH_RSS_IPV4			|
+					   RTE_ETH_RSS_NONFRAG_IPV4_TCP	|
+					   RTE_ETH_RSS_IPV6			|
+					   RTE_ETH_RSS_NONFRAG_IPV6_TCP	|
+					   RTE_ETH_RSS_IPV6_EX		|
+					   RTE_ETH_RSS_IPV6_TCP_EX);
 	return 0;
 }
 
@@ -519,10 +519,10 @@ lio_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
 	rss_param->param.flags &= ~LIO_RSS_PARAM_ITABLE_UNCHANGED;
 	rss_param->param.itablesize = LIO_RSS_MAX_TABLE_SZ;
 
-	for (i = 0; i < (reta_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask) & ((uint64_t)1 << j)) {
-				index = (i * RTE_RETA_GROUP_SIZE) + j;
+				index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
 				rss_state->itable[index] = reta_conf[i].reta[j];
 			}
 		}
@@ -562,12 +562,12 @@ lio_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	num = reta_size / RTE_RETA_GROUP_SIZE;
+	num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < num; i++) {
 		memcpy(reta_conf->reta,
-		       &rss_state->itable[i * RTE_RETA_GROUP_SIZE],
-		       RTE_RETA_GROUP_SIZE);
+		       &rss_state->itable[i * RTE_ETH_RETA_GROUP_SIZE],
+		       RTE_ETH_RETA_GROUP_SIZE);
 		reta_conf++;
 	}
 
@@ -595,17 +595,17 @@ lio_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
 		memcpy(hash_key, rss_state->hash_key, rss_state->hash_key_size);
 
 	if (rss_state->ip)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (rss_state->tcp_hash)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (rss_state->ipv6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (rss_state->ipv6_tcp_hash)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (rss_state->ipv6_ex)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (rss_state->ipv6_tcp_ex_hash)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 
 	rss_conf->rss_hf = rss_hf;
 
@@ -673,42 +673,42 @@ lio_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
 		if (rss_state->hash_disable)
 			return -EINVAL;
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
 			hashinfo |= LIO_RSS_HASH_IPV4;
 			rss_state->ip = 1;
 		} else {
 			rss_state->ip = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 			hashinfo |= LIO_RSS_HASH_TCP_IPV4;
 			rss_state->tcp_hash = 1;
 		} else {
 			rss_state->tcp_hash = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV6) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6) {
 			hashinfo |= LIO_RSS_HASH_IPV6;
 			rss_state->ipv6 = 1;
 		} else {
 			rss_state->ipv6 = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 			hashinfo |= LIO_RSS_HASH_TCP_IPV6;
 			rss_state->ipv6_tcp_hash = 1;
 		} else {
 			rss_state->ipv6_tcp_hash = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV6_EX) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX) {
 			hashinfo |= LIO_RSS_HASH_IPV6_EX;
 			rss_state->ipv6_ex = 1;
 		} else {
 			rss_state->ipv6_ex = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) {
 			hashinfo |= LIO_RSS_HASH_TCP_IPV6_EX;
 			rss_state->ipv6_tcp_ex_hash = 1;
 		} else {
@@ -757,7 +757,7 @@ lio_dev_udp_tunnel_add(struct rte_eth_dev *eth_dev,
 	if (udp_tnl == NULL)
 		return -EINVAL;
 
-	if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+	if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
 		lio_dev_err(lio_dev, "Unsupported tunnel type\n");
 		return -1;
 	}
@@ -814,7 +814,7 @@ lio_dev_udp_tunnel_del(struct rte_eth_dev *eth_dev,
 	if (udp_tnl == NULL)
 		return -EINVAL;
 
-	if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+	if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
 		lio_dev_err(lio_dev, "Unsupported tunnel type\n");
 		return -1;
 	}
@@ -912,10 +912,10 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
 
 	/* Initialize */
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	/* Return what we found */
 	if (lio_dev->linfo.link.s.link_up == 0) {
@@ -923,18 +923,18 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
 		return rte_eth_linkstatus_set(eth_dev, &link);
 	}
 
-	link.link_status = ETH_LINK_UP; /* Interface is up */
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP; /* Interface is up */
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	switch (lio_dev->linfo.link.s.speed) {
 	case LIO_LINK_SPEED_10000:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case LIO_LINK_SPEED_25000:
-		link.link_speed = ETH_SPEED_NUM_25G;
+		link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	default:
-		link.link_speed = ETH_SPEED_NUM_NONE;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	}
 
 	return rte_eth_linkstatus_set(eth_dev, &link);
@@ -1086,8 +1086,8 @@ lio_dev_rss_configure(struct rte_eth_dev *eth_dev)
 
 		q_idx = (uint8_t)((eth_dev->data->nb_rx_queues > 1) ?
 				  i % eth_dev->data->nb_rx_queues : 0);
-		conf_idx = i / RTE_RETA_GROUP_SIZE;
-		reta_idx = i % RTE_RETA_GROUP_SIZE;
+		conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
 		reta_conf[conf_idx].reta[reta_idx] = q_idx;
 		reta_conf[conf_idx].mask |= ((uint64_t)1 << reta_idx);
 	}
@@ -1103,10 +1103,10 @@ lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rss_conf rss_conf;
 
 	switch (eth_dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		lio_dev_rss_configure(eth_dev);
 		break;
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 	/* if mq_mode is none, disable rss mode. */
 	default:
 		memset(&rss_conf, 0, sizeof(rss_conf));
@@ -1484,7 +1484,7 @@ lio_dev_set_link_up(struct rte_eth_dev *eth_dev)
 	}
 
 	lio_dev->linfo.link.s.link_up = 1;
-	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -1505,11 +1505,11 @@ lio_dev_set_link_down(struct rte_eth_dev *eth_dev)
 	}
 
 	lio_dev->linfo.link.s.link_up = 0;
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	if (lio_send_rx_ctrl_cmd(eth_dev, 0)) {
 		lio_dev->linfo.link.s.link_up = 1;
-		eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+		eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 		lio_dev_err(lio_dev, "Unable to set Link Down\n");
 		return -1;
 	}
@@ -1721,9 +1721,9 @@ lio_dev_configure(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Inform firmware about change in number of queues to use.
 	 * Disable IO queues and reset registers for re-configuration.
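
The reta_conf indexing above is the standard 64-entry group layout: flat table
index i maps to group i / RTE_ETH_RETA_GROUP_SIZE and slot
i % RTE_ETH_RETA_GROUP_SIZE, so entry 130 lands in reta_conf[2].reta[2]. A
minimal application-side sketch with the renamed constant (hypothetical helper,
assumes reta_size <= 256, error handling trimmed):

#include <string.h>
#include <rte_ethdev.h>

/* Spread reta_size entries round-robin across nb_rxq Rx queues. */
static int
setup_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_rxq)
{
	struct rte_eth_rss_reta_entry64 reta_conf[4];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < reta_size; i++) {
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t pos = i % RTE_ETH_RETA_GROUP_SIZE;

		reta_conf[idx].mask |= (uint64_t)1 << pos; /* entry valid */
		reta_conf[idx].reta[pos] = i % nb_rxq;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
}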
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index 364e818d65c1..8533e39f6957 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -525,7 +525,7 @@ memif_disconnect(struct rte_eth_dev *dev)
 	int i;
 	int ret;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
 	pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTED;
 
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 980150293e86..9deb7a5f1360 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -55,10 +55,10 @@ static const char * const valid_arguments[] = {
 };
 
 static const struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_AUTONEG
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_AUTONEG
 };
 
 #define MEMIF_MP_SEND_REGION		"memif_mp_send_region"
@@ -199,7 +199,7 @@ memif_dev_info(struct rte_eth_dev *dev __rte_unused, struct rte_eth_dev_info *de
 	dev_info->max_rx_queues = ETH_MEMIF_MAX_NUM_Q_PAIRS;
 	dev_info->max_tx_queues = ETH_MEMIF_MAX_NUM_Q_PAIRS;
 	dev_info->min_rx_bufsize = 0;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -1219,7 +1219,7 @@ memif_connect(struct rte_eth_dev *dev)
 
 		pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
 		pmd->flags |= ETH_MEMIF_FLAG_CONNECTED;
-		dev->data->dev_link.link_status = ETH_LINK_UP;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	}
 	MIF_LOG(INFO, "Connected.");
 	return 0;
@@ -1381,10 +1381,10 @@ memif_link_update(struct rte_eth_dev *dev,
 
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		proc_private = dev->process_private;
-		if (dev->data->dev_link.link_status == ETH_LINK_UP &&
+		if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP &&
 				proc_private->regions_num == 0) {
 			memif_mp_request_regions(dev);
-		} else if (dev->data->dev_link.link_status == ETH_LINK_DOWN &&
+		} else if (dev->data->dev_link.link_status == RTE_ETH_LINK_DOWN &&
 				proc_private->regions_num > 0) {
 			memif_free_regions(dev);
 		}
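
Both memif and kni above only rename the constants in their static pmd_link;
from the application side the link struct is read the same way. A sketch
(hypothetical helper, error handling trimmed):

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	/* Non-blocking query; the PMD fills in the RTE_ETH_LINK_* values. */
	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;
	printf("port %u: %s, %u Mbps, %s-duplex\n", port_id,
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
	       link.link_speed,
	       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ? "full" : "half");
}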
diff --git a/drivers/net/mlx4/mlx4_ethdev.c b/drivers/net/mlx4/mlx4_ethdev.c
index 783ff94dce8d..d606ec8ca76d 100644
--- a/drivers/net/mlx4/mlx4_ethdev.c
+++ b/drivers/net/mlx4/mlx4_ethdev.c
@@ -657,11 +657,11 @@ mlx4_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 	info->if_index = priv->if_index;
 	info->hash_key_size = MLX4_RSS_HASH_KEY_SIZE;
 	info->speed_capa =
-			ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_10G |
-			ETH_LINK_SPEED_20G |
-			ETH_LINK_SPEED_40G |
-			ETH_LINK_SPEED_56G;
+			RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_20G |
+			RTE_ETH_LINK_SPEED_40G |
+			RTE_ETH_LINK_SPEED_56G;
 	info->flow_type_rss_offloads = mlx4_conv_rss_types(priv, 0, 1);
 
 	return 0;
@@ -821,13 +821,13 @@ mlx4_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 	}
 	link_speed = ethtool_cmd_speed(&edata);
 	if (link_speed == -1)
-		dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	else
 		dev_link.link_speed = link_speed;
 	dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
-				ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+				RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
 	dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				  ETH_LINK_SPEED_FIXED);
+				  RTE_ETH_LINK_SPEED_FIXED);
 	dev->data->dev_link = dev_link;
 	return 0;
 }
@@ -863,13 +863,13 @@ mlx4_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 	fc_conf->autoneg = ethpause.autoneg;
 	if (ethpause.rx_pause && ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (ethpause.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	ret = 0;
 out:
 	MLX4_ASSERT(ret >= 0);
@@ -899,13 +899,13 @@ mlx4_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	ifr.ifr_data = (void *)&ethpause;
 	ethpause.autoneg = fc_conf->autoneg;
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_RX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
 		ethpause.rx_pause = 1;
 	else
 		ethpause.rx_pause = 0;
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_TX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
 		ethpause.tx_pause = 1;
 	else
 		ethpause.tx_pause = 0;
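
mlx4 maps the ethtool rx_pause/tx_pause pair onto the four RTE_ETH_FC_* modes in
both directions; an application drives the same mapping through the generic API.
A sketch (hypothetical helper, error handling trimmed):

#include <rte_ethdev.h>

static int
enable_full_pause(uint16_t port_id)
{
	struct rte_eth_fc_conf fc;

	if (rte_eth_dev_flow_ctrl_get(port_id, &fc) != 0)
		return -1;
	/* RTE_ETH_FC_FULL == rx_pause and tx_pause both enabled. */
	fc.mode = RTE_ETH_FC_FULL;
	return rte_eth_dev_flow_ctrl_set(port_id, &fc);
}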
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 71ea91b3fb82..2e1b6c87e983 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -109,21 +109,21 @@ mlx4_conv_rss_types(struct mlx4_priv *priv, uint64_t types, int verbs_to_dpdk)
 	};
 	static const uint64_t dpdk[] = {
 		[INNER] = 0,
-		[IPV4] = ETH_RSS_IPV4,
-		[IPV4_1] = ETH_RSS_FRAG_IPV4,
-		[IPV4_2] = ETH_RSS_NONFRAG_IPV4_OTHER,
-		[IPV6] = ETH_RSS_IPV6,
-		[IPV6_1] = ETH_RSS_FRAG_IPV6,
-		[IPV6_2] = ETH_RSS_NONFRAG_IPV6_OTHER,
-		[IPV6_3] = ETH_RSS_IPV6_EX,
+		[IPV4] = RTE_ETH_RSS_IPV4,
+		[IPV4_1] = RTE_ETH_RSS_FRAG_IPV4,
+		[IPV4_2] = RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+		[IPV6] = RTE_ETH_RSS_IPV6,
+		[IPV6_1] = RTE_ETH_RSS_FRAG_IPV6,
+		[IPV6_2] = RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+		[IPV6_3] = RTE_ETH_RSS_IPV6_EX,
 		[TCP] = 0,
 		[UDP] = 0,
-		[IPV4_TCP] = ETH_RSS_NONFRAG_IPV4_TCP,
-		[IPV4_UDP] = ETH_RSS_NONFRAG_IPV4_UDP,
-		[IPV6_TCP] = ETH_RSS_NONFRAG_IPV6_TCP,
-		[IPV6_TCP_1] = ETH_RSS_IPV6_TCP_EX,
-		[IPV6_UDP] = ETH_RSS_NONFRAG_IPV6_UDP,
-		[IPV6_UDP_1] = ETH_RSS_IPV6_UDP_EX,
+		[IPV4_TCP] = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+		[IPV4_UDP] = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+		[IPV6_TCP] = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+		[IPV6_TCP_1] = RTE_ETH_RSS_IPV6_TCP_EX,
+		[IPV6_UDP] = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+		[IPV6_UDP_1] = RTE_ETH_RSS_IPV6_UDP_EX,
 	};
 	static const uint64_t verbs[RTE_DIM(dpdk)] = {
 		[INNER] = IBV_RX_HASH_INNER,
@@ -1283,7 +1283,7 @@ mlx4_flow_internal_next_vlan(struct mlx4_priv *priv, uint16_t vlan)
  * - MAC flow rules are generated from @p dev->data->mac_addrs
  *   (@p priv->mac array).
  * - An additional flow rule for Ethernet broadcasts is also generated.
- * - All these are per-VLAN if @p DEV_RX_OFFLOAD_VLAN_FILTER
+ * - All these are per-VLAN if @p RTE_ETH_RX_OFFLOAD_VLAN_FILTER
  *   is enabled and VLAN filters are configured.
  *
  * @param priv
@@ -1358,7 +1358,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 	struct rte_ether_addr *rule_mac = &eth_spec.dst;
 	rte_be16_t *rule_vlan =
 		(ETH_DEV(priv)->data->dev_conf.rxmode.offloads &
-		 DEV_RX_OFFLOAD_VLAN_FILTER) &&
+		 RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 		!ETH_DEV(priv)->data->promiscuous ?
 		&vlan_spec.tci :
 		NULL;
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index d56009c41845..2aab0f60a7b5 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -118,7 +118,7 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
 static void
 mlx4_link_status_alarm(struct mlx4_priv *priv)
 {
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 
 	MLX4_ASSERT(priv->intr_alarm == 1);
@@ -183,7 +183,7 @@ mlx4_interrupt_handler(struct mlx4_priv *priv)
 	};
 	uint32_t caught[RTE_DIM(type)] = { 0 };
 	struct ibv_async_event event;
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 	unsigned int i;
 
@@ -280,7 +280,7 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
 int
 mlx4_intr_install(struct mlx4_priv *priv)
 {
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 	int rc;
 
@@ -386,7 +386,7 @@ mlx4_rx_intr_enable(struct rte_eth_dev *dev, uint16_t idx)
 int
 mlx4_rxq_intr_enable(struct mlx4_priv *priv)
 {
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 
 	if (intr_conf->rxq && mlx4_rx_intr_vec_enable(priv) < 0)
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index ee2d2b75e59a..781ee256df71 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -682,12 +682,12 @@ mlx4_rxq_detach(struct rxq *rxq)
 uint64_t
 mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
 {
-	uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
-			    DEV_RX_OFFLOAD_KEEP_CRC |
-			    DEV_RX_OFFLOAD_RSS_HASH;
+	uint64_t offloads = RTE_ETH_RX_OFFLOAD_SCATTER |
+			    RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+			    RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (priv->hw_csum)
-		offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 	return offloads;
 }
 
@@ -703,7 +703,7 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
 uint64_t
 mlx4_get_rx_port_offloads(struct mlx4_priv *priv)
 {
-	uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+	uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	(void)priv;
 	return offloads;
@@ -785,7 +785,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	}
 	/* By default, FCS (CRC) is stripped by hardware. */
 	crc_present = 0;
-	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		if (priv->hw_fcs_strip) {
 			crc_present = 1;
 		} else {
@@ -816,9 +816,9 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		.elts = elts,
 		/* Toggle Rx checksum offload if hardware supports it. */
 		.csum = priv->hw_csum &&
-			(offloads & DEV_RX_OFFLOAD_CHECKSUM),
+			(offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
 		.csum_l2tun = priv->hw_csum_l2tun &&
-			      (offloads & DEV_RX_OFFLOAD_CHECKSUM),
+			      (offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
 		.crc_present = crc_present,
 		.l2tun_offload = priv->hw_csum_l2tun,
 		.stats = {
@@ -832,7 +832,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
 	if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
 		;
-	} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+	} else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
 		uint32_t sges_n;
 
diff --git a/drivers/net/mlx4/mlx4_txq.c b/drivers/net/mlx4/mlx4_txq.c
index 7d8c4f2a2223..0db2e55befd3 100644
--- a/drivers/net/mlx4/mlx4_txq.c
+++ b/drivers/net/mlx4/mlx4_txq.c
@@ -273,20 +273,20 @@ mlx4_txq_fill_dv_obj_info(struct txq *txq, struct mlx4dv_obj *mlxdv)
 uint64_t
 mlx4_get_tx_port_offloads(struct mlx4_priv *priv)
 {
-	uint64_t offloads = DEV_TX_OFFLOAD_MULTI_SEGS;
+	uint64_t offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	if (priv->hw_csum) {
-		offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_UDP_CKSUM |
-			     DEV_TX_OFFLOAD_TCP_CKSUM);
+		offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
 	}
 	if (priv->tso)
-		offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+		offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	if (priv->hw_csum_l2tun) {
-		offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+		offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 		if (priv->tso)
-			offloads |= (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				     DEV_TX_OFFLOAD_GRE_TNL_TSO);
+			offloads |= (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
 	}
 	return offloads;
 }
@@ -394,12 +394,12 @@ mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		.elts_comp_cd_init =
 			RTE_MIN(MLX4_PMD_TX_PER_COMP_REQ, desc / 4),
 		.csum = priv->hw_csum &&
-			(offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-					   DEV_TX_OFFLOAD_UDP_CKSUM |
-					   DEV_TX_OFFLOAD_TCP_CKSUM)),
+			(offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+					   RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+					   RTE_ETH_TX_OFFLOAD_TCP_CKSUM)),
 		.csum_l2tun = priv->hw_csum_l2tun &&
 			      (offloads &
-			       DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM),
+			       RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM),
 		/* Enable Tx loopback for VF devices. */
 		.lb = !!priv->vf,
 		.bounce_buf = bounce_buf,
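
mlx4_get_tx_port_offloads() above builds the capability mask that applications
later read back through dev_info. A sketch of trimming a wanted offload set to
what the port actually reports (hypothetical helper):

#include <rte_ethdev.h>

static uint64_t
usable_tx_csum_offloads(uint16_t port_id)
{
	struct rte_eth_dev_info info;
	uint64_t wanted = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
			  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
			  RTE_ETH_TX_OFFLOAD_TCP_CKSUM;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return 0;
	/* Keep only the checksum offloads this port can actually do. */
	return wanted & info.tx_offload_capa;
}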
diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index f34133e2c641..79e27fe2d668 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -439,24 +439,24 @@ mlx5_link_update_unlocked_gset(struct rte_eth_dev *dev,
 	}
 	link_speed = ethtool_cmd_speed(&edata);
 	if (link_speed == -1)
-		dev_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		dev_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	else
 		dev_link.link_speed = link_speed;
 	priv->link_speed_capa = 0;
 	if (edata.supported & (SUPPORTED_1000baseT_Full |
 			       SUPPORTED_1000baseKX_Full))
-		priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (edata.supported & SUPPORTED_10000baseKR_Full)
-		priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (edata.supported & (SUPPORTED_40000baseKR4_Full |
 			       SUPPORTED_40000baseCR4_Full |
 			       SUPPORTED_40000baseSR4_Full |
 			       SUPPORTED_40000baseLR4_Full))
-		priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
-				ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+				RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
 	dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 	*link = dev_link;
 	return 0;
 }
@@ -545,45 +545,45 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
 		return ret;
 	}
 	dev_link.link_speed = (ecmd->speed == UINT32_MAX) ?
-				ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
+				RTE_ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
 	sc = ecmd->link_mode_masks[0] |
 		((uint64_t)ecmd->link_mode_masks[1] << 32);
 	priv->link_speed_capa = 0;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseT_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseKX_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKR_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseR_FEC_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseMLD2_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_20G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_20G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseCR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseSR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseLR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_56G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_56G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseCR_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseKR_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseSR_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_25G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_50G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_100G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
 
 	sc = ecmd->link_mode_masks[2] |
 		((uint64_t)ecmd->link_mode_masks[3] << 32);
@@ -591,11 +591,11 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
 		  MLX5_BITSHIFT
 		       (ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseDR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
 	dev_link.link_duplex = ((ecmd->duplex == DUPLEX_HALF) ?
-				ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+				RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
 	dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				  ETH_LINK_SPEED_FIXED);
+				  RTE_ETH_LINK_SPEED_FIXED);
 	*link = dev_link;
 	return 0;
 }
@@ -677,13 +677,13 @@ mlx5_dev_get_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 	fc_conf->autoneg = ethpause.autoneg;
 	if (ethpause.rx_pause && ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (ethpause.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	return 0;
 }
 
@@ -709,14 +709,14 @@ mlx5_dev_set_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	ifr.ifr_data = (void *)&ethpause;
 	ethpause.autoneg = fc_conf->autoneg;
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_RX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
 		ethpause.rx_pause = 1;
 	else
 		ethpause.rx_pause = 0;
 
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_TX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
 		ethpause.tx_pause = 1;
 	else
 		ethpause.tx_pause = 0;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 111a7597317a..23d9e0a476ac 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1310,8 +1310,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	 * Remove this check once DPDK supports larger/variable
 	 * indirection tables.
 	 */
-	if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
-		config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+	if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+		config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
 	DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
 		config->ind_table_max_size);
 	config->hw_vlan_strip = !!(sh->device_attr.raw_packet_caps &
@@ -1594,7 +1594,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	/*
 	 * If HW has bug working with tunnel packet decapsulation and
 	 * scatter FCS, and decapsulation is needed, clear the hw_fcs_strip
-	 * bit. Then DEV_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
+	 * bit. Then RTE_ETH_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
 	 */
 	if (config->hca_attr.scatter_fcs_w_decap_disable && config->decap_en)
 		config->hw_fcs_strip = 0;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 7263d354b180..3a9b716e438c 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1704,10 +1704,10 @@ mlx5_udp_tunnel_port_add(struct rte_eth_dev *dev __rte_unused,
 			 struct rte_eth_udp_tunnel *udp_tunnel)
 {
 	MLX5_ASSERT(udp_tunnel != NULL);
-	if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN &&
+	if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN &&
 	    udp_tunnel->udp_port == 4789)
 		return 0;
-	if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN_GPE &&
+	if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN_GPE &&
 	    udp_tunnel->udp_port == 4790)
 		return 0;
 	return -ENOTSUP;
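
Note that mlx5 accepts only the default ports here (4789 for
RTE_ETH_TUNNEL_TYPE_VXLAN, 4790 for VXLAN-GPE) and rejects everything else with
-ENOTSUP. The application-side call looks like this (hypothetical helper):

#include <rte_ethdev.h>

static int
add_vxlan_port(uint16_t port_id, uint16_t udp_port)
{
	struct rte_eth_udp_tunnel tunnel = {
		.udp_port = udp_port,	/* mlx5 only takes 4789 here */
		.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN,
	};

	return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
}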
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 42cacd0bbe3b..52f03ada2ced 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1233,7 +1233,7 @@ TAILQ_HEAD(mlx5_legacy_flow_meters, mlx5_legacy_flow_meter);
 struct mlx5_flow_rss_desc {
 	uint32_t level;
 	uint32_t queue_num; /**< Number of entries in @p queue. */
-	uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+	uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
 	uint64_t hash_fields; /* Verbs Hash fields. */
 	uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
 	uint32_t key_len; /**< RSS hash key len. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index fe86bb40d351..12ddf4c7ff28 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -90,11 +90,11 @@
 #define MLX5_VPMD_DESCS_PER_LOOP      4
 
 /* Mask of RSS on source only or destination only. */
-#define MLX5_RSS_SRC_DST_ONLY (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | \
-			       ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define MLX5_RSS_SRC_DST_ONLY (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY | \
+			       RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
 
 /* Supported RSS */
-#define MLX5_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP | \
+#define MLX5_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP | \
 			    MLX5_RSS_SRC_DST_ONLY))
 
 /* Timeout in seconds to get a valid link status. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 82e2284d9866..f2b78c3cc69e 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -91,7 +91,7 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 	}
 
 	if ((dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
+			RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
 			rte_mbuf_dyn_tx_timestamp_register(NULL, NULL) != 0) {
 		DRV_LOG(ERR, "port %u cannot register Tx timestamp field/flag",
 			dev->data->port_id);
@@ -225,8 +225,8 @@ mlx5_set_default_params(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 	info->default_txportconf.ring_size = 256;
 	info->default_rxportconf.burst_size = MLX5_RX_DEFAULT_BURST;
 	info->default_txportconf.burst_size = MLX5_TX_DEFAULT_BURST;
-	if ((priv->link_speed_capa & ETH_LINK_SPEED_200G) |
-		(priv->link_speed_capa & ETH_LINK_SPEED_100G)) {
+	if ((priv->link_speed_capa & RTE_ETH_LINK_SPEED_200G) |
+		(priv->link_speed_capa & RTE_ETH_LINK_SPEED_100G)) {
 		info->default_rxportconf.nb_queues = 16;
 		info->default_txportconf.nb_queues = 16;
 		if (dev->data->nb_rx_queues > 2 ||
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 002449e993e7..d645fd48647e 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -98,7 +98,7 @@ struct mlx5_flow_expand_node {
 	uint64_t rss_types;
 	/**<
 	 * RSS types bit-field associated with this node
-	 * (see ETH_RSS_* definitions).
+	 * (see RTE_ETH_RSS_* definitions).
 	 */
 	uint64_t node_flags;
 	/**<
@@ -298,7 +298,7 @@ mlx5_flow_expand_rss_skip_explicit(const struct mlx5_flow_expand_node graph[],
  * @param[in] pattern
  *   User flow pattern.
  * @param[in] types
- *   RSS types to expand (see ETH_RSS_* definitions).
+ *   RSS types to expand (see RTE_ETH_RSS_* definitions).
  * @param[in] graph
  *   Input graph to expand @p pattern according to @p types.
  * @param[in] graph_root_index
@@ -560,8 +560,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 			 MLX5_EXPANSION_IPV4,
 			 MLX5_EXPANSION_IPV6),
 		.type = RTE_FLOW_ITEM_TYPE_IPV4,
-		.rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			ETH_RSS_NONFRAG_IPV4_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	},
 	[MLX5_EXPANSION_OUTER_IPV4_UDP] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -569,11 +569,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 						  MLX5_EXPANSION_MPLS,
 						  MLX5_EXPANSION_GTP),
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 	},
 	[MLX5_EXPANSION_OUTER_IPV4_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 	},
 	[MLX5_EXPANSION_OUTER_IPV6] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT
@@ -584,8 +584,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 			 MLX5_EXPANSION_GRE,
 			 MLX5_EXPANSION_NVGRE),
 		.type = RTE_FLOW_ITEM_TYPE_IPV6,
-		.rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-			ETH_RSS_NONFRAG_IPV6_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	},
 	[MLX5_EXPANSION_OUTER_IPV6_UDP] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -593,11 +593,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 						  MLX5_EXPANSION_MPLS,
 						  MLX5_EXPANSION_GTP),
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 	},
 	[MLX5_EXPANSION_OUTER_IPV6_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	},
 	[MLX5_EXPANSION_VXLAN] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_ETH,
@@ -659,32 +659,32 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4_UDP,
 						  MLX5_EXPANSION_IPV4_TCP),
 		.type = RTE_FLOW_ITEM_TYPE_IPV4,
-		.rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			ETH_RSS_NONFRAG_IPV4_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	},
 	[MLX5_EXPANSION_IPV4_UDP] = {
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 	},
 	[MLX5_EXPANSION_IPV4_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 	},
 	[MLX5_EXPANSION_IPV6] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV6_UDP,
 						  MLX5_EXPANSION_IPV6_TCP,
 						  MLX5_EXPANSION_IPV6_FRAG_EXT),
 		.type = RTE_FLOW_ITEM_TYPE_IPV6,
-		.rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-			ETH_RSS_NONFRAG_IPV6_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	},
 	[MLX5_EXPANSION_IPV6_UDP] = {
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 	},
 	[MLX5_EXPANSION_IPV6_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	},
 	[MLX5_EXPANSION_IPV6_FRAG_EXT] = {
 		.type = RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
@@ -1100,7 +1100,7 @@ mlx5_flow_item_acceptable(const struct rte_flow_item *item,
  * @param[in] tunnel
  *   1 when the hash field is for a tunnel item.
  * @param[in] layer_types
- *   ETH_RSS_* types.
+ *   RTE_ETH_RSS_* types.
  * @param[in] hash_fields
  *   Item hash fields.
  *
@@ -1653,14 +1653,14 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev,
 					  &rss->types,
 					  "some RSS protocols are not"
 					  " supported");
-	if ((rss->types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) &&
-	    !(rss->types & ETH_RSS_IP))
+	if ((rss->types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) &&
+	    !(rss->types & RTE_ETH_RSS_IP))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
 					  "L3 partial RSS requested but L3 RSS"
 					  " type not specified");
-	if ((rss->types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) &&
-	    !(rss->types & (ETH_RSS_UDP | ETH_RSS_TCP)))
+	if ((rss->types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) &&
+	    !(rss->types & (RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP)))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
 					  "L4 partial RSS requested but L4 RSS"
@@ -6427,8 +6427,8 @@ flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 		 * mlx5_flow_hashfields_adjust() in advance.
 		 */
 		rss_desc->level = rss->level;
-		/* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
-		rss_desc->types = !rss->types ? ETH_RSS_IP : rss->types;
+		/* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+		rss_desc->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
 	}
 	flow->dev_handles = 0;
 	if (rss && rss->types) {
@@ -7126,7 +7126,7 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev,
 	if (!priv->reta_idx_n || !priv->rxqs_n) {
 		return 0;
 	}
-	if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+	if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 		action_rss.types = 0;
 	for (i = 0; i != priv->reta_idx_n; ++i)
 		queue[i] = (*priv->reta_idx)[i];
@@ -8794,7 +8794,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
 				(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 				NULL, "invalid port configuration");
-		if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+		if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 			ctx->action_rss.types = 0;
 		for (i = 0; i != priv->reta_idx_n; ++i)
 			ctx->queue[i] = (*priv->reta_idx)[i];
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index f1a83d537d0c..4a16f30fb7a6 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -331,18 +331,18 @@ enum mlx5_feature_name {
 
 /* Valid layer type for IPV4 RSS. */
 #define MLX5_IPV4_LAYER_TYPES \
-	(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
-	 ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
-	 ETH_RSS_NONFRAG_IPV4_OTHER)
+	(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+	 RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	 RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
 
 /* IBV hash source bits  for IPV4. */
 #define MLX5_IPV4_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV4 | IBV_RX_HASH_DST_IPV4)
 
 /* Valid layer type for IPV6 RSS. */
 #define MLX5_IPV6_LAYER_TYPES \
-	(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP | \
-	 ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_EX  | ETH_RSS_IPV6_TCP_EX | \
-	 ETH_RSS_IPV6_UDP_EX | ETH_RSS_NONFRAG_IPV6_OTHER)
+	(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	 RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_EX  | RTE_ETH_RSS_IPV6_TCP_EX | \
+	 RTE_ETH_RSS_IPV6_UDP_EX | RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
 
 /* IBV hash source bits  for IPV6. */
 #define MLX5_IPV6_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV6 | IBV_RX_HASH_DST_IPV6)
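
The MLX5_IPV4/IPV6_LAYER_TYPES masks above are unions of the per-protocol
RTE_ETH_RSS_* bits; an application composes rss_hf from the same bits and should
trim it to flow_type_rss_offloads. A sketch (hypothetical helper; a NULL rss_key
keeps the current key):

#include <rte_ethdev.h>

static int
enable_ip_tcp_rss(uint16_t port_id)
{
	struct rte_eth_dev_info info;
	struct rte_eth_rss_conf rss = { .rss_key = NULL };

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return -1;
	/* Request IP + TCP hashing, trimmed to what the PMD reports. */
	rss.rss_hf = (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP) &
		     info.flow_type_rss_offloads;
	return rte_eth_dev_rss_hash_update(port_id, &rss);
}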
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 5bd90bfa2818..c4a5706532a9 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10862,9 +10862,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 	if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV4)) ||
 	    (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV4))) {
 		if (rss_types & MLX5_IPV4_LAYER_TYPES) {
-			if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV4;
-			else if (rss_types & ETH_RSS_L3_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV4;
 			else
 				dev_flow->hash_fields |= MLX5_IPV4_IBV_RX_HASH;
@@ -10872,9 +10872,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 	} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV6)) ||
 		   (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV6))) {
 		if (rss_types & MLX5_IPV6_LAYER_TYPES) {
-			if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV6;
-			else if (rss_types & ETH_RSS_L3_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV6;
 			else
 				dev_flow->hash_fields |= MLX5_IPV6_IBV_RX_HASH;
@@ -10888,11 +10888,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 		return;
 	if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_UDP)) ||
 	    (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_UDP))) {
-		if (rss_types & ETH_RSS_UDP) {
-			if (rss_types & ETH_RSS_L4_SRC_ONLY)
+		if (rss_types & RTE_ETH_RSS_UDP) {
+			if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_SRC_PORT_UDP;
-			else if (rss_types & ETH_RSS_L4_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_DST_PORT_UDP;
 			else
@@ -10900,11 +10900,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 		}
 	} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_TCP)) ||
 		   (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_TCP))) {
-		if (rss_types & ETH_RSS_TCP) {
-			if (rss_types & ETH_RSS_L4_SRC_ONLY)
+		if (rss_types & RTE_ETH_RSS_TCP) {
+			if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_SRC_PORT_TCP;
-			else if (rss_types & ETH_RSS_L4_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_DST_PORT_TCP;
 			else
@@ -14444,9 +14444,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV4:
 		if (rss_types & MLX5_IPV4_LAYER_TYPES) {
 			*hash_field &= ~MLX5_RSS_HASH_IPV4;
-			if (rss_types & ETH_RSS_L3_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_IPV4;
-			else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_IPV4;
 			else
 				*hash_field |= MLX5_RSS_HASH_IPV4;
@@ -14455,9 +14455,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV6:
 		if (rss_types & MLX5_IPV6_LAYER_TYPES) {
 			*hash_field &= ~MLX5_RSS_HASH_IPV6;
-			if (rss_types & ETH_RSS_L3_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_IPV6;
-			else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_IPV6;
 			else
 				*hash_field |= MLX5_RSS_HASH_IPV6;
@@ -14466,11 +14466,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV4_UDP:
 		/* fall-through. */
 	case MLX5_RSS_HASH_IPV6_UDP:
-		if (rss_types & ETH_RSS_UDP) {
+		if (rss_types & RTE_ETH_RSS_UDP) {
 			*hash_field &= ~MLX5_UDP_IBV_RX_HASH;
-			if (rss_types & ETH_RSS_L4_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_PORT_UDP;
-			else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_PORT_UDP;
 			else
 				*hash_field |= MLX5_UDP_IBV_RX_HASH;
@@ -14479,11 +14479,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV4_TCP:
 		/* fall-through. */
 	case MLX5_RSS_HASH_IPV6_TCP:
-		if (rss_types & ETH_RSS_TCP) {
+		if (rss_types & RTE_ETH_RSS_TCP) {
 			*hash_field &= ~MLX5_TCP_IBV_RX_HASH;
-			if (rss_types & ETH_RSS_L4_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_PORT_TCP;
-			else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_PORT_TCP;
 			else
 				*hash_field |= MLX5_TCP_IBV_RX_HASH;
@@ -14631,8 +14631,8 @@ __flow_dv_action_rss_create(struct rte_eth_dev *dev,
 	origin = &shared_rss->origin;
 	origin->func = rss->func;
 	origin->level = rss->level;
-	/* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
-	origin->types = !rss->types ? ETH_RSS_IP : rss->types;
+	/* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+	origin->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
 	/* NULL RSS key indicates default RSS key. */
 	rss_key = !rss->key ? rss_hash_default_key : rss->key;
 	memcpy(shared_rss->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 892abcb65779..f9010a674d7f 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1824,7 +1824,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
 			if (dev_flow->hash_fields != 0)
 				dev_flow->hash_fields |=
 					mlx5_flow_hashfields_adjust
-					(rss_desc, tunnel, ETH_RSS_TCP,
+					(rss_desc, tunnel, RTE_ETH_RSS_TCP,
 					 (IBV_RX_HASH_SRC_PORT_TCP |
 					  IBV_RX_HASH_DST_PORT_TCP));
 			item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
@@ -1837,7 +1837,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
 			if (dev_flow->hash_fields != 0)
 				dev_flow->hash_fields |=
 					mlx5_flow_hashfields_adjust
-					(rss_desc, tunnel, ETH_RSS_UDP,
+					(rss_desc, tunnel, RTE_ETH_RSS_UDP,
 					 (IBV_RX_HASH_SRC_PORT_UDP |
 					  IBV_RX_HASH_DST_PORT_UDP));
 			item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
index c32129cdc2b8..a4f690039e24 100644
--- a/drivers/net/mlx5/mlx5_rss.c
+++ b/drivers/net/mlx5/mlx5_rss.c
@@ -68,7 +68,7 @@ mlx5_rss_hash_update(struct rte_eth_dev *dev,
 		if (!(*priv->rxqs)[i])
 			continue;
 		(*priv->rxqs)[i]->rss_hash = !!rss_conf->rss_hf &&
-			!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS);
+			!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS);
 		++idx;
 	}
 	return 0;
@@ -170,8 +170,8 @@ mlx5_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 	/* Fill each entry of the table even if its bit is not set. */
 	for (idx = 0, i = 0; (i != reta_size); ++i) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		reta_conf[idx].reta[i % RTE_RETA_GROUP_SIZE] =
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
 			(*priv->reta_idx)[i];
 	}
 	return 0;
@@ -209,8 +209,8 @@ mlx5_dev_rss_reta_update(struct rte_eth_dev *dev,
 	if (ret)
 		return ret;
 	for (idx = 0, i = 0; (i != reta_size); ++i) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		pos = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (((reta_conf[idx].mask >> i) & 0x1) == 0)
 			continue;
 		MLX5_ASSERT(reta_conf[idx].reta[pos] < priv->rxqs_n);
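
A minimal sketch (not part of the patch) of the indirection-table indexing that the mlx5_rss.c hunks above preserve under the new name: RETA entries are still grouped 64 per rte_eth_rss_reta_entry64, only the constant changed from RTE_RETA_GROUP_SIZE to RTE_ETH_RETA_GROUP_SIZE. The helper name fill_reta is hypothetical.

    #include <stdint.h>
    #include <rte_ethdev.h>

    /* Spread reta_size entries round-robin over nb_rx_queues queues. */
    static void
    fill_reta(struct rte_eth_rss_reta_entry64 *reta_conf,
              uint16_t reta_size, uint16_t nb_rx_queues)
    {
        uint16_t i;

        for (i = 0; i < reta_size; i++) {
            uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
            uint16_t pos = i % RTE_ETH_RETA_GROUP_SIZE;

            reta_conf[idx].mask |= UINT64_C(1) << pos;
            reta_conf[idx].reta[pos] = i % nb_rx_queues;
        }
    }
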
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 60673d014d02..14b9991c5fa8 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -333,22 +333,22 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_dev_config *config = &priv->config;
-	uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
-			     DEV_RX_OFFLOAD_TIMESTAMP |
-			     DEV_RX_OFFLOAD_RSS_HASH);
+	uint64_t offloads = (RTE_ETH_RX_OFFLOAD_SCATTER |
+			     RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+			     RTE_ETH_RX_OFFLOAD_RSS_HASH);
 
 	if (!config->mprq.enabled)
 		offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
 	if (config->hw_fcs_strip)
-		offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	if (config->hw_csum)
-		offloads |= (DEV_RX_OFFLOAD_IPV4_CKSUM |
-			     DEV_RX_OFFLOAD_UDP_CKSUM |
-			     DEV_RX_OFFLOAD_TCP_CKSUM);
+		offloads |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			     RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
 	if (config->hw_vlan_strip)
-		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	if (MLX5_LRO_SUPPORTED(dev))
-		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+		offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 	return offloads;
 }
 
@@ -362,7 +362,7 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
 uint64_t
 mlx5_get_rx_port_offloads(void)
 {
-	uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+	uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	return offloads;
 }
@@ -694,7 +694,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 				    dev->data->dev_conf.rxmode.offloads;
 
 		/* The offloads should be checked on rte_eth_dev layer. */
-		MLX5_ASSERT(offloads & DEV_RX_OFFLOAD_SCATTER);
+		MLX5_ASSERT(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
 		if (!(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
 			DRV_LOG(ERR, "port %u queue index %u split "
 				     "offload not configured",
@@ -1336,7 +1336,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	struct mlx5_dev_config *config = &priv->config;
 	uint64_t offloads = conf->offloads |
 			   dev->data->dev_conf.rxmode.offloads;
-	unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
+	unsigned int lro_on_queue = !!(offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO);
 	unsigned int max_rx_pktlen = lro_on_queue ?
 			dev->data->dev_conf.rxmode.max_lro_pkt_size :
 			dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
@@ -1439,7 +1439,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	} while (tail_len || !rte_is_power_of_2(tmpl->rxq.rxseg_n));
 	MLX5_ASSERT(tmpl->rxq.rxseg_n &&
 		    tmpl->rxq.rxseg_n <= MLX5_MAX_RXQ_NSEG);
-	if (tmpl->rxq.rxseg_n > 1 && !(offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	if (tmpl->rxq.rxseg_n > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
 			" configured and no enough mbuf space(%u) to contain "
 			"the maximum RX packet length(%u) with head-room(%u)",
@@ -1485,7 +1485,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 			config->mprq.stride_size_n : mprq_stride_size;
 		tmpl->rxq.strd_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT;
 		tmpl->rxq.strd_scatter_en =
-				!!(offloads & DEV_RX_OFFLOAD_SCATTER);
+				!!(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
 		tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
 				config->mprq.max_memcpy_len);
 		max_lro_size = RTE_MIN(max_rx_pktlen,
@@ -1500,7 +1500,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
 		tmpl->rxq.sges_n = 0;
 		max_lro_size = max_rx_pktlen;
-	} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+	} else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		unsigned int sges_n;
 
 		if (lro_on_queue && first_mb_free_size <
@@ -1561,9 +1561,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	}
 	mlx5_max_lro_msg_size_adjust(dev, idx, max_lro_size);
 	/* Toggle RX checksum offload if hardware supports it. */
-	tmpl->rxq.csum = !!(offloads & DEV_RX_OFFLOAD_CHECKSUM);
+	tmpl->rxq.csum = !!(offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM);
 	/* Configure Rx timestamp. */
-	tmpl->rxq.hw_timestamp = !!(offloads & DEV_RX_OFFLOAD_TIMESTAMP);
+	tmpl->rxq.hw_timestamp = !!(offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP);
 	tmpl->rxq.timestamp_rx_flag = 0;
 	if (tmpl->rxq.hw_timestamp && rte_mbuf_dyn_rx_timestamp_register(
 			&tmpl->rxq.timestamp_offset,
@@ -1572,11 +1572,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		goto error;
 	}
 	/* Configure VLAN stripping. */
-	tmpl->rxq.vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	tmpl->rxq.vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 	/* By default, FCS (CRC) is stripped by hardware. */
 	tmpl->rxq.crc_present = 0;
 	tmpl->rxq.lro = lro_on_queue;
-	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		if (config->hw_fcs_strip) {
 			/*
 			 * RQs used for LRO-enabled TIRs should not be
@@ -1606,7 +1606,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		tmpl->rxq.crc_present << 2);
 	/* Save port ID. */
 	tmpl->rxq.rss_hash = !!priv->rss_conf.rss_hf &&
-		(!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS));
+		(!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS));
 	tmpl->rxq.port_id = dev->data->port_id;
 	tmpl->priv = priv;
 	tmpl->rxq.mp = rx_seg[0].mp;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.h b/drivers/net/mlx5/mlx5_rxtx_vec.h
index 93b4f517bb3e..65d91bdf67e2 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.h
@@ -16,10 +16,10 @@
 
 /* HW checksum offload capabilities of vectorized Tx. */
 #define MLX5_VEC_TX_CKSUM_OFFLOAD_CAP \
-	(DEV_TX_OFFLOAD_IPV4_CKSUM | \
-	 DEV_TX_OFFLOAD_UDP_CKSUM | \
-	 DEV_TX_OFFLOAD_TCP_CKSUM | \
-	 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+	(RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+	 RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+	 RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+	 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 
 /*
  * Compile time sanity check for vectorized functions.
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index df671379e46d..12aeba60348a 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -523,36 +523,36 @@ mlx5_select_tx_function(struct rte_eth_dev *dev)
 	unsigned int diff = 0, olx = 0, i, m;
 
 	MLX5_ASSERT(priv);
-	if (tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
 		/* We should support Multi-Segment Packets. */
 		olx |= MLX5_TXOFF_CONFIG_MULTI;
 	}
-	if (tx_offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-			   DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			   DEV_TX_OFFLOAD_GRE_TNL_TSO |
-			   DEV_TX_OFFLOAD_IP_TNL_TSO |
-			   DEV_TX_OFFLOAD_UDP_TNL_TSO)) {
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+			   RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO)) {
 		/* We should support TCP Send Offload. */
 		olx |= MLX5_TXOFF_CONFIG_TSO;
 	}
-	if (tx_offloads & (DEV_TX_OFFLOAD_IP_TNL_TSO |
-			   DEV_TX_OFFLOAD_UDP_TNL_TSO |
-			   DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 		/* We should support Software Parser for Tunnels. */
 		olx |= MLX5_TXOFF_CONFIG_SWP;
 	}
-	if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			   DEV_TX_OFFLOAD_UDP_CKSUM |
-			   DEV_TX_OFFLOAD_TCP_CKSUM |
-			   DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 		/* We should support IP/TCP/UDP Checksums. */
 		olx |= MLX5_TXOFF_CONFIG_CSUM;
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) {
 		/* We should support VLAN insertion. */
 		olx |= MLX5_TXOFF_CONFIG_VLAN;
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
 	    rte_mbuf_dynflag_lookup
 			(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL) >= 0 &&
 	    rte_mbuf_dynfield_lookup
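
The mlx5_tx.c hunk above only renames the Tx offload bits it tests; the selection logic is untouched. As a minimal illustration (hypothetical helper, not driver code), the same renamed flags can be folded into a single checksum predicate:

    #include <stdbool.h>
    #include <stdint.h>
    #include <rte_ethdev.h>

    /* True when any L3/L4 or outer-IP checksum offload is requested. */
    static bool
    needs_csum(uint64_t tx_offloads)
    {
        return (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
                               RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
                               RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
                               RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) != 0;
    }
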
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 1f92250f5edd..02bb9307ae61 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -98,42 +98,42 @@ uint64_t
 mlx5_get_tx_port_offloads(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	uint64_t offloads = (DEV_TX_OFFLOAD_MULTI_SEGS |
-			     DEV_TX_OFFLOAD_VLAN_INSERT);
+	uint64_t offloads = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+			     RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
 	struct mlx5_dev_config *config = &priv->config;
 
 	if (config->hw_csum)
-		offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_UDP_CKSUM |
-			     DEV_TX_OFFLOAD_TCP_CKSUM);
+		offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
 	if (config->tso)
-		offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+		offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	if (config->tx_pp)
-		offloads |= DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP;
+		offloads |= RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP;
 	if (config->swp) {
 		if (config->swp & MLX5_SW_PARSING_CSUM_CAP)
-			offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+			offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 		if (config->swp & MLX5_SW_PARSING_TSO_CAP)
-			offloads |= (DEV_TX_OFFLOAD_IP_TNL_TSO |
-				     DEV_TX_OFFLOAD_UDP_TNL_TSO);
+			offloads |= (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 	}
 	if (config->tunnel_en) {
 		if (config->hw_csum)
-			offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+			offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 		if (config->tso) {
 			if (config->tunnel_en &
 				MLX5_TUNNELED_OFFLOADS_VXLAN_CAP)
-				offloads |= DEV_TX_OFFLOAD_VXLAN_TNL_TSO;
+				offloads |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
 			if (config->tunnel_en &
 				MLX5_TUNNELED_OFFLOADS_GRE_CAP)
-				offloads |= DEV_TX_OFFLOAD_GRE_TNL_TSO;
+				offloads |= RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO;
 			if (config->tunnel_en &
 				MLX5_TUNNELED_OFFLOADS_GENEVE_CAP)
-				offloads |= DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+				offloads |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
 		}
 	}
 	if (!config->mprq.enabled)
-		offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	return offloads;
 }
 
@@ -801,17 +801,17 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 	unsigned int inlen_mode; /* Minimal required Inline data. */
 	unsigned int txqs_inline; /* Min Tx queues to enable inline. */
 	uint64_t dev_txoff = priv->dev_data->dev_conf.txmode.offloads;
-	bool tso = txq_ctrl->txq.offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-					    DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-					    DEV_TX_OFFLOAD_GRE_TNL_TSO |
-					    DEV_TX_OFFLOAD_IP_TNL_TSO |
-					    DEV_TX_OFFLOAD_UDP_TNL_TSO);
+	bool tso = txq_ctrl->txq.offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+					    RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+					    RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+					    RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+					    RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 	bool vlan_inline;
 	unsigned int temp;
 
 	txq_ctrl->txq.fast_free =
-		!!((txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
-		   !(txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MULTI_SEGS) &&
+		!!((txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
+		   !(txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) &&
 		   !config->mprq.enabled);
 	if (config->txqs_inline == MLX5_ARG_UNSET)
 		txqs_inline =
@@ -870,7 +870,7 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 	 * tx_burst routine.
 	 */
 	txq_ctrl->txq.vlan_en = config->hw_vlan_insert;
-	vlan_inline = (dev_txoff & DEV_TX_OFFLOAD_VLAN_INSERT) &&
+	vlan_inline = (dev_txoff & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) &&
 		      !config->hw_vlan_insert;
 	/*
 	 * If there are few Tx queues it is prioritized
@@ -978,19 +978,19 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 						    MLX5_MAX_TSO_HEADER);
 		txq_ctrl->txq.tso_en = 1;
 	}
-	if (((DEV_TX_OFFLOAD_VXLAN_TNL_TSO & txq_ctrl->txq.offloads) &&
+	if (((RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO & txq_ctrl->txq.offloads) &&
 	    (config->tunnel_en & MLX5_TUNNELED_OFFLOADS_VXLAN_CAP)) |
-	   ((DEV_TX_OFFLOAD_GRE_TNL_TSO & txq_ctrl->txq.offloads) &&
+	   ((RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO & txq_ctrl->txq.offloads) &&
 	    (config->tunnel_en & MLX5_TUNNELED_OFFLOADS_GRE_CAP)) |
-	   ((DEV_TX_OFFLOAD_GENEVE_TNL_TSO & txq_ctrl->txq.offloads) &&
+	   ((RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO & txq_ctrl->txq.offloads) &&
 	    (config->tunnel_en & MLX5_TUNNELED_OFFLOADS_GENEVE_CAP)) |
 	   (config->swp  & MLX5_SW_PARSING_TSO_CAP))
 		txq_ctrl->txq.tunnel_en = 1;
-	txq_ctrl->txq.swp_en = (((DEV_TX_OFFLOAD_IP_TNL_TSO |
-				  DEV_TX_OFFLOAD_UDP_TNL_TSO) &
+	txq_ctrl->txq.swp_en = (((RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO) &
 				  txq_ctrl->txq.offloads) && (config->swp &
 				  MLX5_SW_PARSING_TSO_CAP)) |
-				((DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM &
+				((RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM &
 				 txq_ctrl->txq.offloads) && (config->swp &
 				 MLX5_SW_PARSING_CSUM_CAP));
 }
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 60f97f2d2d1f..07792fc5d94f 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -142,9 +142,9 @@ mlx5_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct mlx5_priv *priv = dev->data->dev_private;
 	unsigned int i;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		int hw_vlan_strip = !!(dev->data->dev_conf.rxmode.offloads &
-				       DEV_RX_OFFLOAD_VLAN_STRIP);
+				       RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 		if (!priv->config.hw_vlan_strip) {
 			DRV_LOG(ERR, "port %u VLAN stripping is not supported",
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 31c4d3276053..9a9069da7572 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -485,8 +485,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	 * Remove this check once DPDK supports larger/variable
 	 * indirection tables.
 	 */
-	if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
-		config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+	if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+		config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
 	DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
 		config->ind_table_max_size);
 	if (config->hw_padding) {
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index 2a0288087357..10fe6d828ccd 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -114,7 +114,7 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
 	struct mvneta_priv *priv = dev->data->dev_private;
 	struct neta_ppio_params *ppio_params;
 
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE) {
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) {
 		MVNETA_LOG(INFO, "Unsupported RSS and rx multi queue mode %d",
 			dev->data->dev_conf.rxmode.mq_mode);
 		if (dev->data->nb_rx_queues > 1)
@@ -126,7 +126,7 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		priv->multiseg = 1;
 
 	ppio_params = &priv->ppio_params;
@@ -151,10 +151,10 @@ static int
 mvneta_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
 		   struct rte_eth_dev_info *info)
 {
-	info->speed_capa = ETH_LINK_SPEED_10M |
-			   ETH_LINK_SPEED_100M |
-			   ETH_LINK_SPEED_1G |
-			   ETH_LINK_SPEED_2_5G;
+	info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			   RTE_ETH_LINK_SPEED_100M |
+			   RTE_ETH_LINK_SPEED_1G |
+			   RTE_ETH_LINK_SPEED_2_5G;
 
 	info->max_rx_queues = MRVL_NETA_RXQ_MAX;
 	info->max_tx_queues = MRVL_NETA_TXQ_MAX;
@@ -503,28 +503,28 @@ mvneta_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 
 	switch (ethtool_cmd_speed(&edata)) {
 	case SPEED_10:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case SPEED_100:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case SPEED_1000:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case SPEED_2500:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	default:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	}
 
-	dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
-							 ETH_LINK_HALF_DUPLEX;
-	dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
-							   ETH_LINK_FIXED;
+	dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+							 RTE_ETH_LINK_HALF_DUPLEX;
+	dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+							   RTE_ETH_LINK_FIXED;
 
 	neta_ppio_get_link_state(priv->ppio, &link_up);
-	dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
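
The mvneta link_update hunk above is a pure rename of the link constants. For reference, a minimal sketch of publishing a fixed 1G full-duplex link with the new names through the common helper (report_link_up_1g is a hypothetical function; rte_eth_linkstatus_set() comes from ethdev_driver.h):

    #include <rte_ethdev.h>
    #include <ethdev_driver.h>

    static int
    report_link_up_1g(struct rte_eth_dev *dev)
    {
        struct rte_eth_link link = {
            .link_speed   = RTE_ETH_SPEED_NUM_1G,
            .link_duplex  = RTE_ETH_LINK_FULL_DUPLEX,
            .link_autoneg = RTE_ETH_LINK_AUTONEG,
            .link_status  = RTE_ETH_LINK_UP,
        };

        /* Atomically stores the new state and reports whether it changed. */
        return rte_eth_linkstatus_set(dev, &link);
    }
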
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index 126a9a0c11b9..ccb87d518d83 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -54,14 +54,14 @@
 #define MRVL_NETA_MRU_TO_MTU(mru)	((mru) - MRVL_NETA_HDRS_LEN)
 
 /** Rx offloads capabilities */
-#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_CHECKSUM)
+#define MVNETA_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_CHECKSUM)
 
 /** Tx offloads capabilities */
-#define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				    DEV_TX_OFFLOAD_UDP_CKSUM  | \
-				    DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MVNETA_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+				    RTE_ETH_TX_OFFLOAD_UDP_CKSUM  | \
+				    RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 #define MVNETA_TX_OFFLOADS (MVNETA_TX_OFFLOAD_CHECKSUM | \
-			    DEV_TX_OFFLOAD_MULTI_SEGS)
+			    RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define MVNETA_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
 				PKT_TX_TCP_CKSUM | \
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index 9836bb071a82..62d8aa586dae 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -734,7 +734,7 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	rxq->priv = priv;
 	rxq->mp = mp;
 	rxq->cksum_enabled = dev->data->dev_conf.rxmode.offloads &
-			     DEV_RX_OFFLOAD_IPV4_CKSUM;
+			     RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 	rxq->queue_id = idx;
 	rxq->port_id = dev->data->port_id;
 	rxq->size = desc;
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index a6458d2ce9b5..d0746b0d1215 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -58,15 +58,15 @@
 #define MRVL_COOKIE_HIGH_ADDR_MASK 0xffffff0000000000
 
 /** Port Rx offload capabilities */
-#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
-			  DEV_RX_OFFLOAD_CHECKSUM)
+#define MRVL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+			  RTE_ETH_RX_OFFLOAD_CHECKSUM)
 
 /** Port Tx offloads capabilities */
-#define MRVL_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				  DEV_TX_OFFLOAD_UDP_CKSUM  | \
-				  DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MRVL_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM  | \
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 #define MRVL_TX_OFFLOADS (MRVL_TX_OFFLOAD_CHECKSUM | \
-			  DEV_TX_OFFLOAD_MULTI_SEGS)
+			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define MRVL_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
 			      PKT_TX_TCP_CKSUM | \
@@ -442,14 +442,14 @@ mrvl_configure_rss(struct mrvl_priv *priv, struct rte_eth_rss_conf *rss_conf)
 
 	if (rss_conf->rss_hf == 0) {
 		priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
-	} else if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+	} else if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
 		priv->ppio_params.inqs_params.hash_type =
 			PP2_PPIO_HASH_T_2_TUPLE;
-	} else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	} else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		priv->ppio_params.inqs_params.hash_type =
 			PP2_PPIO_HASH_T_5_TUPLE;
 		priv->rss_hf_tcp = 1;
-	} else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	} else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		priv->ppio_params.inqs_params.hash_type =
 			PP2_PPIO_HASH_T_5_TUPLE;
 		priv->rss_hf_tcp = 0;
@@ -483,8 +483,8 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE &&
-	    dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE &&
+	    dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		MRVL_LOG(INFO, "Unsupported rx multi queue mode %d",
 			dev->data->dev_conf.rxmode.mq_mode);
 		return -EINVAL;
@@ -502,7 +502,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		priv->multiseg = 1;
 
 	ret = mrvl_configure_rxqs(priv, dev->data->port_id,
@@ -524,7 +524,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		return ret;
 
 	if (dev->data->nb_rx_queues == 1 &&
-	    dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	    dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		MRVL_LOG(WARNING, "Disabling hash for 1 rx queue");
 		priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
 		priv->configured = 1;
@@ -623,7 +623,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
 	int ret;
 
 	if (!priv->ppio) {
-		dev->data->dev_link.link_status = ETH_LINK_UP;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 		return 0;
 	}
 
@@ -644,7 +644,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -664,14 +664,14 @@ mrvl_dev_set_link_down(struct rte_eth_dev *dev)
 	int ret;
 
 	if (!priv->ppio) {
-		dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 		return 0;
 	}
 	ret = pp2_ppio_disable(priv->ppio);
 	if (ret)
 		return ret;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
@@ -893,7 +893,7 @@ mrvl_dev_start(struct rte_eth_dev *dev)
 	if (dev->data->all_multicast == 1)
 		mrvl_allmulticast_enable(dev);
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		ret = mrvl_populate_vlan_table(dev, 1);
 		if (ret) {
 			MRVL_LOG(ERR, "Failed to populate VLAN table");
@@ -929,11 +929,11 @@ mrvl_dev_start(struct rte_eth_dev *dev)
 		priv->flow_ctrl = 0;
 	}
 
-	if (dev->data->dev_link.link_status == ETH_LINK_UP) {
+	if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
 		ret = mrvl_dev_set_link_up(dev);
 		if (ret) {
 			MRVL_LOG(ERR, "Failed to set link up");
-			dev->data->dev_link.link_status = ETH_LINK_DOWN;
+			dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 			goto out;
 		}
 	}
@@ -1202,30 +1202,30 @@ mrvl_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 
 	switch (ethtool_cmd_speed(&edata)) {
 	case SPEED_10:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case SPEED_100:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case SPEED_1000:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case SPEED_2500:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	case SPEED_10000:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	default:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	}
 
-	dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
-							 ETH_LINK_HALF_DUPLEX;
-	dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
-							   ETH_LINK_FIXED;
+	dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+							 RTE_ETH_LINK_HALF_DUPLEX;
+	dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+							   RTE_ETH_LINK_FIXED;
 	pp2_ppio_get_link_state(priv->ppio, &link_up);
-	dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -1709,11 +1709,11 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
 {
 	struct mrvl_priv *priv = dev->data->dev_private;
 
-	info->speed_capa = ETH_LINK_SPEED_10M |
-			   ETH_LINK_SPEED_100M |
-			   ETH_LINK_SPEED_1G |
-			   ETH_LINK_SPEED_2_5G |
-			   ETH_LINK_SPEED_10G;
+	info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			   RTE_ETH_LINK_SPEED_100M |
+			   RTE_ETH_LINK_SPEED_1G |
+			   RTE_ETH_LINK_SPEED_2_5G |
+			   RTE_ETH_LINK_SPEED_10G;
 
 	info->max_rx_queues = MRVL_PP2_RXQ_MAX;
 	info->max_tx_queues = MRVL_PP2_TXQ_MAX;
@@ -1733,9 +1733,9 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
 	info->tx_offload_capa = MRVL_TX_OFFLOADS;
 	info->tx_queue_offload_capa = MRVL_TX_OFFLOADS;
 
-	info->flow_type_rss_offloads = ETH_RSS_IPV4 |
-				       ETH_RSS_NONFRAG_IPV4_TCP |
-				       ETH_RSS_NONFRAG_IPV4_UDP;
+	info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+				       RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				       RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	/* By default packets are dropped if no descriptors are available */
 	info->default_rxconf.rx_drop_en = 1;
@@ -1864,13 +1864,13 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
 	int ret;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		MRVL_LOG(ERR, "VLAN stripping is not supported\n");
 		return -ENOTSUP;
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ret = mrvl_populate_vlan_table(dev, 1);
 		else
 			ret = mrvl_populate_vlan_table(dev, 0);
@@ -1879,7 +1879,7 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			return ret;
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
 		MRVL_LOG(ERR, "Extend VLAN not supported\n");
 		return -ENOTSUP;
 	}
@@ -2022,7 +2022,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 
 	rxq->priv = priv;
 	rxq->mp = mp;
-	rxq->cksum_enabled = offloads & DEV_RX_OFFLOAD_IPV4_CKSUM;
+	rxq->cksum_enabled = offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 	rxq->queue_id = idx;
 	rxq->port_id = dev->data->port_id;
 	mrvl_port_to_bpool_lookup[rxq->port_id] = priv->bpool;
@@ -2182,7 +2182,7 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		return ret;
 	}
 
-	fc_conf->mode = en ? RTE_FC_RX_PAUSE : RTE_FC_NONE;
+	fc_conf->mode = en ? RTE_ETH_FC_RX_PAUSE : RTE_ETH_FC_NONE;
 
 	ret = pp2_ppio_get_tx_pause(priv->ppio, &en);
 	if (ret) {
@@ -2191,10 +2191,10 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	if (en) {
-		if (fc_conf->mode == RTE_FC_NONE)
-			fc_conf->mode = RTE_FC_TX_PAUSE;
+		if (fc_conf->mode == RTE_ETH_FC_NONE)
+			fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		else
-			fc_conf->mode = RTE_FC_FULL;
+			fc_conf->mode = RTE_ETH_FC_FULL;
 	}
 
 	return 0;
@@ -2240,19 +2240,19 @@ mrvl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		rx_en = 1;
 		tx_en = 1;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		rx_en = 0;
 		tx_en = 1;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		rx_en = 1;
 		tx_en = 0;
 		break;
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		rx_en = 0;
 		tx_en = 0;
 		break;
@@ -2329,11 +2329,11 @@ mrvl_rss_hash_conf_get(struct rte_eth_dev *dev,
 	if (hash_type == PP2_PPIO_HASH_T_NONE)
 		rss_conf->rss_hf = 0;
 	else if (hash_type == PP2_PPIO_HASH_T_2_TUPLE)
-		rss_conf->rss_hf = ETH_RSS_IPV4;
+		rss_conf->rss_hf = RTE_ETH_RSS_IPV4;
 	else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && priv->rss_hf_tcp)
-		rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && !priv->rss_hf_tcp)
-		rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	return 0;
 }
@@ -3152,7 +3152,7 @@ mrvl_eth_dev_create(struct rte_vdev_device *vdev, const char *name)
 	eth_dev->dev_ops = &mrvl_ops;
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	rte_eth_dev_probing_finish(eth_dev);
 	return 0;
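
The mrvl flow-control hunks above map two independent pause knobs onto the four RTE_ETH_FC_* modes. The mapping, factored as a hypothetical standalone helper for illustration:

    #include <stdbool.h>
    #include <rte_ethdev.h>

    static enum rte_eth_fc_mode
    fc_mode_from_pause(bool rx_pause_en, bool tx_pause_en)
    {
        if (rx_pause_en && tx_pause_en)
            return RTE_ETH_FC_FULL;     /* pause both directions */
        if (tx_pause_en)
            return RTE_ETH_FC_TX_PAUSE;
        if (rx_pause_en)
            return RTE_ETH_FC_RX_PAUSE;
        return RTE_ETH_FC_NONE;
    }
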
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9e2a40597349..9c4ae80e7e16 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -40,16 +40,16 @@
 #include "hn_nvs.h"
 #include "ndis.h"
 
-#define HN_TX_OFFLOAD_CAPS (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-			    DEV_TX_OFFLOAD_TCP_CKSUM  | \
-			    DEV_TX_OFFLOAD_UDP_CKSUM  | \
-			    DEV_TX_OFFLOAD_TCP_TSO    | \
-			    DEV_TX_OFFLOAD_MULTI_SEGS | \
-			    DEV_TX_OFFLOAD_VLAN_INSERT)
+#define HN_TX_OFFLOAD_CAPS (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+			    RTE_ETH_TX_OFFLOAD_TCP_CKSUM  | \
+			    RTE_ETH_TX_OFFLOAD_UDP_CKSUM  | \
+			    RTE_ETH_TX_OFFLOAD_TCP_TSO    | \
+			    RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+			    RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 
-#define HN_RX_OFFLOAD_CAPS (DEV_RX_OFFLOAD_CHECKSUM | \
-			    DEV_RX_OFFLOAD_VLAN_STRIP | \
-			    DEV_RX_OFFLOAD_RSS_HASH)
+#define HN_RX_OFFLOAD_CAPS (RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+			    RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+			    RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define NETVSC_ARG_LATENCY "latency"
 #define NETVSC_ARG_RXBREAK "rx_copybreak"
@@ -238,21 +238,21 @@ hn_dev_link_update(struct rte_eth_dev *dev,
 	hn_rndis_get_linkspeed(hv);
 
 	link = (struct rte_eth_link) {
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_autoneg = ETH_LINK_SPEED_FIXED,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_autoneg = RTE_ETH_LINK_SPEED_FIXED,
 		.link_speed = hv->link_speed / 10000,
 	};
 
 	if (hv->link_status == NDIS_MEDIA_STATE_CONNECTED)
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 	else
-		link.link_status = ETH_LINK_DOWN;
+		link.link_status = RTE_ETH_LINK_DOWN;
 
 	if (old.link_status == link.link_status)
 		return 0;
 
 	PMD_INIT_LOG(DEBUG, "Port %d is %s", dev->data->port_id,
-		     (link.link_status == ETH_LINK_UP) ? "up" : "down");
+		     (link.link_status == RTE_ETH_LINK_UP) ? "up" : "down");
 
 	return rte_eth_linkstatus_set(dev, &link);
 }
@@ -263,14 +263,14 @@ static int hn_dev_info_get(struct rte_eth_dev *dev,
 	struct hn_data *hv = dev->data->dev_private;
 	int rc;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
 	dev_info->min_rx_bufsize = HN_MIN_RX_BUF_SIZE;
 	dev_info->max_rx_pktlen  = HN_MAX_XFER_LEN;
 	dev_info->max_mac_addrs  = 1;
 
 	dev_info->hash_key_size = NDIS_HASH_KEYSIZE_TOEPLITZ;
 	dev_info->flow_type_rss_offloads = hv->rss_offloads;
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 
 	dev_info->max_rx_queues = hv->max_queues;
 	dev_info->max_tx_queues = hv->max_queues;
@@ -306,8 +306,8 @@ static int hn_rss_reta_update(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < NDIS_HASH_INDCNT; i++) {
-		uint16_t idx = i / RTE_RETA_GROUP_SIZE;
-		uint16_t shift = i % RTE_RETA_GROUP_SIZE;
+		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint64_t mask = (uint64_t)1 << shift;
 
 		if (reta_conf[idx].mask & mask)
@@ -346,8 +346,8 @@ static int hn_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < NDIS_HASH_INDCNT; i++) {
-		uint16_t idx = i / RTE_RETA_GROUP_SIZE;
-		uint16_t shift = i % RTE_RETA_GROUP_SIZE;
+		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint64_t mask = (uint64_t)1 << shift;
 
 		if (reta_conf[idx].mask & mask)
@@ -362,17 +362,17 @@ static void hn_rss_hash_init(struct hn_data *hv,
 	/* Convert from DPDK RSS hash flags to NDIS hash flags */
 	hv->rss_hash = NDIS_HASH_FUNCTION_TOEPLITZ;
 
-	if (rss_conf->rss_hf & ETH_RSS_IPV4)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
 		hv->rss_hash |= NDIS_HASH_IPV4;
-	if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		hv->rss_hash |= NDIS_HASH_TCP_IPV4;
-	if (rss_conf->rss_hf & ETH_RSS_IPV6)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
 		hv->rss_hash |=  NDIS_HASH_IPV6;
-	if (rss_conf->rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX)
 		hv->rss_hash |=  NDIS_HASH_IPV6_EX;
-	if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		hv->rss_hash |= NDIS_HASH_TCP_IPV6;
-	if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		hv->rss_hash |= NDIS_HASH_TCP_IPV6_EX;
 
 	memcpy(hv->rss_key, rss_conf->rss_key ? : rss_default_key,
@@ -427,22 +427,22 @@ static int hn_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	rss_conf->rss_hf = 0;
 	if (hv->rss_hash & NDIS_HASH_IPV4)
-		rss_conf->rss_hf |= ETH_RSS_IPV4;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
 
 	if (hv->rss_hash & NDIS_HASH_TCP_IPV4)
-		rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 
 	if (hv->rss_hash & NDIS_HASH_IPV6)
-		rss_conf->rss_hf |= ETH_RSS_IPV6;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
 
 	if (hv->rss_hash & NDIS_HASH_IPV6_EX)
-		rss_conf->rss_hf |= ETH_RSS_IPV6_EX;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_EX;
 
 	if (hv->rss_hash & NDIS_HASH_TCP_IPV6)
-		rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 
 	if (hv->rss_hash & NDIS_HASH_TCP_IPV6_EX)
-		rss_conf->rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 
 	return 0;
 }
@@ -686,8 +686,8 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev_conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev_conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	unsupported = txmode->offloads & ~HN_TX_OFFLOAD_CAPS;
 	if (unsupported) {
@@ -705,7 +705,7 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	hv->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	hv->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 	err = hn_rndis_conf_offload(hv, txmode->offloads,
 				    rxmode->offloads);
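
The hn_dev_configure() hunk above shows the common PMD idiom, unchanged by the rename: when the application asks for RSS multi-queue Rx, the driver implicitly enables the RSS-hash Rx offload. A minimal sketch with the new names (the helper name is illustrative only):

    #include <rte_ethdev.h>

    static void
    enable_rss_hash_if_requested(struct rte_eth_conf *dev_conf)
    {
        if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
            dev_conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
    }
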
diff --git a/drivers/net/netvsc/hn_rndis.c b/drivers/net/netvsc/hn_rndis.c
index 62ba39636cd8..1b63b27e0c3e 100644
--- a/drivers/net/netvsc/hn_rndis.c
+++ b/drivers/net/netvsc/hn_rndis.c
@@ -710,15 +710,15 @@ hn_rndis_query_rsscaps(struct hn_data *hv,
 
 	hv->rss_offloads = 0;
 	if (caps.ndis_caps & NDIS_RSS_CAP_IPV4)
-		hv->rss_offloads |= ETH_RSS_IPV4
-			| ETH_RSS_NONFRAG_IPV4_TCP
-			| ETH_RSS_NONFRAG_IPV4_UDP;
+		hv->rss_offloads |= RTE_ETH_RSS_IPV4
+			| RTE_ETH_RSS_NONFRAG_IPV4_TCP
+			| RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (caps.ndis_caps & NDIS_RSS_CAP_IPV6)
-		hv->rss_offloads |= ETH_RSS_IPV6
-			| ETH_RSS_NONFRAG_IPV6_TCP;
+		hv->rss_offloads |= RTE_ETH_RSS_IPV6
+			| RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (caps.ndis_caps & NDIS_RSS_CAP_IPV6_EX)
-		hv->rss_offloads |= ETH_RSS_IPV6_EX
-			| ETH_RSS_IPV6_TCP_EX;
+		hv->rss_offloads |= RTE_ETH_RSS_IPV6_EX
+			| RTE_ETH_RSS_IPV6_TCP_EX;
 
 	/* Commit! */
 	*rxr_cnt0 = rxr_cnt;
@@ -800,7 +800,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 		params.ndis_hdr.ndis_size = NDIS_OFFLOAD_PARAMS_SIZE;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_TCP4)
 			params.ndis_tcp4csum = NDIS_OFFLOAD_PARAM_TX;
 		else
@@ -812,7 +812,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) {
 		if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4)
 		    == NDIS_RXCSUM_CAP_TCP4)
 			params.ndis_tcp4csum |= NDIS_OFFLOAD_PARAM_RX;
@@ -826,7 +826,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4)
 			params.ndis_udp4csum = NDIS_OFFLOAD_PARAM_TX;
 		else
@@ -839,7 +839,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (rx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4)
 			params.ndis_udp4csum |= NDIS_OFFLOAD_PARAM_RX;
 		else
@@ -851,21 +851,21 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
 		if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_IP4)
 		    == NDIS_TXCSUM_CAP_IP4)
 			params.ndis_ip4csum = NDIS_OFFLOAD_PARAM_TX;
 		else
 			goto unsupported;
 	}
-	if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
 			params.ndis_ip4csum |= NDIS_OFFLOAD_PARAM_RX;
 		else
 			goto unsupported;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		if (hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023)
 			params.ndis_lsov2_ip4 = NDIS_OFFLOAD_LSOV2_ON;
 		else
@@ -907,41 +907,41 @@ int hn_rndis_get_offload(struct hn_data *hv,
 		return error;
 	}
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-				    DEV_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				    RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_IP4)
 	    == HN_NDIS_TXCSUM_CAP_IP4)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_TCP4)
 	    == HN_NDIS_TXCSUM_CAP_TCP4 &&
 	    (hwcaps.ndis_csum.ndis_ip6_txcsum & HN_NDIS_TXCSUM_CAP_TCP6)
 	    == HN_NDIS_TXCSUM_CAP_TCP6)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4) &&
 	    (hwcaps.ndis_csum.ndis_ip6_txcsum & NDIS_TXCSUM_CAP_UDP6))
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_UDP_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
 
 	if ((hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023) &&
 	    (hwcaps.ndis_lsov2.ndis_ip6_opts & HN_NDIS_LSOV2_CAP_IP6)
 	    == HN_NDIS_LSOV2_CAP_IP6)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
-				    DEV_RX_OFFLOAD_RSS_HASH;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				    RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4) &&
 	    (hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_TCP6))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4) &&
 	    (hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_UDP6))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_UDP_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
 
 	return 0;
 }
diff --git a/drivers/net/nfb/nfb_ethdev.c b/drivers/net/nfb/nfb_ethdev.c
index 99d93ebf4667..3c39937816a4 100644
--- a/drivers/net/nfb/nfb_ethdev.c
+++ b/drivers/net/nfb/nfb_ethdev.c
@@ -200,7 +200,7 @@ nfb_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_rx_pktlen = (uint32_t)-1;
 	dev_info->max_rx_queues = dev->data->nb_rx_queues;
 	dev_info->max_tx_queues = dev->data->nb_tx_queues;
-	dev_info->speed_capa = ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -268,26 +268,26 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
 
 	status.speed = MAC_SPEED_UNKNOWN;
 
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_status = ETH_LINK_DOWN;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_autoneg = ETH_LINK_SPEED_FIXED;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_SPEED_FIXED;
 
 	if (internals->rxmac[0] != NULL) {
 		nc_rxmac_read_status(internals->rxmac[0], &status);
 
 		switch (status.speed) {
 		case MAC_SPEED_10G:
-			link.link_speed = ETH_SPEED_NUM_10G;
+			link.link_speed = RTE_ETH_SPEED_NUM_10G;
 			break;
 		case MAC_SPEED_40G:
-			link.link_speed = ETH_SPEED_NUM_40G;
+			link.link_speed = RTE_ETH_SPEED_NUM_40G;
 			break;
 		case MAC_SPEED_100G:
-			link.link_speed = ETH_SPEED_NUM_100G;
+			link.link_speed = RTE_ETH_SPEED_NUM_100G;
 			break;
 		default:
-			link.link_speed = ETH_SPEED_NUM_NONE;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 			break;
 		}
 	}
@@ -296,7 +296,7 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
 		nc_rxmac_read_status(internals->rxmac[i], &status);
 
 		if (status.enabled && status.link_up) {
-			link.link_status = ETH_LINK_UP;
+			link.link_status = RTE_ETH_LINK_UP;
 			break;
 		}
 	}
diff --git a/drivers/net/nfb/nfb_rx.c b/drivers/net/nfb/nfb_rx.c
index 3ebb332ae46c..f76e2ba64621 100644
--- a/drivers/net/nfb/nfb_rx.c
+++ b/drivers/net/nfb/nfb_rx.c
@@ -42,7 +42,7 @@ nfb_check_timestamp(struct rte_devargs *devargs)
 	}
 	/* Timestamps are enabled when there is
 	 * key-value pair: enable_timestamp=1
-	 * TODO: timestamp should be enabled with DEV_RX_OFFLOAD_TIMESTAMP
+	 * TODO: timestamp should be enabled with RTE_ETH_RX_OFFLOAD_TIMESTAMP
 	 */
 	if (rte_kvargs_process(kvlist, TIMESTAMP_ARG,
 		timestamp_check_handler, NULL) < 0) {
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 0003fd54dde5..3ea697c54462 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -160,8 +160,8 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Checking TX mode */
 	if (txmode->mq_mode) {
@@ -170,7 +170,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	}
 
 	/* Checking RX mode */
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS &&
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS &&
 	    !(hw->cap & NFP_NET_CFG_CTRL_RSS)) {
 		PMD_INIT_LOG(INFO, "RSS not supported");
 		return -EINVAL;
@@ -359,19 +359,19 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
 		if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
 			ctrl |= NFP_NET_CFG_CTRL_RXCSUM;
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 		if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
 			ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
 	}
 
 	hw->mtu = dev->data->mtu;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
 
 	/* L2 broadcast */
@@ -383,13 +383,13 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 		ctrl |= NFP_NET_CFG_CTRL_L2MC;
 
 	/* TX checksum offload */
-	if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 		ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
 
 	/* LSO offload */
-	if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		if (hw->cap & NFP_NET_CFG_CTRL_LSO)
 			ctrl |= NFP_NET_CFG_CTRL_LSO;
 		else
@@ -397,7 +397,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	}
 
 	/* RX gather */
-	if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		ctrl |= NFP_NET_CFG_CTRL_GATHER;
 
 	return ctrl;
@@ -485,14 +485,14 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 	int ret;
 
 	static const uint32_t ls_to_ethtool[] = {
-		[NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = ETH_SPEED_NUM_NONE,
-		[NFP_NET_CFG_STS_LINK_RATE_UNKNOWN]     = ETH_SPEED_NUM_NONE,
-		[NFP_NET_CFG_STS_LINK_RATE_1G]          = ETH_SPEED_NUM_1G,
-		[NFP_NET_CFG_STS_LINK_RATE_10G]         = ETH_SPEED_NUM_10G,
-		[NFP_NET_CFG_STS_LINK_RATE_25G]         = ETH_SPEED_NUM_25G,
-		[NFP_NET_CFG_STS_LINK_RATE_40G]         = ETH_SPEED_NUM_40G,
-		[NFP_NET_CFG_STS_LINK_RATE_50G]         = ETH_SPEED_NUM_50G,
-		[NFP_NET_CFG_STS_LINK_RATE_100G]        = ETH_SPEED_NUM_100G,
+		[NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = RTE_ETH_SPEED_NUM_NONE,
+		[NFP_NET_CFG_STS_LINK_RATE_UNKNOWN]     = RTE_ETH_SPEED_NUM_NONE,
+		[NFP_NET_CFG_STS_LINK_RATE_1G]          = RTE_ETH_SPEED_NUM_1G,
+		[NFP_NET_CFG_STS_LINK_RATE_10G]         = RTE_ETH_SPEED_NUM_10G,
+		[NFP_NET_CFG_STS_LINK_RATE_25G]         = RTE_ETH_SPEED_NUM_25G,
+		[NFP_NET_CFG_STS_LINK_RATE_40G]         = RTE_ETH_SPEED_NUM_40G,
+		[NFP_NET_CFG_STS_LINK_RATE_50G]         = RTE_ETH_SPEED_NUM_50G,
+		[NFP_NET_CFG_STS_LINK_RATE_100G]        = RTE_ETH_SPEED_NUM_100G,
 	};
 
 	PMD_DRV_LOG(DEBUG, "Link update");
@@ -504,15 +504,15 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 	memset(&link, 0, sizeof(struct rte_eth_link));
 
 	if (nn_link_status & NFP_NET_CFG_STS_LINK)
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	nn_link_status = (nn_link_status >> NFP_NET_CFG_STS_LINK_RATE_SHIFT) &
 			 NFP_NET_CFG_STS_LINK_RATE_MASK;
 
 	if (nn_link_status >= RTE_DIM(ls_to_ethtool))
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	else
 		link.link_speed = ls_to_ethtool[nn_link_status];
 
@@ -701,26 +701,26 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = 1;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
-		dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+		dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM |
-					     DEV_RX_OFFLOAD_UDP_CKSUM |
-					     DEV_RX_OFFLOAD_TCP_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+					     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+					     RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)
-		dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_TXCSUM)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM |
-					     DEV_TX_OFFLOAD_UDP_CKSUM |
-					     DEV_TX_OFFLOAD_TCP_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+					     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+					     RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_LSO_ANY)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_GATHER)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -757,22 +757,22 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	};
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
-		dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
-						   ETH_RSS_NONFRAG_IPV4_TCP |
-						   ETH_RSS_NONFRAG_IPV4_UDP |
-						   ETH_RSS_IPV6 |
-						   ETH_RSS_NONFRAG_IPV6_TCP |
-						   ETH_RSS_NONFRAG_IPV6_UDP;
+		dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+						   RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+						   RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+						   RTE_ETH_RSS_IPV6 |
+						   RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+						   RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 		dev_info->reta_size = NFP_NET_CFG_RSS_ITBL_SZ;
 		dev_info->hash_key_size = NFP_NET_CFG_RSS_KEY_SZ;
 	}
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			       ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
-			       ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			       RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+			       RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -843,7 +843,7 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 	if (link.link_status)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 			    dev->data->port_id, link.link_speed,
-			    link.link_duplex == ETH_LINK_FULL_DUPLEX
+			    link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX
 			    ? "full-duplex" : "half-duplex");
 	else
 		PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -973,12 +973,12 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	new_ctrl = 0;
 
 	/* Enable vlan strip if it is not configured yet */
-	if ((mask & ETH_VLAN_STRIP_OFFLOAD) &&
+	if ((mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
 	    !(hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
 		new_ctrl = hw->ctrl | NFP_NET_CFG_CTRL_RXVLAN;
 
 	/* Disable vlan strip just if it is configured */
-	if (!(mask & ETH_VLAN_STRIP_OFFLOAD) &&
+	if (!(mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
 	    (hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
 		new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_RXVLAN;
 
@@ -1018,8 +1018,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 	 */
 	for (i = 0; i < reta_size; i += 4) {
 		/* Handling 4 RSS entries per loop */
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
 
 		if (!mask)
@@ -1099,8 +1099,8 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 	 */
 	for (i = 0; i < reta_size; i += 4) {
 		/* Handling 4 RSS entries per loop */
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
 
 		if (!mask)
@@ -1138,22 +1138,22 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 
 	rss_hf = rss_conf->rss_hf;
 
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_TCP;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_UDP;
 
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_TCP;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_UDP;
 
 	cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK;
@@ -1223,22 +1223,22 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 	cfg_rss_ctrl = nn_cfg_readl(hw, NFP_NET_CFG_RSS_CTRL);
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 	/* Propagate current RSS hash functions to caller */
 	rss_conf->rss_hf = rss_hf;
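
nfp_net_rss_hash_write() above translates the renamed RTE_ETH_RSS_* request bits into device control bits one flag at a time. The shape of that translation, with placeholder register bits (the CFG_* values below are hypothetical, not NFP constants):

    #include <stdint.h>
    #include <rte_ethdev.h>

    #define CFG_RSS_IPV4     0x1 /* hypothetical device bits */
    #define CFG_RSS_IPV4_TCP 0x2
    #define CFG_RSS_IPV6     0x4

    static uint32_t
    rss_hf_to_cfg(uint64_t rss_hf)
    {
        uint32_t cfg = 0;

        if (rss_hf & RTE_ETH_RSS_IPV4)
            cfg |= CFG_RSS_IPV4;
        if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
            cfg |= CFG_RSS_IPV4_TCP;
        if (rss_hf & RTE_ETH_RSS_IPV6)
            cfg |= CFG_RSS_IPV6;
        return cfg;
    }
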
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 1169ea77a8c7..e08e594b04fe 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -141,7 +141,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 62cb3536e0c9..817fe64dbceb 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -103,7 +103,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3b5c6615adfa..fc76b84b5b66 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -409,7 +409,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 	dev->data->dev_link.link_status = link_up;
 
 	link_speeds = &dev->data->dev_conf.link_speeds;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG)
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG)
 		negotiate = true;
 
 	err = hw->mac.get_link_capabilities(hw, &speed, &negotiate);
@@ -418,11 +418,11 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 
 	allowed_speeds = 0;
 	if (hw->mac.default_speeds & NGBE_LINK_SPEED_1GB_FULL)
-		allowed_speeds |= ETH_LINK_SPEED_1G;
+		allowed_speeds |= RTE_ETH_LINK_SPEED_1G;
 	if (hw->mac.default_speeds & NGBE_LINK_SPEED_100M_FULL)
-		allowed_speeds |= ETH_LINK_SPEED_100M;
+		allowed_speeds |= RTE_ETH_LINK_SPEED_100M;
 	if (hw->mac.default_speeds & NGBE_LINK_SPEED_10M_FULL)
-		allowed_speeds |= ETH_LINK_SPEED_10M;
+		allowed_speeds |= RTE_ETH_LINK_SPEED_10M;
 
 	if (*link_speeds & ~allowed_speeds) {
 		PMD_INIT_LOG(ERR, "Invalid link setting");
@@ -430,14 +430,14 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 	}
 
 	speed = 0x0;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		speed = hw->mac.default_speeds;
 	} else {
-		if (*link_speeds & ETH_LINK_SPEED_1G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed |= NGBE_LINK_SPEED_1GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_100M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed |= NGBE_LINK_SPEED_100M_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_10M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
 			speed |= NGBE_LINK_SPEED_10M_FULL;
 	}
 
@@ -653,8 +653,8 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->rx_desc_lim = rx_desc_lim;
 	dev_info->tx_desc_lim = tx_desc_lim;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_10M;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_10M;
 
 	/* Driver-preferred Rx/Tx parameters */
 	dev_info->default_rxportconf.burst_size = 32;
@@ -682,11 +682,11 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 	int wait = 1;
 
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			~ETH_LINK_SPEED_AUTONEG);
+			~RTE_ETH_LINK_SPEED_AUTONEG);
 
 	hw->mac.get_link_status = true;
 
@@ -699,8 +699,8 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 
 	err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
 	if (err != 0) {
-		link.link_speed = ETH_SPEED_NUM_NONE;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
@@ -708,27 +708,27 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 		return rte_eth_linkstatus_set(dev, &link);
 
 	intr->flags &= ~NGBE_FLAG_NEED_LINK_CONFIG;
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (link_speed) {
 	default:
 	case NGBE_LINK_SPEED_UNKNOWN:
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 
 	case NGBE_LINK_SPEED_10M_FULL:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		lan_speed = 0;
 		break;
 
 	case NGBE_LINK_SPEED_100M_FULL:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		lan_speed = 1;
 		break;
 
 	case NGBE_LINK_SPEED_1GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		lan_speed = 2;
 		break;
 	}
@@ -912,11 +912,11 @@ ngbe_dev_link_status_print(struct rte_eth_dev *dev)
 
 	rte_eth_linkstatus_get(dev, &link);
 
-	if (link.link_status == ETH_LINK_UP) {
+	if (link.link_status == RTE_ETH_LINK_UP) {
 		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned int)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -956,7 +956,7 @@ ngbe_dev_interrupt_action(struct rte_eth_dev *dev)
 		ngbe_dev_link_update(dev, 0);
 
 		/* likely to up */
-		if (link.link_status != ETH_LINK_UP)
+		if (link.link_status != RTE_ETH_LINK_UP)
 			/* handle it 1 sec later, wait it being stable */
 			timeout = NGBE_LINK_UP_CHECK_TIMEOUT;
 		/* likely to down */
diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index 25b9e5b1ce1b..ca03469d0e6d 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -61,16 +61,16 @@ struct pmd_internals {
 	rte_spinlock_t rss_lock;
 
 	uint16_t reta_size;
-	struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_128 /
-			RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_128 /
+			RTE_ETH_RETA_GROUP_SIZE];
 
 	uint8_t rss_key[40];                /**< 40-byte hash key. */
 };
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(eth_null_logtype, NOTICE);
@@ -189,7 +189,7 @@ eth_dev_start(struct rte_eth_dev *dev)
 	if (dev == NULL)
 		return -EINVAL;
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -199,7 +199,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
 	if (dev == NULL)
 		return 0;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -391,9 +391,9 @@ eth_rss_reta_update(struct rte_eth_dev *dev,
 	rte_spinlock_lock(&internal->rss_lock);
 
 	/* Copy RETA table */
-	for (i = 0; i < (internal->reta_size / RTE_RETA_GROUP_SIZE); i++) {
+	for (i = 0; i < (internal->reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
 		internal->reta_conf[i].mask = reta_conf[i].mask;
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				internal->reta_conf[i].reta[j] = reta_conf[i].reta[j];
 	}
@@ -416,8 +416,8 @@ eth_rss_reta_query(struct rte_eth_dev *dev,
 	rte_spinlock_lock(&internal->rss_lock);
 
 	/* Copy RETA table */
-	for (i = 0; i < (internal->reta_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (internal->reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = internal->reta_conf[i].reta[j];
 	}
@@ -548,8 +548,8 @@ eth_dev_null_create(struct rte_vdev_device *dev, struct pmd_options *args)
 	internals->port_id = eth_dev->data->port_id;
 	rte_eth_random_addr(internals->eth_addr.addr_bytes);
 
-	internals->flow_type_rss_offloads =  ETH_RSS_PROTO_MASK;
-	internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_RETA_GROUP_SIZE;
+	internals->flow_type_rss_offloads =  RTE_ETH_RSS_PROTO_MASK;
+	internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_ETH_RETA_GROUP_SIZE;
 
 	rte_memcpy(internals->rss_key, default_rss_key, 40);
 
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index f578123ed00b..5b8cbec67b5d 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -158,7 +158,7 @@ octeontx_link_status_print(struct rte_eth_dev *eth_dev,
 		octeontx_log_info("Port %u: Link Up - speed %u Mbps - %s",
 			  (eth_dev->data->port_id),
 			  link->link_speed,
-			  link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+			  link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 			  "full-duplex" : "half-duplex");
 	else
 		octeontx_log_info("Port %d: Link Down",
@@ -171,38 +171,38 @@ octeontx_link_status_update(struct octeontx_nic *nic,
 {
 	memset(link, 0, sizeof(*link));
 
-	link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	switch (nic->speed) {
 	case OCTEONTX_LINK_SPEED_SGMII:
-		link->link_speed = ETH_SPEED_NUM_1G;
+		link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case OCTEONTX_LINK_SPEED_XAUI:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 
 	case OCTEONTX_LINK_SPEED_RXAUI:
 	case OCTEONTX_LINK_SPEED_10G_R:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case OCTEONTX_LINK_SPEED_QSGMII:
-		link->link_speed = ETH_SPEED_NUM_5G;
+		link->link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 	case OCTEONTX_LINK_SPEED_40G_R:
-		link->link_speed = ETH_SPEED_NUM_40G;
+		link->link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 
 	case OCTEONTX_LINK_SPEED_RESERVE1:
 	case OCTEONTX_LINK_SPEED_RESERVE2:
 	default:
-		link->link_speed = ETH_SPEED_NUM_NONE;
+		link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 		octeontx_log_err("incorrect link speed %d", nic->speed);
 		break;
 	}
 
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
-	link->link_autoneg = ETH_LINK_AUTONEG;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 }
 
 static void
@@ -355,20 +355,20 @@ octeontx_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
 	uint16_t flags = 0;
 
-	if (nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= OCCTX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (nic->tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= OCCTX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(nic->tx_offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= OCCTX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (nic->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= OCCTX_TX_MULTI_SEG_F;
 
 	return flags;
@@ -380,21 +380,21 @@ octeontx_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
 	uint16_t flags = 0;
 
-	if (nic->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
-			 DEV_RX_OFFLOAD_UDP_CKSUM))
+	if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			 RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= OCCTX_RX_OFFLOAD_CSUM_F;
 
-	if (nic->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= OCCTX_RX_OFFLOAD_CSUM_F;
 
-	if (nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		flags |= OCCTX_RX_MULTI_SEG_F;
 		eth_dev->data->scattered_rx = 1;
 		/* If scatter mode is enabled, TX should also be in multi
 		 * seg mode, else memory leak will occur
 		 */
-		nic->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		nic->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	}
 
 	return flags;
@@ -423,18 +423,18 @@ octeontx_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-		rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+		rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		octeontx_log_err("unsupported rx qmode %d", rxmode->mq_mode);
 		return -EINVAL;
 	}
 
-	if (!(txmode->offloads & DEV_TX_OFFLOAD_MT_LOCKFREE)) {
+	if (!(txmode->offloads & RTE_ETH_TX_OFFLOAD_MT_LOCKFREE)) {
 		PMD_INIT_LOG(NOTICE, "cant disable lockfree tx");
-		txmode->offloads |= DEV_TX_OFFLOAD_MT_LOCKFREE;
+		txmode->offloads |= RTE_ETH_TX_OFFLOAD_MT_LOCKFREE;
 	}
 
-	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		octeontx_log_err("setting link speed/duplex not supported");
 		return -EINVAL;
 	}
@@ -530,13 +530,13 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	 * when this feature has not been enabled before.
 	 */
 	if (data->dev_started && frame_size > buffsz &&
-	    !(nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	    !(nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		octeontx_log_err("Scatter mode is disabled");
 		return -EINVAL;
 	}
 
 	/* Check <seg size> * <max_seg>  >= max_frame */
-	if ((nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER)	&&
+	if ((nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	&&
 	    (frame_size > buffsz * OCCTX_RX_NB_SEG_MAX))
 		return -EINVAL;
 
@@ -571,7 +571,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
 
 	/* Setup scatter mode if needed by jumbo */
 	if (data->mtu > buffsz) {
-		nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+		nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 		nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
 		nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
 	}
@@ -843,10 +843,10 @@ octeontx_dev_info(struct rte_eth_dev *dev,
 	struct octeontx_nic *nic = octeontx_pmd_priv(dev);
 
 	/* Autonegotiation may be disabled */
-	dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
-	dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			ETH_LINK_SPEED_40G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_40G;
 
 	/* Min/Max MTU supported */
 	dev_info->min_rx_bufsize = OCCTX_MIN_FRS;
@@ -1356,7 +1356,7 @@ octeontx_create(struct rte_vdev_device *dev, int port, uint8_t evdev,
 	nic->ev_ports = 1;
 	nic->print_flag = -1;
 
-	data->dev_link.link_status = ETH_LINK_DOWN;
+	data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	data->dev_started = 0;
 	data->promiscuous = 0;
 	data->all_multicast = 0;
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index 3a02824e3948..c493fa7a03ed 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -55,23 +55,23 @@
 #define OCCTX_MAX_MTU		(OCCTX_MAX_FRS - OCCTX_L2_OVERHEAD)
 
 #define OCTEONTX_RX_OFFLOADS		(				   \
-					 DEV_RX_OFFLOAD_CHECKSUM	 | \
-					 DEV_RX_OFFLOAD_SCTP_CKSUM       | \
-					 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-					 DEV_RX_OFFLOAD_SCATTER	         | \
-					 DEV_RX_OFFLOAD_SCATTER		 | \
-					 DEV_RX_OFFLOAD_VLAN_FILTER)
+					 RTE_ETH_RX_OFFLOAD_CHECKSUM	 | \
+					 RTE_ETH_RX_OFFLOAD_SCTP_CKSUM       | \
+					 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+					 RTE_ETH_RX_OFFLOAD_SCATTER	         | \
+					 RTE_ETH_RX_OFFLOAD_SCATTER		 | \
+					 RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 
 #define OCTEONTX_TX_OFFLOADS		(				   \
-					 DEV_TX_OFFLOAD_MBUF_FAST_FREE	 | \
-					 DEV_TX_OFFLOAD_MT_LOCKFREE	 | \
-					 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-					 DEV_TX_OFFLOAD_OUTER_UDP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_IPV4_CKSUM	 | \
-					 DEV_TX_OFFLOAD_TCP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_UDP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_SCTP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_MULTI_SEGS)
+					 RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE	 | \
+					 RTE_ETH_TX_OFFLOAD_MT_LOCKFREE	 | \
+					 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+					 RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_TCP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_UDP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 static inline struct octeontx_nic *
 octeontx_pmd_priv(struct rte_eth_dev *dev)
diff --git a/drivers/net/octeontx/octeontx_ethdev_ops.c b/drivers/net/octeontx/octeontx_ethdev_ops.c
index dbe13ce3826b..6ec2b71b0672 100644
--- a/drivers/net/octeontx/octeontx_ethdev_ops.c
+++ b/drivers/net/octeontx/octeontx_ethdev_ops.c
@@ -43,20 +43,20 @@ octeontx_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	rxmode = &dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 			rc = octeontx_vlan_hw_filter(nic, true);
 			if (rc)
 				goto done;
 
-			nic->rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+			nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			nic->rx_offload_flags |= OCCTX_RX_VLAN_FLTR_F;
 		} else {
 			rc = octeontx_vlan_hw_filter(nic, false);
 			if (rc)
 				goto done;
 
-			nic->rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+			nic->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			nic->rx_offload_flags &= ~OCCTX_RX_VLAN_FLTR_F;
 		}
 	}
@@ -139,7 +139,7 @@ octeontx_dev_vlan_offload_init(struct rte_eth_dev *dev)
 
 	TAILQ_INIT(&nic->vlan_info.fltr_tbl);
 
-	rc = octeontx_dev_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+	rc = octeontx_dev_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
 	if (rc)
 		octeontx_log_err("Failed to set vlan offload rc=%d", rc);
 
@@ -219,13 +219,13 @@ octeontx_dev_flow_ctrl_get(struct rte_eth_dev *dev,
 		return rc;
 
 	if (conf.rx_pause && conf.tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (conf.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (conf.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	/* low_water & high_water values are in Bytes */
 	fc_conf->low_water = conf.low_water;
@@ -272,10 +272,10 @@ octeontx_dev_flow_ctrl_set(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-			(fc_conf->mode == RTE_FC_RX_PAUSE);
-	tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-			(fc_conf->mode == RTE_FC_TX_PAUSE);
+	rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+			(fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+	tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+			(fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
 
 	conf.high_water = fc_conf->high_water;
 	conf.low_water = fc_conf->low_water;
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index f491e20e95c1..060d267f5de5 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -21,7 +21,7 @@ nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
 
 	if (otx2_dev_is_vf(dev) ||
 	    dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG)
-		capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+		capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	return capa;
 }
@@ -33,10 +33,10 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
 
 	/* TSO not supported for earlier chip revisions */
 	if (otx2_dev_is_96xx_A0(dev) || otx2_dev_is_95xx_Ax(dev))
-		capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
-			  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			  DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-			  DEV_TX_OFFLOAD_GRE_TNL_TSO);
+		capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+			  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
 	return capa;
 }
 
@@ -66,8 +66,8 @@ nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
 	req->npa_func = otx2_npa_pf_func_get();
 	req->sso_func = otx2_sso_pf_func_get();
 	req->rx_cfg = BIT_ULL(35 /* DIS_APAD */);
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
-			 DEV_RX_OFFLOAD_UDP_CKSUM)) {
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			 RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
 		req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */);
 		req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */);
 	}
@@ -373,7 +373,7 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
 
 	aq->rq.sso_ena = 0;
 
-	if (rxq->offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		aq->rq.ipsech_ena = 1;
 
 	aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */
@@ -665,7 +665,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
 	 * These are needed in deriving raw clock value from tsc counter.
 	 * read_clock eth op returns raw clock value.
 	 */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
 	    otx2_ethdev_is_ptp_en(dev)) {
 		rc = otx2_nix_raw_clock_tsc_conv(dev);
 		if (rc) {
@@ -692,7 +692,7 @@ nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
 	 * Maximum three segments can be supported with W8, Choose
 	 * NIX_MAXSQESZ_W16 for multi segment offload.
 	 */
-	if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		return NIX_MAXSQESZ_W16;
 	else
 		return NIX_MAXSQESZ_W8;
@@ -707,29 +707,29 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rxmode *rxmode = &conf->rxmode;
 	uint16_t flags = 0;
 
-	if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
-			(dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+			(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		flags |= NIX_RX_OFFLOAD_RSS_F;
 
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
-			 DEV_RX_OFFLOAD_UDP_CKSUM))
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			 RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		flags |= NIX_RX_MULTI_SEG_F;
 
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
-				DEV_RX_OFFLOAD_QINQ_STRIP))
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				RTE_ETH_RX_OFFLOAD_QINQ_STRIP))
 		flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		flags |= NIX_RX_OFFLOAD_SECURITY_F;
 
 	if (!dev->ptype_disable)
@@ -768,43 +768,43 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
 			 offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
 
-	if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
-	    conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+	    conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
 
-	if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= NIX_TX_MULTI_SEG_F;
 
 	/* Enable Inner checksum for TSO */
-	if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+	if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		flags |= (NIX_TX_OFFLOAD_TSO_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
 	/* Enable Inner and Outer checksum for Tunnel TSO */
-	if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		    DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		    DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		flags |= (NIX_TX_OFFLOAD_TSO_F |
 			  NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
-	if (conf & DEV_TX_OFFLOAD_SECURITY)
+	if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
 		flags |= NIX_TX_OFFLOAD_SECURITY_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
 	return flags;
@@ -914,8 +914,8 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
 	buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
 
 	if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
-		dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
-		dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+		dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 		/* Setting up the rx[tx]_offload_flags due to change
 		 * in rx[tx]_offloads.
@@ -1848,21 +1848,21 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
 		goto fail_configure;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-	    rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+	    rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode);
 		goto fail_configure;
 	}
 
-	if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		otx2_err("Unsupported mq tx mode %d", txmode->mq_mode);
 		goto fail_configure;
 	}
 
 	if (otx2_dev_is_Ax(dev) &&
-	    (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
-	    ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
-	    (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+	    ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
 		otx2_err("Outer IP and SCTP checksum unsupported");
 		goto fail_configure;
 	}
@@ -2235,7 +2235,7 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
 	 * enabled in PF owning this VF
 	 */
 	memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info));
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
 	    otx2_ethdev_is_ptp_en(dev))
 		otx2_nix_timesync_enable(eth_dev);
 	else
@@ -2563,8 +2563,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
 	rc = otx2_eth_sec_ctx_create(eth_dev);
 	if (rc)
 		goto free_mac_addrs;
-	dev->tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
-	dev->rx_offload_capa |= DEV_RX_OFFLOAD_SECURITY;
+	dev->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
+	dev->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SECURITY;
 
 	/* Initialize rte-flow */
 	rc = otx2_flow_init(dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 4557a0ee1945..a5282c6c1231 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -117,43 +117,43 @@
 #define CQ_TIMER_THRESH_DEFAULT	0xAULL /* ~1usec i.e (0xA * 100nsec) */
 #define CQ_TIMER_THRESH_MAX     255
 
-#define NIX_RSS_L3_L4_SRC_DST  (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY \
-				| ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define NIX_RSS_L3_L4_SRC_DST  (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY \
+				| RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
 
-#define NIX_RSS_OFFLOAD		(ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
-				 ETH_RSS_TCP | ETH_RSS_SCTP | \
-				 ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD | \
-				 NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | \
-				 ETH_RSS_C_VLAN)
+#define NIX_RSS_OFFLOAD		(RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |\
+				 RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | \
+				 RTE_ETH_RSS_TUNNEL | RTE_ETH_RSS_L2_PAYLOAD | \
+				 NIX_RSS_L3_L4_SRC_DST | RTE_ETH_RSS_LEVEL_MASK | \
+				 RTE_ETH_RSS_C_VLAN)
 
 #define NIX_TX_OFFLOAD_CAPA ( \
-	DEV_TX_OFFLOAD_MBUF_FAST_FREE	| \
-	DEV_TX_OFFLOAD_MT_LOCKFREE	| \
-	DEV_TX_OFFLOAD_VLAN_INSERT	| \
-	DEV_TX_OFFLOAD_QINQ_INSERT	| \
-	DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM	| \
-	DEV_TX_OFFLOAD_OUTER_UDP_CKSUM	| \
-	DEV_TX_OFFLOAD_TCP_CKSUM	| \
-	DEV_TX_OFFLOAD_UDP_CKSUM	| \
-	DEV_TX_OFFLOAD_SCTP_CKSUM	| \
-	DEV_TX_OFFLOAD_TCP_TSO		| \
-	DEV_TX_OFFLOAD_VXLAN_TNL_TSO    | \
-	DEV_TX_OFFLOAD_GENEVE_TNL_TSO   | \
-	DEV_TX_OFFLOAD_GRE_TNL_TSO	| \
-	DEV_TX_OFFLOAD_MULTI_SEGS	| \
-	DEV_TX_OFFLOAD_IPV4_CKSUM)
+	RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE	| \
+	RTE_ETH_TX_OFFLOAD_MT_LOCKFREE	| \
+	RTE_ETH_TX_OFFLOAD_VLAN_INSERT	| \
+	RTE_ETH_TX_OFFLOAD_QINQ_INSERT	| \
+	RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_TCP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_UDP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_SCTP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_TCP_TSO		| \
+	RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO    | \
+	RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO   | \
+	RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO	| \
+	RTE_ETH_TX_OFFLOAD_MULTI_SEGS	| \
+	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 
 #define NIX_RX_OFFLOAD_CAPA ( \
-	DEV_RX_OFFLOAD_CHECKSUM		| \
-	DEV_RX_OFFLOAD_SCTP_CKSUM	| \
-	DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-	DEV_RX_OFFLOAD_SCATTER		| \
-	DEV_RX_OFFLOAD_OUTER_UDP_CKSUM	| \
-	DEV_RX_OFFLOAD_VLAN_STRIP	| \
-	DEV_RX_OFFLOAD_VLAN_FILTER	| \
-	DEV_RX_OFFLOAD_QINQ_STRIP	| \
-	DEV_RX_OFFLOAD_TIMESTAMP	| \
-	DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RX_OFFLOAD_CHECKSUM		| \
+	RTE_ETH_RX_OFFLOAD_SCTP_CKSUM	| \
+	RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+	RTE_ETH_RX_OFFLOAD_SCATTER		| \
+	RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM	| \
+	RTE_ETH_RX_OFFLOAD_VLAN_STRIP	| \
+	RTE_ETH_RX_OFFLOAD_VLAN_FILTER	| \
+	RTE_ETH_RX_OFFLOAD_QINQ_STRIP	| \
+	RTE_ETH_RX_OFFLOAD_TIMESTAMP	| \
+	RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define NIX_DEFAULT_RSS_CTX_GROUP  0
 #define NIX_DEFAULT_RSS_MCAM_IDX  -1
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index 83f905315b38..60bf6c3f5f05 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -49,12 +49,12 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
 
 	val = atoi(value);
 
-	if (val <= ETH_RSS_RETA_SIZE_64)
-		val = ETH_RSS_RETA_SIZE_64;
-	else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
-		val = ETH_RSS_RETA_SIZE_128;
-	else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
-		val = ETH_RSS_RETA_SIZE_256;
+	if (val <= RTE_ETH_RSS_RETA_SIZE_64)
+		val = RTE_ETH_RSS_RETA_SIZE_64;
+	else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
+		val = RTE_ETH_RSS_RETA_SIZE_128;
+	else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
+		val = RTE_ETH_RSS_RETA_SIZE_256;
 	else
 		val = NIX_RSS_RETA_SIZE;
 
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 22a8af5cba45..d5caaa326a5a 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -26,11 +26,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	 * when this feature has not been enabled before.
 	 */
 	if (data->dev_started && frame_size > buffsz &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER))
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER))
 		return -EINVAL;
 
 	/* Check <seg size> * <max_seg>  >= max_frame */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)	&&
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	&&
 	    (frame_size > buffsz * NIX_RX_NB_SEG_MAX))
 		return -EINVAL;
 
@@ -568,17 +568,17 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
 	};
 
 	/* Auto negotiation disabled */
-	devinfo->speed_capa = ETH_LINK_SPEED_FIXED;
+	devinfo->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
 	if (!otx2_dev_is_vf_or_sdp(dev) && !otx2_dev_is_lbk(dev)) {
-		devinfo->speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G;
+		devinfo->speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G;
 
 		/* 50G and 100G to be supported for board version C0
 		 * and above.
 		 */
 		if (!otx2_dev_is_Ax(dev))
-			devinfo->speed_capa |= ETH_LINK_SPEED_50G |
-					       ETH_LINK_SPEED_100G;
+			devinfo->speed_capa |= RTE_ETH_LINK_SPEED_50G |
+					       RTE_ETH_LINK_SPEED_100G;
 	}
 
 	devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.c b/drivers/net/octeontx2/otx2_ethdev_sec.c
index 7bd1ed6da043..4d40184de46d 100644
--- a/drivers/net/octeontx2/otx2_ethdev_sec.c
+++ b/drivers/net/octeontx2/otx2_ethdev_sec.c
@@ -869,8 +869,8 @@ otx2_eth_sec_init(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(sa_width < 32 || sa_width > 512 ||
 			 !RTE_IS_POWER_OF_2(sa_width));
 
-	if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+	if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
 		return 0;
 
 	if (rte_security_dynfield_register() < 0)
@@ -912,8 +912,8 @@ otx2_eth_sec_fini(struct rte_eth_dev *eth_dev)
 	uint16_t port = eth_dev->data->port_id;
 	char name[RTE_MEMZONE_NAMESIZE];
 
-	if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+	if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
 		return;
 
 	lookup_mem_sa_tbl_clear(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 6df0732189eb..1d0fe4e950d4 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -625,7 +625,7 @@ otx2_flow_create(struct rte_eth_dev *dev,
 		goto err_exit;
 	}
 
-	if (hw->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (hw->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		rc = flow_update_sec_tt(dev, actions);
 		if (rc != 0) {
 			rte_flow_error_set(error, EIO,
diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c
index 76bf48100183..071740de86a7 100644
--- a/drivers/net/octeontx2/otx2_flow_ctrl.c
+++ b/drivers/net/octeontx2/otx2_flow_ctrl.c
@@ -54,7 +54,7 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 	int rc;
 
 	if (otx2_dev_is_lbk(dev)) {
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		return 0;
 	}
 
@@ -66,13 +66,13 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 		goto done;
 
 	if (rsp->rx_pause && rsp->tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rsp->rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (rsp->tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 done:
 	return rc;
@@ -159,10 +159,10 @@ otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	if (fc_conf->mode == fc->mode)
 		return 0;
 
-	rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_RX_PAUSE);
-	tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_TX_PAUSE);
+	rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+	tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
 
 	/* Check if TX pause frame is already enabled or not */
 	if (fc->tx_pause ^ tx_pause) {
@@ -212,11 +212,11 @@ otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev)
 	/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
 	if (otx2_dev_is_Ax(dev) &&
 	    (dev->npc_flow.switch_header_type != OTX2_PRIV_FLAGS_HIGIG) &&
-	    (fc_conf.mode == RTE_FC_FULL || fc_conf.mode == RTE_FC_RX_PAUSE)) {
+	    (fc_conf.mode == RTE_ETH_FC_FULL || fc_conf.mode == RTE_ETH_FC_RX_PAUSE)) {
 		fc_conf.mode =
-				(fc_conf.mode == RTE_FC_FULL ||
-				fc_conf.mode == RTE_FC_TX_PAUSE) ?
-				RTE_FC_TX_PAUSE : RTE_FC_NONE;
+				(fc_conf.mode == RTE_ETH_FC_FULL ||
+				fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ?
+				RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
 	}
 
 	return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf);
@@ -234,7 +234,7 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
 		return 0;
 
 	memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
-	/* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+	/* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
 	 * by AF driver, update those info in PMD structure.
 	 */
 	rc = otx2_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -242,10 +242,10 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
 		goto exit;
 
 	fc->mode = fc_conf.mode;
-	fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_RX_PAUSE);
-	fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_TX_PAUSE);
+	fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+	fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
 
 exit:
 	return rc;
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index 79b92fda8a4a..91267bbb8182 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -852,7 +852,7 @@ parse_rss_action(struct rte_eth_dev *dev,
 					  attr, "No support of RSS in egress");
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS)
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ACTION,
 					  act, "multi-queue mode is disabled");
@@ -1186,7 +1186,7 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev,
 		 *FLOW_KEY_ALG index. So, till we update the action with
 		 *flow_key_alg index, set the action to drop.
 		 */
-		if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+		if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 			flow->npc_action = NIX_RX_ACTIONOP_DROP;
 		else
 			flow->npc_action = NIX_RX_ACTIONOP_UCAST;
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
index 81dd6243b977..8f5d0eed92b6 100644
--- a/drivers/net/octeontx2/otx2_link.c
+++ b/drivers/net/octeontx2/otx2_link.c
@@ -41,7 +41,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
 		otx2_info("Port %d: Link Up - speed %u Mbps - %s",
 			  (int)(eth_dev->data->port_id),
 			  (uint32_t)link->link_speed,
-			  link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+			  link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 			  "full-duplex" : "half-duplex");
 	else
 		otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id));
@@ -92,7 +92,7 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
 
 	eth_link.link_status = link->link_up;
 	eth_link.link_speed = link->speed;
-	eth_link.link_autoneg = ETH_LINK_AUTONEG;
+	eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	eth_link.link_duplex = link->full_duplex;
 
 	otx2_dev->speed = link->speed;
@@ -111,10 +111,10 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
 static int
 lbk_link_update(struct rte_eth_link *link)
 {
-	link->link_status = ETH_LINK_UP;
-	link->link_speed = ETH_SPEED_NUM_100G;
-	link->link_autoneg = ETH_LINK_FIXED;
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_status = RTE_ETH_LINK_UP;
+	link->link_speed = RTE_ETH_SPEED_NUM_100G;
+	link->link_autoneg = RTE_ETH_LINK_FIXED;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	return 0;
 }
 
@@ -131,7 +131,7 @@ cgx_link_update(struct otx2_eth_dev *dev, struct rte_eth_link *link)
 
 	link->link_status = rsp->link_info.link_up;
 	link->link_speed = rsp->link_info.speed;
-	link->link_autoneg = ETH_LINK_AUTONEG;
+	link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	if (rsp->link_info.full_duplex)
 		link->link_duplex = rsp->link_info.full_duplex;
@@ -233,22 +233,22 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
 
 	/* 50G and 100G to be supported for board version C0 and above */
 	if (!otx2_dev_is_Ax(dev)) {
-		if (link_speeds & ETH_LINK_SPEED_100G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_100G)
 			link_speed = 100000;
-		if (link_speeds & ETH_LINK_SPEED_50G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_50G)
 			link_speed = 50000;
 	}
-	if (link_speeds & ETH_LINK_SPEED_40G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
 		link_speed = 40000;
-	if (link_speeds & ETH_LINK_SPEED_25G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
 		link_speed = 25000;
-	if (link_speeds & ETH_LINK_SPEED_20G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_20G)
 		link_speed = 20000;
-	if (link_speeds & ETH_LINK_SPEED_10G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 		link_speed = 10000;
-	if (link_speeds & ETH_LINK_SPEED_5G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_5G)
 		link_speed = 5000;
-	if (link_speeds & ETH_LINK_SPEED_1G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
 		link_speed = 1000;
 
 	return link_speed;
@@ -257,11 +257,11 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
 static inline uint8_t
 nix_parse_eth_link_duplex(uint32_t link_speeds)
 {
-	if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
-			(link_speeds & ETH_LINK_SPEED_100M_HD))
-		return ETH_LINK_HALF_DUPLEX;
+	if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+			(link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+		return RTE_ETH_LINK_HALF_DUPLEX;
 	else
-		return ETH_LINK_FULL_DUPLEX;
+		return RTE_ETH_LINK_FULL_DUPLEX;
 }
 
 int
@@ -279,7 +279,7 @@ otx2_apply_link_speed(struct rte_eth_dev *eth_dev)
 	cfg.speed = nix_parse_link_speeds(dev, conf->link_speeds);
 	if (cfg.speed != SPEED_NONE && cfg.speed != dev->speed) {
 		cfg.duplex = nix_parse_eth_link_duplex(conf->link_speeds);
-		cfg.an = (conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+		cfg.an = (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 		return cgx_change_mode(dev, &cfg);
 	}
diff --git a/drivers/net/octeontx2/otx2_mcast.c b/drivers/net/octeontx2/otx2_mcast.c
index f84aa1bf570c..b9c63ad3bc21 100644
--- a/drivers/net/octeontx2/otx2_mcast.c
+++ b/drivers/net/octeontx2/otx2_mcast.c
@@ -100,7 +100,7 @@ nix_hw_update_mc_addr_list(struct rte_eth_dev *eth_dev)
 
 		action = NIX_RX_ACTIONOP_UCAST;
 
-		if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+		if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 			action = NIX_RX_ACTIONOP_RSS;
 			action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
 		}
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
index 91e5c0f6bd11..abb213058792 100644
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -250,7 +250,7 @@ otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
 	/* System time should be already on by default */
 	nix_start_timecounters(eth_dev);
 
-	dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+	dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 	dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
@@ -287,7 +287,7 @@ otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
 	if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
 		return -EINVAL;
 
-	dev->rx_offloads &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+	dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F;
 	dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F;
 
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
index 7dbe5f69ae65..68cef1caa394 100644
--- a/drivers/net/octeontx2/otx2_rss.c
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -85,8 +85,8 @@ otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Copy RETA table */
-	for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask >> j) & 0x01)
 				rss->ind_tbl[idx] = reta_conf[i].reta[j];
 			idx++;
@@ -118,8 +118,8 @@ otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Copy RETA table */
-	for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = rss->ind_tbl[j];
 	}
@@ -178,23 +178,23 @@ rss_get_key(struct otx2_eth_dev *dev, uint8_t *key)
 }
 
 #define RSS_IPV4_ENABLE ( \
-			  ETH_RSS_IPV4 | \
-			  ETH_RSS_FRAG_IPV4 | \
-			  ETH_RSS_NONFRAG_IPV4_UDP | \
-			  ETH_RSS_NONFRAG_IPV4_TCP | \
-			  ETH_RSS_NONFRAG_IPV4_SCTP)
+			  RTE_ETH_RSS_IPV4 | \
+			  RTE_ETH_RSS_FRAG_IPV4 | \
+			  RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
 #define RSS_IPV6_ENABLE ( \
-			  ETH_RSS_IPV6 | \
-			  ETH_RSS_FRAG_IPV6 | \
-			  ETH_RSS_NONFRAG_IPV6_UDP | \
-			  ETH_RSS_NONFRAG_IPV6_TCP | \
-			  ETH_RSS_NONFRAG_IPV6_SCTP)
+			  RTE_ETH_RSS_IPV6 | \
+			  RTE_ETH_RSS_FRAG_IPV6 | \
+			  RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 #define RSS_IPV6_EX_ENABLE ( \
-			     ETH_RSS_IPV6_EX | \
-			     ETH_RSS_IPV6_TCP_EX | \
-			     ETH_RSS_IPV6_UDP_EX)
+			     RTE_ETH_RSS_IPV6_EX | \
+			     RTE_ETH_RSS_IPV6_TCP_EX | \
+			     RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define RSS_MAX_LEVELS   3
 
@@ -233,24 +233,24 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
 
 	dev->rss_info.nix_rss = ethdev_rss;
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
 	    dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) {
 		flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
 	}
 
-	if (ethdev_rss & ETH_RSS_C_VLAN)
+	if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
 
-	if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
 
-	if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
 
-	if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
 
-	if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
 
 	if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -259,34 +259,34 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
 	if (ethdev_rss & RSS_IPV6_ENABLE)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
 
-	if (ethdev_rss & ETH_RSS_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_TCP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_UDP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_SCTP)
+	if (ethdev_rss & RTE_ETH_RSS_SCTP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
 
 	if (ethdev_rss & RSS_IPV6_EX_ENABLE)
 		flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
 
-	if (ethdev_rss & ETH_RSS_PORT)
+	if (ethdev_rss & RTE_ETH_RSS_PORT)
 		flowkey_cfg |= FLOW_KEY_TYPE_PORT;
 
-	if (ethdev_rss & ETH_RSS_NVGRE)
+	if (ethdev_rss & RTE_ETH_RSS_NVGRE)
 		flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
 
-	if (ethdev_rss & ETH_RSS_VXLAN)
+	if (ethdev_rss & RTE_ETH_RSS_VXLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
 
-	if (ethdev_rss & ETH_RSS_GENEVE)
+	if (ethdev_rss & RTE_ETH_RSS_GENEVE)
 		flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
 
-	if (ethdev_rss & ETH_RSS_GTPU)
+	if (ethdev_rss & RTE_ETH_RSS_GTPU)
 		flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
 
 	return flowkey_cfg;
@@ -343,7 +343,7 @@ otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
 		otx2_nix_rss_set_key(dev, rss_conf->rss_key,
 				     (uint32_t)rss_conf->rss_key_len);
 
-	rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 	flowkey_cfg =
@@ -390,7 +390,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
 	int rc;
 
 	/* Skip further configuration if selected mode is not RSS */
-	if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS || !qcnt)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS || !qcnt)
 		return 0;
 
 	/* Update default RSS key and cfg */
@@ -408,7 +408,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
 	}
 
 	rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
-	rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 	flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, rss_hash_level);
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index 0d85c898bfe7..2c18483b98fd 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -414,12 +414,12 @@ NIX_RX_FASTPATH_MODES
 	/* For PTP enabled, scalar rx function should be chosen as most of the
 	 * PTP apps are implemented to rx burst 1 pkt.
 	 */
-	if (dev->scalar_ena || dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (dev->scalar_ena || dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 		pick_rx_func(eth_dev, nix_eth_rx_burst);
 	else
 		pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
 
 	/* Copy multi seg version with no offload for tear down sequence */
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index ad704d745b04..135615580bbf 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -1070,7 +1070,7 @@ NIX_TX_FASTPATH_MODES
 	else
 		pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
 
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
 
 	rte_mb();
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index f5161e17a16d..cce643b7b51d 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -50,7 +50,7 @@ nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
 
 	action = NIX_RX_ACTIONOP_UCAST;
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		action = NIX_RX_ACTIONOP_RSS;
 		action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
 	}
@@ -99,7 +99,7 @@ nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type,
 	 * Take offset from LA since in case of untagged packet,
 	 * lbptr is zero.
 	 */
-	if (type == ETH_VLAN_TYPE_OUTER) {
+	if (type == RTE_ETH_VLAN_TYPE_OUTER) {
 		vtag_action.act.vtag0_def = vtag_index;
 		vtag_action.act.vtag0_lid = NPC_LID_LA;
 		vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
@@ -413,7 +413,7 @@ nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
 		if (vlan->strip_on ||
 		    (vlan->qinq_on && !vlan->qinq_before_def)) {
 			if (eth_dev->data->dev_conf.rxmode.mq_mode ==
-								ETH_MQ_RX_RSS)
+								RTE_ETH_MQ_RX_RSS)
 				vlan->def_rx_mcam_ent.action |=
 							NIX_RX_ACTIONOP_RSS;
 			else
@@ -717,48 +717,48 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 
 	rxmode = &eth_dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
-			offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
+			offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			rc = nix_vlan_hw_strip(eth_dev, true);
 		} else {
-			offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+			offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			rc = nix_vlan_hw_strip(eth_dev, false);
 		}
 		if (rc)
 			goto done;
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
-			offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
+			offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			rc = nix_vlan_hw_filter(eth_dev, true, 0);
 		} else {
-			offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+			offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			rc = nix_vlan_hw_filter(eth_dev, false, 0);
 		}
 		if (rc)
 			goto done;
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) {
 		if (!dev->vlan_info.qinq_on) {
-			offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+			offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 			rc = otx2_nix_config_double_vlan(eth_dev, true);
 			if (rc)
 				goto done;
 		}
 	} else {
 		if (dev->vlan_info.qinq_on) {
-			offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+			offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 			rc = otx2_nix_config_double_vlan(eth_dev, false);
 			if (rc)
 				goto done;
 		}
 	}
 
-	if (offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
-			DEV_RX_OFFLOAD_QINQ_STRIP)) {
+	if (offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+			RTE_ETH_RX_OFFLOAD_QINQ_STRIP)) {
 		dev->rx_offloads |= offloads;
 		dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
 		otx2_eth_set_rx_function(eth_dev);
@@ -780,7 +780,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
 	tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox);
 
 	tpid_cfg->tpid = tpid;
-	if (type == ETH_VLAN_TYPE_OUTER)
+	if (type == RTE_ETH_VLAN_TYPE_OUTER)
 		tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER;
 	else
 		tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER;
@@ -789,7 +789,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
 	if (rc)
 		return rc;
 
-	if (type == ETH_VLAN_TYPE_OUTER)
+	if (type == RTE_ETH_VLAN_TYPE_OUTER)
 		dev->vlan_info.outer_vlan_tpid = tpid;
 	else
 		dev->vlan_info.inner_vlan_tpid = tpid;
@@ -864,7 +864,7 @@ otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev,       uint16_t vlan_id, int on)
 		vlan->outer_vlan_idx = 0;
 	}
 
-	rc = nix_vlan_handle_default_tx_entry(dev, ETH_VLAN_TYPE_OUTER,
+	rc = nix_vlan_handle_default_tx_entry(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					      vtag_index, on);
 	if (rc < 0) {
 		printf("Default tx entry failed with rc %d\n", rc);
@@ -986,12 +986,12 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
 	} else {
 		/* Reinstall all mcam entries now if filter offload is set */
 		if (eth_dev->data->dev_conf.rxmode.offloads &
-		    DEV_RX_OFFLOAD_VLAN_FILTER)
+		    RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			nix_vlan_reinstall_vlan_filters(eth_dev);
 	}
 
 	mask =
-	    ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+	    RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
 	rc = otx2_nix_vlan_offload_set(eth_dev, mask);
 	if (rc) {
 		otx2_err("Failed to set vlan offload rc=%d", rc);
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index 698d22e22685..74dc36a17648 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -33,14 +33,14 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
 
 	otx_epvf = OTX_EP_DEV(eth_dev);
 
-	devinfo->speed_capa = ETH_LINK_SPEED_10G;
+	devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
 	devinfo->max_rx_queues = otx_epvf->max_rx_queues;
 	devinfo->max_tx_queues = otx_epvf->max_tx_queues;
 
 	devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
 	devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
-	devinfo->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
-	devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+	devinfo->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
+	devinfo->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
 
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index aa4dcd33cc79..9338b30672ec 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -563,7 +563,7 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
 			struct otx_ep_buf_free_info *finfo;
 			int j, frags, num_sg;
 
-			if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+			if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
 				goto xmit_fail;
 
 			finfo = (struct otx_ep_buf_free_info *)rte_malloc(NULL,
@@ -697,7 +697,7 @@ otx2_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
 			struct otx_ep_buf_free_info *finfo;
 			int j, frags, num_sg;
 
-			if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+			if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
 				goto xmit_fail;
 
 			finfo = (struct otx_ep_buf_free_info *)
@@ -954,7 +954,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
 	droq_pkt->l4_len = hdr_lens.l4_len;
 
 	if (droq_pkt->nb_segs > 1 &&
-	    !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	    !(otx_ep->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		rte_pktmbuf_free(droq_pkt);
 		goto oq_read_fail;
 	}
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index d695c5eef7b0..ec29fd6bc53c 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -136,10 +136,10 @@ static const char *valid_arguments[] = {
 };
 
 static struct rte_eth_link pmd_link = {
-		.link_speed = ETH_SPEED_NUM_10G,
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_status = ETH_LINK_DOWN,
-		.link_autoneg = ETH_LINK_FIXED,
+		.link_speed = RTE_ETH_SPEED_NUM_10G,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_status = RTE_ETH_LINK_DOWN,
+		.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(eth_pcap_logtype, NOTICE);
@@ -659,7 +659,7 @@ eth_dev_start(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_tx_queues; i++)
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -714,7 +714,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_tx_queues; i++)
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index 4cc002ee8fab..047010e15ed0 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -22,15 +22,15 @@ struct pfe_vdev_init_params {
 static struct pfe *g_pfe;
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 /* Supported Tx offloads */
 static uint64_t dev_tx_offloads_sup =
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 /* TODO: make pfe_svr a runtime option.
  * Driver should be able to get the SVR
@@ -601,9 +601,9 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 	}
 
 	link.link_status = lstatus;
-	link.link_speed = ETH_LINK_SPEED_1G;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_speed = RTE_ETH_LINK_SPEED_1G;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	pfe_eth_atomic_write_link_status(dev, &link);
 
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 6667c2d7ab6d..511742c6a1b3 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -65,8 +65,8 @@ typedef u32 offsize_t;      /* In DWORDS !!! */
 struct eth_phy_cfg {
 /* 0 = autoneg, 1000/10000/20000/25000/40000/50000/100000 */
 	u32 speed;
-#define ETH_SPEED_AUTONEG   0
-#define ETH_SPEED_SMARTLINQ  0x8 /* deprecated - use link_modes field instead */
+#define RTE_ETH_SPEED_AUTONEG   0
+#define RTE_ETH_SPEED_SMARTLINQ  0x8 /* deprecated - use link_modes field instead */
 
 	u32 pause;      /* bitmask */
 #define ETH_PAUSE_NONE		0x0
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 27f6932dc74e..c907d7fd8312 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -342,9 +342,9 @@ qede_assign_rxtx_handlers(struct rte_eth_dev *dev, bool is_dummy)
 	}
 
 	use_tx_offload = !!(tx_offloads &
-			    (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
-			     DEV_TX_OFFLOAD_TCP_TSO | /* tso */
-			     DEV_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
+			    (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
+			     RTE_ETH_TX_OFFLOAD_TCP_TSO | /* tso */
+			     RTE_ETH_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
 
 	if (use_tx_offload) {
 		DP_INFO(edev, "Assigning qede_xmit_pkts\n");
@@ -1002,16 +1002,16 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
 	uint64_t rx_offloads = eth_dev->data->dev_conf.rxmode.offloads;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			(void)qede_vlan_stripping(eth_dev, 1);
 		else
 			(void)qede_vlan_stripping(eth_dev, 0);
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* VLAN filtering kicks in when a VLAN is added */
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 			qede_vlan_filter_set(eth_dev, 0, 1);
 		} else {
 			if (qdev->configured_vlans > 1) { /* Excluding VLAN0 */
@@ -1022,7 +1022,7 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 				 * enabled
 				 */
 				eth_dev->data->dev_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_VLAN_FILTER;
+						RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			} else {
 				qede_vlan_filter_set(eth_dev, 0, 0);
 			}
@@ -1069,11 +1069,11 @@ int qede_config_rss(struct rte_eth_dev *eth_dev)
 	/* Configure default RETA */
 	memset(reta_conf, 0, sizeof(reta_conf));
 	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++)
-		reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
 
 	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
-		id = i / RTE_RETA_GROUP_SIZE;
-		pos = i % RTE_RETA_GROUP_SIZE;
+		id = i / RTE_ETH_RETA_GROUP_SIZE;
+		pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		q = i % QEDE_RSS_COUNT(eth_dev);
 		reta_conf[id].reta[pos] = q;
 	}
@@ -1112,12 +1112,12 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
 	}
 
 	/* Configure TPA parameters */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		if (qede_enable_tpa(eth_dev, true))
 			return -EINVAL;
 		/* Enable scatter mode for LRO */
 		if (!eth_dev->data->scattered_rx)
-			rxmode->offloads |= DEV_RX_OFFLOAD_SCATTER;
+			rxmode->offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 	}
 
 	/* Start queues */
@@ -1132,7 +1132,7 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
 	 * Also, we would like to retain similar behavior in PF case, so we
 	 * don't do PF/VF specific check here.
 	 */
-	if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 		if (qede_config_rss(eth_dev))
 			goto err;
 
@@ -1272,8 +1272,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE(edev);
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* We need to have min 1 RX queue. There is no min check in
 	 * rte_eth_dev_configure(), so we are checking it here.
@@ -1291,8 +1291,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 		DP_NOTICE(edev, false,
 			  "Invalid devargs supplied, requested change will not take effect\n");
 
-	if (!(rxmode->mq_mode == ETH_MQ_RX_NONE ||
-	      rxmode->mq_mode == ETH_MQ_RX_RSS)) {
+	if (!(rxmode->mq_mode == RTE_ETH_MQ_RX_NONE ||
+	      rxmode->mq_mode == RTE_ETH_MQ_RX_RSS)) {
 		DP_ERR(edev, "Unsupported multi-queue mode\n");
 		return -ENOTSUP;
 	}
@@ -1312,7 +1312,7 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 			return -ENOMEM;
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		eth_dev->data->scattered_rx = 1;
 
 	if (qede_start_vport(qdev, eth_dev->data->mtu))
@@ -1321,8 +1321,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 	qdev->mtu = eth_dev->data->mtu;
 
 	/* Enable VLAN offloads by default */
-	ret = qede_vlan_offload_set(eth_dev, ETH_VLAN_STRIP_MASK  |
-					     ETH_VLAN_FILTER_MASK);
+	ret = qede_vlan_offload_set(eth_dev, RTE_ETH_VLAN_STRIP_MASK  |
+					     RTE_ETH_VLAN_FILTER_MASK);
 	if (ret)
 		return ret;
 
@@ -1385,34 +1385,34 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->reta_size = ECORE_RSS_IND_TABLE_SIZE;
 	dev_info->hash_key_size = ECORE_RSS_KEY_SIZE * sizeof(uint32_t);
 	dev_info->flow_type_rss_offloads = (uint64_t)QEDE_RSS_OFFLOAD_ALL;
-	dev_info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM	|
-				     DEV_RX_OFFLOAD_UDP_CKSUM	|
-				     DEV_RX_OFFLOAD_TCP_CKSUM	|
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				     DEV_RX_OFFLOAD_TCP_LRO	|
-				     DEV_RX_OFFLOAD_KEEP_CRC    |
-				     DEV_RX_OFFLOAD_SCATTER	|
-				     DEV_RX_OFFLOAD_VLAN_FILTER |
-				     DEV_RX_OFFLOAD_VLAN_STRIP  |
-				     DEV_RX_OFFLOAD_RSS_HASH);
+	dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM	|
+				     RTE_ETH_RX_OFFLOAD_UDP_CKSUM	|
+				     RTE_ETH_RX_OFFLOAD_TCP_CKSUM	|
+				     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     RTE_ETH_RX_OFFLOAD_TCP_LRO	|
+				     RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+				     RTE_ETH_RX_OFFLOAD_SCATTER	|
+				     RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				     RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+				     RTE_ETH_RX_OFFLOAD_RSS_HASH);
 	dev_info->rx_queue_offload_capa = 0;
 
 	/* TX offloads are on a per-packet basis, so they are applicable
 	 * at both port and queue levels.
 	 */
-	dev_info->tx_offload_capa = (DEV_TX_OFFLOAD_VLAN_INSERT	|
-				     DEV_TX_OFFLOAD_IPV4_CKSUM	|
-				     DEV_TX_OFFLOAD_UDP_CKSUM	|
-				     DEV_TX_OFFLOAD_TCP_CKSUM	|
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				     DEV_TX_OFFLOAD_MULTI_SEGS  |
-				     DEV_TX_OFFLOAD_TCP_TSO	|
-				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO);
+	dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_VLAN_INSERT	|
+				     RTE_ETH_TX_OFFLOAD_IPV4_CKSUM	|
+				     RTE_ETH_TX_OFFLOAD_UDP_CKSUM	|
+				     RTE_ETH_TX_OFFLOAD_TCP_CKSUM	|
+				     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+				     RTE_ETH_TX_OFFLOAD_TCP_TSO	|
+				     RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO);
 	dev_info->tx_queue_offload_capa = dev_info->tx_offload_capa;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
-		.offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+		.offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
 	};
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1424,17 +1424,17 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
 	memset(&link, 0, sizeof(struct qed_link_output));
 	qdev->ops->common->get_link(edev, &link);
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G)
-		speed_cap |= ETH_LINK_SPEED_1G;
+		speed_cap |= RTE_ETH_LINK_SPEED_1G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G)
-		speed_cap |= ETH_LINK_SPEED_10G;
+		speed_cap |= RTE_ETH_LINK_SPEED_10G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G)
-		speed_cap |= ETH_LINK_SPEED_25G;
+		speed_cap |= RTE_ETH_LINK_SPEED_25G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G)
-		speed_cap |= ETH_LINK_SPEED_40G;
+		speed_cap |= RTE_ETH_LINK_SPEED_40G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G)
-		speed_cap |= ETH_LINK_SPEED_50G;
+		speed_cap |= RTE_ETH_LINK_SPEED_50G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G)
-		speed_cap |= ETH_LINK_SPEED_100G;
+		speed_cap |= RTE_ETH_LINK_SPEED_100G;
 	dev_info->speed_capa = speed_cap;
 
 	return 0;
@@ -1461,10 +1461,10 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
 	/* Link Mode */
 	switch (q_link.duplex) {
 	case QEDE_DUPLEX_HALF:
-		link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case QEDE_DUPLEX_FULL:
-		link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case QEDE_DUPLEX_UNKNOWN:
 	default:
@@ -1473,11 +1473,11 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
 	link.link_duplex = link_duplex;
 
 	/* Link Status */
-	link.link_status = q_link.link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	link.link_status = q_link.link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	/* AN */
 	link.link_autoneg = (q_link.supported_caps & QEDE_SUPPORTED_AUTONEG) ?
-			     ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+			     RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
 
 	DP_INFO(edev, "Link - Speed %u Mode %u AN %u Status %u\n",
 		link.link_speed, link.link_duplex,
@@ -2012,12 +2012,12 @@ static int qede_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Pause is assumed to be supported (SUPPORTED_Pause) */
-	if (fc_conf->mode == RTE_FC_FULL)
+	if (fc_conf->mode == RTE_ETH_FC_FULL)
 		params.pause_config |= (QED_LINK_PAUSE_TX_ENABLE |
 					QED_LINK_PAUSE_RX_ENABLE);
-	if (fc_conf->mode == RTE_FC_TX_PAUSE)
+	if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
 		params.pause_config |= QED_LINK_PAUSE_TX_ENABLE;
-	if (fc_conf->mode == RTE_FC_RX_PAUSE)
+	if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
 		params.pause_config |= QED_LINK_PAUSE_RX_ENABLE;
 
 	params.link_up = true;
@@ -2041,13 +2041,13 @@ static int qede_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 
 	if (current_link.pause_config & (QED_LINK_PAUSE_RX_ENABLE |
 					 QED_LINK_PAUSE_TX_ENABLE))
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (current_link.pause_config & QED_LINK_PAUSE_RX_ENABLE)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (current_link.pause_config & QED_LINK_PAUSE_TX_ENABLE)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -2088,14 +2088,14 @@ qede_dev_supported_ptypes_get(struct rte_eth_dev *eth_dev)
 static void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf)
 {
 	*rss_caps = 0;
-	*rss_caps |= (hf & ETH_RSS_IPV4)              ? ECORE_RSS_IPV4 : 0;
-	*rss_caps |= (hf & ETH_RSS_IPV6)              ? ECORE_RSS_IPV6 : 0;
-	*rss_caps |= (hf & ETH_RSS_IPV6_EX)           ? ECORE_RSS_IPV6 : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_TCP)  ? ECORE_RSS_IPV4_TCP : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_TCP)  ? ECORE_RSS_IPV6_TCP : 0;
-	*rss_caps |= (hf & ETH_RSS_IPV6_TCP_EX)       ? ECORE_RSS_IPV6_TCP : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_UDP)  ? ECORE_RSS_IPV4_UDP : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_UDP)  ? ECORE_RSS_IPV6_UDP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV4)              ? ECORE_RSS_IPV4 : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV6)              ? ECORE_RSS_IPV6 : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV6_EX)           ? ECORE_RSS_IPV6 : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)  ? ECORE_RSS_IPV4_TCP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)  ? ECORE_RSS_IPV6_TCP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV6_TCP_EX)       ? ECORE_RSS_IPV6_TCP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)  ? ECORE_RSS_IPV4_UDP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)  ? ECORE_RSS_IPV6_UDP : 0;
 }
 
 int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
@@ -2221,7 +2221,7 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 	uint8_t entry;
 	int rc = 0;
 
-	if (reta_size > ETH_RSS_RETA_SIZE_128) {
+	if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
 		DP_ERR(edev, "reta_size %d is not supported by hardware\n",
 		       reta_size);
 		return -EINVAL;
@@ -2245,8 +2245,8 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 
 	for_each_hwfn(edev, i) {
 		for (j = 0; j < reta_size; j++) {
-			idx = j / RTE_RETA_GROUP_SIZE;
-			shift = j % RTE_RETA_GROUP_SIZE;
+			idx = j / RTE_ETH_RETA_GROUP_SIZE;
+			shift = j % RTE_ETH_RETA_GROUP_SIZE;
 			if (reta_conf[idx].mask & (1ULL << shift)) {
 				entry = reta_conf[idx].reta[shift];
 				fid = entry * edev->num_hwfns + i;
@@ -2282,15 +2282,15 @@ static int qede_rss_reta_query(struct rte_eth_dev *eth_dev,
 	uint16_t i, idx, shift;
 	uint8_t entry;
 
-	if (reta_size > ETH_RSS_RETA_SIZE_128) {
+	if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
 		DP_ERR(edev, "reta_size %d is not supported\n",
 		       reta_size);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift)) {
 			entry = qdev->rss_ind_table[i];
 			reta_conf[idx].reta[shift] = entry;
@@ -2718,16 +2718,16 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 	adapter->ipgre.num_filters = 0;
 	if (is_vf) {
 		adapter->vxlan.enable = true;
-		adapter->vxlan.filter_type = ETH_TUNNEL_FILTER_IMAC |
-					     ETH_TUNNEL_FILTER_IVLAN;
+		adapter->vxlan.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+					     RTE_ETH_TUNNEL_FILTER_IVLAN;
 		adapter->vxlan.udp_port = QEDE_VXLAN_DEF_PORT;
 		adapter->geneve.enable = true;
-		adapter->geneve.filter_type = ETH_TUNNEL_FILTER_IMAC |
-					      ETH_TUNNEL_FILTER_IVLAN;
+		adapter->geneve.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+					      RTE_ETH_TUNNEL_FILTER_IVLAN;
 		adapter->geneve.udp_port = QEDE_GENEVE_DEF_PORT;
 		adapter->ipgre.enable = true;
-		adapter->ipgre.filter_type = ETH_TUNNEL_FILTER_IMAC |
-					     ETH_TUNNEL_FILTER_IVLAN;
+		adapter->ipgre.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+					     RTE_ETH_TUNNEL_FILTER_IVLAN;
 	} else {
 		adapter->vxlan.enable = false;
 		adapter->geneve.enable = false;
diff --git a/drivers/net/qede/qede_filter.c b/drivers/net/qede/qede_filter.c
index c756594bfc4b..440440423a32 100644
--- a/drivers/net/qede/qede_filter.c
+++ b/drivers/net/qede/qede_filter.c
@@ -20,97 +20,97 @@ const struct _qede_udp_tunn_types {
 	const char *string;
 } qede_tunn_types[] = {
 	{
-		ETH_TUNNEL_FILTER_OMAC,
+		RTE_ETH_TUNNEL_FILTER_OMAC,
 		ECORE_FILTER_MAC,
 		ECORE_TUNN_CLSS_MAC_VLAN,
 		"outer-mac"
 	},
 	{
-		ETH_TUNNEL_FILTER_TENID,
+		RTE_ETH_TUNNEL_FILTER_TENID,
 		ECORE_FILTER_VNI,
 		ECORE_TUNN_CLSS_MAC_VNI,
 		"vni"
 	},
 	{
-		ETH_TUNNEL_FILTER_IMAC,
+		RTE_ETH_TUNNEL_FILTER_IMAC,
 		ECORE_FILTER_INNER_MAC,
 		ECORE_TUNN_CLSS_INNER_MAC_VLAN,
 		"inner-mac"
 	},
 	{
-		ETH_TUNNEL_FILTER_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_IVLAN,
 		ECORE_FILTER_INNER_VLAN,
 		ECORE_TUNN_CLSS_INNER_MAC_VLAN,
 		"inner-vlan"
 	},
 	{
-		ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_TENID,
+		RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_TENID,
 		ECORE_FILTER_MAC_VNI_PAIR,
 		ECORE_TUNN_CLSS_MAC_VNI,
 		"outer-mac and vni"
 	},
 	{
-		ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_IMAC,
+		RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_IMAC,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"outer-mac and inner-mac"
 	},
 	{
-		ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"outer-mac and inner-vlan"
 	},
 	{
-		ETH_TUNNEL_FILTER_TENID | ETH_TUNNEL_FILTER_IMAC,
+		RTE_ETH_TUNNEL_FILTER_TENID | RTE_ETH_TUNNEL_FILTER_IMAC,
 		ECORE_FILTER_INNER_MAC_VNI_PAIR,
 		ECORE_TUNN_CLSS_INNER_MAC_VNI,
 		"vni and inner-mac",
 	},
 	{
-		ETH_TUNNEL_FILTER_TENID | ETH_TUNNEL_FILTER_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_TENID | RTE_ETH_TUNNEL_FILTER_IVLAN,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"vni and inner-vlan",
 	},
 	{
-		ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
 		ECORE_FILTER_INNER_PAIR,
 		ECORE_TUNN_CLSS_INNER_MAC_VLAN,
 		"inner-mac and inner-vlan",
 	},
 	{
-		ETH_TUNNEL_FILTER_OIP,
+		RTE_ETH_TUNNEL_FILTER_OIP,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"outer-IP"
 	},
 	{
-		ETH_TUNNEL_FILTER_IIP,
+		RTE_ETH_TUNNEL_FILTER_IIP,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"inner-IP"
 	},
 	{
-		RTE_TUNNEL_FILTER_IMAC_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"IMAC_IVLAN"
 	},
 	{
-		RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID,
+		RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"IMAC_IVLAN_TENID"
 	},
 	{
-		RTE_TUNNEL_FILTER_IMAC_TENID,
+		RTE_ETH_TUNNEL_FILTER_IMAC_TENID,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"IMAC_TENID"
 	},
 	{
-		RTE_TUNNEL_FILTER_OMAC_TENID_IMAC,
+		RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"OMAC_TENID_IMAC"
@@ -144,7 +144,7 @@ int qede_check_fdir_support(struct rte_eth_dev *eth_dev)
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct rte_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
 
 	/* check FDIR modes */
 	switch (fdir->mode) {
@@ -542,7 +542,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
 	memset(&tunn, 0, sizeof(tunn));
 
 	switch (tunnel_udp->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (qdev->vxlan.udp_port != tunnel_udp->udp_port) {
 			DP_ERR(edev, "UDP port %u doesn't exist\n",
 				tunnel_udp->udp_port);
@@ -570,7 +570,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
 					ECORE_TUNN_CLSS_MAC_VLAN, false);
 
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (qdev->geneve.udp_port != tunnel_udp->udp_port) {
 			DP_ERR(edev, "UDP port %u doesn't exist\n",
 				tunnel_udp->udp_port);
@@ -622,7 +622,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
 	memset(&tunn, 0, sizeof(tunn));
 
 	switch (tunnel_udp->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (qdev->vxlan.udp_port == tunnel_udp->udp_port) {
 			DP_INFO(edev,
 				"UDP port %u for VXLAN was already configured\n",
@@ -659,7 +659,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
 
 		qdev->vxlan.udp_port = udp_port;
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (qdev->geneve.udp_port == tunnel_udp->udp_port) {
 			DP_INFO(edev,
 				"UDP port %u for GENEVE was already configured\n",
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index c2263787b4ec..d585db8b61e8 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -249,7 +249,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
 	bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
 	/* cache align the mbuf size to simplify rx_buf_size calculation */
 	bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
-	if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)	||
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	||
 	    (max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) {
 		if (!dev->data->scattered_rx) {
 			DP_INFO(edev, "Forcing scatter-gather mode\n");
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index c9334448c887..15112b83f4f7 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -73,14 +73,14 @@
 #define QEDE_MAX_ETHER_HDR_LEN	(RTE_ETHER_HDR_LEN + QEDE_ETH_OVERHEAD)
 #define QEDE_ETH_MAX_LEN	(RTE_ETHER_MTU + QEDE_MAX_ETHER_HDR_LEN)
 
-#define QEDE_RSS_OFFLOAD_ALL    (ETH_RSS_IPV4			|\
-				 ETH_RSS_NONFRAG_IPV4_TCP	|\
-				 ETH_RSS_NONFRAG_IPV4_UDP	|\
-				 ETH_RSS_IPV6			|\
-				 ETH_RSS_NONFRAG_IPV6_TCP	|\
-				 ETH_RSS_NONFRAG_IPV6_UDP	|\
-				 ETH_RSS_VXLAN			|\
-				 ETH_RSS_GENEVE)
+#define QEDE_RSS_OFFLOAD_ALL    (RTE_ETH_RSS_IPV4			|\
+				 RTE_ETH_RSS_NONFRAG_IPV4_TCP	|\
+				 RTE_ETH_RSS_NONFRAG_IPV4_UDP	|\
+				 RTE_ETH_RSS_IPV6			|\
+				 RTE_ETH_RSS_NONFRAG_IPV6_TCP	|\
+				 RTE_ETH_RSS_NONFRAG_IPV6_UDP	|\
+				 RTE_ETH_RSS_VXLAN			|\
+				 RTE_ETH_RSS_GENEVE)
 
 #define QEDE_RXTX_MAX(qdev) \
 	(RTE_MAX(qdev->num_rx_queues, qdev->num_tx_queues))
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 0440019e07e1..db10f035dfcb 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -56,10 +56,10 @@ struct pmd_internals {
 };
 
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(eth_ring_logtype, NOTICE);
@@ -102,7 +102,7 @@ eth_dev_configure(struct rte_eth_dev *dev __rte_unused) { return 0; }
 static int
 eth_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -110,21 +110,21 @@ static int
 eth_dev_stop(struct rte_eth_dev *dev)
 {
 	dev->data->dev_started = 0;
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
 static int
 eth_dev_set_link_down(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
 static int
 eth_dev_set_link_up(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -163,8 +163,8 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_mac_addrs = 1;
 	dev_info->max_rx_pktlen = (uint32_t)-1;
 	dev_info->max_rx_queues = (uint16_t)internals->max_rx_queues;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	dev_info->max_tx_queues = (uint16_t)internals->max_tx_queues;
 	dev_info->min_rx_bufsize = 0;
 
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 431c42f508d0..9c1be10ac93d 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -106,13 +106,13 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
 {
 	uint32_t phy_caps = 0;
 
-	if (~speeds & ETH_LINK_SPEED_FIXED) {
+	if (~speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		phy_caps |= (1 << EFX_PHY_CAP_AN);
 		/*
 		 * If no speeds are specified in the mask, any supported
 		 * may be negotiated
 		 */
-		if (speeds == ETH_LINK_SPEED_AUTONEG)
+		if (speeds == RTE_ETH_LINK_SPEED_AUTONEG)
 			phy_caps |=
 				(1 << EFX_PHY_CAP_1000FDX) |
 				(1 << EFX_PHY_CAP_10000FDX) |
@@ -121,17 +121,17 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
 				(1 << EFX_PHY_CAP_50000FDX) |
 				(1 << EFX_PHY_CAP_100000FDX);
 	}
-	if (speeds & ETH_LINK_SPEED_1G)
+	if (speeds & RTE_ETH_LINK_SPEED_1G)
 		phy_caps |= (1 << EFX_PHY_CAP_1000FDX);
-	if (speeds & ETH_LINK_SPEED_10G)
+	if (speeds & RTE_ETH_LINK_SPEED_10G)
 		phy_caps |= (1 << EFX_PHY_CAP_10000FDX);
-	if (speeds & ETH_LINK_SPEED_25G)
+	if (speeds & RTE_ETH_LINK_SPEED_25G)
 		phy_caps |= (1 << EFX_PHY_CAP_25000FDX);
-	if (speeds & ETH_LINK_SPEED_40G)
+	if (speeds & RTE_ETH_LINK_SPEED_40G)
 		phy_caps |= (1 << EFX_PHY_CAP_40000FDX);
-	if (speeds & ETH_LINK_SPEED_50G)
+	if (speeds & RTE_ETH_LINK_SPEED_50G)
 		phy_caps |= (1 << EFX_PHY_CAP_50000FDX);
-	if (speeds & ETH_LINK_SPEED_100G)
+	if (speeds & RTE_ETH_LINK_SPEED_100G)
 		phy_caps |= (1 << EFX_PHY_CAP_100000FDX);
 
 	return phy_caps;
@@ -401,10 +401,10 @@ sfc_set_fw_subvariant(struct sfc_adapter *sa)
 			tx_offloads |= txq_info->offloads;
 	}
 
-	if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			   DEV_TX_OFFLOAD_TCP_CKSUM |
-			   DEV_TX_OFFLOAD_UDP_CKSUM |
-			   DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
 		req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_DEFAULT;
 	else
 		req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_NO_TX_CSUM;
@@ -899,7 +899,7 @@ sfc_attach(struct sfc_adapter *sa)
 	sa->priv.shared->tunnel_encaps =
 		encp->enc_tunnel_encapsulations_supported;
 
-	if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		sa->tso = encp->enc_fw_assisted_tso_v2_enabled ||
 			  encp->enc_tso_v3_enabled;
 		if (!sa->tso)
@@ -908,8 +908,8 @@ sfc_attach(struct sfc_adapter *sa)
 
 	if (sa->tso &&
 	    (sfc_dp_tx_offload_capa(sa->priv.dp_tx) &
-	     (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-	      DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
+	     (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+	      RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
 		sa->tso_encap = encp->enc_fw_assisted_tso_v2_encap_enabled ||
 				encp->enc_tso_v3_enabled;
 		if (!sa->tso_encap)
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index d958fd642fb1..eeb73a7530ef 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -979,11 +979,11 @@ struct sfc_dp_rx sfc_ef100_rx = {
 				  SFC_DP_RX_FEAT_INTR |
 				  SFC_DP_RX_FEAT_STATS,
 	.dev_offload_capa	= 0,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-				  DEV_RX_OFFLOAD_SCATTER |
-				  DEV_RX_OFFLOAD_RSS_HASH,
+	.queue_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				  RTE_ETH_RX_OFFLOAD_SCATTER |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
 	.get_dev_info		= sfc_ef100_rx_get_dev_info,
 	.qsize_up_rings		= sfc_ef100_rx_qsize_up_rings,
 	.qcreate		= sfc_ef100_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index e166fda888b1..67980a587fe4 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -971,16 +971,16 @@ struct sfc_dp_tx sfc_ef100_tx = {
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS |
 				  SFC_DP_TX_FEAT_STATS,
 	.dev_offload_capa	= 0,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_VLAN_INSERT |
-				  DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_MULTI_SEGS |
-				  DEV_TX_OFFLOAD_TCP_TSO |
-				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				  RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				  RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
 	.get_dev_info		= sfc_ef100_get_dev_info,
 	.qsize_up_rings		= sfc_ef100_tx_qsize_up_rings,
 	.qcreate		= sfc_ef100_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 991329e86f01..9ea207cca163 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -746,8 +746,8 @@ struct sfc_dp_rx sfc_ef10_essb_rx = {
 	},
 	.features		= SFC_DP_RX_FEAT_FLOW_FLAG |
 				  SFC_DP_RX_FEAT_FLOW_MARK,
-	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_RSS_HASH,
+	.dev_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
 	.queue_offload_capa	= 0,
 	.get_dev_info		= sfc_ef10_essb_rx_get_dev_info,
 	.pool_ops_supported	= sfc_ef10_essb_rx_pool_ops_supported,
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 49a7d4fb42fd..9aaabd30eee6 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -819,10 +819,10 @@ struct sfc_dp_rx sfc_ef10_rx = {
 	},
 	.features		= SFC_DP_RX_FEAT_MULTI_PROCESS |
 				  SFC_DP_RX_FEAT_INTR,
-	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_RX_OFFLOAD_RSS_HASH,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_SCATTER,
+	.dev_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
+	.queue_offload_capa	= RTE_ETH_RX_OFFLOAD_SCATTER,
 	.get_dev_info		= sfc_ef10_rx_get_dev_info,
 	.qsize_up_rings		= sfc_ef10_rx_qsize_up_rings,
 	.qcreate		= sfc_ef10_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index ed43adb4ca5c..e7da4608bcb0 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -958,9 +958,9 @@ sfc_ef10_tx_qcreate(uint16_t port_id, uint16_t queue_id,
 	if (txq->sw_ring == NULL)
 		goto fail_sw_ring_alloc;
 
-	if (info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-			      DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			      DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) {
+	if (info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+			      RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			      RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) {
 		txq->tsoh = rte_calloc_socket("sfc-ef10-txq-tsoh",
 					      info->txq_entries,
 					      SFC_TSOH_STD_LEN,
@@ -1125,14 +1125,14 @@ struct sfc_dp_tx sfc_ef10_tx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_EF10,
 	},
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
-	.dev_offload_capa	= DEV_TX_OFFLOAD_MULTI_SEGS,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_TSO |
-				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+	.dev_offload_capa	= RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
 	.get_dev_info		= sfc_ef10_get_dev_info,
 	.qsize_up_rings		= sfc_ef10_tx_qsize_up_rings,
 	.qcreate		= sfc_ef10_tx_qcreate,
@@ -1152,11 +1152,11 @@ struct sfc_dp_tx sfc_ef10_simple_tx = {
 		.type		= SFC_DP_TX,
 	},
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
-	.dev_offload_capa	= DEV_TX_OFFLOAD_MBUF_FAST_FREE,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM,
+	.dev_offload_capa	= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM,
 	.get_dev_info		= sfc_ef10_get_dev_info,
 	.qsize_up_rings		= sfc_ef10_tx_qsize_up_rings,
 	.qcreate		= sfc_ef10_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index f5986b610fff..833d833a0408 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -105,19 +105,19 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_vfs = sa->sriov.num_vfs;
 
 	/* Autonegotiation may be disabled */
-	dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_1000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_1G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_10000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_10G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_25000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_25G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_40000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_50000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_100000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
 
 	dev_info->max_rx_queues = sa->rxq_max;
 	dev_info->max_tx_queues = sa->txq_max;
@@ -145,8 +145,8 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->tx_offload_capa = sfc_tx_get_dev_offload_caps(sa) |
 				    dev_info->tx_queue_offload_capa;
 
-	if (dev_info->tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
-		txq_offloads_def |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	if (dev_info->tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+		txq_offloads_def |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->default_txconf.offloads |= txq_offloads_def;
 
@@ -989,16 +989,16 @@ sfc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	switch (link_fc) {
 	case 0:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		break;
 	case EFX_FCNTL_RESPOND:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case EFX_FCNTL_GENERATE:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case (EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE):
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	default:
 		sfc_err(sa, "%s: unexpected flow control value %#x",
@@ -1029,16 +1029,16 @@ sfc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		fcntl = 0;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		fcntl = EFX_FCNTL_RESPOND;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		fcntl = EFX_FCNTL_GENERATE;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		fcntl = EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE;
 		break;
 	default:
@@ -1313,7 +1313,7 @@ sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 	qinfo->conf.rx_deferred_start = rxq_info->deferred_start;
 	qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads;
 	if (rxq_info->type_flags & EFX_RXQ_FLAG_SCATTER) {
-		qinfo->conf.offloads |= DEV_RX_OFFLOAD_SCATTER;
+		qinfo->conf.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 		qinfo->scattered_rx = 1;
 	}
 	qinfo->nb_desc = rxq_info->entries;
@@ -1523,9 +1523,9 @@ static efx_tunnel_protocol_t
 sfc_tunnel_rte_type_to_efx_udp_proto(enum rte_eth_tunnel_type rte_type)
 {
 	switch (rte_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		return EFX_TUNNEL_PROTOCOL_VXLAN;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		return EFX_TUNNEL_PROTOCOL_GENEVE;
 	default:
 		return EFX_TUNNEL_NPROTOS;
@@ -1652,7 +1652,7 @@ sfc_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	/*
 	 * Mapping of hash configuration between RTE and EFX is not one-to-one,
-	 * hence, conversion is done here to derive a correct set of ETH_RSS
+	 * hence, conversion is done here to derive a correct set of RTE_ETH_RSS
 	 * flags which corresponds to the active EFX configuration stored
 	 * locally in 'sfc_adapter' and kept up-to-date
 	 */
@@ -1778,8 +1778,8 @@ sfc_dev_rss_reta_query(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	for (entry = 0; entry < reta_size; entry++) {
-		int grp = entry / RTE_RETA_GROUP_SIZE;
-		int grp_idx = entry % RTE_RETA_GROUP_SIZE;
+		int grp = entry / RTE_ETH_RETA_GROUP_SIZE;
+		int grp_idx = entry % RTE_ETH_RETA_GROUP_SIZE;
 
 		if ((reta_conf[grp].mask >> grp_idx) & 1)
 			reta_conf[grp].reta[grp_idx] = rss->tbl[entry];
@@ -1828,10 +1828,10 @@ sfc_dev_rss_reta_update(struct rte_eth_dev *dev,
 	rte_memcpy(rss_tbl_new, rss->tbl, sizeof(rss->tbl));
 
 	for (entry = 0; entry < reta_size; entry++) {
-		int grp_idx = entry % RTE_RETA_GROUP_SIZE;
+		int grp_idx = entry % RTE_ETH_RETA_GROUP_SIZE;
 		struct rte_eth_rss_reta_entry64 *grp;
 
-		grp = &reta_conf[entry / RTE_RETA_GROUP_SIZE];
+		grp = &reta_conf[entry / RTE_ETH_RETA_GROUP_SIZE];
 
 		if (grp->mask & (1ull << grp_idx)) {
 			if (grp->reta[grp_idx] >= rss->channels) {
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 8096af56739f..be2dfe778a0d 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -392,7 +392,7 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = NULL;
 	const struct rte_flow_item_vlan *mask = NULL;
 	const struct rte_flow_item_vlan supp_mask = {
-		.tci = rte_cpu_to_be_16(ETH_VLAN_ID_MAX),
+		.tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
 		.inner_type = RTE_BE16(0xffff),
 	};
 
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index 5320d8903dac..27b02b1119fb 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -573,66 +573,66 @@ sfc_port_link_mode_to_info(efx_link_mode_t link_mode,
 
 	memset(link_info, 0, sizeof(*link_info));
 	if ((link_mode == EFX_LINK_DOWN) || (link_mode == EFX_LINK_UNKNOWN))
-		link_info->link_status = ETH_LINK_DOWN;
+		link_info->link_status = RTE_ETH_LINK_DOWN;
 	else
-		link_info->link_status = ETH_LINK_UP;
+		link_info->link_status = RTE_ETH_LINK_UP;
 
 	switch (link_mode) {
 	case EFX_LINK_10HDX:
-		link_info->link_speed  = ETH_SPEED_NUM_10M;
-		link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_10M;
+		link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case EFX_LINK_10FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_10M;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_10M;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_100HDX:
-		link_info->link_speed  = ETH_SPEED_NUM_100M;
-		link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_100M;
+		link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case EFX_LINK_100FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_100M;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_100M;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_1000HDX:
-		link_info->link_speed  = ETH_SPEED_NUM_1G;
-		link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_1G;
+		link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case EFX_LINK_1000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_1G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_1G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_10000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_10G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_10G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_25000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_25G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_25G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_40000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_40G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_40G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_50000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_50G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_50G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_100000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_100G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_100G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	default:
 		SFC_ASSERT(B_FALSE);
 		/* FALLTHROUGH */
 	case EFX_LINK_UNKNOWN:
 	case EFX_LINK_DOWN:
-		link_info->link_speed  = ETH_SPEED_NUM_NONE;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_NONE;
 		link_info->link_duplex = 0;
 		break;
 	}
 
-	link_info->link_autoneg = ETH_LINK_AUTONEG;
+	link_info->link_autoneg = RTE_ETH_LINK_AUTONEG;
 }
 
 int
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index 2500b14cb006..9d88d554c1ba 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -405,7 +405,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
 	}
 
 	switch (conf->rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		if (nb_rx_queues != 1) {
 			sfcr_err(sr, "Rx RSS is not supported with %u queues",
 				 nb_rx_queues);
@@ -420,7 +420,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
 			ret = -EINVAL;
 		}
 		break;
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 		break;
 	default:
 		sfcr_err(sr, "Rx mode MQ modes other than RSS not supported");
@@ -428,7 +428,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
 		break;
 	}
 
-	if (conf->txmode.mq_mode != ETH_MQ_TX_NONE) {
+	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
 		sfcr_err(sr, "Tx mode MQ modes not supported");
 		ret = -EINVAL;
 	}
@@ -553,8 +553,8 @@ sfc_repr_dev_link_update(struct rte_eth_dev *dev,
 		sfc_port_link_mode_to_info(EFX_LINK_UNKNOWN, &link);
 	} else {
 		memset(&link, 0, sizeof(link));
-		link.link_status = ETH_LINK_UP;
-		link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		link.link_status = RTE_ETH_LINK_UP;
+		link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index c60ef17a922a..23df27c8f45a 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -648,9 +648,9 @@ struct sfc_dp_rx sfc_efx_rx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_RX_EFX,
 	},
 	.features		= SFC_DP_RX_FEAT_INTR,
-	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_RSS_HASH,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_SCATTER,
+	.dev_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
+	.queue_offload_capa	= RTE_ETH_RX_OFFLOAD_SCATTER,
 	.qsize_up_rings		= sfc_efx_rx_qsize_up_rings,
 	.qcreate		= sfc_efx_rx_qcreate,
 	.qdestroy		= sfc_efx_rx_qdestroy,
@@ -931,7 +931,7 @@ sfc_rx_get_offload_mask(struct sfc_adapter *sa)
 	uint64_t no_caps = 0;
 
 	if (encp->enc_tunnel_encapsulations_supported == 0)
-		no_caps |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		no_caps |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 	return ~no_caps;
 }
@@ -1140,7 +1140,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 
 	if (!sfc_rx_check_scatter(sa->port.pdu, buf_size,
 				  encp->enc_rx_prefix_size,
-				  (offloads & DEV_RX_OFFLOAD_SCATTER),
+				  (offloads & RTE_ETH_RX_OFFLOAD_SCATTER),
 				  encp->enc_rx_scatter_max,
 				  &error)) {
 		sfc_err(sa, "RxQ %d (internal %u) MTU check failed: %s",
@@ -1166,15 +1166,15 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 		rxq_info->type = EFX_RXQ_TYPE_DEFAULT;
 
 	rxq_info->type_flags |=
-		(offloads & DEV_RX_OFFLOAD_SCATTER) ?
+		(offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ?
 		EFX_RXQ_FLAG_SCATTER : EFX_RXQ_FLAG_NONE;
 
 	if ((encp->enc_tunnel_encapsulations_supported != 0) &&
 	    (sfc_dp_rx_offload_capa(sa->priv.dp_rx) &
-	     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+	     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
 		rxq_info->type_flags |= EFX_RXQ_FLAG_INNER_CLASSES;
 
-	if (offloads & DEV_RX_OFFLOAD_RSS_HASH)
+	if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)
 		rxq_info->type_flags |= EFX_RXQ_FLAG_RSS_HASH;
 
 	if ((sa->negotiated_rx_metadata & RTE_ETH_RX_METADATA_USER_FLAG) != 0)
@@ -1211,7 +1211,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 	rxq_info->refill_mb_pool = mb_pool;
 
 	if (rss->hash_support == EFX_RX_HASH_AVAILABLE && rss->channels > 0 &&
-	    (offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	    (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		rxq_info->rxq_flags = SFC_RXQ_FLAG_RSS_HASH;
 	else
 		rxq_info->rxq_flags = 0;
@@ -1313,19 +1313,19 @@ sfc_rx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
  * Mapping between RTE RSS hash functions and their EFX counterparts.
  */
 static const struct sfc_rss_hf_rte_to_efx sfc_rss_hf_map[] = {
-	{ ETH_RSS_NONFRAG_IPV4_TCP,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 	  EFX_RX_HASH(IPV4_TCP, 4TUPLE) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 	  EFX_RX_HASH(IPV4_UDP, 4TUPLE) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX,
 	  EFX_RX_HASH(IPV6_TCP, 4TUPLE) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX,
 	  EFX_RX_HASH(IPV6_UDP, 4TUPLE) },
-	{ ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER,
+	{ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	  EFX_RX_HASH(IPV4_TCP, 2TUPLE) | EFX_RX_HASH(IPV4_UDP, 2TUPLE) |
 	  EFX_RX_HASH(IPV4, 2TUPLE) },
-	{ ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER |
-	  ETH_RSS_IPV6_EX,
+	{ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+	  RTE_ETH_RSS_IPV6_EX,
 	  EFX_RX_HASH(IPV6_TCP, 2TUPLE) | EFX_RX_HASH(IPV6_UDP, 2TUPLE) |
 	  EFX_RX_HASH(IPV6, 2TUPLE) }
 };
@@ -1645,10 +1645,10 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
 	int rc = 0;
 
 	switch (rxmode->mq_mode) {
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 		/* No special checks are required */
 		break;
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		if (rss->context_type == EFX_RX_SCALE_UNAVAILABLE) {
 			sfc_err(sa, "RSS is not available");
 			rc = EINVAL;
@@ -1665,16 +1665,16 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
 	 * so unsupported offloads cannot be added as the result of
 	 * below check.
 	 */
-	if ((rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM) !=
-	    (offloads_supported & DEV_RX_OFFLOAD_CHECKSUM)) {
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM) !=
+	    (offloads_supported & RTE_ETH_RX_OFFLOAD_CHECKSUM)) {
 		sfc_warn(sa, "Rx checksum offloads cannot be disabled - always on (IPv4/TCP/UDP)");
-		rxmode->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 	}
 
-	if ((offloads_supported & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
-	    (~rxmode->offloads & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+	if ((offloads_supported & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+	    (~rxmode->offloads & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 		sfc_warn(sa, "Rx outer IPv4 checksum offload cannot be disabled - always on");
-		rxmode->offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 	}
 
 	return rc;
@@ -1820,7 +1820,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	}
 
 configure_rss:
-	rss->channels = (dev_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) ?
+	rss->channels = (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) ?
 			 MIN(sas->ethdev_rxq_count, EFX_MAXRSS) : 0;
 
 	if (rss->channels > 0) {
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 13392cdd5a09..0273788c20ce 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -54,23 +54,23 @@ sfc_tx_get_offload_mask(struct sfc_adapter *sa)
 	uint64_t no_caps = 0;
 
 	if (!encp->enc_hw_tx_insert_vlan_enabled)
-		no_caps |= DEV_TX_OFFLOAD_VLAN_INSERT;
+		no_caps |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if (!encp->enc_tunnel_encapsulations_supported)
-		no_caps |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+		no_caps |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 	if (!sa->tso)
-		no_caps |= DEV_TX_OFFLOAD_TCP_TSO;
+		no_caps |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (!sa->tso_encap ||
 	    (encp->enc_tunnel_encapsulations_supported &
 	     (1u << EFX_TUNNEL_PROTOCOL_VXLAN)) == 0)
-		no_caps |= DEV_TX_OFFLOAD_VXLAN_TNL_TSO;
+		no_caps |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
 
 	if (!sa->tso_encap ||
 	    (encp->enc_tunnel_encapsulations_supported &
 	     (1u << EFX_TUNNEL_PROTOCOL_GENEVE)) == 0)
-		no_caps |= DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+		no_caps |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
 
 	return ~no_caps;
 }
@@ -114,8 +114,8 @@ sfc_tx_qcheck_conf(struct sfc_adapter *sa, unsigned int txq_max_fill_level,
 	}
 
 	/* We either perform both TCP and UDP offload, or no offload at all */
-	if (((offloads & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) !=
-	    ((offloads & DEV_TX_OFFLOAD_UDP_CKSUM) == 0)) {
+	if (((offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) !=
+	    ((offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0)) {
 		sfc_err(sa, "TCP and UDP offloads can't be set independently");
 		rc = EINVAL;
 	}
@@ -309,7 +309,7 @@ sfc_tx_check_mode(struct sfc_adapter *sa, const struct rte_eth_txmode *txmode)
 	int rc = 0;
 
 	switch (txmode->mq_mode) {
-	case ETH_MQ_TX_NONE:
+	case RTE_ETH_MQ_TX_NONE:
 		break;
 	default:
 		sfc_err(sa, "Tx multi-queue mode %u not supported",
@@ -529,23 +529,23 @@ sfc_tx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 	if (rc != 0)
 		goto fail_ev_qstart;
 
-	if (txq_info->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+	if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		flags |= EFX_TXQ_CKSUM_IPV4;
 
-	if (txq_info->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+	if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 		flags |= EFX_TXQ_CKSUM_INNER_IPV4;
 
-	if ((txq_info->offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
-	    (txq_info->offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+	if ((txq_info->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+	    (txq_info->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
 		flags |= EFX_TXQ_CKSUM_TCPUDP;
 
-		if (offloads_supported & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+		if (offloads_supported & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 			flags |= EFX_TXQ_CKSUM_INNER_TCPUDP;
 	}
 
-	if (txq_info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+	if (txq_info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
 		flags |= EFX_TXQ_FATSOV2;
 
 	rc = efx_tx_qcreate(sa->nic, txq->hw_index, 0, &txq->mem,
@@ -876,9 +876,9 @@ sfc_efx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		/*
 		 * Here VLAN TCI is expected to be zero if no
-		 * DEV_TX_OFFLOAD_VLAN_INSERT capability is advertised;
+		 * RTE_ETH_TX_OFFLOAD_VLAN_INSERT capability is advertised;
 		 * if the calling app ignores the absence of
-		 * DEV_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
+		 * RTE_ETH_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
 		 * TX_ERROR will occur
 		 */
 		pkt_descs += sfc_efx_tx_maybe_insert_tag(txq, m_seg, &pend);
@@ -1242,13 +1242,13 @@ struct sfc_dp_tx sfc_efx_tx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_TX_EFX,
 	},
 	.features		= 0,
-	.dev_offload_capa	= DEV_TX_OFFLOAD_VLAN_INSERT |
-				  DEV_TX_OFFLOAD_MULTI_SEGS,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_TSO,
+	.dev_offload_capa	= RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				  RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_TSO,
 	.qsize_up_rings		= sfc_efx_tx_qsize_up_rings,
 	.qcreate		= sfc_efx_tx_qcreate,
 	.qdestroy		= sfc_efx_tx_qdestroy,
diff --git a/drivers/net/softnic/rte_eth_softnic.c b/drivers/net/softnic/rte_eth_softnic.c
index b3b55b9035b1..3ef33818a9e0 100644
--- a/drivers/net/softnic/rte_eth_softnic.c
+++ b/drivers/net/softnic/rte_eth_softnic.c
@@ -173,7 +173,7 @@ pmd_dev_start(struct rte_eth_dev *dev)
 		return status;
 
 	/* Link UP */
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -184,7 +184,7 @@ pmd_dev_stop(struct rte_eth_dev *dev)
 	struct pmd_internals *p = dev->data->dev_private;
 
 	/* Link DOWN */
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	/* Firmware */
 	softnic_pipeline_disable_all(p);
@@ -386,10 +386,10 @@ pmd_ethdev_register(struct rte_vdev_device *vdev,
 
 	/* dev->data */
 	dev->data->dev_private = dev_private;
-	dev->data->dev_link.link_speed = ETH_SPEED_NUM_100G;
-	dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+	dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	dev->data->mac_addrs = &eth_addr;
 	dev->data->promiscuous = 1;
 	dev->data->numa_node = params->cpu_id;
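
For reference, the application side reads these fields back through the same renamed constants. A minimal sketch using rte_eth_link_get_nowait() (helper name invented for illustration):

#include <stdio.h>
#include <rte_ethdev.h>

/* Minimal sketch (invented helper): print the link fields a PMD fills in. */
static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;
	printf("port %u: %s, %u Mbps, %s\n", port_id,
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
	       link.link_speed,
	       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
			"full-duplex" : "half-duplex");
}
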
diff --git a/drivers/net/szedata2/rte_eth_szedata2.c b/drivers/net/szedata2/rte_eth_szedata2.c
index 3c6a285e3c5e..6a084e3e1b1b 100644
--- a/drivers/net/szedata2/rte_eth_szedata2.c
+++ b/drivers/net/szedata2/rte_eth_szedata2.c
@@ -1042,7 +1042,7 @@ static int
 eth_dev_configure(struct rte_eth_dev *dev)
 {
 	struct rte_eth_dev_data *data = dev->data;
-	if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		dev->rx_pkt_burst = eth_szedata2_rx_scattered;
 		data->scattered_rx = 1;
 	} else {
@@ -1064,11 +1064,11 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_rx_queues = internals->max_rx_queues;
 	dev_info->max_tx_queues = internals->max_tx_queues;
 	dev_info->min_rx_bufsize = 0;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
 	dev_info->tx_offload_capa = 0;
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->tx_queue_offload_capa = 0;
-	dev_info->speed_capa = ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -1202,10 +1202,10 @@ eth_link_update(struct rte_eth_dev *dev,
 
 	memset(&link, 0, sizeof(link));
 
-	link.link_speed = ETH_SPEED_NUM_100G;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_status = ETH_LINK_UP;
-	link.link_autoneg = ETH_LINK_FIXED;
+	link.link_speed = RTE_ETH_SPEED_NUM_100G;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 
 	rte_eth_linkstatus_set(dev, &link);
 	return 0;
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index e4f1ad45219e..5d5350d78e03 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -70,16 +70,16 @@
 
 #define TAP_IOV_DEFAULT_MAX 1024
 
-#define TAP_RX_OFFLOAD (DEV_RX_OFFLOAD_SCATTER |	\
-			DEV_RX_OFFLOAD_IPV4_CKSUM |	\
-			DEV_RX_OFFLOAD_UDP_CKSUM |	\
-			DEV_RX_OFFLOAD_TCP_CKSUM)
+#define TAP_RX_OFFLOAD (RTE_ETH_RX_OFFLOAD_SCATTER |	\
+			RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	\
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
 
-#define TAP_TX_OFFLOAD (DEV_TX_OFFLOAD_MULTI_SEGS |	\
-			DEV_TX_OFFLOAD_IPV4_CKSUM |	\
-			DEV_TX_OFFLOAD_UDP_CKSUM |	\
-			DEV_TX_OFFLOAD_TCP_CKSUM |	\
-			DEV_TX_OFFLOAD_TCP_TSO)
+#define TAP_TX_OFFLOAD (RTE_ETH_TX_OFFLOAD_MULTI_SEGS |	\
+			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |	\
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |	\
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM |	\
+			RTE_ETH_TX_OFFLOAD_TCP_TSO)
 
 static int tap_devices_count;
 
@@ -97,10 +97,10 @@ static const char *valid_arguments[] = {
 static volatile uint32_t tap_trigger;	/* Rx trigger */
 
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 static void
@@ -433,7 +433,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 		len = readv(process_private->rxq_fds[rxq->queue_id],
 			*rxq->iovecs,
-			1 + (rxq->rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ?
+			1 + (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ?
 			     rxq->nb_rx_desc : 1));
 		if (len < (int)sizeof(struct tun_pi))
 			break;
@@ -489,7 +489,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		seg->next = NULL;
 		mbuf->packet_type = rte_net_get_ptype(mbuf, NULL,
 						      RTE_PTYPE_ALL_MASK);
-		if (rxq->rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+		if (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 			tap_verify_csum(mbuf);
 
 		/* account for the receive frame */
@@ -866,7 +866,7 @@ tap_link_set_down(struct rte_eth_dev *dev)
 	struct pmd_internals *pmd = dev->data->dev_private;
 	struct ifreq ifr = { .ifr_flags = IFF_UP };
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 0, LOCAL_ONLY);
 }
 
@@ -876,7 +876,7 @@ tap_link_set_up(struct rte_eth_dev *dev)
 	struct pmd_internals *pmd = dev->data->dev_private;
 	struct ifreq ifr = { .ifr_flags = IFF_UP };
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 1, LOCAL_AND_REMOTE);
 }
 
@@ -956,30 +956,30 @@ tap_dev_speed_capa(void)
 	uint32_t speed = pmd_link.link_speed;
 	uint32_t capa = 0;
 
-	if (speed >= ETH_SPEED_NUM_10M)
-		capa |= ETH_LINK_SPEED_10M;
-	if (speed >= ETH_SPEED_NUM_100M)
-		capa |= ETH_LINK_SPEED_100M;
-	if (speed >= ETH_SPEED_NUM_1G)
-		capa |= ETH_LINK_SPEED_1G;
-	if (speed >= ETH_SPEED_NUM_5G)
-		capa |= ETH_LINK_SPEED_2_5G;
-	if (speed >= ETH_SPEED_NUM_5G)
-		capa |= ETH_LINK_SPEED_5G;
-	if (speed >= ETH_SPEED_NUM_10G)
-		capa |= ETH_LINK_SPEED_10G;
-	if (speed >= ETH_SPEED_NUM_20G)
-		capa |= ETH_LINK_SPEED_20G;
-	if (speed >= ETH_SPEED_NUM_25G)
-		capa |= ETH_LINK_SPEED_25G;
-	if (speed >= ETH_SPEED_NUM_40G)
-		capa |= ETH_LINK_SPEED_40G;
-	if (speed >= ETH_SPEED_NUM_50G)
-		capa |= ETH_LINK_SPEED_50G;
-	if (speed >= ETH_SPEED_NUM_56G)
-		capa |= ETH_LINK_SPEED_56G;
-	if (speed >= ETH_SPEED_NUM_100G)
-		capa |= ETH_LINK_SPEED_100G;
+	if (speed >= RTE_ETH_SPEED_NUM_10M)
+		capa |= RTE_ETH_LINK_SPEED_10M;
+	if (speed >= RTE_ETH_SPEED_NUM_100M)
+		capa |= RTE_ETH_LINK_SPEED_100M;
+	if (speed >= RTE_ETH_SPEED_NUM_1G)
+		capa |= RTE_ETH_LINK_SPEED_1G;
+	if (speed >= RTE_ETH_SPEED_NUM_5G)
+		capa |= RTE_ETH_LINK_SPEED_2_5G;
+	if (speed >= RTE_ETH_SPEED_NUM_5G)
+		capa |= RTE_ETH_LINK_SPEED_5G;
+	if (speed >= RTE_ETH_SPEED_NUM_10G)
+		capa |= RTE_ETH_LINK_SPEED_10G;
+	if (speed >= RTE_ETH_SPEED_NUM_20G)
+		capa |= RTE_ETH_LINK_SPEED_20G;
+	if (speed >= RTE_ETH_SPEED_NUM_25G)
+		capa |= RTE_ETH_LINK_SPEED_25G;
+	if (speed >= RTE_ETH_SPEED_NUM_40G)
+		capa |= RTE_ETH_LINK_SPEED_40G;
+	if (speed >= RTE_ETH_SPEED_NUM_50G)
+		capa |= RTE_ETH_LINK_SPEED_50G;
+	if (speed >= RTE_ETH_SPEED_NUM_56G)
+		capa |= RTE_ETH_LINK_SPEED_56G;
+	if (speed >= RTE_ETH_SPEED_NUM_100G)
+		capa |= RTE_ETH_LINK_SPEED_100G;
 
 	return capa;
 }
@@ -1196,15 +1196,15 @@ tap_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 		tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, REMOTE_ONLY);
 		if (!(ifr.ifr_flags & IFF_UP) ||
 		    !(ifr.ifr_flags & IFF_RUNNING)) {
-			dev_link->link_status = ETH_LINK_DOWN;
+			dev_link->link_status = RTE_ETH_LINK_DOWN;
 			return 0;
 		}
 	}
 	tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, LOCAL_ONLY);
 	dev_link->link_status =
 		((ifr.ifr_flags & IFF_UP) && (ifr.ifr_flags & IFF_RUNNING) ?
-		 ETH_LINK_UP :
-		 ETH_LINK_DOWN);
+		 RTE_ETH_LINK_UP :
+		 RTE_ETH_LINK_DOWN);
 	return 0;
 }
 
@@ -1391,7 +1391,7 @@ tap_gso_ctx_setup(struct rte_gso_ctx *gso_ctx, struct rte_eth_dev *dev)
 	int ret;
 
 	/* initialize GSO context */
-	gso_types = DEV_TX_OFFLOAD_TCP_TSO;
+	gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	if (!pmd->gso_ctx_mp) {
 		/*
 		 * Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE
@@ -1606,9 +1606,9 @@ tap_tx_queue_setup(struct rte_eth_dev *dev,
 
 	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
 	txq->csum = !!(offloads &
-			(DEV_TX_OFFLOAD_IPV4_CKSUM |
-			 DEV_TX_OFFLOAD_UDP_CKSUM |
-			 DEV_TX_OFFLOAD_TCP_CKSUM));
+			(RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			 RTE_ETH_TX_OFFLOAD_TCP_CKSUM));
 
 	ret = tap_setup_queue(dev, internals, tx_queue_id, 0);
 	if (ret == -1)
@@ -1760,7 +1760,7 @@ static int
 tap_flow_ctrl_get(struct rte_eth_dev *dev __rte_unused,
 		  struct rte_eth_fc_conf *fc_conf)
 {
-	fc_conf->mode = RTE_FC_NONE;
+	fc_conf->mode = RTE_ETH_FC_NONE;
 	return 0;
 }
 
@@ -1768,7 +1768,7 @@ static int
 tap_flow_ctrl_set(struct rte_eth_dev *dev __rte_unused,
 		  struct rte_eth_fc_conf *fc_conf)
 {
-	if (fc_conf->mode != RTE_FC_NONE)
+	if (fc_conf->mode != RTE_ETH_FC_NONE)
 		return -ENOTSUP;
 	return 0;
 }
@@ -2262,7 +2262,7 @@ rte_pmd_tun_probe(struct rte_vdev_device *dev)
 			}
 		}
 	}
-	pmd_link.link_speed = ETH_SPEED_NUM_10G;
+	pmd_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 
 	TAP_LOG(DEBUG, "Initializing pmd_tun for %s", name);
 
@@ -2436,7 +2436,7 @@ rte_pmd_tap_probe(struct rte_vdev_device *dev)
 		return 0;
 	}
 
-	speed = ETH_SPEED_NUM_10G;
+	speed = RTE_ETH_SPEED_NUM_10G;
 
 	/* use tap%d which causes kernel to choose next available */
 	strlcpy(tap_name, DEFAULT_TAP_NAME "%d", RTE_ETH_NAME_MAX_LEN);
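
The capability bits accumulated by tap_dev_speed_capa() above surface in struct rte_eth_dev_info. A minimal sketch of testing one of them (helper name invented for illustration):

#include <stdbool.h>
#include <rte_ethdev.h>

/* Minimal sketch (invented helper): check one advertised speed bit. */
static bool
port_supports_10g(uint16_t port_id)
{
	struct rte_eth_dev_info info;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return false;
	return (info.speed_capa & RTE_ETH_LINK_SPEED_10G) != 0;
}
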
diff --git a/drivers/net/tap/tap_rss.h b/drivers/net/tap/tap_rss.h
index 176e7180bdaa..48c151cf6b68 100644
--- a/drivers/net/tap/tap_rss.h
+++ b/drivers/net/tap/tap_rss.h
@@ -13,7 +13,7 @@
 #define TAP_RSS_HASH_KEY_SIZE 40
 
 /* Supported RSS */
-#define TAP_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP))
+#define TAP_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP))
 
 /* hashed fields for RSS */
 enum hash_field {
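
An RSS request that stays within the mask above would look roughly like this; a minimal sketch with an invented helper, not part of the driver:

#include <rte_ethdev.h>

/* Minimal sketch (invented helper): request IP/UDP/TCP RSS with the
 * renamed constants. */
static int
configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
		.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
					       RTE_ETH_RSS_UDP |
					       RTE_ETH_RSS_TCP,
	};

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}
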
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 8ce9a99dc074..762647e3b6ee 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -61,14 +61,14 @@ nicvf_link_status_update(struct nicvf *nic,
 {
 	memset(link, 0, sizeof(*link));
 
-	link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	if (nic->duplex == NICVF_HALF_DUPLEX)
-		link->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	else if (nic->duplex == NICVF_FULL_DUPLEX)
-		link->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link->link_speed = nic->speed;
-	link->link_autoneg = ETH_LINK_AUTONEG;
+	link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 }
 
 static void
@@ -134,7 +134,7 @@ nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		/* rte_eth_link_get() might need to wait up to 9 seconds */
 		for (i = 0; i < MAX_CHECK_TIME; i++) {
 			nicvf_link_status_update(nic, &link);
-			if (link.link_status == ETH_LINK_UP)
+			if (link.link_status == RTE_ETH_LINK_UP)
 				break;
 			rte_delay_ms(CHECK_INTERVAL);
 		}
@@ -390,35 +390,35 @@ nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
 {
 	uint64_t nic_rss = 0;
 
-	if (ethdev_rss & ETH_RSS_IPV4)
+	if (ethdev_rss & RTE_ETH_RSS_IPV4)
 		nic_rss |= RSS_IP_ENA;
 
-	if (ethdev_rss & ETH_RSS_IPV6)
+	if (ethdev_rss & RTE_ETH_RSS_IPV6)
 		nic_rss |= RSS_IP_ENA;
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
 
-	if (ethdev_rss & ETH_RSS_PORT)
+	if (ethdev_rss & RTE_ETH_RSS_PORT)
 		nic_rss |= RSS_L2_EXTENDED_HASH_ENA;
 
 	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
-		if (ethdev_rss & ETH_RSS_VXLAN)
+		if (ethdev_rss & RTE_ETH_RSS_VXLAN)
 			nic_rss |= RSS_TUN_VXLAN_ENA;
 
-		if (ethdev_rss & ETH_RSS_GENEVE)
+		if (ethdev_rss & RTE_ETH_RSS_GENEVE)
 			nic_rss |= RSS_TUN_GENEVE_ENA;
 
-		if (ethdev_rss & ETH_RSS_NVGRE)
+		if (ethdev_rss & RTE_ETH_RSS_NVGRE)
 			nic_rss |= RSS_TUN_NVGRE_ENA;
 	}
 
@@ -431,28 +431,28 @@ nicvf_rss_nic_to_ethdev(struct nicvf *nic,  uint64_t nic_rss)
 	uint64_t ethdev_rss = 0;
 
 	if (nic_rss & RSS_IP_ENA)
-		ethdev_rss |= (ETH_RSS_IPV4 | ETH_RSS_IPV6);
+		ethdev_rss |= (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6);
 
 	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_TCP_ENA))
-		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_TCP |
-				ETH_RSS_NONFRAG_IPV6_TCP);
+		ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP);
 
 	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_UDP_ENA))
-		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_UDP |
-				ETH_RSS_NONFRAG_IPV6_UDP);
+		ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP);
 
 	if (nic_rss & RSS_L2_EXTENDED_HASH_ENA)
-		ethdev_rss |= ETH_RSS_PORT;
+		ethdev_rss |= RTE_ETH_RSS_PORT;
 
 	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
 		if (nic_rss & RSS_TUN_VXLAN_ENA)
-			ethdev_rss |= ETH_RSS_VXLAN;
+			ethdev_rss |= RTE_ETH_RSS_VXLAN;
 
 		if (nic_rss & RSS_TUN_GENEVE_ENA)
-			ethdev_rss |= ETH_RSS_GENEVE;
+			ethdev_rss |= RTE_ETH_RSS_GENEVE;
 
 		if (nic_rss & RSS_TUN_NVGRE_ENA)
-			ethdev_rss |= ETH_RSS_NVGRE;
+			ethdev_rss |= RTE_ETH_RSS_NVGRE;
 	}
 	return ethdev_rss;
 }
@@ -479,8 +479,8 @@ nicvf_dev_reta_query(struct rte_eth_dev *dev,
 		return ret;
 
 	/* Copy RETA table */
-	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = tbl[j];
 	}
@@ -509,8 +509,8 @@ nicvf_dev_reta_update(struct rte_eth_dev *dev,
 		return ret;
 
 	/* Copy RETA table */
-	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				tbl[j] = reta_conf[i].reta[j];
 	}
@@ -807,9 +807,9 @@ nicvf_configure_rss(struct rte_eth_dev *dev)
 		    dev->data->nb_rx_queues,
 		    dev->data->dev_conf.lpbk_mode, rsshf);
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
 		ret = nicvf_rss_term(nic);
-	else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+	else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 		ret = nicvf_rss_config(nic, dev->data->nb_rx_queues, rsshf);
 	if (ret)
 		PMD_INIT_LOG(ERR, "Failed to configure RSS %d", ret);
@@ -870,7 +870,7 @@ nicvf_set_tx_function(struct rte_eth_dev *dev)
 
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		txq = dev->data->tx_queues[i];
-		if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
 			multiseg = true;
 			break;
 		}
@@ -992,7 +992,7 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
 	txq->offloads = offloads;
 
-	is_single_pool = !!(offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE);
+	is_single_pool = !!(offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE);
 
 	/* Choose optimum free threshold value for multipool case */
 	if (!is_single_pool) {
@@ -1382,11 +1382,11 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	PMD_INIT_FUNC_TRACE();
 
 	/* Autonegotiation may be disabled */
-	dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
-	dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
-				 ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+				 RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
 	if (nicvf_hw_version(nic) != PCI_SUB_DEVICE_ID_CN81XX_NICVF)
-		dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
 
 	dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU;
 	dev_info->max_rx_pktlen = NIC_HW_MAX_MTU + RTE_ETHER_HDR_LEN;
@@ -1415,10 +1415,10 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = NICVF_DEFAULT_TX_FREE_THRESH,
-		.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE |
-			DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM   |
-			DEV_TX_OFFLOAD_UDP_CKSUM          |
-			DEV_TX_OFFLOAD_TCP_CKSUM,
+		.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+			RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM   |
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM          |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM,
 	};
 
 	return 0;
@@ -1582,8 +1582,8 @@ nicvf_vf_start(struct rte_eth_dev *dev, struct nicvf *nic, uint32_t rbdrsz)
 		     nic->rbdr->tail, nb_rbdr_desc, nic->vf_id);
 
 	/* Configure VLAN Strip */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	ret = nicvf_vlan_offload_config(dev, mask);
 
 	/* Based on the packet type(IPv4 or IPv6), the nicvf HW aligns L3 data
@@ -1711,7 +1711,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
 	/* Setup scatter mode if needed by jumbo */
 	if (dev->data->mtu + (uint32_t)NIC_HW_L2_OVERHEAD + 2 * VLAN_TAG_SIZE > buffsz)
 		dev->data->scattered_rx = 1;
-	if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
+	if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) != 0)
 		dev->data->scattered_rx = 1;
 
 	/* Setup MTU */
@@ -1896,8 +1896,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (!rte_eal_has_hugepages()) {
 		PMD_INIT_LOG(INFO, "Huge page is not configured");
@@ -1909,8 +1909,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-		rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+		rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		PMD_INIT_LOG(INFO, "Unsupported rx qmode %d", rxmode->mq_mode);
 		return -EINVAL;
 	}
@@ -1920,7 +1920,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(INFO, "Setting link speed/duplex not supported");
 		return -EINVAL;
 	}
@@ -1955,7 +1955,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		nic->offload_cksum = 1;
 
 	PMD_INIT_LOG(DEBUG, "Configured ethdev port%d hwcap=0x%" PRIx64,
@@ -2032,8 +2032,8 @@ nicvf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	struct nicvf *nic = nicvf_pmd_priv(dev);
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			nicvf_vlan_hw_strip(nic, true);
 		else
 			nicvf_vlan_hw_strip(nic, false);
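
The RETA group indexing used by nicvf_dev_reta_update() above is the generic ethdev convention: entry i sits in group i / RTE_ETH_RETA_GROUP_SIZE at offset i % RTE_ETH_RETA_GROUP_SIZE, guarded by the matching mask bit. A minimal sketch of building such a table from the application side (helper name invented for illustration):

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

/* Minimal sketch (invented helper): spread the redirection table
 * round-robin over nb_rxq queues. */
static int
spread_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_rxq)
{
	struct rte_eth_rss_reta_entry64 conf[reta_size /
					     RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	memset(conf, 0, sizeof(conf));
	for (i = 0; i < reta_size; i++) {
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

		conf[idx].mask |= UINT64_C(1) << shift;
		conf[idx].reta[shift] = i % nb_rxq;
	}
	return rte_eth_dev_rss_reta_update(port_id, conf, reta_size);
}
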
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index 5d38750d6313..cb474e26b81e 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -16,32 +16,32 @@
 #define NICVF_UNKNOWN_DUPLEX		0xff
 
 #define NICVF_RSS_OFFLOAD_PASS1 ( \
-	ETH_RSS_PORT | \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_PORT | \
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define NICVF_RSS_OFFLOAD_TUNNEL ( \
-	ETH_RSS_VXLAN | \
-	ETH_RSS_GENEVE | \
-	ETH_RSS_NVGRE)
+	RTE_ETH_RSS_VXLAN | \
+	RTE_ETH_RSS_GENEVE | \
+	RTE_ETH_RSS_NVGRE)
 
 #define NICVF_TX_OFFLOAD_CAPA ( \
-	DEV_TX_OFFLOAD_IPV4_CKSUM       | \
-	DEV_TX_OFFLOAD_UDP_CKSUM        | \
-	DEV_TX_OFFLOAD_TCP_CKSUM        | \
-	DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-	DEV_TX_OFFLOAD_MBUF_FAST_FREE   | \
-	DEV_TX_OFFLOAD_MULTI_SEGS)
+	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM       | \
+	RTE_ETH_TX_OFFLOAD_UDP_CKSUM        | \
+	RTE_ETH_TX_OFFLOAD_TCP_CKSUM        | \
+	RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+	RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE   | \
+	RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define NICVF_RX_OFFLOAD_CAPA ( \
-	DEV_RX_OFFLOAD_CHECKSUM    | \
-	DEV_RX_OFFLOAD_VLAN_STRIP  | \
-	DEV_RX_OFFLOAD_SCATTER     | \
-	DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RX_OFFLOAD_CHECKSUM    | \
+	RTE_ETH_RX_OFFLOAD_VLAN_STRIP  | \
+	RTE_ETH_RX_OFFLOAD_SCATTER     | \
+	RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define NICVF_DEFAULT_RX_FREE_THRESH    224
 #define NICVF_DEFAULT_TX_FREE_THRESH    224
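
A minimal sketch of requesting the Rx capability set above from an application, with an invented helper name and after checking the port really reports it:

#include <errno.h>
#include <rte_ethdev.h>

/* Minimal sketch (invented helper): enable checksum, VLAN strip and
 * scatter Rx offloads if advertised. */
static int
enable_rx_offloads(uint16_t port_id, struct rte_eth_conf *conf)
{
	struct rte_eth_dev_info info;
	uint64_t want = RTE_ETH_RX_OFFLOAD_CHECKSUM |
			RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
			RTE_ETH_RX_OFFLOAD_SCATTER;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;
	if ((info.rx_offload_capa & want) != want)
		return -ENOTSUP;
	conf->rxmode.offloads |= want;
	return 0;
}
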
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 7b46ffb68635..0b0f9db7cb2a 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -998,7 +998,7 @@ txgbe_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 	rxbal = rd32(hw, TXGBE_RXBAL(rxq->reg_idx));
 	rxbah = rd32(hw, TXGBE_RXBAH(rxq->reg_idx));
 	rxcfg = rd32(hw, TXGBE_RXCFG(rxq->reg_idx));
-	if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 		restart = (rxcfg & TXGBE_RXCFG_ENA) &&
 			!(rxcfg & TXGBE_RXCFG_VLAN);
 		rxcfg |= TXGBE_RXCFG_VLAN;
@@ -1033,7 +1033,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 	vlan_ext = (portctrl & TXGBE_PORTCTL_VLANEXT);
 	qinq = vlan_ext && (portctrl & TXGBE_PORTCTL_QINQ);
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
+	case RTE_ETH_VLAN_TYPE_INNER:
 		if (vlan_ext) {
 			wr32m(hw, TXGBE_VLANCTL,
 				TXGBE_VLANCTL_TPID_MASK,
@@ -1053,7 +1053,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 				TXGBE_TAGTPID_LSB(tpid));
 		}
 		break;
-	case ETH_VLAN_TYPE_OUTER:
+	case RTE_ETH_VLAN_TYPE_OUTER:
 		if (vlan_ext) {
 			/* Only the high 16-bits is valid */
 			wr32m(hw, TXGBE_EXTAG,
@@ -1138,10 +1138,10 @@ txgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 
 	if (on) {
 		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
-		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
 		rxq->vlan_flags = PKT_RX_VLAN;
-		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
 
@@ -1240,7 +1240,7 @@ txgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
 
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			txgbe_vlan_strip_queue_set(dev, i, 1);
 		else
 			txgbe_vlan_strip_queue_set(dev, i, 0);
@@ -1254,17 +1254,17 @@ txgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	struct txgbe_rx_queue *rxq;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		rxmode = &dev->data->dev_conf.rxmode;
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 		else
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 	}
 }
@@ -1275,25 +1275,25 @@ txgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	rxmode = &dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_STRIP_MASK)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK)
 		txgbe_vlan_hw_strip_config(dev);
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			txgbe_vlan_hw_filter_enable(dev);
 		else
 			txgbe_vlan_hw_filter_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			txgbe_vlan_hw_extend_enable(dev);
 		else
 			txgbe_vlan_hw_extend_disable(dev);
 	}
 
-	if (mask & ETH_QINQ_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+	if (mask & RTE_ETH_QINQ_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
 			txgbe_qinq_hw_strip_enable(dev);
 		else
 			txgbe_qinq_hw_strip_disable(dev);
@@ -1331,10 +1331,10 @@ txgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
 	switch (nb_rx_q) {
 	case 1:
 	case 2:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
 		break;
 	case 4:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
 		break;
 	default:
 		return -EINVAL;
@@ -1357,18 +1357,18 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
 		/* check multi-queue mode */
 		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
 			break;
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
 			/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
 			PMD_INIT_LOG(ERR, "SRIOV active,"
 					" unsupported mq_mode rx %d.",
 					dev_conf->rxmode.mq_mode);
 			return -EINVAL;
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
 			if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
 				if (txgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
 					PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -1378,13 +1378,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 					return -EINVAL;
 				}
 			break;
-		case ETH_MQ_RX_VMDQ_ONLY:
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_NONE:
 			/* if nothing mq mode configure, use default scheme */
 			dev->data->dev_conf.rxmode.mq_mode =
-				ETH_MQ_RX_VMDQ_ONLY;
+				RTE_ETH_MQ_RX_VMDQ_ONLY;
 			break;
-		default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+		default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB */
 			/* SRIOV only works in VMDq enable mode */
 			PMD_INIT_LOG(ERR, "SRIOV is active,"
 					" wrong mq_mode rx %d.",
@@ -1393,13 +1393,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 		}
 
 		switch (dev_conf->txmode.mq_mode) {
-		case ETH_MQ_TX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+		case RTE_ETH_MQ_TX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+			dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
 			break;
-		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
+		default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
 			dev->data->dev_conf.txmode.mq_mode =
-				ETH_MQ_TX_VMDQ_ONLY;
+				RTE_ETH_MQ_TX_VMDQ_ONLY;
 			break;
 		}
 
@@ -1414,13 +1414,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 			return -EINVAL;
 		}
 	} else {
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
 			PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
 					  " not supported.");
 			return -EINVAL;
 		}
 		/* check configuration for vmdb+dcb mode */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_conf *conf;
 
 			if (nb_rx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1429,15 +1429,15 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools must be %d or %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_tx_conf *conf;
 
 			if (nb_tx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1446,39 +1446,39 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools != %d and"
 						" nb_queue_pools != %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
 
 		/* For DCB mode check our configuration before we go further */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
 			const struct rte_eth_dcb_rx_conf *conf;
 
 			conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
 
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 			const struct rte_eth_dcb_tx_conf *conf;
 
 			conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
@@ -1495,8 +1495,8 @@ txgbe_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multiple queue mode checking */
 	ret  = txgbe_check_mq_mode(dev);
@@ -1694,15 +1694,15 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 		goto error;
 	}
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = txgbe_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
 		goto error;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
 		/* Enable vlan filtering for VMDq */
 		txgbe_vmdq_vlan_hw_filter_enable(dev);
 	}
@@ -1763,8 +1763,8 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 	if (err)
 		goto error;
 
-	allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_10G;
+	allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G;
 
 	link_speeds = &dev->data->dev_conf.link_speeds;
 	if (((*link_speeds) >> 1) & ~(allowed_speeds >> 1)) {
@@ -1773,20 +1773,20 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 	}
 
 	speed = 0x0;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		speed = (TXGBE_LINK_SPEED_100M_FULL |
 			 TXGBE_LINK_SPEED_1GB_FULL |
 			 TXGBE_LINK_SPEED_10GB_FULL);
 	} else {
-		if (*link_speeds & ETH_LINK_SPEED_10G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
 			speed |= TXGBE_LINK_SPEED_10GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
 			speed |= TXGBE_LINK_SPEED_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_2_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
 			speed |= TXGBE_LINK_SPEED_2_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_1G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed |= TXGBE_LINK_SPEED_1GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_100M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed |= TXGBE_LINK_SPEED_100M_FULL;
 	}
 
@@ -2601,7 +2601,7 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
-	dev_info->max_vmdq_pools = ETH_64_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->vmdq_queue_num = dev_info->max_rx_queues;
 	dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
@@ -2634,11 +2634,11 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->tx_desc_lim = tx_desc_lim;
 
 	dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
-	dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
 
 	/* Driver-preferred Rx/Tx parameters */
 	dev_info->default_rxportconf.burst_size = 32;
@@ -2695,11 +2695,11 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	int wait = 1;
 
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 
 	hw->mac.get_link_status = true;
 
@@ -2713,8 +2713,8 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
 
 	if (err != 0) {
-		link.link_speed = ETH_SPEED_NUM_100M;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
@@ -2733,34 +2733,34 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	}
 
 	intr->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (link_speed) {
 	default:
 	case TXGBE_LINK_SPEED_UNKNOWN:
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	case TXGBE_LINK_SPEED_100M_FULL:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	case TXGBE_LINK_SPEED_1GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case TXGBE_LINK_SPEED_2_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_2_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 
 	case TXGBE_LINK_SPEED_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 
 	case TXGBE_LINK_SPEED_10GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	}
 
@@ -2990,7 +2990,7 @@ txgbe_dev_link_status_print(struct rte_eth_dev *dev)
 		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned int)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3221,13 +3221,13 @@ txgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		tx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -3359,16 +3359,16 @@ txgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 		return -ENOTSUP;
 	}
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += 4) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
 		if (!mask)
 			continue;
@@ -3400,16 +3400,16 @@ txgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += 4) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
 		if (!mask)
 			continue;
@@ -3576,12 +3576,12 @@ txgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
 		return -ENOTSUP;
 
 	if (on) {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = ~0;
 			wr32(hw, TXGBE_UCADDRTBL(i), ~0);
 		}
 	} else {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = 0;
 			wr32(hw, TXGBE_UCADDRTBL(i), 0);
 		}
@@ -3605,15 +3605,15 @@ txgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
 {
 	uint32_t new_val = orig_val;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
 		new_val |= TXGBE_POOLETHCTL_UTA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
 		new_val |= TXGBE_POOLETHCTL_MCHA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 		new_val |= TXGBE_POOLETHCTL_UCHA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 		new_val |= TXGBE_POOLETHCTL_BCA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 		new_val |= TXGBE_POOLETHCTL_MCP;
 
 	return new_val;
@@ -4264,15 +4264,15 @@ txgbe_start_timecounters(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 
 	switch (link.link_speed) {
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		incval = TXGBE_INCVAL_100;
 		shift = TXGBE_INCVAL_SHIFT_100;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		incval = TXGBE_INCVAL_1GB;
 		shift = TXGBE_INCVAL_SHIFT_1GB;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 	default:
 		incval = TXGBE_INCVAL_10GB;
 		shift = TXGBE_INCVAL_SHIFT_10GB;
@@ -4628,7 +4628,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	uint8_t nb_tcs;
 	uint8_t i, j;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
 	else
 		dcb_info->nb_tcs = 1;
@@ -4639,7 +4639,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	if (dcb_config->vt_mode) { /* vt is enabled */
 		struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
 		if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
 			for (j = 0; j < nb_tcs; j++) {
@@ -4663,9 +4663,9 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	} else { /* vt is disabled */
 		struct rte_eth_dcb_rx_conf *rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
-		if (dcb_info->nb_tcs == ETH_4_TCS) {
+		if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4678,7 +4678,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 			dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
 			dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
 			dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
-		} else if (dcb_info->nb_tcs == ETH_8_TCS) {
+		} else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4908,7 +4908,7 @@ txgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = txgbe_e_tag_filter_add(dev, l2_tunnel);
 		break;
 	default:
@@ -4939,7 +4939,7 @@ txgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 		return ret;
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = txgbe_e_tag_filter_del(dev, l2_tunnel);
 		break;
 	default:
@@ -4979,7 +4979,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
 			ret = -EINVAL;
@@ -4987,7 +4987,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_VXLANPORT, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add Geneve port 0 is not allowed.");
 			ret = -EINVAL;
@@ -4995,7 +4995,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_GENEVEPORT, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add Teredo port 0 is not allowed.");
 			ret = -EINVAL;
@@ -5003,7 +5003,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_TEREDOPORT, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
 			ret = -EINVAL;
@@ -5035,7 +5035,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORT);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5045,7 +5045,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_VXLANPORT, 0);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		cur_port = (uint16_t)rd32(hw, TXGBE_GENEVEPORT);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5055,7 +5055,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_GENEVEPORT, 0);
 		break;
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		cur_port = (uint16_t)rd32(hw, TXGBE_TEREDOPORT);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5065,7 +5065,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_TEREDOPORT, 0);
 		break;
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORTGPE);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index fd65d89ffe7d..8304b68292da 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -60,15 +60,15 @@
 #define TXGBE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
 
 #define TXGBE_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define TXGBE_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
 #define TXGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
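
A minimal sketch (invented helper) of clamping a requested hash profile to what the port reports, which for txgbe is the TXGBE_RSS_OFFLOAD_ALL mask above:

#include <rte_ethdev.h>

/* Minimal sketch (invented helper): keep only the RSS hash types the
 * port supports. */
static uint64_t
clamp_rss_hf(uint16_t port_id, uint64_t rss_hf)
{
	struct rte_eth_dev_info info;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return 0;
	return rss_hf & info.flow_type_rss_offloads;
}
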
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 43dc0ed39b75..283b52e8f3db 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -486,14 +486,14 @@ txgbevf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
-	dev_info->max_vmdq_pools = ETH_64_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
 				     dev_info->rx_queue_offload_capa);
 	dev_info->tx_queue_offload_capa = txgbe_get_tx_queue_offloads(dev);
 	dev_info->tx_offload_capa = txgbe_get_tx_port_offloads(dev);
 	dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -574,22 +574,22 @@ txgbevf_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
 		     dev->data->port_id);
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/*
 	 * VF has no ability to enable/disable HW CRC
 	 * Keep the persistent behavior the same as Host PF
 	 */
 #ifndef RTE_LIBRTE_TXGBE_PF_DISABLE_STRIP_CRC
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
-		conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #else
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
 		PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #endif
 
@@ -647,8 +647,8 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
 	txgbevf_set_vfta_all(dev, 1);
 
 	/* Set HW strip */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = txgbevf_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -891,10 +891,10 @@ txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	int on = 0;
 
 	/* VF function only support hw strip feature, others are not support */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
-			on = !!(rxq->offloads &	DEV_RX_OFFLOAD_VLAN_STRIP);
+			on = !!(rxq->offloads &	RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 			txgbevf_vlan_strip_queue_set(dev, i, on);
 		}
 	}
diff --git a/drivers/net/txgbe/txgbe_fdir.c b/drivers/net/txgbe/txgbe_fdir.c
index 8abb86228608..e303d87176ed 100644
--- a/drivers/net/txgbe/txgbe_fdir.c
+++ b/drivers/net/txgbe/txgbe_fdir.c
@@ -102,22 +102,22 @@ txgbe_fdir_enable(struct txgbe_hw *hw, uint32_t fdirctrl)
  * flexbytes matching field, and drop queue (only for perfect matching mode).
  */
 static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf,
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf,
 		     uint32_t *fdirctrl, uint32_t *flex)
 {
 	*fdirctrl = 0;
 	*flex = 0;
 
 	switch (conf->pballoc) {
-	case RTE_FDIR_PBALLOC_64K:
+	case RTE_ETH_FDIR_PBALLOC_64K:
 		/* 8k - 1 signature filters */
 		*fdirctrl |= TXGBE_FDIRCTL_BUF_64K;
 		break;
-	case RTE_FDIR_PBALLOC_128K:
+	case RTE_ETH_FDIR_PBALLOC_128K:
 		/* 16k - 1 signature filters */
 		*fdirctrl |= TXGBE_FDIRCTL_BUF_128K;
 		break;
-	case RTE_FDIR_PBALLOC_256K:
+	case RTE_ETH_FDIR_PBALLOC_256K:
 		/* 32k - 1 signature filters */
 		*fdirctrl |= TXGBE_FDIRCTL_BUF_256K;
 		break;
@@ -521,15 +521,15 @@ txgbe_atr_compute_hash(struct txgbe_atr_input *atr_input,
 
 static uint32_t
 atr_compute_perfect_hash(struct txgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
 	uint32_t bucket_hash;
 
 	bucket_hash = txgbe_atr_compute_hash(input,
 				TXGBE_ATR_BUCKET_HASH_KEY);
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		bucket_hash &= PERFECT_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		bucket_hash &= PERFECT_BUCKET_128KB_HASH_MASK;
 	else
 		bucket_hash &= PERFECT_BUCKET_64KB_HASH_MASK;
@@ -564,15 +564,15 @@ txgbe_fdir_check_cmd_complete(struct txgbe_hw *hw, uint32_t *fdircmd)
  */
 static uint32_t
 atr_compute_signature_hash(struct txgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
 	uint32_t bucket_hash, sig_hash;
 
 	bucket_hash = txgbe_atr_compute_hash(input,
 				TXGBE_ATR_BUCKET_HASH_KEY);
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		bucket_hash &= SIG_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		bucket_hash &= SIG_BUCKET_128KB_HASH_MASK;
 	else
 		bucket_hash &= SIG_BUCKET_64KB_HASH_MASK;
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index eae400b14176..6d7fd1842843 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -1215,7 +1215,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
 		return -rte_errno;
 	}
 
-	filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+	filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
 	/**
 	 * grp and e_cid_base are bit fields and only use 14 bits.
 	 * e-tag id is taken as little endian by HW.
diff --git a/drivers/net/txgbe/txgbe_ipsec.c b/drivers/net/txgbe/txgbe_ipsec.c
index ccd747973ba2..445733f3ba46 100644
--- a/drivers/net/txgbe/txgbe_ipsec.c
+++ b/drivers/net/txgbe/txgbe_ipsec.c
@@ -372,7 +372,7 @@ txgbe_crypto_create_session(void *device,
 	aead_xform = &conf->crypto_xform->aead;
 
 	if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 			ic_session->op = TXGBE_OP_AUTHENTICATED_DECRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -380,7 +380,7 @@ txgbe_crypto_create_session(void *device,
 			return -ENOTSUP;
 		}
 	} else {
-		if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+		if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 			ic_session->op = TXGBE_OP_AUTHENTICATED_ENCRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -611,11 +611,11 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	tx_offloads = dev->data->dev_conf.txmode.offloads;
 
 	/* sanity checks */
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
 		return -1;
 	}
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
 		return -1;
 	}
@@ -634,7 +634,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	reg |= TXGBE_SECRXCTL_CRCSTRIP;
 	wr32(hw, TXGBE_SECRXCTL, reg);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		wr32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA, 0);
 		reg = rd32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA);
 		if (reg != 0) {
@@ -642,7 +642,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 			return -1;
 		}
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 		wr32(hw, TXGBE_SECTXCTL, TXGBE_SECTXCTL_STFWD);
 		reg = rd32(hw, TXGBE_SECTXCTL);
 		if (reg != TXGBE_SECTXCTL_STFWD) {
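
The security offload flags checked above have to be requested by the application beforehand. A minimal sketch with an invented helper name:

#include <errno.h>
#include <rte_ethdev.h>

/* Minimal sketch (invented helper): request inline IPsec Rx/Tx offloads
 * if the port advertises them. */
static int
enable_inline_ipsec(uint16_t port_id, struct rte_eth_conf *conf)
{
	struct rte_eth_dev_info info;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;
	if (!(info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_SECURITY) ||
	    !(info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SECURITY))
		return -ENOTSUP;
	conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
	conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
	return 0;
}
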
diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c
index a48972b1a381..30be2873307a 100644
--- a/drivers/net/txgbe/txgbe_pf.c
+++ b/drivers/net/txgbe/txgbe_pf.c
@@ -101,15 +101,15 @@ int txgbe_pf_host_init(struct rte_eth_dev *eth_dev)
 	memset(uta_info, 0, sizeof(struct txgbe_uta_info));
 	hw->mac.mc_filter_type = 0;
 
-	if (vf_num >= ETH_32_POOLS) {
+	if (vf_num >= RTE_ETH_32_POOLS) {
 		nb_queue = 2;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
-	} else if (vf_num >= ETH_16_POOLS) {
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+	} else if (vf_num >= RTE_ETH_16_POOLS) {
 		nb_queue = 4;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
 	} else {
 		nb_queue = 8;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
 	}
 
 	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -256,13 +256,13 @@ int txgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
 	gcr_ext &= ~TXGBE_PORTCTL_NUMVT_MASK;
 
 	switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		gcr_ext |= TXGBE_PORTCTL_NUMVT_64;
 		break;
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		gcr_ext |= TXGBE_PORTCTL_NUMVT_32;
 		break;
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		gcr_ext |= TXGBE_PORTCTL_NUMVT_16;
 		break;
 	}
@@ -611,29 +611,29 @@ txgbe_get_vf_queues(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf)
 	/* Notify VF of number of DCB traffic classes */
 	eth_conf = &eth_dev->data->dev_conf;
 	switch (eth_conf->txmode.mq_mode) {
-	case ETH_MQ_TX_NONE:
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_NONE:
+	case RTE_ETH_MQ_TX_DCB:
 		PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
 			", but its tx mode = %d\n", vf,
 			eth_conf->txmode.mq_mode);
 		return -1;
 
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
 		switch (vmdq_dcb_tx_conf->nb_queue_pools) {
-		case ETH_16_POOLS:
-			num_tcs = ETH_8_TCS;
+		case RTE_ETH_16_POOLS:
+			num_tcs = RTE_ETH_8_TCS;
 			break;
-		case ETH_32_POOLS:
-			num_tcs = ETH_4_TCS;
+		case RTE_ETH_32_POOLS:
+			num_tcs = RTE_ETH_4_TCS;
 			break;
 		default:
 			return -1;
 		}
 		break;
 
-	/* ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
-	case ETH_MQ_TX_VMDQ_ONLY:
+	/* RTE_ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
+	case RTE_ETH_MQ_TX_VMDQ_ONLY:
 		hw = TXGBE_DEV_HW(eth_dev);
 		vmvir = rd32(hw, TXGBE_POOLTAG(vf));
 		vlana = vmvir & TXGBE_POOLTAG_ACT_MASK;
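
The pool constants above only change prefix; the VF-count to pool/queue
mapping itself is untouched. The same rule written standalone with the
new names (illustrative only, not part of this patch):

#include <rte_ethdev.h>

/* Mirror of the txgbe_pf_host_init() mapping: more VFs means more
 * pools, each with fewer queues.
 */
static enum rte_eth_nb_pools
pools_for_vf_num(uint16_t vf_num, uint16_t *nb_queue)
{
	if (vf_num >= RTE_ETH_32_POOLS) {
		*nb_queue = 2;
		return RTE_ETH_64_POOLS;
	}
	if (vf_num >= RTE_ETH_16_POOLS) {
		*nb_queue = 4;
		return RTE_ETH_32_POOLS;
	}
	*nb_queue = 8;
	return RTE_ETH_16_POOLS;
}
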
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 7e18dcce0a86..1204dc5499a5 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1960,7 +1960,7 @@ txgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
 uint64_t
 txgbe_get_rx_queue_offloads(struct rte_eth_dev *dev __rte_unused)
 {
-	return DEV_RX_OFFLOAD_VLAN_STRIP;
+	return RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 }
 
 uint64_t
@@ -1970,34 +1970,34 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
 	struct rte_eth_dev_sriov *sriov = &RTE_ETH_DEV_SRIOV(dev);
 
-	offloads = DEV_RX_OFFLOAD_IPV4_CKSUM  |
-		   DEV_RX_OFFLOAD_UDP_CKSUM   |
-		   DEV_RX_OFFLOAD_TCP_CKSUM   |
-		   DEV_RX_OFFLOAD_KEEP_CRC    |
-		   DEV_RX_OFFLOAD_VLAN_FILTER |
-		   DEV_RX_OFFLOAD_RSS_HASH |
-		   DEV_RX_OFFLOAD_SCATTER;
+	offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+		   RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+		   RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		   RTE_ETH_RX_OFFLOAD_RSS_HASH |
+		   RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	if (!txgbe_is_vf(dev))
-		offloads |= (DEV_RX_OFFLOAD_VLAN_FILTER |
-			     DEV_RX_OFFLOAD_QINQ_STRIP |
-			     DEV_RX_OFFLOAD_VLAN_EXTEND);
+		offloads |= (RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+			     RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+			     RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
 
 	/*
 	 * RSC is only supported by PF devices in a non-SR-IOV
 	 * mode.
 	 */
 	if (hw->mac.type == txgbe_mac_raptor && !sriov->active)
-		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+		offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 
 	if (hw->mac.type == txgbe_mac_raptor)
-		offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
 
-	offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+	offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		offloads |= DEV_RX_OFFLOAD_SECURITY;
+		offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
 #endif
 
 	return offloads;
@@ -2222,32 +2222,32 @@ txgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
 	uint64_t tx_offload_capa;
 
 	tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM   |
-		DEV_TX_OFFLOAD_SCTP_CKSUM  |
-		DEV_TX_OFFLOAD_TCP_TSO     |
-		DEV_TX_OFFLOAD_UDP_TSO	   |
-		DEV_TX_OFFLOAD_UDP_TNL_TSO	|
-		DEV_TX_OFFLOAD_IP_TNL_TSO	|
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO	|
-		DEV_TX_OFFLOAD_GRE_TNL_TSO	|
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO	|
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO	|
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO     |
+		RTE_ETH_TX_OFFLOAD_UDP_TSO	   |
+		RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_IP_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	if (!txgbe_is_vf(dev))
-		tx_offload_capa |= DEV_TX_OFFLOAD_QINQ_INSERT;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
 
-	tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+	tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 
-	tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-			   DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+	tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
 #endif
 	return tx_offload_capa;
 }
@@ -2349,7 +2349,7 @@ txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_deferred_start = tx_conf->tx_deferred_start;
 #ifdef RTE_LIB_SECURITY
 	txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SECURITY);
+			RTE_ETH_TX_OFFLOAD_SECURITY);
 #endif
 
 	/* Modification to set tail pointer for virtual function
@@ -2599,7 +2599,7 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
 		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -2900,20 +2900,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 	if (hw->mac.type == txgbe_mac_raptor_vf) {
 		mrqc = rd32(hw, TXGBE_VFPLCFG);
 		mrqc &= ~TXGBE_VFPLCFG_RSSMASK;
-		if (rss_hf & ETH_RSS_IPV4)
+		if (rss_hf & RTE_ETH_RSS_IPV4)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV4TCP;
-		if (rss_hf & ETH_RSS_IPV6 ||
-		    rss_hf & ETH_RSS_IPV6_EX)
+		if (rss_hf & RTE_ETH_RSS_IPV6 ||
+		    rss_hf & RTE_ETH_RSS_IPV6_EX)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV6;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
-		    rss_hf & ETH_RSS_IPV6_TCP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV6TCP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV4UDP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
-		    rss_hf & ETH_RSS_IPV6_UDP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV6UDP;
 
 		if (rss_hf)
@@ -2930,20 +2930,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 	} else {
 		mrqc = rd32(hw, TXGBE_RACTL);
 		mrqc &= ~TXGBE_RACTL_RSSMASK;
-		if (rss_hf & ETH_RSS_IPV4)
+		if (rss_hf & RTE_ETH_RSS_IPV4)
 			mrqc |= TXGBE_RACTL_RSSIPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			mrqc |= TXGBE_RACTL_RSSIPV4TCP;
-		if (rss_hf & ETH_RSS_IPV6 ||
-		    rss_hf & ETH_RSS_IPV6_EX)
+		if (rss_hf & RTE_ETH_RSS_IPV6 ||
+		    rss_hf & RTE_ETH_RSS_IPV6_EX)
 			mrqc |= TXGBE_RACTL_RSSIPV6;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
-		    rss_hf & ETH_RSS_IPV6_TCP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 			mrqc |= TXGBE_RACTL_RSSIPV6TCP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 			mrqc |= TXGBE_RACTL_RSSIPV4UDP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
-		    rss_hf & ETH_RSS_IPV6_UDP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 			mrqc |= TXGBE_RACTL_RSSIPV6UDP;
 
 		if (rss_hf)
@@ -2984,39 +2984,39 @@ txgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	if (hw->mac.type == txgbe_mac_raptor_vf) {
 		mrqc = rd32(hw, TXGBE_VFPLCFG);
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV4)
-			rss_hf |= ETH_RSS_IPV4;
+			rss_hf |= RTE_ETH_RSS_IPV4;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV4TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV6)
-			rss_hf |= ETH_RSS_IPV6 |
-				  ETH_RSS_IPV6_EX;
+			rss_hf |= RTE_ETH_RSS_IPV6 |
+				  RTE_ETH_RSS_IPV6_EX;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV6TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
-				  ETH_RSS_IPV6_TCP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				  RTE_ETH_RSS_IPV6_TCP_EX;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV4UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV6UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
-				  ETH_RSS_IPV6_UDP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				  RTE_ETH_RSS_IPV6_UDP_EX;
 		if (!(mrqc & TXGBE_VFPLCFG_RSSENA))
 			rss_hf = 0;
 	} else {
 		mrqc = rd32(hw, TXGBE_RACTL);
 		if (mrqc & TXGBE_RACTL_RSSIPV4)
-			rss_hf |= ETH_RSS_IPV4;
+			rss_hf |= RTE_ETH_RSS_IPV4;
 		if (mrqc & TXGBE_RACTL_RSSIPV4TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		if (mrqc & TXGBE_RACTL_RSSIPV6)
-			rss_hf |= ETH_RSS_IPV6 |
-				  ETH_RSS_IPV6_EX;
+			rss_hf |= RTE_ETH_RSS_IPV6 |
+				  RTE_ETH_RSS_IPV6_EX;
 		if (mrqc & TXGBE_RACTL_RSSIPV6TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
-				  ETH_RSS_IPV6_TCP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				  RTE_ETH_RSS_IPV6_TCP_EX;
 		if (mrqc & TXGBE_RACTL_RSSIPV4UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 		if (mrqc & TXGBE_RACTL_RSSIPV6UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
-				  ETH_RSS_IPV6_UDP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				  RTE_ETH_RSS_IPV6_UDP_EX;
 		if (!(mrqc & TXGBE_RACTL_RSSENA))
 			rss_hf = 0;
 	}
@@ -3046,7 +3046,7 @@ txgbe_rss_configure(struct rte_eth_dev *dev)
 	 */
 	if (adapter->rss_reta_updated == 0) {
 		reta = 0;
-		for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+		for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
 			if (j == dev->data->nb_rx_queues)
 				j = 0;
 			reta = (reta >> 8) | LS32(j, 24, 0xFF);
@@ -3083,12 +3083,12 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
 	num_pools = cfg->nb_queue_pools;
 	/* Check we have a valid number of pools */
-	if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+	if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
 		txgbe_rss_disable(dev);
 		return;
 	}
 	/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
-	nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+	nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
 
 	/*
 	 * split rx buffer up into sections, each for 1 traffic class
@@ -3103,7 +3103,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
 	}
 	/* zero alloc all unused TCs */
-	for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		uint32_t rxpbsize = rd32(hw, TXGBE_PBRXSIZE(i));
 
 		rxpbsize &= (~(0x3FF << 10));
@@ -3111,7 +3111,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
 	}
 
-	if (num_pools == ETH_16_POOLS) {
+	if (num_pools == RTE_ETH_16_POOLS) {
 		mrqc = TXGBE_PORTCTL_NUMTC_8;
 		mrqc |= TXGBE_PORTCTL_NUMVT_16;
 	} else {
@@ -3130,7 +3130,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	wr32(hw, TXGBE_POOLCTL, vt_ctl);
 
 	queue_mapping = 0;
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 		/*
 		 * mapping is done with 3 bits per priority,
 		 * so shift by i*3 each time
@@ -3151,7 +3151,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		wr32(hw, TXGBE_VLANTBL(i), 0xFFFFFFFF);
 
 	wr32(hw, TXGBE_POOLRXENA(0),
-			num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+			num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	wr32(hw, TXGBE_ETHADDRIDX, 0);
 	wr32(hw, TXGBE_ETHADDRASSL, 0xFFFFFFFF);
@@ -3221,7 +3221,7 @@ txgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
 	/*PF VF Transmit Enable*/
 	wr32(hw, TXGBE_POOLTXENA(0),
 		vmdq_tx_conf->nb_queue_pools ==
-				ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+				RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	/*Configure general DCB TX parameters*/
 	txgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3237,12 +3237,12 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
-	if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3252,7 +3252,7 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3270,12 +3270,12 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
-	if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3285,7 +3285,7 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3312,7 +3312,7 @@ txgbe_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3339,7 +3339,7 @@ txgbe_dcb_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3475,7 +3475,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(dev);
 
 	switch (dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_VMDQ_DCB:
+	case RTE_ETH_MQ_RX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/*
@@ -3486,8 +3486,8 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		/*Configure general VMDQ and DCB RX parameters*/
 		txgbe_vmdq_dcb_configure(dev);
 		break;
-	case ETH_MQ_RX_DCB:
-	case ETH_MQ_RX_DCB_RSS:
+	case RTE_ETH_MQ_RX_DCB:
+	case RTE_ETH_MQ_RX_DCB_RSS:
 		dcb_config->vt_mode = false;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -3500,7 +3500,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		break;
 	}
 	switch (dev->data->dev_conf.txmode.mq_mode) {
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/* get DCB and VT TX configuration parameters
@@ -3511,7 +3511,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		txgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
 		break;
 
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_DCB:
 		dcb_config->vt_mode = false;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/* get DCB TX configuration parameters from rte_eth_conf */
@@ -3527,15 +3527,15 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	nb_tcs = dcb_config->num_tcs.pfc_tcs;
 	/* Unpack map */
 	txgbe_dcb_unpack_map_cee(dcb_config, TXGBE_DCB_RX_CONFIG, map);
-	if (nb_tcs == ETH_4_TCS) {
+	if (nb_tcs == RTE_ETH_4_TCS) {
 		/* Avoid un-configured priority mapping to TC0 */
 		uint8_t j = 4;
 		uint8_t mask = 0xFF;
 
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
 			mask = (uint8_t)(mask & (~(1 << map[i])));
 		for (i = 0; mask && (i < TXGBE_DCB_TC_MAX); i++) {
-			if ((mask & 0x1) && j < ETH_DCB_NUM_USER_PRIORITIES)
+			if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
 				map[j++] = i;
 			mask >>= 1;
 		}
@@ -3576,7 +3576,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
 
 		/* zero alloc all unused TCs */
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			wr32(hw, TXGBE_PBRXSIZE(i), 0);
 	}
 	if (config_dcb_tx) {
@@ -3592,7 +3592,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			wr32(hw, TXGBE_PBTXDMATH(i), txpbthresh);
 		}
 		/* Clear unused TCs, if any, to zero buffer size*/
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			wr32(hw, TXGBE_PBTXSIZE(i), 0);
 			wr32(hw, TXGBE_PBTXDMATH(i), 0);
 		}
@@ -3634,7 +3634,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	txgbe_dcb_config_tc_stats_raptor(hw, dcb_config);
 
 	/* Check if the PFC is supported */
-	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
 		for (i = 0; i < nb_tcs; i++) {
 			/* If the TC count is 8,
@@ -3648,7 +3648,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			tc->pfc = txgbe_dcb_pfc_enabled;
 		}
 		txgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
-		if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+		if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
 			pfc_en &= 0x0F;
 		ret = txgbe_dcb_config_pfc(hw, pfc_en, map);
 	}
@@ -3719,12 +3719,12 @@ void txgbe_configure_dcb(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	/* check support mq_mode for DCB */
-	if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB &&
-	    dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB &&
-	    dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS)
+	if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
 		return;
 
-	if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+	if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
 		return;
 
 	/** Configure DCB hardware **/
@@ -3780,7 +3780,7 @@ txgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 
 	/* pool enabling for receive - 64 */
 	wr32(hw, TXGBE_POOLRXENA(0), UINT32_MAX);
-	if (num_pools == ETH_64_POOLS)
+	if (num_pools == RTE_ETH_64_POOLS)
 		wr32(hw, TXGBE_POOLRXENA(1), UINT32_MAX);
 
 	/*
@@ -3904,11 +3904,11 @@ txgbe_config_vf_rss(struct rte_eth_dev *dev)
 	mrqc = rd32(hw, TXGBE_PORTCTL);
 	mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_64;
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_32;
 		break;
 
@@ -3931,15 +3931,15 @@ txgbe_config_vf_default(struct rte_eth_dev *dev)
 	mrqc = rd32(hw, TXGBE_PORTCTL);
 	mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_64;
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_32;
 		break;
 
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_16;
 		break;
 	default:
@@ -3962,21 +3962,21 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * any DCB/RSS w/o VMDq multi-queue setting
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_DCB_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			txgbe_rss_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
 			txgbe_vmdq_dcb_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
 			txgbe_vmdq_rx_hw_configure(dev);
 			break;
 
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_NONE:
 		default:
 			/* if mq_mode is none, disable rss mode.*/
 			txgbe_rss_disable(dev);
@@ -3987,18 +3987,18 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * Support RSS together with SRIOV.
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			txgbe_config_vf_rss(dev);
 			break;
-		case ETH_MQ_RX_VMDQ_DCB:
-		case ETH_MQ_RX_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_DCB:
 		/* In SRIOV, the configuration is the same as VMDq case */
 			txgbe_vmdq_dcb_configure(dev);
 			break;
 		/* DCB/RSS together with SRIOV is not supported */
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
-		case ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
 			PMD_INIT_LOG(ERR,
 				"Could not support DCB/RSS with VMDq & SRIOV");
 			return -1;
@@ -4028,7 +4028,7 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV inactive scheme
 		 * any DCB w/o VMDq multi-queue setting
 		 */
-		if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+		if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
 			txgbe_vmdq_tx_hw_configure(hw);
 		else
 			wr32m(hw, TXGBE_PORTCTL, TXGBE_PORTCTL_NUMVT_MASK, 0);
@@ -4038,13 +4038,13 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV active scheme
 		 * FIXME if support DCB together with VMDq & SRIOV
 		 */
-		case ETH_64_POOLS:
+		case RTE_ETH_64_POOLS:
 			mtqc = TXGBE_PORTCTL_NUMVT_64;
 			break;
-		case ETH_32_POOLS:
+		case RTE_ETH_32_POOLS:
 			mtqc = TXGBE_PORTCTL_NUMVT_32;
 			break;
-		case ETH_16_POOLS:
+		case RTE_ETH_16_POOLS:
 			mtqc = TXGBE_PORTCTL_NUMVT_16;
 			break;
 		default:
@@ -4107,10 +4107,10 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* Sanity check */
 	dev->dev_ops->dev_infos_get(dev, &dev_info);
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		rsc_capable = true;
 
-	if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
 				   "support it");
 		return -EINVAL;
@@ -4118,22 +4118,22 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* RSC global configuration */
 
-	if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
-	     (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+	     (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		PMD_INIT_LOG(CRIT, "LRO can't be enabled when HW CRC "
 				    "is disabled");
 		return -EINVAL;
 	}
 
 	rfctl = rd32(hw, TXGBE_PSRCTL);
-	if (rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if (rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		rfctl &= ~TXGBE_PSRCTL_RSCDIA;
 	else
 		rfctl |= TXGBE_PSRCTL_RSCDIA;
 	wr32(hw, TXGBE_PSRCTL, rfctl);
 
 	/* If LRO hasn't been requested - we are done here. */
-	if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		return 0;
 
 	/* Set PSRCTL.RSCACK bit */
@@ -4273,7 +4273,7 @@ txgbe_set_rx_function(struct rte_eth_dev *dev)
 		struct txgbe_rx_queue *rxq = dev->data->rx_queues[i];
 
 		rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_SECURITY);
+				RTE_ETH_RX_OFFLOAD_SECURITY);
 	}
 #endif
 }
@@ -4316,7 +4316,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Configure CRC stripping, if any.
 	 */
 	hlreg0 = rd32(hw, TXGBE_SECRXCTL);
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		hlreg0 &= ~TXGBE_SECRXCTL_CRCSTRIP;
 	else
 		hlreg0 |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4344,7 +4344,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first .
 	 */
-	rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -4354,7 +4354,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 * call to configure.
 		 */
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -4391,11 +4391,11 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 		if (dev->data->mtu + TXGBE_ETH_OVERHEAD +
 				2 * TXGBE_VLAN_TAG_SIZE > buf_size)
 			dev->data->scattered_rx = 1;
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		dev->data->scattered_rx = 1;
 
 	/*
@@ -4410,7 +4410,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 */
 	rxcsum = rd32(hw, TXGBE_PSRCTL);
 	rxcsum |= TXGBE_PSRCTL_PCSD;
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= TXGBE_PSRCTL_L4CSUM;
 	else
 		rxcsum &= ~TXGBE_PSRCTL_L4CSUM;
@@ -4419,7 +4419,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 
 	if (hw->mac.type == txgbe_mac_raptor) {
 		rdrxctl = rd32(hw, TXGBE_SECRXCTL);
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rdrxctl &= ~TXGBE_SECRXCTL_CRCSTRIP;
 		else
 			rdrxctl |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4542,8 +4542,8 @@ txgbe_dev_rxtx_start(struct rte_eth_dev *dev)
 		txgbe_setup_loopback_link_raptor(hw);
 
 #ifdef RTE_LIB_SECURITY
-	if ((dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) ||
-	    (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_SECURITY)) {
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) ||
+	    (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY)) {
 		ret = txgbe_crypto_enable_ipsec(dev);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR,
@@ -4851,7 +4851,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first .
 	 */
-	rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	/* Set PSR type for VF RSS according to max Rx queue */
 	psrtype = TXGBE_VFPLCFG_PSRL4HDR |
@@ -4903,7 +4903,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
 		 */
 		wr32(hw, TXGBE_RXCFG(i), srrctl);
 
-		if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
 		    /* It adds dual VLAN length for supporting dual VLAN */
 		    (dev->data->mtu + TXGBE_ETH_OVERHEAD +
 				2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
@@ -4912,8 +4912,8 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
 			dev->data->scattered_rx = 1;
 		}
 
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	/*
@@ -5084,7 +5084,7 @@ txgbe_config_rss_filter(struct rte_eth_dev *dev,
 	 * little-endian order.
 	 */
 	reta = 0;
-	for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+	for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
 		if (j == conf->conf.queue_num)
 			j = 0;
 		reta = (reta >> 8) | LS32(conf->conf.queue[j], 24, 0xFF);
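
Applications reach the RSS code above through
rte_eth_dev_rss_hash_update() with the renamed RTE_ETH_RSS_* bits. A
minimal sketch, not part of this patch (a NULL key keeps the current
one; the port id is whatever the application owns):

#include <rte_ethdev.h>

/* Enable IP/TCP/UDP hashing on an already configured port. */
static int
set_ip_tcp_udp_rss(uint16_t port_id)
{
	struct rte_eth_rss_conf rss_conf = {
		.rss_key = NULL,	/* keep the key programmed at configure time */
		.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP,
	};

	return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
}
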
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index b96f58a3f848..27d4c842c0e7 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -309,7 +309,7 @@ struct txgbe_rx_queue {
 	uint8_t             rx_deferred_start; /**< not in global dev start. */
 	/** flags to set in mbuf when a vlan is detected. */
 	uint64_t            vlan_flags;
-	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
 	/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
 	struct rte_mbuf fake_mbuf;
 	/** hold packets to return to application */
@@ -392,7 +392,7 @@ struct txgbe_tx_queue {
 	uint8_t             pthresh;       /**< Prefetch threshold register. */
 	uint8_t             hthresh;       /**< Host threshold register. */
 	uint8_t             wthresh;       /**< Write-back threshold reg. */
-	uint64_t            offloads; /* Tx offload flags of DEV_TX_OFFLOAD_* */
+	uint64_t            offloads; /* Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 	uint32_t            ctx_curr;      /**< Hardware context states. */
 	/** Hardware context0 history. */
 	struct txgbe_ctx_info ctx_cache[TXGBE_CTX_NUM];
diff --git a/drivers/net/txgbe/txgbe_tm.c b/drivers/net/txgbe/txgbe_tm.c
index 3abe3959eb1a..3171be73d05d 100644
--- a/drivers/net/txgbe/txgbe_tm.c
+++ b/drivers/net/txgbe/txgbe_tm.c
@@ -118,14 +118,14 @@ txgbe_tc_nb_get(struct rte_eth_dev *dev)
 	uint8_t nb_tcs = 0;
 
 	eth_conf = &dev->data->dev_conf;
-	if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+	if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 		nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
-	} else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	} else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
-		    ETH_32_POOLS)
-			nb_tcs = ETH_4_TCS;
+		    RTE_ETH_32_POOLS)
+			nb_tcs = RTE_ETH_4_TCS;
 		else
-			nb_tcs = ETH_8_TCS;
+			nb_tcs = RTE_ETH_8_TCS;
 	} else {
 		nb_tcs = 1;
 	}
@@ -364,10 +364,10 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 	if (vf_num) {
 		/* no DCB */
 		if (nb_tcs == 1) {
-			if (vf_num >= ETH_32_POOLS) {
+			if (vf_num >= RTE_ETH_32_POOLS) {
 				*nb = 2;
 				*base = vf_num * 2;
-			} else if (vf_num >= ETH_16_POOLS) {
+			} else if (vf_num >= RTE_ETH_16_POOLS) {
 				*nb = 4;
 				*base = vf_num * 4;
 			} else {
@@ -381,7 +381,7 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 		}
 	} else {
 		/* VT off */
-		if (nb_tcs == ETH_8_TCS) {
+		if (nb_tcs == RTE_ETH_8_TCS) {
 			switch (tc_node_no) {
 			case 0:
 				*base = 0;
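
Same logic as txgbe_tc_nb_get() above, written standalone with the
renamed enums (a sketch for reference, not part of this patch):

#include <rte_ethdev.h>

/* Derive the traffic-class count from the Tx multi-queue mode. */
static uint8_t
tc_nb_from_conf(const struct rte_eth_conf *conf)
{
	if (conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB)
		return conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
	if (conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB)
		return conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
				RTE_ETH_32_POOLS ?
				RTE_ETH_4_TCS : RTE_ETH_8_TCS;
	return 1;
}
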
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 86498365e149..17b6a1a1ceec 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -125,8 +125,8 @@ static pthread_mutex_t internal_list_lock = PTHREAD_MUTEX_INITIALIZER;
 
 static struct rte_eth_link pmd_link = {
 		.link_speed = 10000,
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_status = ETH_LINK_DOWN
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_status = RTE_ETH_LINK_DOWN
 };
 
 struct rte_vhost_vring_state {
@@ -817,7 +817,7 @@ new_device(int vid)
 
 	rte_vhost_get_mtu(vid, &eth_dev->data->mtu);
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	rte_atomic32_set(&internal->dev_attached, 1);
 	update_queuing_status(eth_dev);
@@ -852,7 +852,7 @@ destroy_device(int vid)
 	rte_atomic32_set(&internal->dev_attached, 0);
 	update_queuing_status(eth_dev);
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	if (eth_dev->data->rx_queues && eth_dev->data->tx_queues) {
 		for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1118,7 +1118,7 @@ eth_dev_configure(struct rte_eth_dev *dev)
 	if (vhost_driver_setup(dev) < 0)
 		return -1;
 
-	internal->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	internal->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 	return 0;
 }
@@ -1267,9 +1267,9 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_tx_queues = internal->max_queues;
 	dev_info->min_rx_bufsize = 0;
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-				DEV_TX_OFFLOAD_VLAN_INSERT;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	return 0;
 }
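
On the application side the renamed link constants are consumed as
below; a minimal sketch using rte_eth_link_get_nowait(), not part of
this patch:

#include <stdio.h>
#include <rte_ethdev.h>

/* Print the current link state without blocking for autonegotiation. */
static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;
	printf("port %u: link %s, %u Mbps, %s\n", port_id,
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
	       link.link_speed,
	       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
	       "full-duplex" : "half-duplex");
}
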
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index ddf0e26ab4db..94120b349023 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -712,7 +712,7 @@ int
 virtio_dev_close(struct rte_eth_dev *dev)
 {
 	struct virtio_hw *hw = dev->data->dev_private;
-	struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+	struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
 
 	PMD_INIT_LOG(DEBUG, "virtio_dev_close");
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1774,7 +1774,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 		     hw->mac_addr[0], hw->mac_addr[1], hw->mac_addr[2],
 		     hw->mac_addr[3], hw->mac_addr[4], hw->mac_addr[5]);
 
-	if (hw->speed == ETH_SPEED_NUM_UNKNOWN) {
+	if (hw->speed == RTE_ETH_SPEED_NUM_UNKNOWN) {
 		if (virtio_with_feature(hw, VIRTIO_NET_F_SPEED_DUPLEX)) {
 			config = &local_config;
 			virtio_read_dev_config(hw,
@@ -1788,7 +1788,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 		}
 	}
 	if (hw->duplex == DUPLEX_UNKNOWN)
-		hw->duplex = ETH_LINK_FULL_DUPLEX;
+		hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	PMD_INIT_LOG(DEBUG, "link speed = %d, duplex = %d",
 		hw->speed, hw->duplex);
 	if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ)) {
@@ -1887,7 +1887,7 @@ int
 eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
 {
 	struct virtio_hw *hw = eth_dev->data->dev_private;
-	uint32_t speed = ETH_SPEED_NUM_UNKNOWN;
+	uint32_t speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	int vectorized = 0;
 	int ret;
 
@@ -1958,22 +1958,22 @@ static uint32_t
 virtio_dev_speed_capa_get(uint32_t speed)
 {
 	switch (speed) {
-	case ETH_SPEED_NUM_10G:
-		return ETH_LINK_SPEED_10G;
-	case ETH_SPEED_NUM_20G:
-		return ETH_LINK_SPEED_20G;
-	case ETH_SPEED_NUM_25G:
-		return ETH_LINK_SPEED_25G;
-	case ETH_SPEED_NUM_40G:
-		return ETH_LINK_SPEED_40G;
-	case ETH_SPEED_NUM_50G:
-		return ETH_LINK_SPEED_50G;
-	case ETH_SPEED_NUM_56G:
-		return ETH_LINK_SPEED_56G;
-	case ETH_SPEED_NUM_100G:
-		return ETH_LINK_SPEED_100G;
-	case ETH_SPEED_NUM_200G:
-		return ETH_LINK_SPEED_200G;
+	case RTE_ETH_SPEED_NUM_10G:
+		return RTE_ETH_LINK_SPEED_10G;
+	case RTE_ETH_SPEED_NUM_20G:
+		return RTE_ETH_LINK_SPEED_20G;
+	case RTE_ETH_SPEED_NUM_25G:
+		return RTE_ETH_LINK_SPEED_25G;
+	case RTE_ETH_SPEED_NUM_40G:
+		return RTE_ETH_LINK_SPEED_40G;
+	case RTE_ETH_SPEED_NUM_50G:
+		return RTE_ETH_LINK_SPEED_50G;
+	case RTE_ETH_SPEED_NUM_56G:
+		return RTE_ETH_LINK_SPEED_56G;
+	case RTE_ETH_SPEED_NUM_100G:
+		return RTE_ETH_LINK_SPEED_100G;
+	case RTE_ETH_SPEED_NUM_200G:
+		return RTE_ETH_LINK_SPEED_200G;
 	default:
 		return 0;
 	}
@@ -2089,14 +2089,14 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "configure");
 	req_features = VIRTIO_PMD_DEFAULT_GUEST_FEATURES;
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) {
 		PMD_DRV_LOG(ERR,
 			"Unsupported Rx multi queue mode %d",
 			rxmode->mq_mode);
 		return -EINVAL;
 	}
 
-	if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		PMD_DRV_LOG(ERR,
 			"Unsupported Tx multi queue mode %d",
 			txmode->mq_mode);
@@ -2114,20 +2114,20 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 
 	hw->max_rx_pkt_len = ether_hdr_len + rxmode->mtu;
 
-	if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-			   DEV_RX_OFFLOAD_TCP_CKSUM))
+	if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
 		req_features |= (1ULL << VIRTIO_NET_F_GUEST_CSUM);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		req_features |=
 			(1ULL << VIRTIO_NET_F_GUEST_TSO4) |
 			(1ULL << VIRTIO_NET_F_GUEST_TSO6);
 
-	if (tx_offloads & (DEV_TX_OFFLOAD_UDP_CKSUM |
-			   DEV_TX_OFFLOAD_TCP_CKSUM))
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM))
 		req_features |= (1ULL << VIRTIO_NET_F_CSUM);
 
-	if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		req_features |=
 			(1ULL << VIRTIO_NET_F_HOST_TSO4) |
 			(1ULL << VIRTIO_NET_F_HOST_TSO6);
@@ -2139,15 +2139,15 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 			return ret;
 	}
 
-	if ((rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-			    DEV_RX_OFFLOAD_TCP_CKSUM)) &&
+	if ((rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			    RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) &&
 		!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_CSUM)) {
 		PMD_DRV_LOG(ERR,
 			"rx checksum not available on this host");
 		return -ENOTSUP;
 	}
 
-	if ((rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) &&
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
 		(!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO4) ||
 		 !virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO6))) {
 		PMD_DRV_LOG(ERR,
@@ -2159,12 +2159,12 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 	if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
 		virtio_dev_cq_start(dev);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 		hw->vlan_strip = 1;
 
-	hw->rx_ol_scatter = (rx_offloads & DEV_RX_OFFLOAD_SCATTER);
+	hw->rx_ol_scatter = (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
 
-	if ((rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 			!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
 		PMD_DRV_LOG(ERR,
 			    "vlan filtering not available on this host");
@@ -2217,7 +2217,7 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 				hw->use_vec_rx = 0;
 			}
 
-			if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+			if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 				PMD_DRV_LOG(INFO,
 					"disabled packed ring vectorized rx for TCP_LRO enabled");
 				hw->use_vec_rx = 0;
@@ -2244,10 +2244,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 				hw->use_vec_rx = 0;
 			}
 
-			if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-					   DEV_RX_OFFLOAD_TCP_CKSUM |
-					   DEV_RX_OFFLOAD_TCP_LRO |
-					   DEV_RX_OFFLOAD_VLAN_STRIP)) {
+			if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+					   RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+					   RTE_ETH_RX_OFFLOAD_TCP_LRO |
+					   RTE_ETH_RX_OFFLOAD_VLAN_STRIP)) {
 				PMD_DRV_LOG(INFO,
 					"disabled split ring vectorized rx for offloading enabled");
 				hw->use_vec_rx = 0;
@@ -2440,7 +2440,7 @@ virtio_dev_stop(struct rte_eth_dev *dev)
 {
 	struct virtio_hw *hw = dev->data->dev_private;
 	struct rte_eth_link link;
-	struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+	struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
 
 	PMD_INIT_LOG(DEBUG, "stop");
 	dev->data->dev_started = 0;
@@ -2481,28 +2481,28 @@ virtio_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complet
 	memset(&link, 0, sizeof(link));
 	link.link_duplex = hw->duplex;
 	link.link_speed  = hw->speed;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	if (!hw->started) {
-		link.link_status = ETH_LINK_DOWN;
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	} else if (virtio_with_feature(hw, VIRTIO_NET_F_STATUS)) {
 		PMD_INIT_LOG(DEBUG, "Get link status from hw");
 		virtio_read_dev_config(hw,
 				offsetof(struct virtio_net_config, status),
 				&status, sizeof(status));
 		if ((status & VIRTIO_NET_S_LINK_UP) == 0) {
-			link.link_status = ETH_LINK_DOWN;
-			link.link_speed = ETH_SPEED_NUM_NONE;
+			link.link_status = RTE_ETH_LINK_DOWN;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 			PMD_INIT_LOG(DEBUG, "Port %d is down",
 				     dev->data->port_id);
 		} else {
-			link.link_status = ETH_LINK_UP;
+			link.link_status = RTE_ETH_LINK_UP;
 			PMD_INIT_LOG(DEBUG, "Port %d is up",
 				     dev->data->port_id);
 		}
 	} else {
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -2515,8 +2515,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct virtio_hw *hw = dev->data->dev_private;
 	uint64_t offloads = rxmode->offloads;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if ((offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if ((offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 				!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
 
 			PMD_DRV_LOG(NOTICE,
@@ -2526,8 +2526,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		}
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK)
-		hw->vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	if (mask & RTE_ETH_VLAN_STRIP_MASK)
+		hw->vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 	return 0;
 }
@@ -2549,32 +2549,32 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = hw->max_mtu;
 
 	host_features = VIRTIO_OPS(hw)->get_features(hw);
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	if (host_features & (1ULL << VIRTIO_NET_F_MRG_RXBUF))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SCATTER;
 	if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
 		dev_info->rx_offload_capa |=
-			DEV_RX_OFFLOAD_TCP_CKSUM |
-			DEV_RX_OFFLOAD_UDP_CKSUM;
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
 	}
 	if (host_features & (1ULL << VIRTIO_NET_F_CTRL_VLAN))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_FILTER;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	tso_mask = (1ULL << VIRTIO_NET_F_GUEST_TSO4) |
 		(1ULL << VIRTIO_NET_F_GUEST_TSO6);
 	if ((host_features & tso_mask) == tso_mask)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_LRO;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-				    DEV_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				    RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	if (host_features & (1ULL << VIRTIO_NET_F_CSUM)) {
 		dev_info->tx_offload_capa |=
-			DEV_TX_OFFLOAD_UDP_CKSUM |
-			DEV_TX_OFFLOAD_TCP_CKSUM;
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 	}
 	tso_mask = (1ULL << VIRTIO_NET_F_HOST_TSO4) |
 		(1ULL << VIRTIO_NET_F_HOST_TSO6);
 	if ((host_features & tso_mask) == tso_mask)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (host_features & (1ULL << VIRTIO_F_RING_PACKED)) {
 		/*
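
Worth noting while touching these: RTE_ETH_SPEED_NUM_* are plain Mb/s
numbers, while RTE_ETH_LINK_SPEED_* are capability bit flags.
rte_eth_speed_bitflag() already provides the mapping that
virtio_dev_speed_capa_get() open-codes; a sketch, assuming full duplex
(not part of this patch):

#include <rte_ethdev.h>

/* Convert a numeric speed (e.g. RTE_ETH_SPEED_NUM_10G) to its
 * RTE_ETH_LINK_SPEED_* capability flag.
 */
static uint32_t
speed_to_capa(uint32_t speed_num)
{
	return rte_eth_speed_bitflag(speed_num, RTE_ETH_LINK_FULL_DUPLEX);
}
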
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index a19895af1f17..26d9edf5319c 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -41,20 +41,20 @@
 #define	VMXNET3_TX_MAX_SEG	UINT8_MAX
 
 #define VMXNET3_TX_OFFLOAD_CAP		\
-	(DEV_TX_OFFLOAD_VLAN_INSERT |	\
-	 DEV_TX_OFFLOAD_TCP_CKSUM |	\
-	 DEV_TX_OFFLOAD_UDP_CKSUM |	\
-	 DEV_TX_OFFLOAD_TCP_TSO |	\
-	 DEV_TX_OFFLOAD_MULTI_SEGS)
+	(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |	\
+	 RTE_ETH_TX_OFFLOAD_TCP_CKSUM |	\
+	 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |	\
+	 RTE_ETH_TX_OFFLOAD_TCP_TSO |	\
+	 RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define VMXNET3_RX_OFFLOAD_CAP		\
-	(DEV_RX_OFFLOAD_VLAN_STRIP |	\
-	 DEV_RX_OFFLOAD_VLAN_FILTER |   \
-	 DEV_RX_OFFLOAD_SCATTER |	\
-	 DEV_RX_OFFLOAD_UDP_CKSUM |	\
-	 DEV_RX_OFFLOAD_TCP_CKSUM |	\
-	 DEV_RX_OFFLOAD_TCP_LRO |	\
-	 DEV_RX_OFFLOAD_RSS_HASH)
+	(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |	\
+	 RTE_ETH_RX_OFFLOAD_VLAN_FILTER |   \
+	 RTE_ETH_RX_OFFLOAD_SCATTER |	\
+	 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+	 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |	\
+	 RTE_ETH_RX_OFFLOAD_TCP_LRO |	\
+	 RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 int vmxnet3_segs_dynfield_offset = -1;
 
@@ -398,9 +398,9 @@ eth_vmxnet3_dev_init(struct rte_eth_dev *eth_dev)
 
 	/* set the initial link status */
 	memset(&link, 0, sizeof(link));
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_speed = ETH_SPEED_NUM_10G;
-	link.link_autoneg = ETH_LINK_FIXED;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 	rte_eth_linkstatus_set(eth_dev, &link);
 
 	return 0;
@@ -486,8 +486,8 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (dev->data->nb_tx_queues > VMXNET3_MAX_TX_QUEUES ||
 	    dev->data->nb_rx_queues > VMXNET3_MAX_RX_QUEUES) {
@@ -547,7 +547,7 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
 	hw->queueDescPA = mz->iova;
 	hw->queue_desc_len = (uint16_t)size;
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		/* Allocate memory structure for UPT1_RSSConf and configure */
 		mz = gpa_zone_reserve(dev, sizeof(struct VMXNET3_RSSConf),
 				      "rss_conf", rte_socket_id(),
@@ -843,15 +843,15 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
 	devRead->rxFilterConf.rxMode = 0;
 
 	/* Setting up feature flags */
-	if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		devRead->misc.uptFeatures |= VMXNET3_F_RXCSUM;
 
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		devRead->misc.uptFeatures |= VMXNET3_F_LRO;
 		devRead->misc.maxNumRxSG = 0;
 	}
 
-	if (port_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (port_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		ret = vmxnet3_rss_configure(dev);
 		if (ret != VMXNET3_SUCCESS)
 			return ret;
@@ -863,7 +863,7 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
 	}
 
 	ret = vmxnet3_dev_vlan_offload_set(dev,
-			ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+			RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
 	if (ret)
 		return ret;
 
@@ -930,7 +930,7 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
 	}
 
 	if (VMXNET3_VERSION_GE_4(hw) &&
-	    dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	    dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		/* Check for additional RSS  */
 		ret = vmxnet3_v4_rss_configure(dev);
 		if (ret != VMXNET3_SUCCESS) {
@@ -1039,9 +1039,9 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
 
 	/* Clear recorded link status */
 	memset(&link, 0, sizeof(link));
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_speed = ETH_SPEED_NUM_10G;
-	link.link_autoneg = ETH_LINK_FIXED;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 	rte_eth_linkstatus_set(dev, &link);
 
 	hw->adapter_stopped = 1;
@@ -1365,7 +1365,7 @@ vmxnet3_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
 	dev_info->min_mtu = VMXNET3_MIN_MTU;
 	dev_info->max_mtu = VMXNET3_MAX_MTU;
-	dev_info->speed_capa = ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
 	dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
 
 	dev_info->flow_type_rss_offloads = VMXNET3_RSS_OFFLOAD_ALL;
@@ -1447,10 +1447,10 @@ __vmxnet3_dev_link_update(struct rte_eth_dev *dev,
 	ret = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
 
 	if (ret & 0x1)
-		link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_speed = ETH_SPEED_NUM_10G;
-	link.link_autoneg = ETH_LINK_FIXED;
+		link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 
 	return rte_eth_linkstatus_set(dev, &link);
 }
@@ -1503,7 +1503,7 @@ vmxnet3_dev_promiscuous_disable(struct rte_eth_dev *dev)
 	uint32_t *vf_table = hw->shared->devRead.rxFilterConf.vfTable;
 	uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 		memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
 	else
 		memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
@@ -1573,8 +1573,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	uint32_t *vf_table = devRead->rxFilterConf.vfTable;
 	uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			devRead->misc.uptFeatures |= UPT1_F_RXVLAN;
 		else
 			devRead->misc.uptFeatures &= ~UPT1_F_RXVLAN;
@@ -1583,8 +1583,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 				       VMXNET3_CMD_UPDATE_FEATURE);
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
 		else
 			memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
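
The renamed VLAN masks reach vmxnet3_dev_vlan_offload_set() through
rte_eth_dev_set_vlan_offload(); a minimal application-side sketch, not
part of this patch. Bits left clear in the mask disable the
corresponding offload:

#include <rte_ethdev.h>

/* Enable VLAN stripping and filtering at runtime. */
static int
enable_vlan_strip_and_filter(uint16_t port_id)
{
	int mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;

	return rte_eth_dev_set_vlan_offload(port_id, mask);
}
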
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.h b/drivers/net/vmxnet3/vmxnet3_ethdev.h
index 8950175460f0..ef858ac9512f 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.h
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.h
@@ -32,18 +32,18 @@
 				VMXNET3_MAX_RX_QUEUES + 1)
 
 #define VMXNET3_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 
 #define VMXNET3_V4_RSS_MASK ( \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define VMXNET3_MANDATORY_V4_RSS ( \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP)
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 
 /* RSS configuration structure - shared with device through GPA */
 typedef struct VMXNET3_RSSConf {
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index b01c4c01f9c9..870100fa4f11 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -1326,13 +1326,13 @@ vmxnet3_v4_rss_configure(struct rte_eth_dev *dev)
 	rss_hf = port_rss_conf->rss_hf &
 		(VMXNET3_V4_RSS_MASK | VMXNET3_RSS_OFFLOAD_ALL);
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP6;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP6;
 
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
@@ -1389,13 +1389,13 @@ vmxnet3_rss_configure(struct rte_eth_dev *dev)
 	/* loading hashType */
 	dev_rss_conf->hashType = 0;
 	rss_hf = port_rss_conf->rss_hf & VMXNET3_RSS_OFFLOAD_ALL;
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV4;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV6;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV6;
 
 	return VMXNET3_SUCCESS;
diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
index a26076b312e5..ecafc5e4f1a9 100644
--- a/examples/bbdev_app/main.c
+++ b/examples/bbdev_app/main.c
@@ -70,11 +70,11 @@ mbuf_input(struct rte_mbuf *mbuf)
 
 static const struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -327,7 +327,7 @@ check_port_link_status(uint16_t port_id)
 
 		if (link_get_err >= 0 && link.link_status) {
 			const char *dp = (link.link_duplex ==
-				ETH_LINK_FULL_DUPLEX) ?
+				RTE_ETH_LINK_FULL_DUPLEX) ?
 				"full-duplex" : "half-duplex";
 			printf("\nPort %u Link Up - speed %s - %s\n",
 				port_id,
diff --git a/examples/bond/main.c b/examples/bond/main.c
index fd8fd767c811..1087b0dad125 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -114,17 +114,17 @@ static struct rte_mempool *mbuf_pool;
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -148,9 +148,9 @@ slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
 			"Error during getting device (port %u) info: %s\n",
 			portid, strerror(-retval));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
@@ -240,9 +240,9 @@ bond_port_init(struct rte_mempool *mbuf_pool)
 			"Error during getting device (port %u) info: %s\n",
 			BOND_PORT, strerror(-retval));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	retval = rte_eth_dev_configure(BOND_PORT, 1, 1, &local_port_conf);
 	if (retval != 0)
 		rte_exit(EXIT_FAILURE, "port %u: configuration failed (res=%d)\n",
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index 8c4a8feec0c2..c681e237ea46 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -80,15 +80,15 @@ struct app_stats prev_app_stats;
 
 static const struct rte_eth_conf port_conf_default = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
-			.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
-				ETH_RSS_TCP | ETH_RSS_SCTP,
+			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+				RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
 		}
 	},
 };
@@ -126,9 +126,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
diff --git a/examples/ethtool/ethtool-app/main.c b/examples/ethtool/ethtool-app/main.c
index 1bc675962bf3..cdd9e9b60bd8 100644
--- a/examples/ethtool/ethtool-app/main.c
+++ b/examples/ethtool/ethtool-app/main.c
@@ -98,7 +98,7 @@ static void setup_ports(struct app_config *app_cfg, int cnt_ports)
 	int ret;
 
 	memset(&cfg_port, 0, sizeof(cfg_port));
-	cfg_port.txmode.mq_mode = ETH_MQ_TX_NONE;
+	cfg_port.txmode.mq_mode = RTE_ETH_MQ_TX_NONE;
 
 	for (idx_port = 0; idx_port < cnt_ports; idx_port++) {
 		struct app_port *ptr_port = &app_cfg->ports[idx_port];
diff --git a/examples/ethtool/lib/rte_ethtool.c b/examples/ethtool/lib/rte_ethtool.c
index 413251630709..e7cdf8d5775b 100644
--- a/examples/ethtool/lib/rte_ethtool.c
+++ b/examples/ethtool/lib/rte_ethtool.c
@@ -233,13 +233,13 @@ rte_ethtool_get_pauseparam(uint16_t port_id,
 	pause_param->tx_pause = 0;
 	pause_param->rx_pause = 0;
 	switch (fc_conf.mode) {
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		pause_param->rx_pause = 1;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		pause_param->tx_pause = 1;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		pause_param->rx_pause = 1;
 		pause_param->tx_pause = 1;
 	default:
@@ -277,14 +277,14 @@ rte_ethtool_set_pauseparam(uint16_t port_id,
 
 	if (pause_param->tx_pause) {
 		if (pause_param->rx_pause)
-			fc_conf.mode = RTE_FC_FULL;
+			fc_conf.mode = RTE_ETH_FC_FULL;
 		else
-			fc_conf.mode = RTE_FC_TX_PAUSE;
+			fc_conf.mode = RTE_ETH_FC_TX_PAUSE;
 	} else {
 		if (pause_param->rx_pause)
-			fc_conf.mode = RTE_FC_RX_PAUSE;
+			fc_conf.mode = RTE_ETH_FC_RX_PAUSE;
 		else
-			fc_conf.mode = RTE_FC_NONE;
+			fc_conf.mode = RTE_ETH_FC_NONE;
 	}
 
 	status = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
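The get/set pause-parameter hunks are inverse mappings between the two boolean knobs and the four flow-control modes. Spelled out with the renamed constants (hypothetical helper, shown for reference only):

#include <rte_ethdev.h>

static enum rte_eth_fc_mode
pause_flags_to_fc_mode(int rx_pause, int tx_pause)
{
	if (rx_pause && tx_pause)
		return RTE_ETH_FC_FULL;
	if (tx_pause)
		return RTE_ETH_FC_TX_PAUSE;
	if (rx_pause)
		return RTE_ETH_FC_RX_PAUSE;
	return RTE_ETH_FC_NONE;
}

The resulting mode goes into struct rte_eth_fc_conf and is applied with rte_eth_dev_flow_ctrl_set(), as above.
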
@@ -398,12 +398,12 @@ rte_ethtool_net_set_rx_mode(uint16_t port_id)
 	for (vf = 0; vf < num_vfs; vf++) {
 #ifdef RTE_NET_IXGBE
 		rte_pmd_ixgbe_set_vf_rxmode(port_id, vf,
-			ETH_VMDQ_ACCEPT_UNTAG, 0);
+			RTE_ETH_VMDQ_ACCEPT_UNTAG, 0);
 #endif
 	}
 
 	/* Enable Rx VLAN filter; unsupported status from VFs is discarded */
-	ret = rte_eth_dev_set_vlan_offload(port_id, ETH_VLAN_FILTER_MASK);
+	ret = rte_eth_dev_set_vlan_offload(port_id, RTE_ETH_VLAN_FILTER_MASK);
 	if (ret != 0)
 		return ret;
 
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index e26be8edf28f..193a16463449 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -283,13 +283,13 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 	struct rte_eth_rxconf rx_conf;
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
-				.rss_hf = ETH_RSS_IP |
-					  ETH_RSS_TCP |
-					  ETH_RSS_UDP,
+				.rss_hf = RTE_ETH_RSS_IP |
+					  RTE_ETH_RSS_TCP |
+					  RTE_ETH_RSS_UDP,
 			}
 		}
 	};
@@ -311,12 +311,12 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_RSS_HASH)
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_RSS_HASH)
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	rx_conf = dev_info.default_rxconf;
 	rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index 476b147bdfcc..1b841d46ad93 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -614,13 +614,13 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 	struct rte_eth_rxconf rx_conf;
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
-				.rss_hf = ETH_RSS_IP |
-					  ETH_RSS_TCP |
-					  ETH_RSS_UDP,
+				.rss_hf = RTE_ETH_RSS_IP |
+					  RTE_ETH_RSS_TCP |
+					  RTE_ETH_RSS_UDP,
 			}
 		}
 	};
@@ -642,9 +642,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	rx_conf = dev_info.default_rxconf;
 	rx_conf.offloads = port_conf.rxmode.offloads;
 
diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
index 8a43f6ac0f92..6185b340600c 100644
--- a/examples/flow_classify/flow_classify.c
+++ b/examples/flow_classify/flow_classify.c
@@ -212,9 +212,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/flow_filtering/main.c b/examples/flow_filtering/main.c
index dd8a33d036ee..bfc1949c8428 100644
--- a/examples/flow_filtering/main.c
+++ b/examples/flow_filtering/main.c
@@ -113,7 +113,7 @@ assert_link_status(void)
 	memset(&link, 0, sizeof(link));
 	do {
 		link_get_err = rte_eth_link_get(port_id, &link);
-		if (link_get_err == 0 && link.link_status == ETH_LINK_UP)
+		if (link_get_err == 0 && link.link_status == RTE_ETH_LINK_UP)
 			break;
 		rte_delay_ms(CHECK_INTERVAL);
 	} while (--rep_cnt);
@@ -121,7 +121,7 @@ assert_link_status(void)
 	if (link_get_err < 0)
 		rte_exit(EXIT_FAILURE, ":: error: link get is failing: %s\n",
 			 rte_strerror(-link_get_err));
-	if (link.link_status == ETH_LINK_DOWN)
+	if (link.link_status == RTE_ETH_LINK_DOWN)
 		rte_exit(EXIT_FAILURE, ":: error: link is still down\n");
 }
 
@@ -138,12 +138,12 @@ init_port(void)
 		},
 		.txmode = {
 			.offloads =
-				DEV_TX_OFFLOAD_VLAN_INSERT |
-				DEV_TX_OFFLOAD_IPV4_CKSUM  |
-				DEV_TX_OFFLOAD_UDP_CKSUM   |
-				DEV_TX_OFFLOAD_TCP_CKSUM   |
-				DEV_TX_OFFLOAD_SCTP_CKSUM  |
-				DEV_TX_OFFLOAD_TCP_TSO,
+				RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_TCP_TSO,
 		},
 	};
 	struct rte_eth_txconf txq_conf;
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index ccfee585f850..b1aa2767a0af 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -819,12 +819,12 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
 	/* Configuring port to use RSS for multiple RX queues. 8< */
 	static const struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_PROTO_MASK,
+				.rss_hf = RTE_ETH_RSS_PROTO_MASK,
 			}
 		}
 	};
@@ -852,9 +852,9 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
 
 	local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(portid, nb_queues, 1, &local_port_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Cannot configure device:"
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index d51133199c42..4ffe997baf23 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -148,13 +148,13 @@ static struct rte_eth_conf port_conf = {
 		.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
 			RTE_ETHER_CRC_LEN,
 		.split_hdr_size = 0,
-		.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
-			     DEV_RX_OFFLOAD_SCATTER),
+		.offloads = (RTE_ETH_RX_OFFLOAD_CHECKSUM |
+			     RTE_ETH_RX_OFFLOAD_SCATTER),
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_MULTI_SEGS),
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
 	},
 };
 
@@ -623,7 +623,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 9ba02e687adb..0290767af473 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -45,7 +45,7 @@ link_next(struct link *link)
 static struct rte_eth_conf port_conf_default = {
 	.link_speeds = 0,
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
 		.split_hdr_size = 0, /* Header split buffer size */
 	},
@@ -57,12 +57,12 @@ static struct rte_eth_conf port_conf_default = {
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
 
-#define RETA_CONF_SIZE     (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE     (RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE)
 
 static int
 rss_setup(uint16_t port_id,
@@ -77,11 +77,11 @@ rss_setup(uint16_t port_id,
 	memset(reta_conf, 0, sizeof(reta_conf));
 
 	for (i = 0; i < reta_size; i++)
-		reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
 
 	for (i = 0; i < reta_size; i++) {
-		uint32_t reta_id = i / RTE_RETA_GROUP_SIZE;
-		uint32_t reta_pos = i % RTE_RETA_GROUP_SIZE;
+		uint32_t reta_id = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint32_t reta_pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint32_t rss_qs_pos = i % rss->n_queues;
 
 		reta_conf[reta_id].reta[reta_pos] =
@@ -139,7 +139,7 @@ link_create(const char *name, struct link_params *params)
 	rss = params->rx.rss;
 	if (rss) {
 		if ((port_info.reta_size == 0) ||
-			(port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+			(port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
 			return NULL;
 
 		if ((rss->n_queues == 0) ||
@@ -157,9 +157,9 @@ link_create(const char *name, struct link_params *params)
 	/* Port */
 	memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
 	if (rss) {
-		port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+		port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
 		port_conf.rx_adv_conf.rss_conf.rss_hf =
-			(ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+			(RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
 			port_info.flow_type_rss_offloads;
 	}
 
@@ -267,5 +267,5 @@ link_is_up(const char *name)
 	if (rte_eth_link_get(link->port_id, &link_params) < 0)
 		return 0;
 
-	return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+	return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
 }
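rss_setup() above walks the indirection table in groups of RTE_ETH_RETA_GROUP_SIZE (64) entries: entry i lives in group i / 64 at offset i % 64, and queues are assigned round-robin. A self-contained sketch of the fill loop (fill_reta() is a made-up name; the caller would pass the result to rte_eth_dev_rss_reta_update()):

#include <rte_ethdev.h>

static void
fill_reta(struct rte_eth_rss_reta_entry64 *reta_conf,
	  uint32_t reta_size, uint32_t n_queues)
{
	uint32_t i;

	for (i = 0; i < reta_size; i++) {
		uint32_t reta_id = i / RTE_ETH_RETA_GROUP_SIZE;
		uint32_t reta_pos = i % RTE_ETH_RETA_GROUP_SIZE;

		/* Mark the entry valid and spread queues round-robin. */
		reta_conf[reta_id].mask = UINT64_MAX;
		reta_conf[reta_id].reta[reta_pos] = i % n_queues;
	}
}
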
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 06dc42799314..41e35593867b 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -160,22 +160,22 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_RSS,
+		.mq_mode        = RTE_ETH_MQ_RX_RSS,
 		.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
 			RTE_ETHER_CRC_LEN,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_MULTI_SEGS),
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
 	},
 };
 
@@ -737,7 +737,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -1095,9 +1095,9 @@ main(int argc, char **argv)
 		n_tx_queue = nb_lcores;
 		if (n_tx_queue > MAX_TX_QUEUE_PER_PORT)
 			n_tx_queue = MAX_TX_QUEUE_PER_PORT;
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index a10e330f5003..1c60ac28e317 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -233,19 +233,19 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode	= ETH_MQ_RX_RSS,
+		.mq_mode	= RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
-				ETH_RSS_TCP | ETH_RSS_SCTP,
+			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+				RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -1444,10 +1444,10 @@ print_usage(const char *prgname)
 		"               \"parallel\" : Parallel\n"
 		"  --" CMD_LINE_OPT_RX_OFFLOAD
 		": bitmask of the RX HW offload capabilities to enable/use\n"
-		"                         (DEV_RX_OFFLOAD_*)\n"
+		"                         (RTE_ETH_RX_OFFLOAD_*)\n"
 		"  --" CMD_LINE_OPT_TX_OFFLOAD
 		": bitmask of the TX HW offload capabilities to enable/use\n"
-		"                         (DEV_TX_OFFLOAD_*)\n"
+		"                         (RTE_ETH_TX_OFFLOAD_*)\n"
 		"  --" CMD_LINE_OPT_REASSEMBLE " NUM"
 		": max number of entries in reassemble(fragment) table\n"
 		"    (zero (default value) disables reassembly)\n"
@@ -1898,7 +1898,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2201,8 +2201,8 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 	local_port_conf.rxmode.mtu = mtu_size;
 
 	if (multi_seg_required()) {
-		local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
-		local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		local_port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+		local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	}
 
 	local_port_conf.rxmode.offloads |= req_rx_offloads;
@@ -2225,12 +2225,12 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 			portid, local_port_conf.txmode.offloads,
 			dev_info.tx_offload_capa);
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM)
-		local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
+		local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 
 	printf("port %u configurng rx_offloads=0x%" PRIx64
 		", tx_offloads=0x%" PRIx64 "\n",
@@ -2288,7 +2288,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 		/* Pre-populate pkt offloads based on capabilities */
 		qconf->outbound.ipv4_offloads = PKT_TX_IPV4;
 		qconf->outbound.ipv6_offloads = PKT_TX_IPV6;
-		if (local_port_conf.txmode.offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+		if (local_port_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 			qconf->outbound.ipv4_offloads |= PKT_TX_IP_CKSUM;
 
 		tx_queueid++;
@@ -2649,7 +2649,7 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads)
 	struct rte_flow *flow;
 	int ret;
 
-	if (!(rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+	if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
 		return;
 
 	/* Add the default rte_flow to enable SECURITY for all ESP packets */
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 17a28556c971..5cdd794f017f 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -986,7 +986,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
 
 	if (inbound) {
 		if ((dev_info.rx_offload_capa &
-				DEV_RX_OFFLOAD_SECURITY) == 0) {
+				RTE_ETH_RX_OFFLOAD_SECURITY) == 0) {
 			RTE_LOG(WARNING, PORT,
 				"hardware RX IPSec offload is not supported\n");
 			return -EINVAL;
@@ -994,7 +994,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
 
 	} else { /* outbound */
 		if ((dev_info.tx_offload_capa &
-				DEV_TX_OFFLOAD_SECURITY) == 0) {
+				RTE_ETH_TX_OFFLOAD_SECURITY) == 0) {
 			RTE_LOG(WARNING, PORT,
 				"hardware TX IPSec offload is not supported\n");
 			return -EINVAL;
@@ -1628,7 +1628,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
 				rule_type ==
 				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
 				&& rule->portid == port_id)
-			*rx_offloads |= DEV_RX_OFFLOAD_SECURITY;
+			*rx_offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
 	}
 
 	/* Check for outbound rules that use offloads and use this port */
@@ -1639,7 +1639,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
 				rule_type ==
 				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
 				&& rule->portid == port_id)
-			*tx_offloads |= DEV_TX_OFFLOAD_SECURITY;
+			*tx_offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
 	}
 	return 0;
 }
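Both hunks gate inline IPsec on the renamed SECURITY capability bits; a condensed sketch of the check done in check_eth_dev_caps() (illustrative, with error handling trimmed):

#include <rte_ethdev.h>

static int
port_supports_inline_ipsec(uint16_t port_id, int inbound)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return 0;

	if (inbound)
		return (dev_info.rx_offload_capa &
			RTE_ETH_RX_OFFLOAD_SECURITY) != 0;

	return (dev_info.tx_offload_capa &
		RTE_ETH_TX_OFFLOAD_SECURITY) != 0;
}
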
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 73391ce1a96d..bdcaa3bcd1ca 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -114,8 +114,8 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
 	},
 };
 
@@ -619,7 +619,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/kni/main.c b/examples/kni/main.c
index 69a0afced6cc..d324ee224109 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -94,7 +94,7 @@ static struct kni_port_params *kni_port_params_array[RTE_MAX_ETHPORTS];
 /* Options for configuring ethernet port */
 static struct rte_eth_conf port_conf = {
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -607,9 +607,9 @@ init_port(uint16_t port)
 			"Error during getting device (port %u) info: %s\n",
 			port, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(port, 1, 1, &local_port_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Could not configure port%u (%d)\n",
@@ -687,7 +687,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 6e2016752fca..04a3bdace20c 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -215,11 +215,11 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -1807,7 +1807,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2631,9 +2631,9 @@ initialize_ports(struct l2fwd_crypto_options *options)
 			return retval;
 		}
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		retval = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (retval < 0) {
 			printf("Cannot configure device: err=%d, port=%u\n",
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index 9040be5ed9b6..cf3d1b8aaf40 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -14,7 +14,7 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 			.split_hdr_size = 0,
 		},
 		.txmode = {
-			.mq_mode = ETH_MQ_TX_NONE,
+			.mq_mode = RTE_ETH_MQ_TX_NONE,
 		},
 	};
 	uint16_t nb_ports_available = 0;
@@ -22,9 +22,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 	int ret;
 
 	if (rsrc->event_mode) {
-		port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+		port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
 		port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
-		port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
+		port_conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;
 	}
 
 	/* Initialise each port */
@@ -60,9 +60,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 				local_port_conf.rx_adv_conf.rss_conf.rss_hf);
 		}
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure RX and TX queue. 8< */
 		ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 1db89f2bd139..9806204b81d1 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -395,7 +395,7 @@ check_all_ports_link_status(struct l2fwd_resources *rsrc,
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c
index 62981663ea78..d8eabe4c869e 100644
--- a/examples/l2fwd-jobstats/main.c
+++ b/examples/l2fwd-jobstats/main.c
@@ -93,7 +93,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -725,7 +725,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -868,9 +868,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure the RX and TX queues. 8< */
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/l2fwd-keepalive/main.c b/examples/l2fwd-keepalive/main.c
index af59d51b3ec4..78fc48f781fc 100644
--- a/examples/l2fwd-keepalive/main.c
+++ b/examples/l2fwd-keepalive/main.c
@@ -82,7 +82,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -477,7 +477,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -649,9 +649,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
 			rte_exit(EXIT_FAILURE,
diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index 8feb50e0f542..c9d8d4918a34 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -94,7 +94,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -605,7 +605,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -791,9 +791,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure the number of queues for a port. */
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 410ec94b4131..1fb180723582 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -123,19 +123,19 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode	= ETH_MQ_RX_RSS,
+		.mq_mode	= RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
-				ETH_RSS_TCP | ETH_RSS_SCTP,
+			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+				RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -1935,7 +1935,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2003,7 +2003,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -2087,9 +2087,9 @@ main(int argc, char **argv)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index 05385807e83e..7f00c65609ed 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -111,17 +111,17 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -607,7 +607,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* Clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -731,7 +731,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -828,9 +828,9 @@ main(int argc, char **argv)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 39624993b081..21c79567b1f7 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -249,18 +249,18 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_RSS,
+		.mq_mode        = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_UDP,
+			.rss_hf = RTE_ETH_RSS_UDP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	}
 };
 
@@ -2196,7 +2196,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2509,7 +2509,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -2637,9 +2637,9 @@ main(int argc, char **argv)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 961860ea18ef..7c7613a83aad 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -75,9 +75,9 @@ l3fwd_eth_dev_port_setup(struct rte_eth_conf *port_conf)
 			rte_panic("Error during getting device (port %u) info:"
 				  "%s\n", port_id, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+						RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 						dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 202ef78b6e95..5dd3e4136ea1 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -119,18 +119,18 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -902,7 +902,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -987,7 +987,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -1052,15 +1052,15 @@ l3fwd_poll_resource_setup(void)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
 
 		if (dev_info.max_rx_queues == 1)
-			local_port_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+			local_port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
 
 		if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
 				port_conf.rx_adv_conf.rss_conf.rss_hf) {
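config_port_max_pkt_len(), touched in several l3fwd variants here, derives the MTU from the requested frame size and turns on multi-segment Tx for jumbo frames. As a sketch (overhead_len stands for Ethernet header plus CRC plus any VLAN tags, computed by the caller):

#include <rte_ethdev.h>

static int
set_mtu_from_max_pkt_len(struct rte_eth_conf *conf,
			 uint32_t max_pkt_len, uint32_t overhead_len)
{
	if (max_pkt_len <= overhead_len)
		return -1;

	conf->rxmode.mtu = max_pkt_len - overhead_len;

	/* Frames above the standard MTU may need chained mbufs on Tx. */
	if (conf->rxmode.mtu > RTE_ETHER_MTU)
		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;

	return 0;
}
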
diff --git a/examples/link_status_interrupt/main.c b/examples/link_status_interrupt/main.c
index ce8ae059d789..551f0524da79 100644
--- a/examples/link_status_interrupt/main.c
+++ b/examples/link_status_interrupt/main.c
@@ -82,7 +82,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.intr_conf = {
 		.lsc = 1, /**< lsc interrupt feature enabled */
@@ -146,7 +146,7 @@ print_stats(void)
 			   link_get_err < 0 ? "0" :
 			   rte_eth_link_speed_to_str(link.link_speed),
 			   link_get_err < 0 ? "Link get failed" :
-			   (link.link_duplex == ETH_LINK_FULL_DUPLEX ? \
+			   (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex"),
 			   port_statistics[portid].tx,
 			   port_statistics[portid].rx,
@@ -506,7 +506,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -633,9 +633,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure RX and TX queues. 8< */
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
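check_all_ports_link_status(), updated in nearly every example, polls until all masked ports report link up. The per-port test with the renamed constants looks roughly like this (hypothetical helper; the examples call rte_eth_link_get_nowait() in a retry loop):

#include <string.h>
#include <rte_ethdev.h>

static int
port_link_is_up(uint16_t port_id)
{
	struct rte_eth_link link;

	memset(&link, 0, sizeof(link));
	if (rte_eth_link_get_nowait(port_id, &link) < 0)
		return 0;

	return link.link_status == RTE_ETH_LINK_UP;
}
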
diff --git a/examples/multi_process/client_server_mp/mp_server/init.c b/examples/multi_process/client_server_mp/mp_server/init.c
index be669c2bcc06..a4d7a3e5436a 100644
--- a/examples/multi_process/client_server_mp/mp_server/init.c
+++ b/examples/multi_process/client_server_mp/mp_server/init.c
@@ -93,7 +93,7 @@ init_port(uint16_t port_num)
 	/* for port configuration all features are off by default */
 	const struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS
+			.mq_mode = RTE_ETH_MQ_RX_RSS
 		}
 	};
 	const uint16_t rx_rings = 1, tx_rings = num_clients;
@@ -212,7 +212,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/multi_process/symmetric_mp/main.c b/examples/multi_process/symmetric_mp/main.c
index a66328ba0caf..b35886a77b00 100644
--- a/examples/multi_process/symmetric_mp/main.c
+++ b/examples/multi_process/symmetric_mp/main.c
@@ -175,18 +175,18 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
 {
 	struct rte_eth_conf port_conf = {
 			.rxmode = {
-				.mq_mode	= ETH_MQ_RX_RSS,
+				.mq_mode	= RTE_ETH_MQ_RX_RSS,
 				.split_hdr_size = 0,
-				.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+				.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 			},
 			.rx_adv_conf = {
 				.rss_conf = {
 					.rss_key = NULL,
-					.rss_hf = ETH_RSS_IP,
+					.rss_hf = RTE_ETH_RSS_IP,
 				},
 			},
 			.txmode = {
-				.mq_mode = ETH_MQ_TX_NONE,
+				.mq_mode = RTE_ETH_MQ_TX_NONE,
 			}
 	};
 	const uint16_t rx_rings = num_queues, tx_rings = num_queues;
@@ -217,9 +217,9 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
 
 	info.default_rxconf.rx_drop_en = 1;
 
-	if (info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
 	port_conf.rx_adv_conf.rss_conf.rss_hf &= info.flow_type_rss_offloads;
@@ -391,7 +391,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
index e9a388710647..f110fc129f55 100644
--- a/examples/ntb/ntb_fwd.c
+++ b/examples/ntb/ntb_fwd.c
@@ -89,17 +89,17 @@ static uint16_t pkt_burst = NTB_DFLT_PKT_BURST;
 
 static struct rte_eth_conf eth_port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c
index 4f6982bc1289..b01ac60fd196 100644
--- a/examples/packet_ordering/main.c
+++ b/examples/packet_ordering/main.c
@@ -294,9 +294,9 @@ configure_eth_port(uint16_t port_id)
 		return ret;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(port_id, rxRings, txRings, &port_conf);
 	if (ret != 0)
 		return ret;
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 74e016e1d20d..3a6a33bda3b0 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -306,18 +306,18 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_TCP,
+			.rss_hf = RTE_ETH_RSS_TCP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -3437,7 +3437,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -3490,7 +3490,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -3589,9 +3589,9 @@ main(int argc, char **argv)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 4f20dfc4be06..569207a79d62 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -133,7 +133,7 @@ mempool_find(struct obj *obj, const char *name)
 static struct rte_eth_conf port_conf_default = {
 	.link_speeds = 0,
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
 		.split_hdr_size = 0, /* Header split buffer size */
 	},
@@ -145,12 +145,12 @@ static struct rte_eth_conf port_conf_default = {
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
 
-#define RETA_CONF_SIZE     (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE     (RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE)
 
 static int
 rss_setup(uint16_t port_id,
@@ -165,11 +165,11 @@ rss_setup(uint16_t port_id,
 	memset(reta_conf, 0, sizeof(reta_conf));
 
 	for (i = 0; i < reta_size; i++)
-		reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
 
 	for (i = 0; i < reta_size; i++) {
-		uint32_t reta_id = i / RTE_RETA_GROUP_SIZE;
-		uint32_t reta_pos = i % RTE_RETA_GROUP_SIZE;
+		uint32_t reta_id = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint32_t reta_pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint32_t rss_qs_pos = i % rss->n_queues;
 
 		reta_conf[reta_id].reta[reta_pos] =
@@ -227,7 +227,7 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
 	rss = params->rx.rss;
 	if (rss) {
 		if ((port_info.reta_size == 0) ||
-			(port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+			(port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
 			return NULL;
 
 		if ((rss->n_queues == 0) ||
@@ -245,9 +245,9 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
 	/* Port */
 	memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
 	if (rss) {
-		port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+		port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
 		port_conf.rx_adv_conf.rss_conf.rss_hf =
-			(ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+			(RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
 			port_info.flow_type_rss_offloads;
 	}
 
@@ -356,7 +356,7 @@ link_is_up(struct obj *obj, const char *name)
 	if (rte_eth_link_get(link->port_id, &link_params) < 0)
 		return 0;
 
-	return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+	return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
 }
 
 struct link *
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 229a277032cb..979d9eb9e9d0 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -193,14 +193,14 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	/* Force full Tx path in the driver, required for IEEE1588 */
-	port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
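ptpclient enables Rx timestamping only when advertised, and unconditionally forces multi-segment Tx, which this example relies on for IEEE1588. Condensed for reference (assumes dev_info was fetched beforehand, as above):

#include <rte_ethdev.h>

static void
request_ieee1588_offloads(const struct rte_eth_dev_info *dev_info,
			  struct rte_eth_conf *conf)
{
	if (dev_info->rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;

	/* Force the full Tx path in the driver, required for IEEE1588. */
	conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
}
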
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index c32d2e12e633..743bae2da50a 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -51,18 +51,18 @@ static struct rte_mempool *pool = NULL;
  ***/
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode	= ETH_MQ_RX_RSS,
+		.mq_mode	= RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_DCB_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -332,8 +332,8 @@ main(int argc, char **argv)
 			"Error during getting device (port %u) info: %s\n",
 			port_rx, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
-		conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+		conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
 	if (conf.rx_adv_conf.rss_conf.rss_hf !=
@@ -378,8 +378,8 @@ main(int argc, char **argv)
 			"Error during getting device (port %u) info: %s\n",
 			port_tx, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
-		conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+		conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
 	if (conf.rx_adv_conf.rss_conf.rss_hf !=
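One change here that looks semantic but should not be: qos_meter (and qos_sched below) had .txmode.mq_mode = ETH_DCB_NONE. To my understanding that macro was only a compatibility alias for ETH_MQ_TX_NONE, so spelling it RTE_ETH_MQ_TX_NONE keeps the same behaviour while dropping the misleading DCB name.
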
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 1367569c65db..9b34e4a76b1b 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -60,7 +60,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_DCB_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -105,9 +105,9 @@ app_init_port(uint16_t portid, struct rte_mempool *mp)
 			"Error during getting device (port %u) info: %s\n",
 			portid, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE,
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index 6845c396b8d9..1903d8b095a1 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -141,17 +141,17 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	if (hw_timestamping) {
-		if (!(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)) {
+		if (!(dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
 			printf("\nERROR: Port %u does not support hardware timestamping\n"
 					, port);
 			return -1;
 		}
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 		rte_mbuf_dyn_rx_timestamp_register(&hwts_dynfield_offset, NULL);
 		if (hwts_dynfield_offset < 0) {
 			printf("ERROR: Failed to register timestamp field\n");
diff --git a/examples/server_node_efd/server/init.c b/examples/server_node_efd/server/init.c
index a19934dbe0c8..0e5e3b5a9815 100644
--- a/examples/server_node_efd/server/init.c
+++ b/examples/server_node_efd/server/init.c
@@ -95,7 +95,7 @@ init_port(uint16_t port_num)
 	/* for port configuration all features are off by default */
 	struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 	};
 	const uint16_t rx_rings = 1, tx_rings = num_nodes;
@@ -114,9 +114,9 @@ init_port(uint16_t port_num)
 	if (retval != 0)
 		return retval;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/*
 	 * Standard DPDK port initialisation - config port, then set up
@@ -276,7 +276,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index fd7207aee758..16435ee3ccc2 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -49,9 +49,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 97218917067e..44376417f83d 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -110,23 +110,23 @@ static int nb_sockets;
 /* empty vmdq configuration structure. Filled in programmatically */
 static struct rte_eth_conf vmdq_conf_default = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
+		.mq_mode        = RTE_ETH_MQ_RX_VMDQ_ONLY,
 		.split_hdr_size = 0,
 		/*
 		 * VLAN strip is necessary for 1G NICs such as I350;
 		 * it fixes a bug where IPv4 forwarding in the guest
 		 * cannot forward packets from one virtio dev to another.
 		 */
-		.offloads = DEV_RX_OFFLOAD_VLAN_STRIP,
+		.offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP,
 	},
 
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_TCP_CKSUM |
-			     DEV_TX_OFFLOAD_VLAN_INSERT |
-			     DEV_TX_OFFLOAD_MULTI_SEGS |
-			     DEV_TX_OFFLOAD_TCP_TSO),
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+			     RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+			     RTE_ETH_TX_OFFLOAD_TCP_TSO),
 	},
 	.rx_adv_conf = {
 		/*
@@ -134,7 +134,7 @@ static struct rte_eth_conf vmdq_conf_default = {
 		 * appropriate values
 		 */
 		.vmdq_rx_conf = {
-			.nb_queue_pools = ETH_8_POOLS,
+			.nb_queue_pools = RTE_ETH_8_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -291,9 +291,9 @@ port_init(uint16_t port)
 		return -1;
 
 	rx_rings = (uint16_t)dev_info.max_rx_queues;
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	/* Configure ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
 	if (retval != 0) {
@@ -557,8 +557,8 @@ us_vhost_parse_args(int argc, char **argv)
 		case 'P':
 			promiscuous = 1;
 			vmdq_conf_default.rx_adv_conf.vmdq_rx_conf.rx_mode =
-				ETH_VMDQ_ACCEPT_BROADCAST |
-				ETH_VMDQ_ACCEPT_MULTICAST;
+				RTE_ETH_VMDQ_ACCEPT_BROADCAST |
+				RTE_ETH_VMDQ_ACCEPT_MULTICAST;
 			break;
 
 		case OPT_VM2VM_NUM:
diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
index e19d79a40802..b159291d77ce 100644
--- a/examples/vm_power_manager/main.c
+++ b/examples/vm_power_manager/main.c
@@ -73,9 +73,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
@@ -270,7 +270,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 		       /* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index 85996bf864b7..feee642f594d 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -65,12 +65,12 @@ static uint8_t rss_enable;
 /* empty vmdq configuration structure. Filled in programmatically */
 static const struct rte_eth_conf vmdq_conf_default = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
+		.mq_mode        = RTE_ETH_MQ_RX_VMDQ_ONLY,
 		.split_hdr_size = 0,
 	},
 
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.rx_adv_conf = {
 		/*
@@ -78,7 +78,7 @@ static const struct rte_eth_conf vmdq_conf_default = {
 		 * appropriate values
 		 */
 		.vmdq_rx_conf = {
-			.nb_queue_pools = ETH_8_POOLS,
+			.nb_queue_pools = RTE_ETH_8_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -156,11 +156,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t num_pools)
 	(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_rx_conf, &conf,
 		   sizeof(eth_conf->rx_adv_conf.vmdq_rx_conf)));
 	if (rss_enable) {
-		eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
-		eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
-							ETH_RSS_UDP |
-							ETH_RSS_TCP |
-							ETH_RSS_SCTP;
+		eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
+		eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+							RTE_ETH_RSS_UDP |
+							RTE_ETH_RSS_TCP |
+							RTE_ETH_RSS_SCTP;
 	}
 	return 0;
 }
@@ -258,9 +258,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	retval = rte_eth_dev_configure(port, rxRings, txRings, &port_conf);
 	if (retval != 0)
 		return retval;
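The vmdq example combines pool-based demultiplexing with RSS when rss_enable is set. A condensed configuration sketch with the renamed enums (the pool count and hash fields are this example's choices, not requirements):

#include <rte_ethdev.h>

static void
enable_vmdq_rss(struct rte_eth_conf *conf)
{
	conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
	conf->rx_adv_conf.vmdq_rx_conf.nb_queue_pools = RTE_ETH_8_POOLS;
	conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
					    RTE_ETH_RSS_UDP |
					    RTE_ETH_RSS_TCP |
					    RTE_ETH_RSS_SCTP;
}
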
diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c
index be0179fdeaf0..d2218f2cf741 100644
--- a/examples/vmdq_dcb/main.c
+++ b/examples/vmdq_dcb/main.c
@@ -59,8 +59,8 @@ static uint16_t ports[RTE_MAX_ETHPORTS];
 static unsigned num_ports;
 
 /* number of pools (if user does not specify any, 32 by default */
-static enum rte_eth_nb_pools num_pools = ETH_32_POOLS;
-static enum rte_eth_nb_tcs   num_tcs   = ETH_4_TCS;
+static enum rte_eth_nb_pools num_pools = RTE_ETH_32_POOLS;
+static enum rte_eth_nb_tcs   num_tcs   = RTE_ETH_4_TCS;
 static uint16_t num_queues, num_vmdq_queues;
 static uint16_t vmdq_pool_base, vmdq_queue_base;
 static uint8_t rss_enable;
@@ -68,11 +68,11 @@ static uint8_t rss_enable;
 /* Empty vmdq+dcb configuration structure. Filled in programmatically. 8< */
 static const struct rte_eth_conf vmdq_dcb_conf_default = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_VMDQ_DCB,
+		.mq_mode        = RTE_ETH_MQ_RX_VMDQ_DCB,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_VMDQ_DCB,
+		.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB,
 	},
 	/*
 	 * should be overridden separately in code with
@@ -80,7 +80,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 	 */
 	.rx_adv_conf = {
 		.vmdq_dcb_conf = {
-			.nb_queue_pools = ETH_32_POOLS,
+			.nb_queue_pools = RTE_ETH_32_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -88,12 +88,12 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 			.dcb_tc = {0},
 		},
 		.dcb_rx_conf = {
-				.nb_tcs = ETH_4_TCS,
+				.nb_tcs = RTE_ETH_4_TCS,
 				/** Traffic class each UP mapped to. */
 				.dcb_tc = {0},
 		},
 		.vmdq_rx_conf = {
-			.nb_queue_pools = ETH_32_POOLS,
+			.nb_queue_pools = RTE_ETH_32_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -102,7 +102,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 	},
 	.tx_adv_conf = {
 		.vmdq_dcb_tx_conf = {
-			.nb_queue_pools = ETH_32_POOLS,
+			.nb_queue_pools = RTE_ETH_32_POOLS,
 			.dcb_tc = {0},
 		},
 	},
@@ -156,7 +156,7 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
 		conf.pool_map[i].pools = 1UL << i;
 		vmdq_conf.pool_map[i].pools = 1UL << i;
 	}
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		conf.dcb_tc[i] = i % num_tcs;
 		dcb_conf.dcb_tc[i] = i % num_tcs;
 		tx_conf.dcb_tc[i] = i % num_tcs;
@@ -172,11 +172,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
 	(void)(rte_memcpy(&eth_conf->tx_adv_conf.vmdq_dcb_tx_conf, &tx_conf,
 			  sizeof(tx_conf)));
 	if (rss_enable) {
-		eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
-		eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
-							ETH_RSS_UDP |
-							ETH_RSS_TCP |
-							ETH_RSS_SCTP;
+		eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
+		eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+							RTE_ETH_RSS_UDP |
+							RTE_ETH_RSS_TCP |
+							RTE_ETH_RSS_SCTP;
 	}
 	return 0;
 }
@@ -270,9 +270,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
 	port_conf.rx_adv_conf.rss_conf.rss_hf &=
@@ -381,9 +381,9 @@ vmdq_parse_num_pools(const char *q_arg)
 	if (n != 16 && n != 32)
 		return -1;
 	if (n == 16)
-		num_pools = ETH_16_POOLS;
+		num_pools = RTE_ETH_16_POOLS;
 	else
-		num_pools = ETH_32_POOLS;
+		num_pools = RTE_ETH_32_POOLS;
 
 	return 0;
 }
@@ -403,9 +403,9 @@ vmdq_parse_num_tcs(const char *q_arg)
 	if (n != 4 && n != 8)
 		return -1;
 	if (n == 4)
-		num_tcs = ETH_4_TCS;
+		num_tcs = RTE_ETH_4_TCS;
 	else
-		num_tcs = ETH_8_TCS;
+		num_tcs = RTE_ETH_8_TCS;
 
 	return 0;
 }
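
The examples/ hunks above are all the same mechanical substitution, since the
new names alias the old bit values. For reference, a minimal sketch of the
resulting application code (illustrative only, assuming the usual
port_init(uint16_t port, ...) structure from the sample apps, not part of the
patch):

	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf port_conf = {0};

	if (rte_eth_dev_info_get(port, &dev_info) != 0)
		return -1;
	/* Renamed flag, same value as the old DEV_TX_OFFLOAD_MBUF_FAST_FREE. */
	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
		port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
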
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index b530ac6e320a..dcbffd4265fa 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -114,7 +114,7 @@ struct rte_eth_dev_data {
 	/** Device Ethernet link address. @see rte_eth_dev_release_port() */
 	struct rte_ether_addr *mac_addrs;
 	/** Bitmap associating MAC addresses to pools */
-	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+	uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
 	/**
 	 * Device Ethernet MAC addresses of hash filtering.
 	 * @see rte_eth_dev_release_port()
@@ -1700,23 +1700,23 @@ struct rte_eth_syn_filter {
 /**
  * filter type of tunneling packet
  */
-#define ETH_TUNNEL_FILTER_OMAC  0x01 /**< filter by outer MAC addr */
-#define ETH_TUNNEL_FILTER_OIP   0x02 /**< filter by outer IP Addr */
-#define ETH_TUNNEL_FILTER_TENID 0x04 /**< filter by tenant ID */
-#define ETH_TUNNEL_FILTER_IMAC  0x08 /**< filter by inner MAC addr */
-#define ETH_TUNNEL_FILTER_IVLAN 0x10 /**< filter by inner VLAN ID */
-#define ETH_TUNNEL_FILTER_IIP   0x20 /**< filter by inner IP addr */
-
-#define RTE_TUNNEL_FILTER_IMAC_IVLAN (ETH_TUNNEL_FILTER_IMAC | \
-					ETH_TUNNEL_FILTER_IVLAN)
-#define RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID (ETH_TUNNEL_FILTER_IMAC | \
-					ETH_TUNNEL_FILTER_IVLAN | \
-					ETH_TUNNEL_FILTER_TENID)
-#define RTE_TUNNEL_FILTER_IMAC_TENID (ETH_TUNNEL_FILTER_IMAC | \
-					ETH_TUNNEL_FILTER_TENID)
-#define RTE_TUNNEL_FILTER_OMAC_TENID_IMAC (ETH_TUNNEL_FILTER_OMAC | \
-					ETH_TUNNEL_FILTER_TENID | \
-					ETH_TUNNEL_FILTER_IMAC)
+#define RTE_ETH_TUNNEL_FILTER_OMAC  0x01 /**< filter by outer MAC addr */
+#define RTE_ETH_TUNNEL_FILTER_OIP   0x02 /**< filter by outer IP Addr */
+#define RTE_ETH_TUNNEL_FILTER_TENID 0x04 /**< filter by tenant ID */
+#define RTE_ETH_TUNNEL_FILTER_IMAC  0x08 /**< filter by inner MAC addr */
+#define RTE_ETH_TUNNEL_FILTER_IVLAN 0x10 /**< filter by inner VLAN ID */
+#define RTE_ETH_TUNNEL_FILTER_IIP   0x20 /**< filter by inner IP addr */
+
+#define RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN (RTE_ETH_TUNNEL_FILTER_IMAC | \
+					  RTE_ETH_TUNNEL_FILTER_IVLAN)
+#define RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID (RTE_ETH_TUNNEL_FILTER_IMAC | \
+						RTE_ETH_TUNNEL_FILTER_IVLAN | \
+						RTE_ETH_TUNNEL_FILTER_TENID)
+#define RTE_ETH_TUNNEL_FILTER_IMAC_TENID (RTE_ETH_TUNNEL_FILTER_IMAC | \
+					  RTE_ETH_TUNNEL_FILTER_TENID)
+#define RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC (RTE_ETH_TUNNEL_FILTER_OMAC | \
+					       RTE_ETH_TUNNEL_FILTER_TENID | \
+					       RTE_ETH_TUNNEL_FILTER_IMAC)
 
 /**
  *  Select IPv4 or IPv6 for tunnel filters.
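
The combined tunnel filter macros above keep their old semantics, only the
prefix changes; composing the flags by hand stays equivalent (illustrative
sketch, not part of the patch):

	/* Same value as RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN defined above. */
	uint32_t ftype = RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_IVLAN;
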
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 4ea5a657e003..9b6007803dd8 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -101,9 +101,6 @@ static const struct rte_eth_xstats_name_off eth_dev_txq_stats_strings[] = {
 #define RTE_NB_TXQ_STATS RTE_DIM(eth_dev_txq_stats_strings)
 
 #define RTE_RX_OFFLOAD_BIT2STR(_name)	\
-	{ DEV_RX_OFFLOAD_##_name, #_name }
-
-#define RTE_ETH_RX_OFFLOAD_BIT2STR(_name)	\
 	{ RTE_ETH_RX_OFFLOAD_##_name, #_name }
 
 static const struct {
@@ -128,14 +125,14 @@ static const struct {
 	RTE_RX_OFFLOAD_BIT2STR(SCTP_CKSUM),
 	RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
 	RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
-	RTE_ETH_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
+	RTE_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
 };
 
 #undef RTE_RX_OFFLOAD_BIT2STR
 #undef RTE_ETH_RX_OFFLOAD_BIT2STR
 
 #define RTE_TX_OFFLOAD_BIT2STR(_name)	\
-	{ DEV_TX_OFFLOAD_##_name, #_name }
+	{ RTE_ETH_TX_OFFLOAD_##_name, #_name }
 
 static const struct {
 	uint64_t offload;
@@ -1182,32 +1179,32 @@ uint32_t
 rte_eth_speed_bitflag(uint32_t speed, int duplex)
 {
 	switch (speed) {
-	case ETH_SPEED_NUM_10M:
-		return duplex ? ETH_LINK_SPEED_10M : ETH_LINK_SPEED_10M_HD;
-	case ETH_SPEED_NUM_100M:
-		return duplex ? ETH_LINK_SPEED_100M : ETH_LINK_SPEED_100M_HD;
-	case ETH_SPEED_NUM_1G:
-		return ETH_LINK_SPEED_1G;
-	case ETH_SPEED_NUM_2_5G:
-		return ETH_LINK_SPEED_2_5G;
-	case ETH_SPEED_NUM_5G:
-		return ETH_LINK_SPEED_5G;
-	case ETH_SPEED_NUM_10G:
-		return ETH_LINK_SPEED_10G;
-	case ETH_SPEED_NUM_20G:
-		return ETH_LINK_SPEED_20G;
-	case ETH_SPEED_NUM_25G:
-		return ETH_LINK_SPEED_25G;
-	case ETH_SPEED_NUM_40G:
-		return ETH_LINK_SPEED_40G;
-	case ETH_SPEED_NUM_50G:
-		return ETH_LINK_SPEED_50G;
-	case ETH_SPEED_NUM_56G:
-		return ETH_LINK_SPEED_56G;
-	case ETH_SPEED_NUM_100G:
-		return ETH_LINK_SPEED_100G;
-	case ETH_SPEED_NUM_200G:
-		return ETH_LINK_SPEED_200G;
+	case RTE_ETH_SPEED_NUM_10M:
+		return duplex ? RTE_ETH_LINK_SPEED_10M : RTE_ETH_LINK_SPEED_10M_HD;
+	case RTE_ETH_SPEED_NUM_100M:
+		return duplex ? RTE_ETH_LINK_SPEED_100M : RTE_ETH_LINK_SPEED_100M_HD;
+	case RTE_ETH_SPEED_NUM_1G:
+		return RTE_ETH_LINK_SPEED_1G;
+	case RTE_ETH_SPEED_NUM_2_5G:
+		return RTE_ETH_LINK_SPEED_2_5G;
+	case RTE_ETH_SPEED_NUM_5G:
+		return RTE_ETH_LINK_SPEED_5G;
+	case RTE_ETH_SPEED_NUM_10G:
+		return RTE_ETH_LINK_SPEED_10G;
+	case RTE_ETH_SPEED_NUM_20G:
+		return RTE_ETH_LINK_SPEED_20G;
+	case RTE_ETH_SPEED_NUM_25G:
+		return RTE_ETH_LINK_SPEED_25G;
+	case RTE_ETH_SPEED_NUM_40G:
+		return RTE_ETH_LINK_SPEED_40G;
+	case RTE_ETH_SPEED_NUM_50G:
+		return RTE_ETH_LINK_SPEED_50G;
+	case RTE_ETH_SPEED_NUM_56G:
+		return RTE_ETH_LINK_SPEED_56G;
+	case RTE_ETH_SPEED_NUM_100G:
+		return RTE_ETH_LINK_SPEED_100G;
+	case RTE_ETH_SPEED_NUM_200G:
+		return RTE_ETH_LINK_SPEED_200G;
 	default:
 		return 0;
 	}
@@ -1528,7 +1525,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 * If LRO is enabled, check that the maximum aggregated packet
 	 * size is supported by the configured device.
 	 */
-	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		uint32_t max_rx_pktlen;
 		uint32_t overhead_len;
 
@@ -1585,12 +1582,12 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	}
 
 	/* Check if Rx RSS distribution is disabled but RSS hash is enabled. */
-	if (((dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) == 0) &&
-	    (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
+	    (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		RTE_ETHDEV_LOG(ERR,
 			"Ethdev port_id=%u config invalid Rx mq_mode without RSS but %s offload is requested\n",
 			port_id,
-			rte_eth_dev_rx_offload_name(DEV_RX_OFFLOAD_RSS_HASH));
+			rte_eth_dev_rx_offload_name(RTE_ETH_RX_OFFLOAD_RSS_HASH));
 		ret = -EINVAL;
 		goto rollback;
 	}
@@ -2213,7 +2210,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	 * size is supported by the configured device.
 	 */
 	/* Get the real Ethernet overhead length */
-	if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (local_conf.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		uint32_t overhead_len;
 		uint32_t max_rx_pktlen;
 		int ret;
@@ -2793,21 +2790,21 @@ const char *
 rte_eth_link_speed_to_str(uint32_t link_speed)
 {
 	switch (link_speed) {
-	case ETH_SPEED_NUM_NONE: return "None";
-	case ETH_SPEED_NUM_10M:  return "10 Mbps";
-	case ETH_SPEED_NUM_100M: return "100 Mbps";
-	case ETH_SPEED_NUM_1G:   return "1 Gbps";
-	case ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
-	case ETH_SPEED_NUM_5G:   return "5 Gbps";
-	case ETH_SPEED_NUM_10G:  return "10 Gbps";
-	case ETH_SPEED_NUM_20G:  return "20 Gbps";
-	case ETH_SPEED_NUM_25G:  return "25 Gbps";
-	case ETH_SPEED_NUM_40G:  return "40 Gbps";
-	case ETH_SPEED_NUM_50G:  return "50 Gbps";
-	case ETH_SPEED_NUM_56G:  return "56 Gbps";
-	case ETH_SPEED_NUM_100G: return "100 Gbps";
-	case ETH_SPEED_NUM_200G: return "200 Gbps";
-	case ETH_SPEED_NUM_UNKNOWN: return "Unknown";
+	case RTE_ETH_SPEED_NUM_NONE: return "None";
+	case RTE_ETH_SPEED_NUM_10M:  return "10 Mbps";
+	case RTE_ETH_SPEED_NUM_100M: return "100 Mbps";
+	case RTE_ETH_SPEED_NUM_1G:   return "1 Gbps";
+	case RTE_ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
+	case RTE_ETH_SPEED_NUM_5G:   return "5 Gbps";
+	case RTE_ETH_SPEED_NUM_10G:  return "10 Gbps";
+	case RTE_ETH_SPEED_NUM_20G:  return "20 Gbps";
+	case RTE_ETH_SPEED_NUM_25G:  return "25 Gbps";
+	case RTE_ETH_SPEED_NUM_40G:  return "40 Gbps";
+	case RTE_ETH_SPEED_NUM_50G:  return "50 Gbps";
+	case RTE_ETH_SPEED_NUM_56G:  return "56 Gbps";
+	case RTE_ETH_SPEED_NUM_100G: return "100 Gbps";
+	case RTE_ETH_SPEED_NUM_200G: return "200 Gbps";
+	case RTE_ETH_SPEED_NUM_UNKNOWN: return "Unknown";
 	default: return "Invalid";
 	}
 }
@@ -2831,14 +2828,14 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
 		return -EINVAL;
 	}
 
-	if (eth_link->link_status == ETH_LINK_DOWN)
+	if (eth_link->link_status == RTE_ETH_LINK_DOWN)
 		return snprintf(str, len, "Link down");
 	else
 		return snprintf(str, len, "Link up at %s %s %s",
 			rte_eth_link_speed_to_str(eth_link->link_speed),
-			(eth_link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+			(eth_link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 			"FDX" : "HDX",
-			(eth_link->link_autoneg == ETH_LINK_AUTONEG) ?
+			(eth_link->link_autoneg == RTE_ETH_LINK_AUTONEG) ?
 			"Autoneg" : "Fixed");
 }
 
@@ -3745,7 +3742,7 @@ rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on)
 	dev = &rte_eth_devices[port_id];
 
 	if (!(dev->data->dev_conf.rxmode.offloads &
-	      DEV_RX_OFFLOAD_VLAN_FILTER)) {
+	      RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
 		RTE_ETHDEV_LOG(ERR, "Port %u: VLAN-filtering disabled\n",
 			port_id);
 		return -ENOSYS;
@@ -3832,44 +3829,44 @@ rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask)
 	dev_offloads = orig_offloads;
 
 	/* check which option changed by application */
-	cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	cur = !!(offload_mask & RTE_ETH_VLAN_STRIP_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
-		mask |= ETH_VLAN_STRIP_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+		mask |= RTE_ETH_VLAN_STRIP_MASK;
 	}
 
-	cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+	cur = !!(offload_mask & RTE_ETH_VLAN_FILTER_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
-		mask |= ETH_VLAN_FILTER_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+		mask |= RTE_ETH_VLAN_FILTER_MASK;
 	}
 
-	cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND);
+	cur = !!(offload_mask & RTE_ETH_VLAN_EXTEND_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
-		mask |= ETH_VLAN_EXTEND_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
+		mask |= RTE_ETH_VLAN_EXTEND_MASK;
 	}
 
-	cur = !!(offload_mask & ETH_QINQ_STRIP_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP);
+	cur = !!(offload_mask & RTE_ETH_QINQ_STRIP_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
-		mask |= ETH_QINQ_STRIP_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
+		mask |= RTE_ETH_QINQ_STRIP_MASK;
 	}
 
 	/*no change*/
@@ -3914,17 +3911,17 @@ rte_eth_dev_get_vlan_offload(uint16_t port_id)
 	dev = &rte_eth_devices[port_id];
 	dev_offloads = &dev->data->dev_conf.rxmode.offloads;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-		ret |= ETH_VLAN_STRIP_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+		ret |= RTE_ETH_VLAN_STRIP_OFFLOAD;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
-		ret |= ETH_VLAN_FILTER_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+		ret |= RTE_ETH_VLAN_FILTER_OFFLOAD;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
-		ret |= ETH_VLAN_EXTEND_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
+		ret |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
-		ret |= ETH_QINQ_STRIP_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+		ret |= RTE_ETH_QINQ_STRIP_OFFLOAD;
 
 	return ret;
 }
@@ -4001,7 +3998,7 @@ rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (pfc_conf->priority > (ETH_DCB_NUM_USER_PRIORITIES - 1)) {
+	if (pfc_conf->priority > (RTE_ETH_DCB_NUM_USER_PRIORITIES - 1)) {
 		RTE_ETHDEV_LOG(ERR, "Invalid priority, only 0-7 allowed\n");
 		return -EINVAL;
 	}
@@ -4019,7 +4016,7 @@ eth_check_reta_mask(struct rte_eth_rss_reta_entry64 *reta_conf,
 {
 	uint16_t i, num;
 
-	num = (reta_size + RTE_RETA_GROUP_SIZE - 1) / RTE_RETA_GROUP_SIZE;
+	num = (reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) / RTE_ETH_RETA_GROUP_SIZE;
 	for (i = 0; i < num; i++) {
 		if (reta_conf[i].mask)
 			return 0;
@@ -4041,8 +4038,8 @@ eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & RTE_BIT64(shift)) &&
 			(reta_conf[idx].reta[shift] >= max_rxq)) {
 			RTE_ETHDEV_LOG(ERR,
@@ -4198,7 +4195,7 @@ rte_eth_dev_udp_tunnel_port_add(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+	if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
 		RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
 		return -EINVAL;
 	}
@@ -4224,7 +4221,7 @@ rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+	if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
 		RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
 		return -EINVAL;
 	}
@@ -4365,8 +4362,8 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr,
 			port_id);
 		return -EINVAL;
 	}
-	if (pool >= ETH_64_POOLS) {
-		RTE_ETHDEV_LOG(ERR, "Pool ID must be 0-%d\n", ETH_64_POOLS - 1);
+	if (pool >= RTE_ETH_64_POOLS) {
+		RTE_ETHDEV_LOG(ERR, "Pool ID must be 0-%d\n", RTE_ETH_64_POOLS - 1);
 		return -EINVAL;
 	}
 
@@ -6275,7 +6272,7 @@ eth_dev_handle_port_link_status(const char *cmd __rte_unused,
 	rte_tel_data_add_dict_string(d, status_str, "UP");
 	rte_tel_data_add_dict_u64(d, "speed", link.link_speed);
 	rte_tel_data_add_dict_string(d, "duplex",
-			(link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+			(link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 				"full-duplex" : "half-duplex");
 	return 0;
 }
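
A minimal consumer of the renamed link macros, assuming a started port_id
(sketch only, not part of the patch):

	struct rte_eth_link link;
	char link_str[RTE_ETH_LINK_MAX_STR_LEN];

	if (rte_eth_link_get_nowait(port_id, &link) == 0 &&
	    link.link_status == RTE_ETH_LINK_UP) {
		/* Formats speed, duplex and autoneg as in the hunk above. */
		rte_eth_link_to_str(link_str, sizeof(link_str), &link);
		printf("%s\n", link_str);
	}
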
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index fa4a68532db1..ff608afa960e 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -250,7 +250,7 @@ void rte_eth_iterator_cleanup(struct rte_dev_iterator *iter);
  * field is not supported, its value is 0.
  * All byte-related statistics do not include Ethernet FCS regardless
  * of whether these bytes have been delivered to the application
- * (see DEV_RX_OFFLOAD_KEEP_CRC).
+ * (see RTE_ETH_RX_OFFLOAD_KEEP_CRC).
  */
 struct rte_eth_stats {
 	uint64_t ipackets;  /**< Total number of successfully received packets. */
@@ -281,43 +281,75 @@ struct rte_eth_stats {
 /**@{@name Link speed capabilities
  * Device supported speeds bitmap flags
  */
-#define ETH_LINK_SPEED_AUTONEG 0             /**< Autonegotiate (all speeds) */
-#define ETH_LINK_SPEED_FIXED   RTE_BIT32(0)  /**< Disable autoneg (fixed speed) */
-#define ETH_LINK_SPEED_10M_HD  RTE_BIT32(1)  /**<  10 Mbps half-duplex */
-#define ETH_LINK_SPEED_10M     RTE_BIT32(2)  /**<  10 Mbps full-duplex */
-#define ETH_LINK_SPEED_100M_HD RTE_BIT32(3)  /**< 100 Mbps half-duplex */
-#define ETH_LINK_SPEED_100M    RTE_BIT32(4)  /**< 100 Mbps full-duplex */
-#define ETH_LINK_SPEED_1G      RTE_BIT32(5)  /**<   1 Gbps */
-#define ETH_LINK_SPEED_2_5G    RTE_BIT32(6)  /**< 2.5 Gbps */
-#define ETH_LINK_SPEED_5G      RTE_BIT32(7)  /**<   5 Gbps */
-#define ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**<  10 Gbps */
-#define ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**<  20 Gbps */
-#define ETH_LINK_SPEED_25G     RTE_BIT32(10) /**<  25 Gbps */
-#define ETH_LINK_SPEED_40G     RTE_BIT32(11) /**<  40 Gbps */
-#define ETH_LINK_SPEED_50G     RTE_BIT32(12) /**<  50 Gbps */
-#define ETH_LINK_SPEED_56G     RTE_BIT32(13) /**<  56 Gbps */
-#define ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
-#define ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
+#define RTE_ETH_LINK_SPEED_AUTONEG 0             /**< Autonegotiate (all speeds) */
+#define ETH_LINK_SPEED_AUTONEG     RTE_ETH_LINK_SPEED_AUTONEG
+#define RTE_ETH_LINK_SPEED_FIXED   RTE_BIT32(0)  /**< Disable autoneg (fixed speed) */
+#define ETH_LINK_SPEED_FIXED       RTE_ETH_LINK_SPEED_FIXED
+#define RTE_ETH_LINK_SPEED_10M_HD  RTE_BIT32(1)  /**<  10 Mbps half-duplex */
+#define ETH_LINK_SPEED_10M_HD      RTE_ETH_LINK_SPEED_10M_HD
+#define RTE_ETH_LINK_SPEED_10M     RTE_BIT32(2)  /**<  10 Mbps full-duplex */
+#define ETH_LINK_SPEED_10M         RTE_ETH_LINK_SPEED_10M
+#define RTE_ETH_LINK_SPEED_100M_HD RTE_BIT32(3)  /**< 100 Mbps half-duplex */
+#define ETH_LINK_SPEED_100M_HD     RTE_ETH_LINK_SPEED_100M_HD
+#define RTE_ETH_LINK_SPEED_100M    RTE_BIT32(4)  /**< 100 Mbps full-duplex */
+#define ETH_LINK_SPEED_100M        RTE_ETH_LINK_SPEED_100M
+#define RTE_ETH_LINK_SPEED_1G      RTE_BIT32(5)  /**<   1 Gbps */
+#define ETH_LINK_SPEED_1G          RTE_ETH_LINK_SPEED_1G
+#define RTE_ETH_LINK_SPEED_2_5G    RTE_BIT32(6)  /**< 2.5 Gbps */
+#define ETH_LINK_SPEED_2_5G        RTE_ETH_LINK_SPEED_2_5G
+#define RTE_ETH_LINK_SPEED_5G      RTE_BIT32(7)  /**<   5 Gbps */
+#define ETH_LINK_SPEED_5G          RTE_ETH_LINK_SPEED_5G
+#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**<  10 Gbps */
+#define ETH_LINK_SPEED_10G         RTE_ETH_LINK_SPEED_10G
+#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**<  20 Gbps */
+#define ETH_LINK_SPEED_20G         RTE_ETH_LINK_SPEED_20G
+#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**<  25 Gbps */
+#define ETH_LINK_SPEED_25G         RTE_ETH_LINK_SPEED_25G
+#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**<  40 Gbps */
+#define ETH_LINK_SPEED_40G         RTE_ETH_LINK_SPEED_40G
+#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**<  50 Gbps */
+#define ETH_LINK_SPEED_50G         RTE_ETH_LINK_SPEED_50G
+#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**<  56 Gbps */
+#define ETH_LINK_SPEED_56G         RTE_ETH_LINK_SPEED_56G
+#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
+#define ETH_LINK_SPEED_100G        RTE_ETH_LINK_SPEED_100G
+#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
+#define ETH_LINK_SPEED_200G        RTE_ETH_LINK_SPEED_200G
 /**@}*/
 
 /**@{@name Link speed
  * Ethernet numeric link speeds in Mbps
  */
-#define ETH_SPEED_NUM_NONE         0 /**< Not defined */
-#define ETH_SPEED_NUM_10M         10 /**<  10 Mbps */
-#define ETH_SPEED_NUM_100M       100 /**< 100 Mbps */
-#define ETH_SPEED_NUM_1G        1000 /**<   1 Gbps */
-#define ETH_SPEED_NUM_2_5G      2500 /**< 2.5 Gbps */
-#define ETH_SPEED_NUM_5G        5000 /**<   5 Gbps */
-#define ETH_SPEED_NUM_10G      10000 /**<  10 Gbps */
-#define ETH_SPEED_NUM_20G      20000 /**<  20 Gbps */
-#define ETH_SPEED_NUM_25G      25000 /**<  25 Gbps */
-#define ETH_SPEED_NUM_40G      40000 /**<  40 Gbps */
-#define ETH_SPEED_NUM_50G      50000 /**<  50 Gbps */
-#define ETH_SPEED_NUM_56G      56000 /**<  56 Gbps */
-#define ETH_SPEED_NUM_100G    100000 /**< 100 Gbps */
-#define ETH_SPEED_NUM_200G    200000 /**< 200 Gbps */
-#define ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define RTE_ETH_SPEED_NUM_NONE         0 /**< Not defined */
+#define ETH_SPEED_NUM_NONE        RTE_ETH_SPEED_NUM_NONE
+#define RTE_ETH_SPEED_NUM_10M         10 /**<  10 Mbps */
+#define ETH_SPEED_NUM_10M         RTE_ETH_SPEED_NUM_10M
+#define RTE_ETH_SPEED_NUM_100M       100 /**< 100 Mbps */
+#define ETH_SPEED_NUM_100M        RTE_ETH_SPEED_NUM_100M
+#define RTE_ETH_SPEED_NUM_1G        1000 /**<   1 Gbps */
+#define ETH_SPEED_NUM_1G          RTE_ETH_SPEED_NUM_1G
+#define RTE_ETH_SPEED_NUM_2_5G      2500 /**< 2.5 Gbps */
+#define ETH_SPEED_NUM_2_5G        RTE_ETH_SPEED_NUM_2_5G
+#define RTE_ETH_SPEED_NUM_5G        5000 /**<   5 Gbps */
+#define ETH_SPEED_NUM_5G          RTE_ETH_SPEED_NUM_5G
+#define RTE_ETH_SPEED_NUM_10G      10000 /**<  10 Gbps */
+#define ETH_SPEED_NUM_10G         RTE_ETH_SPEED_NUM_10G
+#define RTE_ETH_SPEED_NUM_20G      20000 /**<  20 Gbps */
+#define ETH_SPEED_NUM_20G         RTE_ETH_SPEED_NUM_20G
+#define RTE_ETH_SPEED_NUM_25G      25000 /**<  25 Gbps */
+#define ETH_SPEED_NUM_25G         RTE_ETH_SPEED_NUM_25G
+#define RTE_ETH_SPEED_NUM_40G      40000 /**<  40 Gbps */
+#define ETH_SPEED_NUM_40G         RTE_ETH_SPEED_NUM_40G
+#define RTE_ETH_SPEED_NUM_50G      50000 /**<  50 Gbps */
+#define ETH_SPEED_NUM_50G         RTE_ETH_SPEED_NUM_50G
+#define RTE_ETH_SPEED_NUM_56G      56000 /**<  56 Gbps */
+#define ETH_SPEED_NUM_56G         RTE_ETH_SPEED_NUM_56G
+#define RTE_ETH_SPEED_NUM_100G    100000 /**< 100 Gbps */
+#define ETH_SPEED_NUM_100G        RTE_ETH_SPEED_NUM_100G
+#define RTE_ETH_SPEED_NUM_200G    200000 /**< 200 Gbps */
+#define ETH_SPEED_NUM_200G        RTE_ETH_SPEED_NUM_200G
+#define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define ETH_SPEED_NUM_UNKNOWN     RTE_ETH_SPEED_NUM_UNKNOWN
 /**@}*/
 
 /**
@@ -325,21 +357,27 @@ struct rte_eth_stats {
  */
 __extension__
 struct rte_eth_link {
-	uint32_t link_speed;        /**< ETH_SPEED_NUM_ */
-	uint16_t link_duplex  : 1;  /**< ETH_LINK_[HALF/FULL]_DUPLEX */
-	uint16_t link_autoneg : 1;  /**< ETH_LINK_[AUTONEG/FIXED] */
-	uint16_t link_status  : 1;  /**< ETH_LINK_[DOWN/UP] */
+	uint32_t link_speed;        /**< RTE_ETH_SPEED_NUM_ */
+	uint16_t link_duplex  : 1;  /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+	uint16_t link_autoneg : 1;  /**< RTE_ETH_LINK_[AUTONEG/FIXED] */
+	uint16_t link_status  : 1;  /**< RTE_ETH_LINK_[DOWN/UP] */
 } __rte_aligned(8);      /**< aligned for atomic64 read/write */
 
 /**@{@name Link negotiation
  * Constants used in link management.
  */
-#define ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
-#define ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
-#define ETH_LINK_DOWN        0 /**< Link is down (see link_status). */
-#define ETH_LINK_UP          1 /**< Link is up (see link_status). */
-#define ETH_LINK_FIXED       0 /**< No autonegotiation (see link_autoneg). */
-#define ETH_LINK_AUTONEG     1 /**< Autonegotiated (see link_autoneg). */
+#define RTE_ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
+#define ETH_LINK_HALF_DUPLEX     RTE_ETH_LINK_HALF_DUPLEX
+#define RTE_ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
+#define ETH_LINK_FULL_DUPLEX     RTE_ETH_LINK_FULL_DUPLEX
+#define RTE_ETH_LINK_DOWN        0 /**< Link is down (see link_status). */
+#define ETH_LINK_DOWN            RTE_ETH_LINK_DOWN
+#define RTE_ETH_LINK_UP          1 /**< Link is up (see link_status). */
+#define ETH_LINK_UP              RTE_ETH_LINK_UP
+#define RTE_ETH_LINK_FIXED       0 /**< No autonegotiation (see link_autoneg). */
+#define ETH_LINK_FIXED           RTE_ETH_LINK_FIXED
+#define RTE_ETH_LINK_AUTONEG     1 /**< Autonegotiated (see link_autoneg). */
+#define ETH_LINK_AUTONEG         RTE_ETH_LINK_AUTONEG
 #define RTE_ETH_LINK_MAX_STR_LEN 40 /**< Max length of default link string. */
 /**@}*/
 
@@ -356,9 +394,12 @@ struct rte_eth_thresh {
 /**@{@name Multi-queue mode
  * @see rte_eth_conf.rxmode.mq_mode.
  */
-#define ETH_MQ_RX_RSS_FLAG  0x1 /**< Enable RSS. @see rte_eth_rss_conf */
-#define ETH_MQ_RX_DCB_FLAG  0x2 /**< Enable DCB. */
-#define ETH_MQ_RX_VMDQ_FLAG 0x4 /**< Enable VMDq. */
+#define RTE_ETH_MQ_RX_RSS_FLAG  0x1
+#define ETH_MQ_RX_RSS_FLAG      RTE_ETH_MQ_RX_RSS_FLAG
+#define RTE_ETH_MQ_RX_DCB_FLAG  0x2
+#define ETH_MQ_RX_DCB_FLAG      RTE_ETH_MQ_RX_DCB_FLAG
+#define RTE_ETH_MQ_RX_VMDQ_FLAG 0x4
+#define ETH_MQ_RX_VMDQ_FLAG     RTE_ETH_MQ_RX_VMDQ_FLAG
 /**@}*/
 
 /**
@@ -367,50 +408,49 @@ struct rte_eth_thresh {
  */
 enum rte_eth_rx_mq_mode {
 	/** None of DCB, RSS or VMDq mode */
-	ETH_MQ_RX_NONE = 0,
+	RTE_ETH_MQ_RX_NONE = 0,
 
 	/** For Rx side, only RSS is on */
-	ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
+	RTE_ETH_MQ_RX_RSS = RTE_ETH_MQ_RX_RSS_FLAG,
 	/** For Rx side,only DCB is on. */
-	ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
+	RTE_ETH_MQ_RX_DCB = RTE_ETH_MQ_RX_DCB_FLAG,
 	/** Both DCB and RSS enable */
-	ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG,
+	RTE_ETH_MQ_RX_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
 
 	/** Only VMDq, no RSS nor DCB */
-	ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_ONLY = RTE_ETH_MQ_RX_VMDQ_FLAG,
 	/** RSS mode with VMDq */
-	ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG,
 	/** Use VMDq+DCB to route traffic to queues */
-	ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_DCB = RTE_ETH_MQ_RX_VMDQ_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
 	/** Enable both VMDq and DCB in VMDq */
-	ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG |
-				 ETH_MQ_RX_VMDQ_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG |
+				 RTE_ETH_MQ_RX_VMDQ_FLAG,
 };
 
-/**
- * for Rx mq mode backward compatible
- */
-#define ETH_RSS                       ETH_MQ_RX_RSS
-#define VMDQ_DCB                      ETH_MQ_RX_VMDQ_DCB
-#define ETH_DCB_RX                    ETH_MQ_RX_DCB
+#define ETH_MQ_RX_NONE		RTE_ETH_MQ_RX_NONE
+#define ETH_MQ_RX_RSS		RTE_ETH_MQ_RX_RSS
+#define ETH_MQ_RX_DCB		RTE_ETH_MQ_RX_DCB
+#define ETH_MQ_RX_DCB_RSS	RTE_ETH_MQ_RX_DCB_RSS
+#define ETH_MQ_RX_VMDQ_ONLY	RTE_ETH_MQ_RX_VMDQ_ONLY
+#define ETH_MQ_RX_VMDQ_RSS	RTE_ETH_MQ_RX_VMDQ_RSS
+#define ETH_MQ_RX_VMDQ_DCB	RTE_ETH_MQ_RX_VMDQ_DCB
+#define ETH_MQ_RX_VMDQ_DCB_RSS	RTE_ETH_MQ_RX_VMDQ_DCB_RSS
 
 /**
  * A set of values to identify what method is to be used to transmit
  * packets using multi-TCs.
  */
 enum rte_eth_tx_mq_mode {
-	ETH_MQ_TX_NONE    = 0,  /**< It is in neither DCB nor VT mode. */
-	ETH_MQ_TX_DCB,          /**< For Tx side,only DCB is on. */
-	ETH_MQ_TX_VMDQ_DCB,	/**< For Tx side,both DCB and VT is on. */
-	ETH_MQ_TX_VMDQ_ONLY,    /**< Only VT on, no DCB */
+	RTE_ETH_MQ_TX_NONE    = 0,  /**< It is in neither DCB nor VT mode. */
+	RTE_ETH_MQ_TX_DCB,          /**< For Tx side,only DCB is on. */
+	RTE_ETH_MQ_TX_VMDQ_DCB,     /**< For Tx side,both DCB and VT is on. */
+	RTE_ETH_MQ_TX_VMDQ_ONLY,    /**< Only VT on, no DCB */
 };
-
-/**
- * for Tx mq mode backward compatible
- */
-#define ETH_DCB_NONE                ETH_MQ_TX_NONE
-#define ETH_VMDQ_DCB_TX             ETH_MQ_TX_VMDQ_DCB
-#define ETH_DCB_TX                  ETH_MQ_TX_DCB
+#define ETH_MQ_TX_NONE		RTE_ETH_MQ_TX_NONE
+#define ETH_MQ_TX_DCB		RTE_ETH_MQ_TX_DCB
+#define ETH_MQ_TX_VMDQ_DCB	RTE_ETH_MQ_TX_VMDQ_DCB
+#define ETH_MQ_TX_VMDQ_ONLY	RTE_ETH_MQ_TX_VMDQ_ONLY
 
 /**
  * A structure used to configure the Rx features of an Ethernet port.
@@ -423,7 +463,7 @@ struct rte_eth_rxmode {
 	uint32_t max_lro_pkt_size;
 	uint16_t split_hdr_size;  /**< hdr buf size (header_split enabled).*/
 	/**
-	 * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Per-port Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
 	 * Only offloads set on rx_offload_capa field on rte_eth_dev_info
 	 * structure are allowed to be set.
 	 */
@@ -438,12 +478,17 @@ struct rte_eth_rxmode {
  * Note that single VLAN is treated the same as inner VLAN.
  */
 enum rte_vlan_type {
-	ETH_VLAN_TYPE_UNKNOWN = 0,
-	ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
-	ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
-	ETH_VLAN_TYPE_MAX,
+	RTE_ETH_VLAN_TYPE_UNKNOWN = 0,
+	RTE_ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
+	RTE_ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
+	RTE_ETH_VLAN_TYPE_MAX,
 };
 
+#define ETH_VLAN_TYPE_UNKNOWN	RTE_ETH_VLAN_TYPE_UNKNOWN
+#define ETH_VLAN_TYPE_INNER	RTE_ETH_VLAN_TYPE_INNER
+#define ETH_VLAN_TYPE_OUTER	RTE_ETH_VLAN_TYPE_OUTER
+#define ETH_VLAN_TYPE_MAX	RTE_ETH_VLAN_TYPE_MAX
+
 /**
  * A structure used to describe a VLAN filter.
  * If the bit corresponding to a VID is set, such VID is on.
@@ -514,38 +559,70 @@ struct rte_eth_rss_conf {
  * Below macros are defined for RSS offload types, they can be used to
  * fill rte_eth_rss_conf.rss_hf or rte_flow_action_rss.types.
  */
-#define ETH_RSS_IPV4               RTE_BIT64(2)
-#define ETH_RSS_FRAG_IPV4          RTE_BIT64(3)
-#define ETH_RSS_NONFRAG_IPV4_TCP   RTE_BIT64(4)
-#define ETH_RSS_NONFRAG_IPV4_UDP   RTE_BIT64(5)
-#define ETH_RSS_NONFRAG_IPV4_SCTP  RTE_BIT64(6)
-#define ETH_RSS_NONFRAG_IPV4_OTHER RTE_BIT64(7)
-#define ETH_RSS_IPV6               RTE_BIT64(8)
-#define ETH_RSS_FRAG_IPV6          RTE_BIT64(9)
-#define ETH_RSS_NONFRAG_IPV6_TCP   RTE_BIT64(10)
-#define ETH_RSS_NONFRAG_IPV6_UDP   RTE_BIT64(11)
-#define ETH_RSS_NONFRAG_IPV6_SCTP  RTE_BIT64(12)
-#define ETH_RSS_NONFRAG_IPV6_OTHER RTE_BIT64(13)
-#define ETH_RSS_L2_PAYLOAD         RTE_BIT64(14)
-#define ETH_RSS_IPV6_EX            RTE_BIT64(15)
-#define ETH_RSS_IPV6_TCP_EX        RTE_BIT64(16)
-#define ETH_RSS_IPV6_UDP_EX        RTE_BIT64(17)
-#define ETH_RSS_PORT               RTE_BIT64(18)
-#define ETH_RSS_VXLAN              RTE_BIT64(19)
-#define ETH_RSS_GENEVE             RTE_BIT64(20)
-#define ETH_RSS_NVGRE              RTE_BIT64(21)
-#define ETH_RSS_GTPU               RTE_BIT64(23)
-#define ETH_RSS_ETH                RTE_BIT64(24)
-#define ETH_RSS_S_VLAN             RTE_BIT64(25)
-#define ETH_RSS_C_VLAN             RTE_BIT64(26)
-#define ETH_RSS_ESP                RTE_BIT64(27)
-#define ETH_RSS_AH                 RTE_BIT64(28)
-#define ETH_RSS_L2TPV3             RTE_BIT64(29)
-#define ETH_RSS_PFCP               RTE_BIT64(30)
-#define ETH_RSS_PPPOE              RTE_BIT64(31)
-#define ETH_RSS_ECPRI              RTE_BIT64(32)
-#define ETH_RSS_MPLS               RTE_BIT64(33)
-#define ETH_RSS_IPV4_CHKSUM        RTE_BIT64(34)
+#define RTE_ETH_RSS_IPV4               RTE_BIT64(2)
+#define ETH_RSS_IPV4                   RTE_ETH_RSS_IPV4
+#define RTE_ETH_RSS_FRAG_IPV4          RTE_BIT64(3)
+#define ETH_RSS_FRAG_IPV4              RTE_ETH_RSS_FRAG_IPV4
+#define RTE_ETH_RSS_NONFRAG_IPV4_TCP   RTE_BIT64(4)
+#define ETH_RSS_NONFRAG_IPV4_TCP       RTE_ETH_RSS_NONFRAG_IPV4_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV4_UDP   RTE_BIT64(5)
+#define ETH_RSS_NONFRAG_IPV4_UDP       RTE_ETH_RSS_NONFRAG_IPV4_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV4_SCTP  RTE_BIT64(6)
+#define ETH_RSS_NONFRAG_IPV4_SCTP      RTE_ETH_RSS_NONFRAG_IPV4_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV4_OTHER RTE_BIT64(7)
+#define ETH_RSS_NONFRAG_IPV4_OTHER     RTE_ETH_RSS_NONFRAG_IPV4_OTHER
+#define RTE_ETH_RSS_IPV6               RTE_BIT64(8)
+#define ETH_RSS_IPV6                   RTE_ETH_RSS_IPV6
+#define RTE_ETH_RSS_FRAG_IPV6          RTE_BIT64(9)
+#define ETH_RSS_FRAG_IPV6              RTE_ETH_RSS_FRAG_IPV6
+#define RTE_ETH_RSS_NONFRAG_IPV6_TCP   RTE_BIT64(10)
+#define ETH_RSS_NONFRAG_IPV6_TCP       RTE_ETH_RSS_NONFRAG_IPV6_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV6_UDP   RTE_BIT64(11)
+#define ETH_RSS_NONFRAG_IPV6_UDP       RTE_ETH_RSS_NONFRAG_IPV6_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV6_SCTP  RTE_BIT64(12)
+#define ETH_RSS_NONFRAG_IPV6_SCTP      RTE_ETH_RSS_NONFRAG_IPV6_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV6_OTHER RTE_BIT64(13)
+#define ETH_RSS_NONFRAG_IPV6_OTHER     RTE_ETH_RSS_NONFRAG_IPV6_OTHER
+#define RTE_ETH_RSS_L2_PAYLOAD         RTE_BIT64(14)
+#define ETH_RSS_L2_PAYLOAD             RTE_ETH_RSS_L2_PAYLOAD
+#define RTE_ETH_RSS_IPV6_EX            RTE_BIT64(15)
+#define ETH_RSS_IPV6_EX                RTE_ETH_RSS_IPV6_EX
+#define RTE_ETH_RSS_IPV6_TCP_EX        RTE_BIT64(16)
+#define ETH_RSS_IPV6_TCP_EX            RTE_ETH_RSS_IPV6_TCP_EX
+#define RTE_ETH_RSS_IPV6_UDP_EX        RTE_BIT64(17)
+#define ETH_RSS_IPV6_UDP_EX            RTE_ETH_RSS_IPV6_UDP_EX
+#define RTE_ETH_RSS_PORT               RTE_BIT64(18)
+#define ETH_RSS_PORT                   RTE_ETH_RSS_PORT
+#define RTE_ETH_RSS_VXLAN              RTE_BIT64(19)
+#define ETH_RSS_VXLAN                  RTE_ETH_RSS_VXLAN
+#define RTE_ETH_RSS_GENEVE             RTE_BIT64(20)
+#define ETH_RSS_GENEVE                 RTE_ETH_RSS_GENEVE
+#define RTE_ETH_RSS_NVGRE              RTE_BIT64(21)
+#define ETH_RSS_NVGRE                  RTE_ETH_RSS_NVGRE
+#define RTE_ETH_RSS_GTPU               RTE_BIT64(23)
+#define ETH_RSS_GTPU                   RTE_ETH_RSS_GTPU
+#define RTE_ETH_RSS_ETH                RTE_BIT64(24)
+#define ETH_RSS_ETH                    RTE_ETH_RSS_ETH
+#define RTE_ETH_RSS_S_VLAN             RTE_BIT64(25)
+#define ETH_RSS_S_VLAN                 RTE_ETH_RSS_S_VLAN
+#define RTE_ETH_RSS_C_VLAN             RTE_BIT64(26)
+#define ETH_RSS_C_VLAN                 RTE_ETH_RSS_C_VLAN
+#define RTE_ETH_RSS_ESP                RTE_BIT64(27)
+#define ETH_RSS_ESP                    RTE_ETH_RSS_ESP
+#define RTE_ETH_RSS_AH                 RTE_BIT64(28)
+#define ETH_RSS_AH                     RTE_ETH_RSS_AH
+#define RTE_ETH_RSS_L2TPV3             RTE_BIT64(29)
+#define ETH_RSS_L2TPV3                 RTE_ETH_RSS_L2TPV3
+#define RTE_ETH_RSS_PFCP               RTE_BIT64(30)
+#define ETH_RSS_PFCP                   RTE_ETH_RSS_PFCP
+#define RTE_ETH_RSS_PPPOE              RTE_BIT64(31)
+#define ETH_RSS_PPPOE                  RTE_ETH_RSS_PPPOE
+#define RTE_ETH_RSS_ECPRI              RTE_BIT64(32)
+#define ETH_RSS_ECPRI                  RTE_ETH_RSS_ECPRI
+#define RTE_ETH_RSS_MPLS               RTE_BIT64(33)
+#define ETH_RSS_MPLS                   RTE_ETH_RSS_MPLS
+#define RTE_ETH_RSS_IPV4_CHKSUM        RTE_BIT64(34)
+#define ETH_RSS_IPV4_CHKSUM            RTE_ETH_RSS_IPV4_CHKSUM
 
 /**
  * The ETH_RSS_L4_CHKSUM works on checksum field of any L4 header.
@@ -554,41 +631,48 @@ struct rte_eth_rss_conf {
  * checksum type for constructing the use of RSS offload bits.
  *
  * Due to above reason, some old APIs (and configuration) don't support
- * ETH_RSS_L4_CHKSUM. The rte_flow RSS API supports it.
+ * RTE_ETH_RSS_L4_CHKSUM. The rte_flow RSS API supports it.
  *
  * For the case that checksum is not used in an UDP header,
  * it takes the reserved value 0 as input for the hash function.
  */
-#define ETH_RSS_L4_CHKSUM          RTE_BIT64(35)
+#define RTE_ETH_RSS_L4_CHKSUM          RTE_BIT64(35)
+#define ETH_RSS_L4_CHKSUM              RTE_ETH_RSS_L4_CHKSUM
 
 /*
- * We use the following macros to combine with above ETH_RSS_* for
+ * We use the following macros to combine with above RTE_ETH_RSS_* for
  * more specific input set selection. These bits are defined starting
  * from the high end of the 64 bits.
- * Note: If we use above ETH_RSS_* without SRC/DST_ONLY, it represents
+ * Note: If we use above RTE_ETH_RSS_* without SRC/DST_ONLY, it represents
  * both SRC and DST are taken into account. If SRC_ONLY and DST_ONLY of
  * the same level are used simultaneously, it is the same case as none of
  * them are added.
  */
-#define ETH_RSS_L3_SRC_ONLY        RTE_BIT64(63)
-#define ETH_RSS_L3_DST_ONLY        RTE_BIT64(62)
-#define ETH_RSS_L4_SRC_ONLY        RTE_BIT64(61)
-#define ETH_RSS_L4_DST_ONLY        RTE_BIT64(60)
-#define ETH_RSS_L2_SRC_ONLY        RTE_BIT64(59)
-#define ETH_RSS_L2_DST_ONLY        RTE_BIT64(58)
+#define RTE_ETH_RSS_L3_SRC_ONLY        RTE_BIT64(63)
+#define ETH_RSS_L3_SRC_ONLY            RTE_ETH_RSS_L3_SRC_ONLY
+#define RTE_ETH_RSS_L3_DST_ONLY        RTE_BIT64(62)
+#define ETH_RSS_L3_DST_ONLY            RTE_ETH_RSS_L3_DST_ONLY
+#define RTE_ETH_RSS_L4_SRC_ONLY        RTE_BIT64(61)
+#define ETH_RSS_L4_SRC_ONLY            RTE_ETH_RSS_L4_SRC_ONLY
+#define RTE_ETH_RSS_L4_DST_ONLY        RTE_BIT64(60)
+#define ETH_RSS_L4_DST_ONLY            RTE_ETH_RSS_L4_DST_ONLY
+#define RTE_ETH_RSS_L2_SRC_ONLY        RTE_BIT64(59)
+#define ETH_RSS_L2_SRC_ONLY            RTE_ETH_RSS_L2_SRC_ONLY
+#define RTE_ETH_RSS_L2_DST_ONLY        RTE_BIT64(58)
+#define ETH_RSS_L2_DST_ONLY            RTE_ETH_RSS_L2_DST_ONLY
 
 /*
  * Only select IPV6 address prefix as RSS input set according to
- * https://tools.ietf.org/html/rfc6052
- * Must be combined with ETH_RSS_IPV6, ETH_RSS_NONFRAG_IPV6_UDP,
- * ETH_RSS_NONFRAG_IPV6_TCP, ETH_RSS_NONFRAG_IPV6_SCTP.
+ * https://tools.ietf.org/html/rfc6052
+ * Must be combined with RTE_ETH_RSS_IPV6, RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ * RTE_ETH_RSS_NONFRAG_IPV6_TCP, RTE_ETH_RSS_NONFRAG_IPV6_SCTP.
  */
-#define RTE_ETH_RSS_L3_PRE32	   RTE_BIT64(57)
-#define RTE_ETH_RSS_L3_PRE40	   RTE_BIT64(56)
-#define RTE_ETH_RSS_L3_PRE48	   RTE_BIT64(55)
-#define RTE_ETH_RSS_L3_PRE56	   RTE_BIT64(54)
-#define RTE_ETH_RSS_L3_PRE64	   RTE_BIT64(53)
-#define RTE_ETH_RSS_L3_PRE96	   RTE_BIT64(52)
+#define RTE_ETH_RSS_L3_PRE32           RTE_BIT64(57)
+#define RTE_ETH_RSS_L3_PRE40           RTE_BIT64(56)
+#define RTE_ETH_RSS_L3_PRE48           RTE_BIT64(55)
+#define RTE_ETH_RSS_L3_PRE56           RTE_BIT64(54)
+#define RTE_ETH_RSS_L3_PRE64           RTE_BIT64(53)
+#define RTE_ETH_RSS_L3_PRE96           RTE_BIT64(52)
 
 /*
  * Use the following macros to combine with the above layers
@@ -603,22 +687,27 @@ struct rte_eth_rss_conf {
  * It basically stands for the innermost encapsulation level RSS
  * can be performed on according to PMD and device capabilities.
  */
-#define ETH_RSS_LEVEL_PMD_DEFAULT       (0ULL << 50)
+#define RTE_ETH_RSS_LEVEL_PMD_DEFAULT  (0ULL << 50)
+#define ETH_RSS_LEVEL_PMD_DEFAULT      RTE_ETH_RSS_LEVEL_PMD_DEFAULT
 
 /**
  * level 1, requests RSS to be performed on the outermost packet
  * encapsulation level.
  */
-#define ETH_RSS_LEVEL_OUTERMOST         (1ULL << 50)
+#define RTE_ETH_RSS_LEVEL_OUTERMOST    (1ULL << 50)
+#define ETH_RSS_LEVEL_OUTERMOST        RTE_ETH_RSS_LEVEL_OUTERMOST
 
 /**
  * level 2, requests RSS to be performed on the specified inner packet
  * encapsulation level, from outermost to innermost (lower to higher values).
  */
-#define ETH_RSS_LEVEL_INNERMOST         (2ULL << 50)
-#define ETH_RSS_LEVEL_MASK              (3ULL << 50)
+#define RTE_ETH_RSS_LEVEL_INNERMOST    (2ULL << 50)
+#define ETH_RSS_LEVEL_INNERMOST        RTE_ETH_RSS_LEVEL_INNERMOST
+#define RTE_ETH_RSS_LEVEL_MASK         (3ULL << 50)
+#define ETH_RSS_LEVEL_MASK             RTE_ETH_RSS_LEVEL_MASK
 
-#define ETH_RSS_LEVEL(rss_hf) ((rss_hf & ETH_RSS_LEVEL_MASK) >> 50)
+#define RTE_ETH_RSS_LEVEL(rss_hf) ((rss_hf & RTE_ETH_RSS_LEVEL_MASK) >> 50)
+#define ETH_RSS_LEVEL(rss_hf)          RTE_ETH_RSS_LEVEL(rss_hf)
 
 /**
  * For input set change of hash filter, if SRC_ONLY and DST_ONLY of
@@ -633,217 +722,275 @@ struct rte_eth_rss_conf {
 static inline uint64_t
 rte_eth_rss_hf_refine(uint64_t rss_hf)
 {
-	if ((rss_hf & ETH_RSS_L3_SRC_ONLY) && (rss_hf & ETH_RSS_L3_DST_ONLY))
-		rss_hf &= ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+	if ((rss_hf & RTE_ETH_RSS_L3_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L3_DST_ONLY))
+		rss_hf &= ~(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
 
-	if ((rss_hf & ETH_RSS_L4_SRC_ONLY) && (rss_hf & ETH_RSS_L4_DST_ONLY))
-		rss_hf &= ~(ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+	if ((rss_hf & RTE_ETH_RSS_L4_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L4_DST_ONLY))
+		rss_hf &= ~(RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
 
 	return rss_hf;
 }
 
-#define ETH_RSS_IPV6_PRE32 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE32 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32	RTE_ETH_RSS_IPV6_PRE32
 
-#define ETH_RSS_IPV6_PRE40 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE40 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40	RTE_ETH_RSS_IPV6_PRE40
 
-#define ETH_RSS_IPV6_PRE48 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE48 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48	RTE_ETH_RSS_IPV6_PRE48
 
-#define ETH_RSS_IPV6_PRE56 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE56 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56	RTE_ETH_RSS_IPV6_PRE56
 
-#define ETH_RSS_IPV6_PRE64 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE64 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64	RTE_ETH_RSS_IPV6_PRE64
 
-#define ETH_RSS_IPV6_PRE96 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE96 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96	RTE_ETH_RSS_IPV6_PRE96
 
-#define ETH_RSS_IPV6_PRE32_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE32_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_UDP	RTE_ETH_RSS_IPV6_PRE32_UDP
 
-#define ETH_RSS_IPV6_PRE40_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE40_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_UDP	RTE_ETH_RSS_IPV6_PRE40_UDP
 
-#define ETH_RSS_IPV6_PRE48_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE48_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_UDP	RTE_ETH_RSS_IPV6_PRE48_UDP
 
-#define ETH_RSS_IPV6_PRE56_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE56_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_UDP	RTE_ETH_RSS_IPV6_PRE56_UDP
 
-#define ETH_RSS_IPV6_PRE64_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE64_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_UDP	RTE_ETH_RSS_IPV6_PRE64_UDP
 
-#define ETH_RSS_IPV6_PRE96_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE96_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_UDP	RTE_ETH_RSS_IPV6_PRE96_UDP
 
-#define ETH_RSS_IPV6_PRE32_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE32_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_TCP	RTE_ETH_RSS_IPV6_PRE32_TCP
 
-#define ETH_RSS_IPV6_PRE40_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE40_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_TCP	RTE_ETH_RSS_IPV6_PRE40_TCP
 
-#define ETH_RSS_IPV6_PRE48_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE48_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_TCP	RTE_ETH_RSS_IPV6_PRE48_TCP
 
-#define ETH_RSS_IPV6_PRE56_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE56_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_TCP	RTE_ETH_RSS_IPV6_PRE56_TCP
 
-#define ETH_RSS_IPV6_PRE64_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE64_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_TCP	RTE_ETH_RSS_IPV6_PRE64_TCP
 
-#define ETH_RSS_IPV6_PRE96_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE96_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_TCP	RTE_ETH_RSS_IPV6_PRE96_TCP
 
-#define ETH_RSS_IPV6_PRE32_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE32_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_SCTP	RTE_ETH_RSS_IPV6_PRE32_SCTP
 
-#define ETH_RSS_IPV6_PRE40_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE40_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_SCTP	RTE_ETH_RSS_IPV6_PRE40_SCTP
 
-#define ETH_RSS_IPV6_PRE48_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE48_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_SCTP	RTE_ETH_RSS_IPV6_PRE48_SCTP
 
-#define ETH_RSS_IPV6_PRE56_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE56_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_SCTP	RTE_ETH_RSS_IPV6_PRE56_SCTP
 
-#define ETH_RSS_IPV6_PRE64_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE64_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_SCTP	RTE_ETH_RSS_IPV6_PRE64_SCTP
 
-#define ETH_RSS_IPV6_PRE96_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE96_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE96)
-
-#define ETH_RSS_IP ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_IPV6_EX)
-
-#define ETH_RSS_UDP ( \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_UDP_EX)
-
-#define ETH_RSS_TCP ( \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_IPV6_TCP_EX)
-
-#define ETH_RSS_SCTP ( \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP)
-
-#define ETH_RSS_TUNNEL ( \
-	ETH_RSS_VXLAN  | \
-	ETH_RSS_GENEVE | \
-	ETH_RSS_NVGRE)
-
-#define ETH_RSS_VLAN ( \
-	ETH_RSS_S_VLAN  | \
-	ETH_RSS_C_VLAN)
+#define ETH_RSS_IPV6_PRE96_SCTP	RTE_ETH_RSS_IPV6_PRE96_SCTP
+
+#define RTE_ETH_RSS_IP ( \
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_IPV6_EX)
+#define ETH_RSS_IP	RTE_ETH_RSS_IP
+
+#define RTE_ETH_RSS_UDP ( \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
+#define ETH_RSS_UDP	RTE_ETH_RSS_UDP
+
+#define RTE_ETH_RSS_TCP ( \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_IPV6_TCP_EX)
+#define ETH_RSS_TCP	RTE_ETH_RSS_TCP
+
+#define RTE_ETH_RSS_SCTP ( \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+#define ETH_RSS_SCTP	RTE_ETH_RSS_SCTP
+
+#define RTE_ETH_RSS_TUNNEL ( \
+	RTE_ETH_RSS_VXLAN  | \
+	RTE_ETH_RSS_GENEVE | \
+	RTE_ETH_RSS_NVGRE)
+#define ETH_RSS_TUNNEL	RTE_ETH_RSS_TUNNEL
+
+#define RTE_ETH_RSS_VLAN ( \
+	RTE_ETH_RSS_S_VLAN  | \
+	RTE_ETH_RSS_C_VLAN)
+#define ETH_RSS_VLAN	RTE_ETH_RSS_VLAN
 
 /** Mask of valid RSS hash protocols */
-#define ETH_RSS_PROTO_MASK ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX | \
-	ETH_RSS_PORT  | \
-	ETH_RSS_VXLAN | \
-	ETH_RSS_GENEVE | \
-	ETH_RSS_NVGRE | \
-	ETH_RSS_MPLS)
+#define RTE_ETH_RSS_PROTO_MASK ( \
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L2_PAYLOAD | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX | \
+	RTE_ETH_RSS_PORT  | \
+	RTE_ETH_RSS_VXLAN | \
+	RTE_ETH_RSS_GENEVE | \
+	RTE_ETH_RSS_NVGRE | \
+	RTE_ETH_RSS_MPLS)
+#define ETH_RSS_PROTO_MASK	RTE_ETH_RSS_PROTO_MASK
 
 /*
  * Definitions used for redirection table entry size.
  * Some RSS RETA sizes may not be supported by some drivers, check the
  * documentation or the description of relevant functions for more details.
  */
-#define ETH_RSS_RETA_SIZE_64  64
-#define ETH_RSS_RETA_SIZE_128 128
-#define ETH_RSS_RETA_SIZE_256 256
-#define ETH_RSS_RETA_SIZE_512 512
-#define RTE_RETA_GROUP_SIZE   64
+#define RTE_ETH_RSS_RETA_SIZE_64  64
+#define ETH_RSS_RETA_SIZE_64      RTE_ETH_RSS_RETA_SIZE_64
+#define RTE_ETH_RSS_RETA_SIZE_128 128
+#define ETH_RSS_RETA_SIZE_128     RTE_ETH_RSS_RETA_SIZE_128
+#define RTE_ETH_RSS_RETA_SIZE_256 256
+#define ETH_RSS_RETA_SIZE_256     RTE_ETH_RSS_RETA_SIZE_256
+#define RTE_ETH_RSS_RETA_SIZE_512 512
+#define ETH_RSS_RETA_SIZE_512     RTE_ETH_RSS_RETA_SIZE_512
+#define RTE_ETH_RETA_GROUP_SIZE   64
+#define RTE_RETA_GROUP_SIZE       RTE_ETH_RETA_GROUP_SIZE
 
 /**@{@name VMDq and DCB maximums */
-#define ETH_VMDQ_MAX_VLAN_FILTERS   64 /**< Maximum nb. of VMDq VLAN filters. */
-#define ETH_DCB_NUM_USER_PRIORITIES 8  /**< Maximum nb. of DCB priorities. */
-#define ETH_VMDQ_DCB_NUM_QUEUES     128 /**< Maximum nb. of VMDq DCB queues. */
-#define ETH_DCB_NUM_QUEUES          128 /**< Maximum nb. of DCB queues. */
+#define RTE_ETH_VMDQ_MAX_VLAN_FILTERS   64 /**< Maximum nb. of VMDq VLAN filters. */
+#define ETH_VMDQ_MAX_VLAN_FILTERS       RTE_ETH_VMDQ_MAX_VLAN_FILTERS
+#define RTE_ETH_DCB_NUM_USER_PRIORITIES 8  /**< Maximum nb. of DCB priorities. */
+#define ETH_DCB_NUM_USER_PRIORITIES     RTE_ETH_DCB_NUM_USER_PRIORITIES
+#define RTE_ETH_VMDQ_DCB_NUM_QUEUES     128 /**< Maximum nb. of VMDq DCB queues. */
+#define ETH_VMDQ_DCB_NUM_QUEUES         RTE_ETH_VMDQ_DCB_NUM_QUEUES
+#define RTE_ETH_DCB_NUM_QUEUES          128 /**< Maximum nb. of DCB queues. */
+#define ETH_DCB_NUM_QUEUES              RTE_ETH_DCB_NUM_QUEUES
 /**@}*/
 
 /**@{@name DCB capabilities */
-#define ETH_DCB_PG_SUPPORT      0x00000001 /**< Priority Group(ETS) support. */
-#define ETH_DCB_PFC_SUPPORT     0x00000002 /**< Priority Flow Control support. */
+#define RTE_ETH_DCB_PG_SUPPORT      0x00000001 /**< Priority Group(ETS) support. */
+#define ETH_DCB_PG_SUPPORT          RTE_ETH_DCB_PG_SUPPORT
+#define RTE_ETH_DCB_PFC_SUPPORT     0x00000002 /**< Priority Flow Control support. */
+#define ETH_DCB_PFC_SUPPORT         RTE_ETH_DCB_PFC_SUPPORT
 /**@}*/
 
 /**@{@name VLAN offload bits */
-#define ETH_VLAN_STRIP_OFFLOAD   0x0001 /**< VLAN Strip  On/Off */
-#define ETH_VLAN_FILTER_OFFLOAD  0x0002 /**< VLAN Filter On/Off */
-#define ETH_VLAN_EXTEND_OFFLOAD  0x0004 /**< VLAN Extend On/Off */
-#define ETH_QINQ_STRIP_OFFLOAD   0x0008 /**< QINQ Strip On/Off */
-
-#define ETH_VLAN_STRIP_MASK   0x0001 /**< VLAN Strip  setting mask */
-#define ETH_VLAN_FILTER_MASK  0x0002 /**< VLAN Filter  setting mask*/
-#define ETH_VLAN_EXTEND_MASK  0x0004 /**< VLAN Extend  setting mask*/
-#define ETH_QINQ_STRIP_MASK   0x0008 /**< QINQ Strip  setting mask */
-#define ETH_VLAN_ID_MAX       0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define RTE_ETH_VLAN_STRIP_OFFLOAD   0x0001 /**< VLAN Strip  On/Off */
+#define ETH_VLAN_STRIP_OFFLOAD       RTE_ETH_VLAN_STRIP_OFFLOAD
+#define RTE_ETH_VLAN_FILTER_OFFLOAD  0x0002 /**< VLAN Filter On/Off */
+#define ETH_VLAN_FILTER_OFFLOAD      RTE_ETH_VLAN_FILTER_OFFLOAD
+#define RTE_ETH_VLAN_EXTEND_OFFLOAD  0x0004 /**< VLAN Extend On/Off */
+#define ETH_VLAN_EXTEND_OFFLOAD      RTE_ETH_VLAN_EXTEND_OFFLOAD
+#define RTE_ETH_QINQ_STRIP_OFFLOAD   0x0008 /**< QINQ Strip On/Off */
+#define ETH_QINQ_STRIP_OFFLOAD       RTE_ETH_QINQ_STRIP_OFFLOAD
+
+#define RTE_ETH_VLAN_STRIP_MASK      0x0001 /**< VLAN Strip  setting mask */
+#define ETH_VLAN_STRIP_MASK          RTE_ETH_VLAN_STRIP_MASK
+#define RTE_ETH_VLAN_FILTER_MASK     0x0002 /**< VLAN Filter  setting mask*/
+#define ETH_VLAN_FILTER_MASK         RTE_ETH_VLAN_FILTER_MASK
+#define RTE_ETH_VLAN_EXTEND_MASK     0x0004 /**< VLAN Extend  setting mask*/
+#define ETH_VLAN_EXTEND_MASK         RTE_ETH_VLAN_EXTEND_MASK
+#define RTE_ETH_QINQ_STRIP_MASK      0x0008 /**< QINQ Strip  setting mask */
+#define ETH_QINQ_STRIP_MASK          RTE_ETH_QINQ_STRIP_MASK
+#define RTE_ETH_VLAN_ID_MAX          0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define ETH_VLAN_ID_MAX              RTE_ETH_VLAN_ID_MAX
 /**@}*/
 
 /* Definitions used for receive MAC address   */
-#define ETH_NUM_RECEIVE_MAC_ADDR  128 /**< Maximum nb. of receive mac addr. */
+#define RTE_ETH_NUM_RECEIVE_MAC_ADDR   128 /**< Maximum nb. of receive mac addr. */
+#define ETH_NUM_RECEIVE_MAC_ADDR       RTE_ETH_NUM_RECEIVE_MAC_ADDR
 
 /* Definitions used for unicast hash  */
-#define ETH_VMDQ_NUM_UC_HASH_ARRAY  128 /**< Maximum nb. of UC hash array. */
+#define RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY 128 /**< Maximum nb. of UC hash array. */
+#define ETH_VMDQ_NUM_UC_HASH_ARRAY     RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY
 
 /**@{@name VMDq Rx mode
  * @see rte_eth_vmdq_rx_conf.rx_mode
  */
-#define ETH_VMDQ_ACCEPT_UNTAG   0x0001 /**< accept untagged packets. */
-#define ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table . */
-#define ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
-#define ETH_VMDQ_ACCEPT_BROADCAST   0x0008 /**< accept broadcast packets. */
-#define ETH_VMDQ_ACCEPT_MULTICAST   0x0010 /**< multicast promiscuous. */
+#define RTE_ETH_VMDQ_ACCEPT_UNTAG      0x0001 /**< accept untagged packets. */
+#define ETH_VMDQ_ACCEPT_UNTAG          RTE_ETH_VMDQ_ACCEPT_UNTAG
+#define RTE_ETH_VMDQ_ACCEPT_HASH_MC    0x0002 /**< accept packets in multicast table. */
+#define ETH_VMDQ_ACCEPT_HASH_MC        RTE_ETH_VMDQ_ACCEPT_HASH_MC
+#define RTE_ETH_VMDQ_ACCEPT_HASH_UC    0x0004 /**< accept packets in unicast table. */
+#define ETH_VMDQ_ACCEPT_HASH_UC        RTE_ETH_VMDQ_ACCEPT_HASH_UC
+#define RTE_ETH_VMDQ_ACCEPT_BROADCAST  0x0008 /**< accept broadcast packets. */
+#define ETH_VMDQ_ACCEPT_BROADCAST      RTE_ETH_VMDQ_ACCEPT_BROADCAST
+#define RTE_ETH_VMDQ_ACCEPT_MULTICAST  0x0010 /**< multicast promiscuous. */
+#define ETH_VMDQ_ACCEPT_MULTICAST      RTE_ETH_VMDQ_ACCEPT_MULTICAST
 /**@}*/
 
 /**
@@ -856,7 +1003,7 @@ struct rte_eth_rss_reta_entry64 {
 	/** Mask bits indicate which entries need to be updated/queried. */
 	uint64_t mask;
 	/** Group of 64 redirection table entries. */
-	uint16_t reta[RTE_RETA_GROUP_SIZE];
+	uint16_t reta[RTE_ETH_RETA_GROUP_SIZE];
 };
 
 /**
@@ -864,38 +1011,44 @@ struct rte_eth_rss_reta_entry64 {
  * in DCB configurations
  */
 enum rte_eth_nb_tcs {
-	ETH_4_TCS = 4, /**< 4 TCs with DCB. */
-	ETH_8_TCS = 8  /**< 8 TCs with DCB. */
+	RTE_ETH_4_TCS = 4, /**< 4 TCs with DCB. */
+	RTE_ETH_8_TCS = 8  /**< 8 TCs with DCB. */
 };
+#define ETH_4_TCS RTE_ETH_4_TCS
+#define ETH_8_TCS RTE_ETH_8_TCS
 
 /**
  * This enum indicates the possible number of queue pools
  * in VMDq configurations.
  */
 enum rte_eth_nb_pools {
-	ETH_8_POOLS = 8,    /**< 8 VMDq pools. */
-	ETH_16_POOLS = 16,  /**< 16 VMDq pools. */
-	ETH_32_POOLS = 32,  /**< 32 VMDq pools. */
-	ETH_64_POOLS = 64   /**< 64 VMDq pools. */
+	RTE_ETH_8_POOLS = 8,    /**< 8 VMDq pools. */
+	RTE_ETH_16_POOLS = 16,  /**< 16 VMDq pools. */
+	RTE_ETH_32_POOLS = 32,  /**< 32 VMDq pools. */
+	RTE_ETH_64_POOLS = 64   /**< 64 VMDq pools. */
 };
+#define ETH_8_POOLS	RTE_ETH_8_POOLS
+#define ETH_16_POOLS	RTE_ETH_16_POOLS
+#define ETH_32_POOLS	RTE_ETH_32_POOLS
+#define ETH_64_POOLS	RTE_ETH_64_POOLS
 
 /* This structure may be extended in future. */
 struct rte_eth_dcb_rx_conf {
 	enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs */
 	/** Traffic class each UP mapped to. */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_vmdq_dcb_tx_conf {
 	enum rte_eth_nb_pools nb_queue_pools; /**< With DCB, 16 or 32 pools. */
 	/** Traffic class each UP mapped to. */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_dcb_tx_conf {
 	enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs. */
 	/** Traffic class each UP mapped to. */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_vmdq_tx_conf {
@@ -921,9 +1074,9 @@ struct rte_eth_vmdq_dcb_conf {
 	struct {
 		uint16_t vlan_id; /**< The VLAN ID of the received frame */
 		uint64_t pools;   /**< Bitmask of pools for packet Rx */
-	} pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */
+	} pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */
 	/** Selects a queue in a pool */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 /**
@@ -933,7 +1086,7 @@ struct rte_eth_vmdq_dcb_conf {
  * Using this feature, packets are routed to a pool of queues. By default,
  * the pool selection is based on the MAC address, the VLAN ID in the
  * VLAN tag as specified in the pool_map array.
- * Passing the ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool
+ * Passing the RTE_ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool
  * selection using only the MAC address. MAC address to pool mapping is done
  * using the rte_eth_dev_mac_addr_add function, with the pool parameter
  * corresponding to the pool ID.
@@ -954,7 +1107,7 @@ struct rte_eth_vmdq_rx_conf {
 	struct {
 		uint16_t vlan_id; /**< The VLAN ID of the received frame */
 		uint64_t pools;   /**< Bitmask of pools for packet Rx */
-	} pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */
+	} pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */
 };
 
 /**
@@ -963,7 +1116,7 @@ struct rte_eth_vmdq_rx_conf {
 struct rte_eth_txmode {
 	enum rte_eth_tx_mq_mode mq_mode; /**< Tx multi-queues mode. */
 	/**
-	 * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+	 * Per-port Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
 	 * Only offloads set on tx_offload_capa field on rte_eth_dev_info
 	 * structure are allowed to be set.
 	 */
@@ -1055,7 +1208,7 @@ struct rte_eth_rxconf {
 	uint16_t share_group;
 	uint16_t share_qid; /**< Shared Rx queue ID in group */
 	/**
-	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Per-queue Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
 	 * Only offloads set on rx_queue_offload_capa or rx_offload_capa
 	 * fields on rte_eth_dev_info structure are allowed to be set.
 	 */
@@ -1084,7 +1237,7 @@ struct rte_eth_txconf {
 
 	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
 	/**
-	 * Per-queue Tx offloads to be set  using DEV_TX_OFFLOAD_* flags.
+	 * Per-queue Tx offloads to be set  using RTE_ETH_TX_OFFLOAD_* flags.
 	 * Only offloads set on tx_queue_offload_capa or tx_offload_capa
 	 * fields on rte_eth_dev_info structure are allowed to be set.
 	 */
@@ -1195,12 +1348,17 @@ struct rte_eth_desc_lim {
  * This enum indicates the flow control mode
  */
 enum rte_eth_fc_mode {
-	RTE_FC_NONE = 0, /**< Disable flow control. */
-	RTE_FC_RX_PAUSE, /**< Rx pause frame, enable flowctrl on Tx side. */
-	RTE_FC_TX_PAUSE, /**< Tx pause frame, enable flowctrl on Rx side. */
-	RTE_FC_FULL      /**< Enable flow control on both side. */
+	RTE_ETH_FC_NONE = 0, /**< Disable flow control. */
+	RTE_ETH_FC_RX_PAUSE, /**< Rx pause frame, enable flowctrl on Tx side. */
+	RTE_ETH_FC_TX_PAUSE, /**< Tx pause frame, enable flowctrl on Rx side. */
+	RTE_ETH_FC_FULL      /**< Enable flow control on both side. */
 };
 
+#define RTE_FC_NONE	RTE_ETH_FC_NONE
+#define RTE_FC_RX_PAUSE	RTE_ETH_FC_RX_PAUSE
+#define RTE_FC_TX_PAUSE	RTE_ETH_FC_TX_PAUSE
+#define RTE_FC_FULL	RTE_ETH_FC_FULL
+
 /**
  * A structure used to configure Ethernet flow control parameter.
  * These parameters will be configured into the register of the NIC.
@@ -1231,18 +1389,29 @@ struct rte_eth_pfc_conf {
  * @see rte_eth_udp_tunnel
  */
 enum rte_eth_tunnel_type {
-	RTE_TUNNEL_TYPE_NONE = 0,
-	RTE_TUNNEL_TYPE_VXLAN,
-	RTE_TUNNEL_TYPE_GENEVE,
-	RTE_TUNNEL_TYPE_TEREDO,
-	RTE_TUNNEL_TYPE_NVGRE,
-	RTE_TUNNEL_TYPE_IP_IN_GRE,
-	RTE_L2_TUNNEL_TYPE_E_TAG,
-	RTE_TUNNEL_TYPE_VXLAN_GPE,
-	RTE_TUNNEL_TYPE_ECPRI,
-	RTE_TUNNEL_TYPE_MAX,
+	RTE_ETH_TUNNEL_TYPE_NONE = 0,
+	RTE_ETH_TUNNEL_TYPE_VXLAN,
+	RTE_ETH_TUNNEL_TYPE_GENEVE,
+	RTE_ETH_TUNNEL_TYPE_TEREDO,
+	RTE_ETH_TUNNEL_TYPE_NVGRE,
+	RTE_ETH_TUNNEL_TYPE_IP_IN_GRE,
+	RTE_ETH_L2_TUNNEL_TYPE_E_TAG,
+	RTE_ETH_TUNNEL_TYPE_VXLAN_GPE,
+	RTE_ETH_TUNNEL_TYPE_ECPRI,
+	RTE_ETH_TUNNEL_TYPE_MAX,
 };
 
+#define RTE_TUNNEL_TYPE_NONE		RTE_ETH_TUNNEL_TYPE_NONE
+#define RTE_TUNNEL_TYPE_VXLAN		RTE_ETH_TUNNEL_TYPE_VXLAN
+#define RTE_TUNNEL_TYPE_GENEVE		RTE_ETH_TUNNEL_TYPE_GENEVE
+#define RTE_TUNNEL_TYPE_TEREDO		RTE_ETH_TUNNEL_TYPE_TEREDO
+#define RTE_TUNNEL_TYPE_NVGRE		RTE_ETH_TUNNEL_TYPE_NVGRE
+#define RTE_TUNNEL_TYPE_IP_IN_GRE	RTE_ETH_TUNNEL_TYPE_IP_IN_GRE
+#define RTE_L2_TUNNEL_TYPE_E_TAG	RTE_ETH_L2_TUNNEL_TYPE_E_TAG
+#define RTE_TUNNEL_TYPE_VXLAN_GPE	RTE_ETH_TUNNEL_TYPE_VXLAN_GPE
+#define RTE_TUNNEL_TYPE_ECPRI		RTE_ETH_TUNNEL_TYPE_ECPRI
+#define RTE_TUNNEL_TYPE_MAX		RTE_ETH_TUNNEL_TYPE_MAX
+
 /* Deprecated API file for rte_eth_dev_filter_* functions */
 #include "rte_eth_ctrl.h"
 
@@ -1250,11 +1419,16 @@ enum rte_eth_tunnel_type {
  *  Memory space that can be configured to store Flow Director filters
  *  in the board memory.
  */
-enum rte_fdir_pballoc_type {
-	RTE_FDIR_PBALLOC_64K = 0,  /**< 64k. */
-	RTE_FDIR_PBALLOC_128K,     /**< 128k. */
-	RTE_FDIR_PBALLOC_256K,     /**< 256k. */
+enum rte_eth_fdir_pballoc_type {
+	RTE_ETH_FDIR_PBALLOC_64K = 0,  /**< 64k. */
+	RTE_ETH_FDIR_PBALLOC_128K,     /**< 128k. */
+	RTE_ETH_FDIR_PBALLOC_256K,     /**< 256k. */
 };
+#define rte_fdir_pballoc_type	rte_eth_fdir_pballoc_type
+
+#define RTE_FDIR_PBALLOC_64K	RTE_ETH_FDIR_PBALLOC_64K
+#define RTE_FDIR_PBALLOC_128K	RTE_ETH_FDIR_PBALLOC_128K
+#define RTE_FDIR_PBALLOC_256K	RTE_ETH_FDIR_PBALLOC_256K
 
 /**
  *  Select report mode of FDIR hash information in Rx descriptors.
@@ -1271,9 +1445,9 @@ enum rte_fdir_status_mode {
  *
  * If mode is RTE_FDIR_MODE_NONE, the pballoc value is ignored.
  */
-struct rte_fdir_conf {
+struct rte_eth_fdir_conf {
 	enum rte_fdir_mode mode; /**< Flow Director mode. */
-	enum rte_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
+	enum rte_eth_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
 	enum rte_fdir_status_mode status;  /**< How to report FDIR hash. */
 	/** Rx queue of packets matching a "drop" filter in perfect mode. */
 	uint8_t drop_queue;
@@ -1282,6 +1456,8 @@ struct rte_fdir_conf {
 	struct rte_eth_fdir_flex_conf flex_conf;
 };
 
+#define rte_fdir_conf rte_eth_fdir_conf
+
 /**
  * UDP tunneling configuration.
  *
@@ -1299,7 +1475,7 @@ struct rte_eth_udp_tunnel {
 /**
  * A structure used to enable/disable specific device interrupts.
  */
-struct rte_intr_conf {
+struct rte_eth_intr_conf {
 	/** enable/disable lsc interrupt. 0 (default) - disable, 1 enable */
 	uint32_t lsc:1;
 	/** enable/disable rxq interrupt. 0 (default) - disable, 1 enable */
@@ -1308,18 +1484,20 @@ struct rte_intr_conf {
 	uint32_t rmv:1;
 };
 
+#define rte_intr_conf rte_eth_intr_conf
+
 /**
  * A structure used to configure an Ethernet port.
  * Depending upon the Rx multi-queue mode, extra advanced
  * configuration settings may be needed.
  */
 struct rte_eth_conf {
-	uint32_t link_speeds; /**< bitmap of ETH_LINK_SPEED_XXX of speeds to be
-				used. ETH_LINK_SPEED_FIXED disables link
+	uint32_t link_speeds; /**< bitmap of RTE_ETH_LINK_SPEED_XXX speeds to be
+				used. RTE_ETH_LINK_SPEED_FIXED disables link
 				autonegotiation, and a unique speed shall be
 				set. Otherwise, the bitmap defines the set of
 				speeds to be advertised. If the special value
-				ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
+				RTE_ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
 				supported are advertised. */
 	struct rte_eth_rxmode rxmode; /**< Port Rx configuration. */
 	struct rte_eth_txmode txmode; /**< Port Tx configuration. */
@@ -1346,47 +1524,67 @@ struct rte_eth_conf {
 		struct rte_eth_vmdq_tx_conf vmdq_tx_conf;
 	} tx_adv_conf; /**< Port Tx DCB configuration (union). */
-	/** Currently,Priority Flow Control(PFC) are supported,if DCB with PFC
-	    is needed,and the variable must be set ETH_DCB_PFC_SUPPORT. */
+	/** Currently, Priority Flow Control (PFC) is supported; if DCB with PFC
+	    is needed, the variable must be set to RTE_ETH_DCB_PFC_SUPPORT. */
 	uint32_t dcb_capability_en;
-	struct rte_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
-	struct rte_intr_conf intr_conf; /**< Interrupt mode configuration. */
+	struct rte_eth_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
+	struct rte_eth_intr_conf intr_conf; /**< Interrupt mode configuration. */
 };
 
 /**
  * Rx offload capabilities of a device.
  */
-#define DEV_RX_OFFLOAD_VLAN_STRIP  0x00000001
-#define DEV_RX_OFFLOAD_IPV4_CKSUM  0x00000002
-#define DEV_RX_OFFLOAD_UDP_CKSUM   0x00000004
-#define DEV_RX_OFFLOAD_TCP_CKSUM   0x00000008
-#define DEV_RX_OFFLOAD_TCP_LRO     0x00000010
-#define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000020
-#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
-#define DEV_RX_OFFLOAD_MACSEC_STRIP     0x00000080
-#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
-#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
-#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
-#define DEV_RX_OFFLOAD_SCATTER		0x00002000
+#define RTE_ETH_RX_OFFLOAD_VLAN_STRIP       0x00000001
+#define DEV_RX_OFFLOAD_VLAN_STRIP           RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+#define RTE_ETH_RX_OFFLOAD_IPV4_CKSUM       0x00000002
+#define DEV_RX_OFFLOAD_IPV4_CKSUM           RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_UDP_CKSUM        0x00000004
+#define DEV_RX_OFFLOAD_UDP_CKSUM            RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_CKSUM        0x00000008
+#define DEV_RX_OFFLOAD_TCP_CKSUM            RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_LRO          0x00000010
+#define DEV_RX_OFFLOAD_TCP_LRO              RTE_ETH_RX_OFFLOAD_TCP_LRO
+#define RTE_ETH_RX_OFFLOAD_QINQ_STRIP       0x00000020
+#define DEV_RX_OFFLOAD_QINQ_STRIP           RTE_ETH_RX_OFFLOAD_QINQ_STRIP
+#define RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
+#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_MACSEC_STRIP     0x00000080
+#define DEV_RX_OFFLOAD_MACSEC_STRIP         RTE_ETH_RX_OFFLOAD_MACSEC_STRIP
+#define RTE_ETH_RX_OFFLOAD_HEADER_SPLIT     0x00000100
+#define DEV_RX_OFFLOAD_HEADER_SPLIT         RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
+#define RTE_ETH_RX_OFFLOAD_VLAN_FILTER      0x00000200
+#define DEV_RX_OFFLOAD_VLAN_FILTER          RTE_ETH_RX_OFFLOAD_VLAN_FILTER
+#define RTE_ETH_RX_OFFLOAD_VLAN_EXTEND      0x00000400
+#define DEV_RX_OFFLOAD_VLAN_EXTEND          RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
+#define RTE_ETH_RX_OFFLOAD_SCATTER          0x00002000
+#define DEV_RX_OFFLOAD_SCATTER              RTE_ETH_RX_OFFLOAD_SCATTER
 /**
  * Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
  * and RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME is set in ol_flags.
  * The mbuf field and flag are registered when the offload is configured.
  */
-#define DEV_RX_OFFLOAD_TIMESTAMP	0x00004000
-#define DEV_RX_OFFLOAD_SECURITY         0x00008000
-#define DEV_RX_OFFLOAD_KEEP_CRC		0x00010000
-#define DEV_RX_OFFLOAD_SCTP_CKSUM	0x00020000
-#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
-#define DEV_RX_OFFLOAD_RSS_HASH		0x00080000
-#define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
-
-#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
-				 DEV_RX_OFFLOAD_UDP_CKSUM | \
-				 DEV_RX_OFFLOAD_TCP_CKSUM)
-#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
-			     DEV_RX_OFFLOAD_VLAN_FILTER | \
-			     DEV_RX_OFFLOAD_VLAN_EXTEND | \
-			     DEV_RX_OFFLOAD_QINQ_STRIP)
+#define RTE_ETH_RX_OFFLOAD_TIMESTAMP        0x00004000
+#define DEV_RX_OFFLOAD_TIMESTAMP            RTE_ETH_RX_OFFLOAD_TIMESTAMP
+#define RTE_ETH_RX_OFFLOAD_SECURITY         0x00008000
+#define DEV_RX_OFFLOAD_SECURITY             RTE_ETH_RX_OFFLOAD_SECURITY
+#define RTE_ETH_RX_OFFLOAD_KEEP_CRC         0x00010000
+#define DEV_RX_OFFLOAD_KEEP_CRC             RTE_ETH_RX_OFFLOAD_KEEP_CRC
+#define RTE_ETH_RX_OFFLOAD_SCTP_CKSUM       0x00020000
+#define DEV_RX_OFFLOAD_SCTP_CKSUM           RTE_ETH_RX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
+#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM      RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_RSS_HASH         0x00080000
+#define DEV_RX_OFFLOAD_RSS_HASH             RTE_ETH_RX_OFFLOAD_RSS_HASH
+#define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT     0x00100000
+
+#define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+				 RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+				 RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_CHECKSUM	RTE_ETH_RX_OFFLOAD_CHECKSUM
+#define RTE_ETH_RX_OFFLOAD_VLAN (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+			     RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+			     RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+			     RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+#define DEV_RX_OFFLOAD_VLAN	RTE_ETH_RX_OFFLOAD_VLAN
 
 /*
  * If new Rx offload capabilities are defined, they also must be
@@ -1396,54 +1594,76 @@ struct rte_eth_conf {
 /**
  * Tx offload capabilities of a device.
  */
-#define DEV_TX_OFFLOAD_VLAN_INSERT 0x00000001
-#define DEV_TX_OFFLOAD_IPV4_CKSUM  0x00000002
-#define DEV_TX_OFFLOAD_UDP_CKSUM   0x00000004
-#define DEV_TX_OFFLOAD_TCP_CKSUM   0x00000008
-#define DEV_TX_OFFLOAD_SCTP_CKSUM  0x00000010
-#define DEV_TX_OFFLOAD_TCP_TSO     0x00000020
-#define DEV_TX_OFFLOAD_UDP_TSO     0x00000040
-#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_QINQ_INSERT 0x00000100
-#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO    0x00000200    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GRE_TNL_TSO      0x00000400    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_IPIP_TNL_TSO     0x00000800    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO   0x00001000    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_MACSEC_INSERT    0x00002000
+#define RTE_ETH_TX_OFFLOAD_VLAN_INSERT      0x00000001
+#define DEV_TX_OFFLOAD_VLAN_INSERT          RTE_ETH_TX_OFFLOAD_VLAN_INSERT
+#define RTE_ETH_TX_OFFLOAD_IPV4_CKSUM       0x00000002
+#define DEV_TX_OFFLOAD_IPV4_CKSUM           RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_UDP_CKSUM        0x00000004
+#define DEV_TX_OFFLOAD_UDP_CKSUM            RTE_ETH_TX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_CKSUM        0x00000008
+#define DEV_TX_OFFLOAD_TCP_CKSUM            RTE_ETH_TX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_SCTP_CKSUM       0x00000010
+#define DEV_TX_OFFLOAD_SCTP_CKSUM           RTE_ETH_TX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_TSO          0x00000020
+#define DEV_TX_OFFLOAD_TCP_TSO              RTE_ETH_TX_OFFLOAD_TCP_TSO
+#define RTE_ETH_TX_OFFLOAD_UDP_TSO          0x00000040
+#define DEV_TX_OFFLOAD_UDP_TSO              RTE_ETH_TX_OFFLOAD_UDP_TSO
+#define RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_QINQ_INSERT      0x00000100
+#define DEV_TX_OFFLOAD_QINQ_INSERT          RTE_ETH_TX_OFFLOAD_QINQ_INSERT
+#define RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO    0x00000200    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO        RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO      0x00000400    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GRE_TNL_TSO          RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO     0x00000800    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_IPIP_TNL_TSO         RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO   0x00001000    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO       RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_MACSEC_INSERT    0x00002000
+#define DEV_TX_OFFLOAD_MACSEC_INSERT        RTE_ETH_TX_OFFLOAD_MACSEC_INSERT
 /**
  * Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
  * Tx queue without SW lock.
  */
-#define DEV_TX_OFFLOAD_MT_LOCKFREE      0x00004000
+#define RTE_ETH_TX_OFFLOAD_MT_LOCKFREE      0x00004000
+#define DEV_TX_OFFLOAD_MT_LOCKFREE          RTE_ETH_TX_OFFLOAD_MT_LOCKFREE
 /** Device supports multi segment send. */
-#define DEV_TX_OFFLOAD_MULTI_SEGS	0x00008000
+#define RTE_ETH_TX_OFFLOAD_MULTI_SEGS       0x00008000
+#define DEV_TX_OFFLOAD_MULTI_SEGS           RTE_ETH_TX_OFFLOAD_MULTI_SEGS
 /**
  * Device supports optimization for fast release of mbufs.
  * When set application must guarantee that per-queue all mbufs comes from
  * the same mempool and has refcnt = 1.
  */
-#define DEV_TX_OFFLOAD_MBUF_FAST_FREE	0x00010000
-#define DEV_TX_OFFLOAD_SECURITY         0x00020000
+#define RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE   0x00010000
+#define DEV_TX_OFFLOAD_MBUF_FAST_FREE       RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
+#define RTE_ETH_TX_OFFLOAD_SECURITY         0x00020000
+#define DEV_TX_OFFLOAD_SECURITY             RTE_ETH_TX_OFFLOAD_SECURITY
 /**
  * Device supports generic UDP tunneled packet TSO.
  * Application must set PKT_TX_TUNNEL_UDP and other mbuf fields required
  * for tunnel TSO.
  */
-#define DEV_TX_OFFLOAD_UDP_TNL_TSO      0x00040000
+#define RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO      0x00040000
+#define DEV_TX_OFFLOAD_UDP_TNL_TSO          RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO
 /**
  * Device supports generic IP tunneled packet TSO.
  * Application must set PKT_TX_TUNNEL_IP and other mbuf fields required
  * for tunnel TSO.
  */
-#define DEV_TX_OFFLOAD_IP_TNL_TSO       0x00080000
+#define RTE_ETH_TX_OFFLOAD_IP_TNL_TSO       0x00080000
+#define DEV_TX_OFFLOAD_IP_TNL_TSO           RTE_ETH_TX_OFFLOAD_IP_TNL_TSO
 /** Device supports outer UDP checksum */
-#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM  0x00100000
+#define RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM  0x00100000
+#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM      RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM
 /**
  * Device sends on time read from RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
  * if RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME is set in ol_flags.
  * The mbuf field and flag are registered when the offload is configured.
  */
-#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP     RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP
 /*
  * If new Tx offload capabilities are defined, they also must be
  * mentioned in rte_tx_offload_names in rte_ethdev.c file.
@@ -1591,7 +1811,7 @@ struct rte_eth_dev_info {
 	uint16_t vmdq_pool_base;  /**< First ID of VMDq pools. */
 	struct rte_eth_desc_lim rx_desc_lim;  /**< Rx descriptors limits */
 	struct rte_eth_desc_lim tx_desc_lim;  /**< Tx descriptors limits */
-	uint32_t speed_capa;  /**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+	uint32_t speed_capa;  /**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
 	/** Configured number of Rx/Tx queues */
 	uint16_t nb_rx_queues; /**< Number of Rx queues. */
 	uint16_t nb_tx_queues; /**< Number of Tx queues. */
@@ -1695,8 +1915,10 @@ struct rte_eth_xstat_name {
 	char name[RTE_ETH_XSTATS_NAME_SIZE]; /**< The statistic name. */
 };
 
-#define ETH_DCB_NUM_TCS    8
-#define ETH_MAX_VMDQ_POOL  64
+#define RTE_ETH_DCB_NUM_TCS    8
+#define ETH_DCB_NUM_TCS        RTE_ETH_DCB_NUM_TCS
+#define RTE_ETH_MAX_VMDQ_POOL  64
+#define ETH_MAX_VMDQ_POOL      RTE_ETH_MAX_VMDQ_POOL
 
 /**
  * A structure used to get the information of queue and
@@ -1707,12 +1929,12 @@ struct rte_eth_dcb_tc_queue_mapping {
 	struct {
 		uint16_t base;
 		uint16_t nb_queue;
-	} tc_rxq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+	} tc_rxq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
 	/** Rx queues assigned to tc per Pool */
 	struct {
 		uint16_t base;
 		uint16_t nb_queue;
-	} tc_txq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+	} tc_txq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
 };
 
 /**
@@ -1721,8 +1943,8 @@ struct rte_eth_dcb_tc_queue_mapping {
  */
 struct rte_eth_dcb_info {
 	uint8_t nb_tcs;        /**< number of TCs */
-	uint8_t prio_tc[ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
-	uint8_t tc_bws[ETH_DCB_NUM_TCS]; /**< Tx BW percentage for each TC */
+	uint8_t prio_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
+	uint8_t tc_bws[RTE_ETH_DCB_NUM_TCS]; /**< Tx BW percentage for each TC */
 	/** Rx queues assigned to tc */
 	struct rte_eth_dcb_tc_queue_mapping tc_queue;
 };
@@ -1746,7 +1968,7 @@ enum rte_eth_fec_mode {
 
 /* A structure used to get capabilities per link speed */
 struct rte_eth_fec_capa {
-	uint32_t speed; /**< Link speed (see ETH_SPEED_NUM_*) */
+	uint32_t speed; /**< Link speed (see RTE_ETH_SPEED_NUM_*) */
 	uint32_t capa;  /**< FEC capabilities bitmask */
 };
 
@@ -2075,14 +2297,14 @@ uint16_t rte_eth_dev_count_total(void);
  * @param speed
  *   Numerical speed value in Mbps
  * @param duplex
- *   ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
+ *   RTE_ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
  * @return
  *   0 if the speed cannot be mapped
  */
 uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
 
 /**
- * Get DEV_RX_OFFLOAD_* flag name.
+ * Get RTE_ETH_RX_OFFLOAD_* flag name.
  *
  * @param offload
  *   Offload flag.
@@ -2092,7 +2314,7 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
 const char *rte_eth_dev_rx_offload_name(uint64_t offload);
 
 /**
- * Get DEV_TX_OFFLOAD_* flag name.
+ * Get RTE_ETH_TX_OFFLOAD_* flag name.
  *
  * @param offload
  *   Offload flag.
@@ -2200,7 +2422,7 @@ rte_eth_dev_is_removed(uint16_t port_id);
  *   of the Prefetch, Host, and Write-Back threshold registers of the receive
  *   ring.
  *   In addition it contains the hardware offloads features to activate using
- *   the DEV_RX_OFFLOAD_* flags.
+ *   the RTE_ETH_RX_OFFLOAD_* flags.
  *   If an offloading set in rx_conf->offloads
  *   hasn't been set in the input argument eth_conf->rxmode.offloads
  *   to rte_eth_dev_configure(), it is a new added offloading, it must be
@@ -2777,7 +2999,7 @@ const char *rte_eth_link_speed_to_str(uint32_t link_speed);
  *
  * @param str
  *   A pointer to a string to be filled with textual representation of
- *   device status. At least ETH_LINK_MAX_STR_LEN bytes should be allocated to
+ *   device status. At least RTE_ETH_LINK_MAX_STR_LEN bytes should be allocated to
  *   store default link status text.
  * @param len
  *   Length of available memory at 'str' string.
@@ -3323,10 +3545,10 @@ int rte_eth_dev_set_vlan_ether_type(uint16_t port_id,
  *   The port identifier of the Ethernet device.
  * @param offload_mask
  *   The VLAN Offload bit mask can be mixed use with "OR"
- *       ETH_VLAN_STRIP_OFFLOAD
- *       ETH_VLAN_FILTER_OFFLOAD
- *       ETH_VLAN_EXTEND_OFFLOAD
- *       ETH_QINQ_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_FILTER_OFFLOAD
+ *       RTE_ETH_VLAN_EXTEND_OFFLOAD
+ *       RTE_ETH_QINQ_STRIP_OFFLOAD
  * @return
  *   - (0) if successful.
  *   - (-ENOTSUP) if hardware-assisted VLAN filtering not configured.
@@ -3342,10 +3564,10 @@ int rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask);
  *   The port identifier of the Ethernet device.
  * @return
  *   - (>0) if successful. Bit mask to indicate
- *       ETH_VLAN_STRIP_OFFLOAD
- *       ETH_VLAN_FILTER_OFFLOAD
- *       ETH_VLAN_EXTEND_OFFLOAD
- *       ETH_QINQ_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_FILTER_OFFLOAD
+ *       RTE_ETH_VLAN_EXTEND_OFFLOAD
+ *       RTE_ETH_QINQ_STRIP_OFFLOAD
  *   - (-ENODEV) if *port_id* invalid.
  */
 int rte_eth_dev_get_vlan_offload(uint16_t port_id);
@@ -5371,7 +5593,7 @@ uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
  * rte_eth_tx_burst() function must [attempt to] free the *rte_mbuf*  buffers
  * of those packets whose transmission was effectively completed.
  *
- * If the PMD is DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+ * If the PMD is RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
  * invoke this function concurrently on the same Tx queue without SW lock.
  * @see rte_eth_dev_info_get, struct rte_eth_txconf::offloads
  *
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index db3392bf9759..59d9d9eeb63f 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2957,7 +2957,7 @@ struct rte_flow_action_rss {
 	 * through.
 	 */
 	uint32_t level;
-	uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+	uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
 	uint32_t key_len; /**< Hash key length in bytes. */
 	uint32_t queue_num; /**< Number of entries in @p queue. */
 	const uint8_t *key; /**< Hash key. */
diff --git a/lib/gso/rte_gso.c b/lib/gso/rte_gso.c
index 0d02ec3cee05..119fdcac0b7f 100644
--- a/lib/gso/rte_gso.c
+++ b/lib/gso/rte_gso.c
@@ -15,13 +15,13 @@
 #include "gso_udp4.h"
 
 #define ILLEGAL_UDP_GSO_CTX(ctx) \
-	((((ctx)->gso_types & DEV_TX_OFFLOAD_UDP_TSO) == 0) || \
+	((((ctx)->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO) == 0) || \
 	 (ctx)->gso_size < RTE_GSO_UDP_SEG_SIZE_MIN)
 
 #define ILLEGAL_TCP_GSO_CTX(ctx) \
-	((((ctx)->gso_types & (DEV_TX_OFFLOAD_TCP_TSO | \
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-		DEV_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
+	((((ctx)->gso_types & (RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
 		(ctx)->gso_size < RTE_GSO_SEG_SIZE_MIN)
 
 int
@@ -54,28 +54,28 @@ rte_gso_segment(struct rte_mbuf *pkt,
 	ol_flags = pkt->ol_flags;
 
 	if ((IS_IPV4_VXLAN_TCP4(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
 			((IS_IPV4_GRE_TCP4(pkt->ol_flags) &&
-			 (gso_ctx->gso_types & DEV_TX_OFFLOAD_GRE_TNL_TSO)))) {
+			 (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))) {
 		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
 		ret = gso_tunnel_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_VXLAN_UDP4(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) &&
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
 		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
 		ret = gso_tunnel_udp4_segment(pkt, gso_size,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_TCP(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_TCP_TSO)) {
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
 		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
 		ret = gso_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_UDP(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
 		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
 		ret = gso_udp4_segment(pkt, gso_size, direct_pool,
 				indirect_pool, pkts_out, nb_pkts_out);
diff --git a/lib/gso/rte_gso.h b/lib/gso/rte_gso.h
index d93ee8e5b171..0a65afc11e64 100644
--- a/lib/gso/rte_gso.h
+++ b/lib/gso/rte_gso.h
@@ -52,11 +52,11 @@ struct rte_gso_ctx {
 	uint32_t gso_types;
 	/**< the bit mask of required GSO types. The GSO library
 	 * uses the same macros as that of describing device TX
-	 * offloading capabilities (i.e. DEV_TX_OFFLOAD_*_TSO) for
+	 * offloading capabilities (i.e. RTE_ETH_TX_OFFLOAD_*_TSO) for
 	 * gso_types.
 	 *
 	 * For example, if applications want to segment TCP/IPv4
-	 * packets, set DEV_TX_OFFLOAD_TCP_TSO in gso_types.
+	 * packets, set RTE_ETH_TX_OFFLOAD_TCP_TSO in gso_types.
 	 */
 	uint16_t gso_size;
 	/**< maximum size of an output GSO segment, including packet
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index fdaaaf67f2f3..57e871201816 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -185,7 +185,7 @@ extern "C" {
  * The detection of PKT_RX_OUTER_L4_CKSUM_GOOD shall be based on the given
  * HW capability, At minimum, the PMD should support
  * PKT_RX_OUTER_L4_CKSUM_UNKNOWN and PKT_RX_OUTER_L4_CKSUM_BAD states
- * if the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
+ * if the RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
  */
 #define PKT_RX_OUTER_L4_CKSUM_MASK	((1ULL << 21) | (1ULL << 22))
 
@@ -208,7 +208,7 @@ extern "C" {
  * a) Fill outer_l2_len and outer_l3_len in mbuf.
  * b) Set the PKT_TX_OUTER_UDP_CKSUM flag.
  * c) Set the PKT_TX_OUTER_IPV4 or PKT_TX_OUTER_IPV6 flag.
- * 2) Configure DEV_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
+ * 2) Configure RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
  */
 #define PKT_TX_OUTER_UDP_CKSUM     (1ULL << 41)
 
@@ -254,7 +254,7 @@ extern "C" {
  * It can be used for tunnels which are not standards or listed above.
  * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_GRE
  * or PKT_TX_TUNNEL_IPIP if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_IP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_IP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
  * PKT_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
@@ -267,7 +267,7 @@ extern "C" {
  * It can be used for tunnels which are not standards or listed above.
  * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_VXLAN
  * if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_UDP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
  * PKT_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
diff --git a/lib/mbuf/rte_mbuf_dyn.h b/lib/mbuf/rte_mbuf_dyn.h
index fb03cf1dcf90..29abe8da53cf 100644
--- a/lib/mbuf/rte_mbuf_dyn.h
+++ b/lib/mbuf/rte_mbuf_dyn.h
@@ -37,7 +37,7 @@
  *   of the dynamic field to be registered:
  *   const struct rte_mbuf_dynfield rte_dynfield_my_feature = { ... };
  * - The application initializes the PMD, and asks for this feature
- *   at port initialization by passing DEV_RX_OFFLOAD_MY_FEATURE in
+ *   at port initialization by passing RTE_ETH_RX_OFFLOAD_MY_FEATURE in
  *   rxconf. This will make the PMD to register the field by calling
  *   rte_mbuf_dynfield_register(&rte_dynfield_my_feature). The PMD
  *   stores the returned offset.
-- 
2.31.1


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v6] ethdev: add namespace
  2021-10-20 19:23  1%   ` [dpdk-dev] [PATCH v5] " Ferruh Yigit
@ 2021-10-22  2:02  1%     ` Ferruh Yigit
  2021-10-22 11:03  1%       ` [dpdk-dev] [PATCH v7] " Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-10-22  2:02 UTC (permalink / raw)
  To: Maryam Tahhan, Reshma Pattan, Jerin Jacob, Wisam Jaddo,
	Cristian Dumitrescu, Xiaoyun Li, Thomas Monjalon,
	Andrew Rybchenko, Jay Jayatheerthan, Chas Williams,
	Min Hu (Connor),
	Pavan Nikhilesh, Shijith Thotton, Ajit Khaparde, Somnath Kotur,
	John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, Haiyue Wang,
	Beilei Xing, Matan Azrad, Viacheslav Ovsiienko, Keith Wiles,
	Jiayu Hu, Olivier Matz, Ori Kam, Akhil Goyal, Declan Doherty,
	Ray Kinsella, Radu Nicolau, Hemant Agrawal, Sachin Saxena,
	Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	John W. Linville, Ciara Loftus, Shepard Siegel, Ed Czeck,
	John Miller, Igor Russkikh, Steven Webster, Matt Peters,
	Chandubabu Namburu, Rasesh Mody, Shahed Shaikh, Bruce Richardson,
	Konstantin Ananyev, Ruifeng Wang, Rahul Lakkireddy,
	Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
	Igor Chauskin, Gagandeep Singh, Gaetan Rivet, Ziyang Xuan,
	Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou, Jingjing Wu,
	Qiming Yang, Andrew Boyer, Rosen Xu,
	Srisivasubramanian Srinivasan, Jakub Grajciar, Zyta Szpak,
	Liron Himi, Stephen Hemminger, Long Li, Martin Spinler,
	Heinrich Kuhn, Jiawen Wu, Tetsuya Mukawa, Harman Kalra,
	Anoob Joseph, Nalla Pradeep, Radha Mohan Chintakuntla,
	Veerasenareddy Burru, Devendra Singh Rawat, Jasvinder Singh,
	Maciej Czekaj, Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang,
	Nicolas Chautru, David Hunt, Harry van Haaren, Bernard Iremonger,
	Anatoly Burakov, John McNamara, Kirill Rybalchenko, Byron Marohn,
	Yipeng Wang
  Cc: Ferruh Yigit, dev, Tyler Retzlaff, David Marchand

Add 'RTE_ETH' namespace to all enums & macros in a backward-compatible
way. The macros kept for backward compatibility can be removed in the
next LTS release. Also update some struct names to have the 'rte_eth'
prefix.

All internal components switched to using new names.

Syntax fixed on lines that this patch touches.
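
As an illustration (a minimal sketch, not part of the patch; port_id
is assumed to be a valid probed port), application code after the
rename looks as below. The names in the "was" comments keep compiling
through the compatibility macros until their removal:

    #include <rte_ethdev.h>

    /* Hypothetical helper showing the renamed macros in use. */
    static int
    configure_port(uint16_t port_id)
    {
        struct rte_eth_conf conf = {
            .rxmode = {
                /* was ETH_MQ_RX_RSS */
                .mq_mode = RTE_ETH_MQ_RX_RSS,
                /* was DEV_RX_OFFLOAD_CHECKSUM */
                .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
            },
            .rx_adv_conf = {
                .rss_conf = {
                    /* was ETH_RSS_IP */
                    .rss_hf = RTE_ETH_RSS_IP,
                },
            },
        };

        /* Same rte_eth_dev_configure() call as before; only the
         * macro names changed. */
        return rte_eth_dev_configure(port_id, 1, 1, &conf);
    }

Existing sources keep building unchanged; moving to the RTE_ETH_ names
becomes mandatory only once the compatibility macros are dropped.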

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
Cc: David Marchand <david.marchand@redhat.com>
Cc: Thomas Monjalon <thomas@monjalon.net>

v2:
* Updated internal components
* Removed deprecation notice

v3:
* Updated missing macros / structs that David highlighted
* Added release notes update

v4:
* rebased on latest next-net
* depends on https://patches.dpdk.org/user/todo/dpdk/?series=19744
* Not able to complete the scripts to update user code yet, although
  Aman shared some:
  https://patches.dpdk.org/project/dpdk/patch/20211008102949.70716-1-aman.deep.singh@intel.com/
  Sending a new version as an option to get this patch into -rc1, with
  the scripts to follow later, before the release.

v5:
* rebased on latest next-net

v6:
* rebased on latest next-net
---
 app/proc-info/main.c                          |    8 +-
 app/test-eventdev/test_perf_common.c          |    4 +-
 app/test-eventdev/test_pipeline_common.c      |   10 +-
 app/test-flow-perf/config.h                   |    2 +-
 app/test-pipeline/init.c                      |    8 +-
 app/test-pmd/cmdline.c                        |  286 ++---
 app/test-pmd/config.c                         |  200 ++--
 app/test-pmd/csumonly.c                       |   28 +-
 app/test-pmd/flowgen.c                        |    6 +-
 app/test-pmd/macfwd.c                         |    6 +-
 app/test-pmd/macswap_common.h                 |    6 +-
 app/test-pmd/parameters.c                     |   54 +-
 app/test-pmd/testpmd.c                        |   52 +-
 app/test-pmd/testpmd.h                        |    2 +-
 app/test-pmd/txonly.c                         |    6 +-
 app/test/test_ethdev_link.c                   |   68 +-
 app/test/test_event_eth_rx_adapter.c          |    4 +-
 app/test/test_kni.c                           |    2 +-
 app/test/test_link_bonding.c                  |    4 +-
 app/test/test_link_bonding_mode4.c            |    4 +-
 app/test/test_link_bonding_rssconf.c          |   28 +-
 app/test/test_pmd_perf.c                      |   12 +-
 app/test/virtual_pmd.c                        |   10 +-
 doc/guides/eventdevs/cnxk.rst                 |    2 +-
 doc/guides/eventdevs/octeontx2.rst            |    2 +-
 doc/guides/nics/af_packet.rst                 |    2 +-
 doc/guides/nics/bnxt.rst                      |   24 +-
 doc/guides/nics/enic.rst                      |    2 +-
 doc/guides/nics/features.rst                  |  114 +-
 doc/guides/nics/fm10k.rst                     |    6 +-
 doc/guides/nics/intel_vf.rst                  |   10 +-
 doc/guides/nics/ixgbe.rst                     |   12 +-
 doc/guides/nics/mlx5.rst                      |    4 +-
 doc/guides/nics/tap.rst                       |    2 +-
 .../generic_segmentation_offload_lib.rst      |    8 +-
 doc/guides/prog_guide/mbuf_lib.rst            |   18 +-
 doc/guides/prog_guide/poll_mode_drv.rst       |    8 +-
 doc/guides/prog_guide/rte_flow.rst            |   34 +-
 doc/guides/prog_guide/rte_security.rst        |    2 +-
 doc/guides/rel_notes/deprecation.rst          |   10 +-
 doc/guides/rel_notes/release_21_11.rst        |    3 +
 doc/guides/sample_app_ug/ipsec_secgw.rst      |    4 +-
 doc/guides/testpmd_app_ug/run_app.rst         |    2 +-
 drivers/bus/dpaa/include/process.h            |   16 +-
 drivers/common/cnxk/roc_npc.h                 |    2 +-
 drivers/net/af_packet/rte_eth_af_packet.c     |   20 +-
 drivers/net/af_xdp/rte_eth_af_xdp.c           |   12 +-
 drivers/net/ark/ark_ethdev.c                  |   16 +-
 drivers/net/atlantic/atl_ethdev.c             |   88 +-
 drivers/net/atlantic/atl_ethdev.h             |   18 +-
 drivers/net/atlantic/atl_rxtx.c               |    6 +-
 drivers/net/avp/avp_ethdev.c                  |   26 +-
 drivers/net/axgbe/axgbe_dev.c                 |    6 +-
 drivers/net/axgbe/axgbe_ethdev.c              |  104 +-
 drivers/net/axgbe/axgbe_ethdev.h              |   12 +-
 drivers/net/axgbe/axgbe_mdio.c                |    2 +-
 drivers/net/axgbe/axgbe_rxtx.c                |    6 +-
 drivers/net/bnx2x/bnx2x_ethdev.c              |   12 +-
 drivers/net/bnxt/bnxt.h                       |   62 +-
 drivers/net/bnxt/bnxt_ethdev.c                |  172 +--
 drivers/net/bnxt/bnxt_flow.c                  |    6 +-
 drivers/net/bnxt/bnxt_hwrm.c                  |  112 +-
 drivers/net/bnxt/bnxt_reps.c                  |    2 +-
 drivers/net/bnxt/bnxt_ring.c                  |    4 +-
 drivers/net/bnxt/bnxt_rxq.c                   |   28 +-
 drivers/net/bnxt/bnxt_rxr.c                   |    4 +-
 drivers/net/bnxt/bnxt_rxtx_vec_avx2.c         |    2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_common.h       |    2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_neon.c         |    2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_sse.c          |    2 +-
 drivers/net/bnxt/bnxt_txr.c                   |    4 +-
 drivers/net/bnxt/bnxt_vnic.c                  |   30 +-
 drivers/net/bnxt/rte_pmd_bnxt.c               |    8 +-
 drivers/net/bonding/eth_bond_private.h        |    4 +-
 drivers/net/bonding/rte_eth_bond_8023ad.c     |   16 +-
 drivers/net/bonding/rte_eth_bond_api.c        |    6 +-
 drivers/net/bonding/rte_eth_bond_pmd.c        |   50 +-
 drivers/net/cnxk/cn10k_ethdev.c               |   42 +-
 drivers/net/cnxk/cn10k_rte_flow.c             |    2 +-
 drivers/net/cnxk/cn10k_rx.c                   |    4 +-
 drivers/net/cnxk/cn10k_tx.c                   |    4 +-
 drivers/net/cnxk/cn9k_ethdev.c                |   60 +-
 drivers/net/cnxk/cn9k_rx.c                    |    4 +-
 drivers/net/cnxk/cn9k_tx.c                    |    4 +-
 drivers/net/cnxk/cnxk_ethdev.c                |  112 +-
 drivers/net/cnxk/cnxk_ethdev.h                |   49 +-
 drivers/net/cnxk/cnxk_ethdev_devargs.c        |    6 +-
 drivers/net/cnxk/cnxk_ethdev_ops.c            |  106 +-
 drivers/net/cnxk/cnxk_link.c                  |   14 +-
 drivers/net/cnxk/cnxk_ptp.c                   |    4 +-
 drivers/net/cnxk/cnxk_rte_flow.c              |    2 +-
 drivers/net/cxgbe/cxgbe.h                     |   46 +-
 drivers/net/cxgbe/cxgbe_ethdev.c              |   42 +-
 drivers/net/cxgbe/cxgbe_main.c                |   12 +-
 drivers/net/dpaa/dpaa_ethdev.c                |  180 +--
 drivers/net/dpaa/dpaa_ethdev.h                |   10 +-
 drivers/net/dpaa/dpaa_flow.c                  |   32 +-
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c        |   47 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  138 +--
 drivers/net/dpaa2/dpaa2_ethdev.h              |   22 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |    8 +-
 drivers/net/e1000/e1000_ethdev.h              |   18 +-
 drivers/net/e1000/em_ethdev.c                 |   64 +-
 drivers/net/e1000/em_rxtx.c                   |   38 +-
 drivers/net/e1000/igb_ethdev.c                |  158 +--
 drivers/net/e1000/igb_pf.c                    |    2 +-
 drivers/net/e1000/igb_rxtx.c                  |  116 +-
 drivers/net/ena/ena_ethdev.c                  |   70 +-
 drivers/net/ena/ena_ethdev.h                  |    4 +-
 drivers/net/ena/ena_rss.c                     |   74 +-
 drivers/net/enetc/enetc_ethdev.c              |   30 +-
 drivers/net/enic/enic.h                       |    2 +-
 drivers/net/enic/enic_ethdev.c                |   88 +-
 drivers/net/enic/enic_main.c                  |   40 +-
 drivers/net/enic/enic_res.c                   |   50 +-
 drivers/net/failsafe/failsafe.c               |    8 +-
 drivers/net/failsafe/failsafe_intr.c          |    4 +-
 drivers/net/failsafe/failsafe_ops.c           |   78 +-
 drivers/net/fm10k/fm10k.h                     |    4 +-
 drivers/net/fm10k/fm10k_ethdev.c              |  146 +--
 drivers/net/fm10k/fm10k_rxtx_vec.c            |    6 +-
 drivers/net/hinic/base/hinic_pmd_hwdev.c      |   22 +-
 drivers/net/hinic/hinic_pmd_ethdev.c          |  136 +--
 drivers/net/hinic/hinic_pmd_rx.c              |   36 +-
 drivers/net/hinic/hinic_pmd_rx.h              |   22 +-
 drivers/net/hns3/hns3_dcb.c                   |   14 +-
 drivers/net/hns3/hns3_ethdev.c                |  352 +++---
 drivers/net/hns3/hns3_ethdev.h                |   12 +-
 drivers/net/hns3/hns3_ethdev_vf.c             |  100 +-
 drivers/net/hns3/hns3_flow.c                  |    6 +-
 drivers/net/hns3/hns3_ptp.c                   |    2 +-
 drivers/net/hns3/hns3_rss.c                   |  108 +-
 drivers/net/hns3/hns3_rss.h                   |   28 +-
 drivers/net/hns3/hns3_rxtx.c                  |   30 +-
 drivers/net/hns3/hns3_rxtx.h                  |    2 +-
 drivers/net/hns3/hns3_rxtx_vec.c              |   10 +-
 drivers/net/i40e/i40e_ethdev.c                |  272 ++---
 drivers/net/i40e/i40e_ethdev.h                |   24 +-
 drivers/net/i40e/i40e_flow.c                  |   32 +-
 drivers/net/i40e/i40e_hash.c                  |  158 +--
 drivers/net/i40e/i40e_pf.c                    |   14 +-
 drivers/net/i40e/i40e_rxtx.c                  |    8 +-
 drivers/net/i40e/i40e_rxtx.h                  |    4 +-
 drivers/net/i40e/i40e_rxtx_vec_avx512.c       |    2 +-
 drivers/net/i40e/i40e_rxtx_vec_common.h       |    8 +-
 drivers/net/i40e/i40e_vf_representor.c        |   48 +-
 drivers/net/iavf/iavf.h                       |   24 +-
 drivers/net/iavf/iavf_ethdev.c                |  178 +--
 drivers/net/iavf/iavf_hash.c                  |  320 ++---
 drivers/net/iavf/iavf_rxtx.c                  |    2 +-
 drivers/net/iavf/iavf_rxtx.h                  |   24 +-
 drivers/net/iavf/iavf_rxtx_vec_avx2.c         |    4 +-
 drivers/net/iavf/iavf_rxtx_vec_avx512.c       |    6 +-
 drivers/net/iavf/iavf_rxtx_vec_sse.c          |    2 +-
 drivers/net/ice/ice_dcf.c                     |    2 +-
 drivers/net/ice/ice_dcf_ethdev.c              |   86 +-
 drivers/net/ice/ice_dcf_vf_representor.c      |   56 +-
 drivers/net/ice/ice_ethdev.c                  |  180 +--
 drivers/net/ice/ice_ethdev.h                  |   26 +-
 drivers/net/ice/ice_hash.c                    |  290 ++---
 drivers/net/ice/ice_rxtx.c                    |   16 +-
 drivers/net/ice/ice_rxtx_vec_avx2.c           |    2 +-
 drivers/net/ice/ice_rxtx_vec_avx512.c         |    4 +-
 drivers/net/ice/ice_rxtx_vec_common.h         |   28 +-
 drivers/net/ice/ice_rxtx_vec_sse.c            |    2 +-
 drivers/net/igc/igc_ethdev.c                  |  138 +--
 drivers/net/igc/igc_ethdev.h                  |   54 +-
 drivers/net/igc/igc_txrx.c                    |   48 +-
 drivers/net/ionic/ionic_ethdev.c              |  138 +--
 drivers/net/ionic/ionic_ethdev.h              |   12 +-
 drivers/net/ionic/ionic_lif.c                 |   36 +-
 drivers/net/ionic/ionic_rxtx.c                |   10 +-
 drivers/net/ipn3ke/ipn3ke_representor.c       |   64 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |  285 +++--
 drivers/net/ixgbe/ixgbe_ethdev.h              |   18 +-
 drivers/net/ixgbe/ixgbe_fdir.c                |   24 +-
 drivers/net/ixgbe/ixgbe_flow.c                |    2 +-
 drivers/net/ixgbe/ixgbe_ipsec.c               |   12 +-
 drivers/net/ixgbe/ixgbe_pf.c                  |   34 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                |  249 ++--
 drivers/net/ixgbe/ixgbe_rxtx.h                |    4 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h     |    2 +-
 drivers/net/ixgbe/ixgbe_tm.c                  |   16 +-
 drivers/net/ixgbe/ixgbe_vf_representor.c      |   16 +-
 drivers/net/ixgbe/rte_pmd_ixgbe.c             |   14 +-
 drivers/net/ixgbe/rte_pmd_ixgbe.h             |    4 +-
 drivers/net/kni/rte_eth_kni.c                 |    8 +-
 drivers/net/liquidio/lio_ethdev.c             |  114 +-
 drivers/net/memif/memif_socket.c              |    2 +-
 drivers/net/memif/rte_eth_memif.c             |   16 +-
 drivers/net/mlx4/mlx4_ethdev.c                |   32 +-
 drivers/net/mlx4/mlx4_flow.c                  |   30 +-
 drivers/net/mlx4/mlx4_intr.c                  |    8 +-
 drivers/net/mlx4/mlx4_rxq.c                   |   18 +-
 drivers/net/mlx4/mlx4_txq.c                   |   24 +-
 drivers/net/mlx5/linux/mlx5_ethdev_os.c       |   54 +-
 drivers/net/mlx5/linux/mlx5_os.c              |    6 +-
 drivers/net/mlx5/mlx5.c                       |    4 +-
 drivers/net/mlx5/mlx5.h                       |    2 +-
 drivers/net/mlx5/mlx5_defs.h                  |    6 +-
 drivers/net/mlx5/mlx5_ethdev.c                |    6 +-
 drivers/net/mlx5/mlx5_flow.c                  |   54 +-
 drivers/net/mlx5/mlx5_flow.h                  |   12 +-
 drivers/net/mlx5/mlx5_flow_dv.c               |   44 +-
 drivers/net/mlx5/mlx5_flow_verbs.c            |    4 +-
 drivers/net/mlx5/mlx5_rss.c                   |   10 +-
 drivers/net/mlx5/mlx5_rxq.c                   |   40 +-
 drivers/net/mlx5/mlx5_rxtx_vec.h              |    8 +-
 drivers/net/mlx5/mlx5_tx.c                    |   30 +-
 drivers/net/mlx5/mlx5_txq.c                   |   58 +-
 drivers/net/mlx5/mlx5_vlan.c                  |    4 +-
 drivers/net/mlx5/windows/mlx5_os.c            |    4 +-
 drivers/net/mvneta/mvneta_ethdev.c            |   32 +-
 drivers/net/mvneta/mvneta_ethdev.h            |   10 +-
 drivers/net/mvneta/mvneta_rxtx.c              |    2 +-
 drivers/net/mvpp2/mrvl_ethdev.c               |  112 +-
 drivers/net/netvsc/hn_ethdev.c                |   70 +-
 drivers/net/netvsc/hn_rndis.c                 |   50 +-
 drivers/net/nfb/nfb_ethdev.c                  |   20 +-
 drivers/net/nfb/nfb_rx.c                      |    2 +-
 drivers/net/nfp/nfp_common.c                  |  122 +-
 drivers/net/nfp/nfp_ethdev.c                  |    2 +-
 drivers/net/nfp/nfp_ethdev_vf.c               |    2 +-
 drivers/net/ngbe/ngbe_ethdev.c                |   50 +-
 drivers/net/null/rte_eth_null.c               |   28 +-
 drivers/net/octeontx/octeontx_ethdev.c        |   74 +-
 drivers/net/octeontx/octeontx_ethdev.h        |   30 +-
 drivers/net/octeontx/octeontx_ethdev_ops.c    |   26 +-
 drivers/net/octeontx2/otx2_ethdev.c           |   96 +-
 drivers/net/octeontx2/otx2_ethdev.h           |   64 +-
 drivers/net/octeontx2/otx2_ethdev_devargs.c   |   12 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c       |   14 +-
 drivers/net/octeontx2/otx2_ethdev_sec.c       |    8 +-
 drivers/net/octeontx2/otx2_flow.c             |    2 +-
 drivers/net/octeontx2/otx2_flow_ctrl.c        |   36 +-
 drivers/net/octeontx2/otx2_flow_parse.c       |    4 +-
 drivers/net/octeontx2/otx2_link.c             |   40 +-
 drivers/net/octeontx2/otx2_mcast.c            |    2 +-
 drivers/net/octeontx2/otx2_ptp.c              |    4 +-
 drivers/net/octeontx2/otx2_rss.c              |   70 +-
 drivers/net/octeontx2/otx2_rx.c               |    4 +-
 drivers/net/octeontx2/otx2_tx.c               |    2 +-
 drivers/net/octeontx2/otx2_vlan.c             |   42 +-
 drivers/net/octeontx_ep/otx_ep_ethdev.c       |    6 +-
 drivers/net/octeontx_ep/otx_ep_rxtx.c         |    6 +-
 drivers/net/pcap/pcap_ethdev.c                |   12 +-
 drivers/net/pfe/pfe_ethdev.c                  |   18 +-
 drivers/net/qede/base/mcp_public.h            |    4 +-
 drivers/net/qede/qede_ethdev.c                |  156 +--
 drivers/net/qede/qede_filter.c                |   42 +-
 drivers/net/qede/qede_rxtx.c                  |    2 +-
 drivers/net/qede/qede_rxtx.h                  |   16 +-
 drivers/net/ring/rte_eth_ring.c               |   20 +-
 drivers/net/sfc/sfc.c                         |   30 +-
 drivers/net/sfc/sfc_ef100_rx.c                |   10 +-
 drivers/net/sfc/sfc_ef100_tx.c                |   20 +-
 drivers/net/sfc/sfc_ef10_essb_rx.c            |    4 +-
 drivers/net/sfc/sfc_ef10_rx.c                 |    8 +-
 drivers/net/sfc/sfc_ef10_tx.c                 |   32 +-
 drivers/net/sfc/sfc_ethdev.c                  |   50 +-
 drivers/net/sfc/sfc_flow.c                    |    2 +-
 drivers/net/sfc/sfc_port.c                    |   52 +-
 drivers/net/sfc/sfc_repr.c                    |   10 +-
 drivers/net/sfc/sfc_rx.c                      |   50 +-
 drivers/net/sfc/sfc_tx.c                      |   50 +-
 drivers/net/softnic/rte_eth_softnic.c         |   12 +-
 drivers/net/szedata2/rte_eth_szedata2.c       |   14 +-
 drivers/net/tap/rte_eth_tap.c                 |  104 +-
 drivers/net/tap/tap_rss.h                     |    2 +-
 drivers/net/thunderx/nicvf_ethdev.c           |  102 +-
 drivers/net/thunderx/nicvf_ethdev.h           |   40 +-
 drivers/net/txgbe/txgbe_ethdev.c              |  242 ++--
 drivers/net/txgbe/txgbe_ethdev.h              |   18 +-
 drivers/net/txgbe/txgbe_ethdev_vf.c           |   24 +-
 drivers/net/txgbe/txgbe_fdir.c                |   20 +-
 drivers/net/txgbe/txgbe_flow.c                |    2 +-
 drivers/net/txgbe/txgbe_ipsec.c               |   12 +-
 drivers/net/txgbe/txgbe_pf.c                  |   34 +-
 drivers/net/txgbe/txgbe_rxtx.c                |  308 ++---
 drivers/net/txgbe/txgbe_rxtx.h                |    4 +-
 drivers/net/txgbe/txgbe_tm.c                  |   16 +-
 drivers/net/vhost/rte_eth_vhost.c             |   16 +-
 drivers/net/virtio/virtio_ethdev.c            |  124 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.c          |   72 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.h          |   16 +-
 drivers/net/vmxnet3/vmxnet3_rxtx.c            |   16 +-
 examples/bbdev_app/main.c                     |    6 +-
 examples/bond/main.c                          |   14 +-
 examples/distributor/main.c                   |   12 +-
 examples/ethtool/ethtool-app/main.c           |    2 +-
 examples/ethtool/lib/rte_ethtool.c            |   18 +-
 .../pipeline_worker_generic.c                 |   16 +-
 .../eventdev_pipeline/pipeline_worker_tx.c    |   12 +-
 examples/flow_classify/flow_classify.c        |    4 +-
 examples/flow_filtering/main.c                |   16 +-
 examples/ioat/ioatfwd.c                       |    8 +-
 examples/ip_fragmentation/main.c              |   12 +-
 examples/ip_pipeline/link.c                   |   20 +-
 examples/ip_reassembly/main.c                 |   18 +-
 examples/ipsec-secgw/ipsec-secgw.c            |   32 +-
 examples/ipsec-secgw/sa.c                     |    8 +-
 examples/ipv4_multicast/main.c                |    6 +-
 examples/kni/main.c                           |    8 +-
 examples/l2fwd-crypto/main.c                  |   10 +-
 examples/l2fwd-event/l2fwd_common.c           |   10 +-
 examples/l2fwd-event/main.c                   |    2 +-
 examples/l2fwd-jobstats/main.c                |    8 +-
 examples/l2fwd-keepalive/main.c               |    8 +-
 examples/l2fwd/main.c                         |    8 +-
 examples/l3fwd-acl/main.c                     |   18 +-
 examples/l3fwd-graph/main.c                   |   14 +-
 examples/l3fwd-power/main.c                   |   16 +-
 examples/l3fwd/l3fwd_event.c                  |    4 +-
 examples/l3fwd/main.c                         |   18 +-
 examples/link_status_interrupt/main.c         |   10 +-
 .../client_server_mp/mp_server/init.c         |    4 +-
 examples/multi_process/symmetric_mp/main.c    |   14 +-
 examples/ntb/ntb_fwd.c                        |    6 +-
 examples/packet_ordering/main.c               |    4 +-
 .../performance-thread/l3fwd-thread/main.c    |   16 +-
 examples/pipeline/obj.c                       |   20 +-
 examples/ptpclient/ptpclient.c                |   10 +-
 examples/qos_meter/main.c                     |   16 +-
 examples/qos_sched/init.c                     |    6 +-
 examples/rxtx_callbacks/main.c                |    8 +-
 examples/server_node_efd/server/init.c        |    8 +-
 examples/skeleton/basicfwd.c                  |    4 +-
 examples/vhost/main.c                         |   26 +-
 examples/vm_power_manager/main.c              |    6 +-
 examples/vmdq/main.c                          |   20 +-
 examples/vmdq_dcb/main.c                      |   40 +-
 lib/ethdev/ethdev_driver.h                    |   36 +-
 lib/ethdev/rte_ethdev.c                       |  181 ++-
 lib/ethdev/rte_ethdev.h                       | 1035 +++++++++++------
 lib/ethdev/rte_flow.h                         |    2 +-
 lib/gso/rte_gso.c                             |   20 +-
 lib/gso/rte_gso.h                             |    4 +-
 lib/mbuf/rte_mbuf_core.h                      |    8 +-
 lib/mbuf/rte_mbuf_dyn.h                       |    2 +-
 339 files changed, 6645 insertions(+), 6390 deletions(-)

diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index bfe5ce825b70..a4271047e693 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -757,11 +757,11 @@ show_port(void)
 		}
 
 		ret = rte_eth_dev_flow_ctrl_get(i, &fc_conf);
-		if (ret == 0 && fc_conf.mode != RTE_FC_NONE)  {
+		if (ret == 0 && fc_conf.mode != RTE_ETH_FC_NONE)  {
 			printf("\t  -- flow control mode %s%s high %u low %u pause %u%s%s\n",
-			       fc_conf.mode == RTE_FC_RX_PAUSE ? "rx " :
-			       fc_conf.mode == RTE_FC_TX_PAUSE ? "tx " :
-			       fc_conf.mode == RTE_FC_FULL ? "full" : "???",
+			       fc_conf.mode == RTE_ETH_FC_RX_PAUSE ? "rx " :
+			       fc_conf.mode == RTE_ETH_FC_TX_PAUSE ? "tx " :
+			       fc_conf.mode == RTE_ETH_FC_FULL ? "full" : "???",
 			       fc_conf.autoneg ? " auto" : "",
 			       fc_conf.high_water,
 			       fc_conf.low_water,
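
For readers mapping old names to new: RTE_ETH_FC_FULL subsumes both pause
directions, which is why each direction above tests two modes. A minimal
standalone sketch of the same query (port_id assumed valid and started):

    #include <stdio.h>
    #include <string.h>
    #include <rte_ethdev.h>

    static void
    show_fc_mode(uint16_t port_id)
    {
        struct rte_eth_fc_conf fc_conf;

        memset(&fc_conf, 0, sizeof(fc_conf));
        if (rte_eth_dev_flow_ctrl_get(port_id, &fc_conf) != 0)
            return;
        /* FULL means pause frames are both honoured and generated. */
        printf("rx pause %s, tx pause %s\n",
               (fc_conf.mode == RTE_ETH_FC_RX_PAUSE ||
                fc_conf.mode == RTE_ETH_FC_FULL) ? "on" : "off",
               (fc_conf.mode == RTE_ETH_FC_TX_PAUSE ||
                fc_conf.mode == RTE_ETH_FC_FULL) ? "on" : "off");
    }
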
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index 660d5a0364b6..31d1b0e14653 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -668,13 +668,13 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 	struct test_perf *t = evt_test_priv(test);
 	struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 			.split_hdr_size = 0,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 			},
 		},
 	};
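
For reference, the same RSS bring-up condensed to its core under the new
names, a sketch assuming port_id, a single queue pair and error handling
elsewhere (rss_key NULL selects the PMD default key):

    struct rte_eth_conf conf = {
        .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
        .rx_adv_conf.rss_conf = {
            .rss_key = NULL,
            .rss_hf = RTE_ETH_RSS_IP,   /* hash on IPv4/IPv6 headers */
        },
    };

    ret = rte_eth_dev_configure(port_id, 1 /* rxq */, 1 /* txq */, &conf);

Strict PMDs reject rss_hf bits outside dev_info.flow_type_rss_offloads, so
masking against that field first is the safer pattern.
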
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 2775e72c580d..d202091077a6 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -176,12 +176,12 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 	struct rte_eth_rxconf rx_conf;
 	struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 			},
 		},
 	};
@@ -223,7 +223,7 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 
 		if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT))
 			local_port_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_RSS_HASH;
+				RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 		ret = rte_eth_dev_info_get(i, &dev_info);
 		if (ret != 0) {
@@ -233,9 +233,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 		}
 
 		/* Enable mbuf fast free if PMD has the capability. */
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		rx_conf = dev_info.default_rxconf;
 		rx_conf.offloads = port_conf.rxmode.offloads;
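
The capability guard itself is untouched by the rename; condensed
(port_conf assumed to be a local struct rte_eth_conf):

    struct rte_eth_dev_info dev_info;

    if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
        return -1;
    /* Only request the offload when the PMD advertises it. */
    if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
        port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
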
diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h
index a14d4e05e185..4249b6175b82 100644
--- a/app/test-flow-perf/config.h
+++ b/app/test-flow-perf/config.h
@@ -5,7 +5,7 @@
 #define FLOW_ITEM_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ACTION_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ATTR_MASK(_x) (UINT64_C(1) << _x)
-#define GET_RSS_HF() (ETH_RSS_IP)
+#define GET_RSS_HF() (RTE_ETH_RSS_IP)
 
 /* Configuration */
 #define RXQ_NUM 4
diff --git a/app/test-pipeline/init.c b/app/test-pipeline/init.c
index fe37d63730c6..c73801904103 100644
--- a/app/test-pipeline/init.c
+++ b/app/test-pipeline/init.c
@@ -70,16 +70,16 @@ struct app_params app = {
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -178,7 +178,7 @@ app_ports_check_link(void)
 		RTE_LOG(INFO, USER1, "Port %u %s\n",
 			port,
 			link_status_text);
-		if (link.link_status == ETH_LINK_DOWN)
+		if (link.link_status == RTE_ETH_LINK_DOWN)
 			all_ports_up = 0;
 	}
 
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 3221f6e1aa40..ebea13f86ab0 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1478,51 +1478,51 @@ parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
 	int duplex;
 
 	if (!strcmp(duplexstr, "half")) {
-		duplex = ETH_LINK_HALF_DUPLEX;
+		duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	} else if (!strcmp(duplexstr, "full")) {
-		duplex = ETH_LINK_FULL_DUPLEX;
+		duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	} else if (!strcmp(duplexstr, "auto")) {
-		duplex = ETH_LINK_FULL_DUPLEX;
+		duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	} else {
 		fprintf(stderr, "Unknown duplex parameter\n");
 		return -1;
 	}
 
 	if (!strcmp(speedstr, "10")) {
-		*speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
-				ETH_LINK_SPEED_10M_HD : ETH_LINK_SPEED_10M;
+		*speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+				RTE_ETH_LINK_SPEED_10M_HD : RTE_ETH_LINK_SPEED_10M;
 	} else if (!strcmp(speedstr, "100")) {
-		*speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
-				ETH_LINK_SPEED_100M_HD : ETH_LINK_SPEED_100M;
+		*speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+				RTE_ETH_LINK_SPEED_100M_HD : RTE_ETH_LINK_SPEED_100M;
 	} else {
-		if (duplex != ETH_LINK_FULL_DUPLEX) {
+		if (duplex != RTE_ETH_LINK_FULL_DUPLEX) {
 			fprintf(stderr, "Invalid speed/duplex parameters\n");
 			return -1;
 		}
 		if (!strcmp(speedstr, "1000")) {
-			*speed = ETH_LINK_SPEED_1G;
+			*speed = RTE_ETH_LINK_SPEED_1G;
 		} else if (!strcmp(speedstr, "10000")) {
-			*speed = ETH_LINK_SPEED_10G;
+			*speed = RTE_ETH_LINK_SPEED_10G;
 		} else if (!strcmp(speedstr, "25000")) {
-			*speed = ETH_LINK_SPEED_25G;
+			*speed = RTE_ETH_LINK_SPEED_25G;
 		} else if (!strcmp(speedstr, "40000")) {
-			*speed = ETH_LINK_SPEED_40G;
+			*speed = RTE_ETH_LINK_SPEED_40G;
 		} else if (!strcmp(speedstr, "50000")) {
-			*speed = ETH_LINK_SPEED_50G;
+			*speed = RTE_ETH_LINK_SPEED_50G;
 		} else if (!strcmp(speedstr, "100000")) {
-			*speed = ETH_LINK_SPEED_100G;
+			*speed = RTE_ETH_LINK_SPEED_100G;
 		} else if (!strcmp(speedstr, "200000")) {
-			*speed = ETH_LINK_SPEED_200G;
+			*speed = RTE_ETH_LINK_SPEED_200G;
 		} else if (!strcmp(speedstr, "auto")) {
-			*speed = ETH_LINK_SPEED_AUTONEG;
+			*speed = RTE_ETH_LINK_SPEED_AUTONEG;
 		} else {
 			fprintf(stderr, "Unknown speed parameter\n");
 			return -1;
 		}
 	}
 
-	if (*speed != ETH_LINK_SPEED_AUTONEG)
-		*speed |= ETH_LINK_SPEED_FIXED;
+	if (*speed != RTE_ETH_LINK_SPEED_AUTONEG)
+		*speed |= RTE_ETH_LINK_SPEED_FIXED;
 
 	return 0;
 }
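
The trailing |= above encodes the ethdev convention that survives the
rename: RTE_ETH_LINK_SPEED_AUTONEG is 0, and forcing a speed means also
setting the RTE_ETH_LINK_SPEED_FIXED bit. For example, to force 10 Gbps in
a port configuration:

    /* FIXED disables autonegotiation; exactly one speed bit then applies. */
    port_conf.link_speeds = RTE_ETH_LINK_SPEED_FIXED |
                            RTE_ETH_LINK_SPEED_10G;
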
@@ -2166,33 +2166,33 @@ cmd_config_rss_parsed(void *parsed_result,
 	int ret;
 
 	if (!strcmp(res->value, "all"))
-		rss_conf.rss_hf = ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP |
-			ETH_RSS_TCP | ETH_RSS_UDP | ETH_RSS_SCTP |
-			ETH_RSS_L2_PAYLOAD | ETH_RSS_L2TPV3 | ETH_RSS_ESP |
-			ETH_RSS_AH | ETH_RSS_PFCP | ETH_RSS_GTPU |
-			ETH_RSS_ECPRI;
+		rss_conf.rss_hf = RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP |
+			RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP |
+			RTE_ETH_RSS_L2_PAYLOAD | RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP |
+			RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP | RTE_ETH_RSS_GTPU |
+			RTE_ETH_RSS_ECPRI;
 	else if (!strcmp(res->value, "eth"))
-		rss_conf.rss_hf = ETH_RSS_ETH;
+		rss_conf.rss_hf = RTE_ETH_RSS_ETH;
 	else if (!strcmp(res->value, "vlan"))
-		rss_conf.rss_hf = ETH_RSS_VLAN;
+		rss_conf.rss_hf = RTE_ETH_RSS_VLAN;
 	else if (!strcmp(res->value, "ip"))
-		rss_conf.rss_hf = ETH_RSS_IP;
+		rss_conf.rss_hf = RTE_ETH_RSS_IP;
 	else if (!strcmp(res->value, "udp"))
-		rss_conf.rss_hf = ETH_RSS_UDP;
+		rss_conf.rss_hf = RTE_ETH_RSS_UDP;
 	else if (!strcmp(res->value, "tcp"))
-		rss_conf.rss_hf = ETH_RSS_TCP;
+		rss_conf.rss_hf = RTE_ETH_RSS_TCP;
 	else if (!strcmp(res->value, "sctp"))
-		rss_conf.rss_hf = ETH_RSS_SCTP;
+		rss_conf.rss_hf = RTE_ETH_RSS_SCTP;
 	else if (!strcmp(res->value, "ether"))
-		rss_conf.rss_hf = ETH_RSS_L2_PAYLOAD;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2_PAYLOAD;
 	else if (!strcmp(res->value, "port"))
-		rss_conf.rss_hf = ETH_RSS_PORT;
+		rss_conf.rss_hf = RTE_ETH_RSS_PORT;
 	else if (!strcmp(res->value, "vxlan"))
-		rss_conf.rss_hf = ETH_RSS_VXLAN;
+		rss_conf.rss_hf = RTE_ETH_RSS_VXLAN;
 	else if (!strcmp(res->value, "geneve"))
-		rss_conf.rss_hf = ETH_RSS_GENEVE;
+		rss_conf.rss_hf = RTE_ETH_RSS_GENEVE;
 	else if (!strcmp(res->value, "nvgre"))
-		rss_conf.rss_hf = ETH_RSS_NVGRE;
+		rss_conf.rss_hf = RTE_ETH_RSS_NVGRE;
 	else if (!strcmp(res->value, "l3-pre32"))
 		rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE32;
 	else if (!strcmp(res->value, "l3-pre40"))
@@ -2206,46 +2206,46 @@ cmd_config_rss_parsed(void *parsed_result,
 	else if (!strcmp(res->value, "l3-pre96"))
 		rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE96;
 	else if (!strcmp(res->value, "l3-src-only"))
-		rss_conf.rss_hf = ETH_RSS_L3_SRC_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L3_SRC_ONLY;
 	else if (!strcmp(res->value, "l3-dst-only"))
-		rss_conf.rss_hf = ETH_RSS_L3_DST_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L3_DST_ONLY;
 	else if (!strcmp(res->value, "l4-src-only"))
-		rss_conf.rss_hf = ETH_RSS_L4_SRC_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L4_SRC_ONLY;
 	else if (!strcmp(res->value, "l4-dst-only"))
-		rss_conf.rss_hf = ETH_RSS_L4_DST_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L4_DST_ONLY;
 	else if (!strcmp(res->value, "l2-src-only"))
-		rss_conf.rss_hf = ETH_RSS_L2_SRC_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2_SRC_ONLY;
 	else if (!strcmp(res->value, "l2-dst-only"))
-		rss_conf.rss_hf = ETH_RSS_L2_DST_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2_DST_ONLY;
 	else if (!strcmp(res->value, "l2tpv3"))
-		rss_conf.rss_hf = ETH_RSS_L2TPV3;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2TPV3;
 	else if (!strcmp(res->value, "esp"))
-		rss_conf.rss_hf = ETH_RSS_ESP;
+		rss_conf.rss_hf = RTE_ETH_RSS_ESP;
 	else if (!strcmp(res->value, "ah"))
-		rss_conf.rss_hf = ETH_RSS_AH;
+		rss_conf.rss_hf = RTE_ETH_RSS_AH;
 	else if (!strcmp(res->value, "pfcp"))
-		rss_conf.rss_hf = ETH_RSS_PFCP;
+		rss_conf.rss_hf = RTE_ETH_RSS_PFCP;
 	else if (!strcmp(res->value, "pppoe"))
-		rss_conf.rss_hf = ETH_RSS_PPPOE;
+		rss_conf.rss_hf = RTE_ETH_RSS_PPPOE;
 	else if (!strcmp(res->value, "gtpu"))
-		rss_conf.rss_hf = ETH_RSS_GTPU;
+		rss_conf.rss_hf = RTE_ETH_RSS_GTPU;
 	else if (!strcmp(res->value, "ecpri"))
-		rss_conf.rss_hf = ETH_RSS_ECPRI;
+		rss_conf.rss_hf = RTE_ETH_RSS_ECPRI;
 	else if (!strcmp(res->value, "mpls"))
-		rss_conf.rss_hf = ETH_RSS_MPLS;
+		rss_conf.rss_hf = RTE_ETH_RSS_MPLS;
 	else if (!strcmp(res->value, "ipv4-chksum"))
-		rss_conf.rss_hf = ETH_RSS_IPV4_CHKSUM;
+		rss_conf.rss_hf = RTE_ETH_RSS_IPV4_CHKSUM;
 	else if (!strcmp(res->value, "none"))
 		rss_conf.rss_hf = 0;
 	else if (!strcmp(res->value, "level-default")) {
-		rss_hf &= (~ETH_RSS_LEVEL_MASK);
-		rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_PMD_DEFAULT);
+		rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+		rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_PMD_DEFAULT);
 	} else if (!strcmp(res->value, "level-outer")) {
-		rss_hf &= (~ETH_RSS_LEVEL_MASK);
-		rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_OUTERMOST);
+		rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+		rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_OUTERMOST);
 	} else if (!strcmp(res->value, "level-inner")) {
-		rss_hf &= (~ETH_RSS_LEVEL_MASK);
-		rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_INNERMOST);
+		rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+		rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_INNERMOST);
 	} else if (!strcmp(res->value, "default"))
 		use_default = 1;
 	else if (isdigit(res->value[0]) && atoi(res->value) > 0 &&
@@ -2982,8 +2982,8 @@ parse_reta_config(const char *str,
 			return -1;
 		}
 
-		idx = hash_index / RTE_RETA_GROUP_SIZE;
-		shift = hash_index % RTE_RETA_GROUP_SIZE;
+		idx = hash_index / RTE_ETH_RETA_GROUP_SIZE;
+		shift = hash_index % RTE_ETH_RETA_GROUP_SIZE;
 		reta_conf[idx].mask |= (1ULL << shift);
 		reta_conf[idx].reta[shift] = nb_queue;
 	}
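
The redirection table is still programmed in 64-entry groups
(RTE_ETH_RETA_GROUP_SIZE); the idx/shift arithmetic above picks the group
and the slot within it. A sketch filling a 128-entry table that alternates
between two queues (port_id and ret assumed declared):

    struct rte_eth_rss_reta_entry64 reta_conf[2]; /* 2 x 64 entries */
    uint16_t i;

    memset(reta_conf, 0, sizeof(reta_conf));
    for (i = 0; i < 128; i++) {
        uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
        uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

        reta_conf[idx].mask |= 1ULL << shift; /* mark entry valid */
        reta_conf[idx].reta[shift] = i % 2;   /* queue 0 or 1 */
    }
    ret = rte_eth_dev_rss_reta_update(port_id, reta_conf, 128);
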
@@ -3012,10 +3012,10 @@ cmd_set_rss_reta_parsed(void *parsed_result,
 	} else
 		printf("The reta size of port %d is %u\n",
 			res->port_id, dev_info.reta_size);
-	if (dev_info.reta_size > ETH_RSS_RETA_SIZE_512) {
+	if (dev_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
 		fprintf(stderr,
 			"Currently do not support more than %u entries of redirection table\n",
-			ETH_RSS_RETA_SIZE_512);
+			RTE_ETH_RSS_RETA_SIZE_512);
 		return;
 	}
 
@@ -3086,8 +3086,8 @@ showport_parse_reta_config(struct rte_eth_rss_reta_entry64 *conf,
 	char *end;
 	char *str_fld[8];
 	uint16_t i;
-	uint16_t num = (nb_entries + RTE_RETA_GROUP_SIZE - 1) /
-			RTE_RETA_GROUP_SIZE;
+	uint16_t num = (nb_entries + RTE_ETH_RETA_GROUP_SIZE - 1) /
+			RTE_ETH_RETA_GROUP_SIZE;
 	int ret;
 
 	p = strchr(p0, '(');
@@ -3132,7 +3132,7 @@ cmd_showport_reta_parsed(void *parsed_result,
 	if (ret != 0)
 		return;
 
-	max_reta_size = RTE_MIN(dev_info.reta_size, ETH_RSS_RETA_SIZE_512);
+	max_reta_size = RTE_MIN(dev_info.reta_size, RTE_ETH_RSS_RETA_SIZE_512);
 	if (res->size == 0 || res->size > max_reta_size) {
 		fprintf(stderr, "Invalid redirection table size: %u (1-%u)\n",
 			res->size, max_reta_size);
@@ -3272,7 +3272,7 @@ cmd_config_dcb_parsed(void *parsed_result,
 		return;
 	}
 
-	if ((res->num_tcs != ETH_4_TCS) && (res->num_tcs != ETH_8_TCS)) {
+	if ((res->num_tcs != RTE_ETH_4_TCS) && (res->num_tcs != RTE_ETH_8_TCS)) {
 		fprintf(stderr,
 			"The invalid number of traffic class, only 4 or 8 allowed.\n");
 		return;
@@ -4276,9 +4276,9 @@ cmd_vlan_tpid_parsed(void *parsed_result,
 	enum rte_vlan_type vlan_type;
 
 	if (!strcmp(res->vlan_type, "inner"))
-		vlan_type = ETH_VLAN_TYPE_INNER;
+		vlan_type = RTE_ETH_VLAN_TYPE_INNER;
 	else if (!strcmp(res->vlan_type, "outer"))
-		vlan_type = ETH_VLAN_TYPE_OUTER;
+		vlan_type = RTE_ETH_VLAN_TYPE_OUTER;
 	else {
 		fprintf(stderr, "Unknown vlan type\n");
 		return;
@@ -4615,55 +4615,55 @@ csum_show(int port_id)
 	printf("Parse tunnel is %s\n",
 		(ports[port_id].parse_tunnel) ? "on" : "off");
 	printf("IP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
 	printf("UDP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
 	printf("TCP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
 	printf("SCTP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
 	printf("Outer-Ip checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
 	printf("Outer-Udp checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
 
 	/* display warnings if configuration is not supported by the NIC */
 	ret = eth_dev_info_get_print_err(port_id, &dev_info);
 	if (ret != 0)
 		return;
 
-	if ((tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware IP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware UDP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware TCP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_SCTP_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware SCTP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware outer IP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 			== 0) {
 		fprintf(stderr,
 			"Warning: hardware outer UDP checksum enabled but not supported by port %d\n",
@@ -4713,8 +4713,8 @@ cmd_csum_parsed(void *parsed_result,
 
 		if (!strcmp(res->proto, "ip")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_IPV4_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+						RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 			} else {
 				fprintf(stderr,
 					"IP checksum offload is not supported by port %u\n",
@@ -4722,8 +4722,8 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "udp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_UDP_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"UDP checksum offload is not supported by port %u\n",
@@ -4731,8 +4731,8 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "tcp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_TCP_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"TCP checksum offload is not supported by port %u\n",
@@ -4740,8 +4740,8 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "sctp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_SCTP_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_SCTP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"SCTP checksum offload is not supported by port %u\n",
@@ -4749,9 +4749,9 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "outer-ip")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-					DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+					RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 				csum_offloads |=
-						DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+						RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 			} else {
 				fprintf(stderr,
 					"Outer IP checksum offload is not supported by port %u\n",
@@ -4759,9 +4759,9 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "outer-udp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-					DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+					RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
 				csum_offloads |=
-						DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"Outer UDP checksum offload is not supported by port %u\n",
@@ -4916,7 +4916,7 @@ cmd_tso_set_parsed(void *parsed_result,
 		return;
 
 	if ((ports[res->port_id].tso_segsz != 0) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
 		fprintf(stderr, "Error: TSO is not supported by port %d\n",
 			res->port_id);
 		return;
@@ -4924,11 +4924,11 @@ cmd_tso_set_parsed(void *parsed_result,
 
 	if (ports[res->port_id].tso_segsz == 0) {
 		ports[res->port_id].dev_conf.txmode.offloads &=
-						~DEV_TX_OFFLOAD_TCP_TSO;
+						~RTE_ETH_TX_OFFLOAD_TCP_TSO;
 		printf("TSO for non-tunneled packets is disabled\n");
 	} else {
 		ports[res->port_id].dev_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_TCP_TSO;
+						RTE_ETH_TX_OFFLOAD_TCP_TSO;
 		printf("TSO segment size for non-tunneled packets is %d\n",
 			ports[res->port_id].tso_segsz);
 	}
@@ -4940,7 +4940,7 @@ cmd_tso_set_parsed(void *parsed_result,
 		return;
 
 	if ((ports[res->port_id].tso_segsz != 0) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
 		fprintf(stderr,
 			"Warning: TSO enabled but not supported by port %d\n",
 			res->port_id);
@@ -5011,27 +5011,27 @@ check_tunnel_tso_nic_support(portid_t port_id)
 	if (eth_dev_info_get_print_err(port_id, &dev_info) != 0)
 		return dev_info;
 
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VXLAN_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO))
 		fprintf(stderr,
 			"Warning: VXLAN TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		fprintf(stderr,
 			"Warning: GRE TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPIP_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO))
 		fprintf(stderr,
 			"Warning: IPIP TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
 		fprintf(stderr,
 			"Warning: GENEVE TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IP_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IP_TNL_TSO))
 		fprintf(stderr,
 			"Warning: IP TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO))
 		fprintf(stderr,
 			"Warning: UDP TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
@@ -5059,20 +5059,20 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
 	dev_info = check_tunnel_tso_nic_support(res->port_id);
 	if (ports[res->port_id].tunnel_tso_segsz == 0) {
 		ports[res->port_id].dev_conf.txmode.offloads &=
-			~(DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			  DEV_TX_OFFLOAD_GRE_TNL_TSO |
-			  DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-			  DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-			  DEV_TX_OFFLOAD_IP_TNL_TSO |
-			  DEV_TX_OFFLOAD_UDP_TNL_TSO);
+			~(RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 		printf("TSO for tunneled packets is disabled\n");
 	} else {
-		uint64_t tso_offloads = (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-					 DEV_TX_OFFLOAD_GRE_TNL_TSO |
-					 DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-					 DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-					 DEV_TX_OFFLOAD_IP_TNL_TSO |
-					 DEV_TX_OFFLOAD_UDP_TNL_TSO);
+		uint64_t tso_offloads = (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 
 		ports[res->port_id].dev_conf.txmode.offloads |=
 			(tso_offloads & dev_info.tx_offload_capa);
@@ -5095,7 +5095,7 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
 			fprintf(stderr,
 				"Warning: csum parse_tunnel must be set so that tunneled packets are recognized\n");
 		if (!(ports[res->port_id].dev_conf.txmode.offloads &
-		      DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+		      RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
 			fprintf(stderr,
 				"Warning: csum set outer-ip must be set to hw if outer L3 is IPv4; not necessary for IPv6\n");
 	}
@@ -7227,9 +7227,9 @@ cmd_link_flow_ctrl_show_parsed(void *parsed_result,
 		return;
 	}
 
-	if (fc_conf.mode == RTE_FC_RX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+	if (fc_conf.mode == RTE_ETH_FC_RX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
 		rx_fc_en = true;
-	if (fc_conf.mode == RTE_FC_TX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+	if (fc_conf.mode == RTE_ETH_FC_TX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
 		tx_fc_en = true;
 
 	printf("\n%s Flow control infos for port %-2d %s\n",
@@ -7507,12 +7507,12 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
 
 	/*
 	 * Rx on/off, flow control is enabled/disabled on RX side. This can indicate
-	 * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+	 * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
 	 * Tx on/off, flow control is enabled/disabled on TX side. This can indicate
-	 * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+	 * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
 	 */
 	static enum rte_eth_fc_mode rx_tx_onoff_2_lfc_mode[2][2] = {
-			{RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+			{RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
 	};
 
 	/* Partial command line, retrieve current configuration */
@@ -7525,11 +7525,11 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
 			return;
 		}
 
-		if ((fc_conf.mode == RTE_FC_RX_PAUSE) ||
-		    (fc_conf.mode == RTE_FC_FULL))
+		if ((fc_conf.mode == RTE_ETH_FC_RX_PAUSE) ||
+		    (fc_conf.mode == RTE_ETH_FC_FULL))
 			rx_fc_en = 1;
-		if ((fc_conf.mode == RTE_FC_TX_PAUSE) ||
-		    (fc_conf.mode == RTE_FC_FULL))
+		if ((fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ||
+		    (fc_conf.mode == RTE_ETH_FC_FULL))
 			tx_fc_en = 1;
 	}
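
The 2x2 table is indexed as [rx_fc_en][tx_fc_en] and is the inverse of the
decode a few lines up; written out as an expression under the same
assumptions:

    /* off/off -> NONE, off/on -> TX_PAUSE,
     * on/off -> RX_PAUSE, on/on -> FULL */
    fc_conf.mode = rx_fc_en ?
            (tx_fc_en ? RTE_ETH_FC_FULL : RTE_ETH_FC_RX_PAUSE) :
            (tx_fc_en ? RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE);
    ret = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
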
 
@@ -7597,12 +7597,12 @@ cmd_priority_flow_ctrl_set_parsed(void *parsed_result,
 
 	/*
 	 * Rx on/off, flow control is enabled/disabled on RX side. This can indicate
-	 * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+	 * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
 	 * Tx on/off, flow control is enabled/disabled on TX side. This can indicate
-	 * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+	 * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
 	 */
 	static enum rte_eth_fc_mode rx_tx_onoff_2_pfc_mode[2][2] = {
-		{RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+		{RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
 	};
 
 	memset(&pfc_conf, 0, sizeof(struct rte_eth_pfc_conf));
@@ -9250,13 +9250,13 @@ cmd_set_vf_rxmode_parsed(void *parsed_result,
 	int is_on = (strcmp(res->on, "on") == 0) ? 1 : 0;
 	if (!strcmp(res->what,"rxmode")) {
 		if (!strcmp(res->mode, "AUPE"))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_UNTAG;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_UNTAG;
 		else if (!strcmp(res->mode, "ROPE"))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_HASH_UC;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_HASH_UC;
 		else if (!strcmp(res->mode, "BAM"))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_BROADCAST;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_BROADCAST;
 		else if (!strncmp(res->mode, "MPE",3))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_MULTICAST;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_MULTICAST;
 	}
 
 	RTE_SET_USED(is_on);
@@ -9656,7 +9656,7 @@ cmd_tunnel_udp_config_parsed(void *parsed_result,
 	int ret;
 
 	tunnel_udp.udp_port = res->udp_port;
-	tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+	tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
 
 	if (!strcmp(res->what, "add"))
 		ret = rte_eth_dev_udp_tunnel_port_add(res->port_id,
@@ -9722,13 +9722,13 @@ cmd_cfg_tunnel_udp_port_parsed(void *parsed_result,
 	tunnel_udp.udp_port = res->udp_port;
 
 	if (!strcmp(res->tunnel_type, "vxlan")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
 	} else if (!strcmp(res->tunnel_type, "geneve")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_GENEVE;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_GENEVE;
 	} else if (!strcmp(res->tunnel_type, "vxlan-gpe")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN_GPE;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN_GPE;
 	} else if (!strcmp(res->tunnel_type, "ecpri")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_ECPRI;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_ECPRI;
 	} else {
 		fprintf(stderr, "Invalid tunnel type\n");
 		return;
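
Outside testpmd's command plumbing, registering a tunnel UDP port under the
new enum reduces to a few lines; 4789 below is the IANA-assigned VXLAN
port, and port_id/ret are assumed declared:

    struct rte_eth_udp_tunnel tunnel_udp = {
        .udp_port = 4789,
        .prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN, /* was RTE_TUNNEL_TYPE_VXLAN */
    };

    ret = rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel_udp);
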
@@ -11859,7 +11859,7 @@ cmd_set_macsec_offload_on_parsed(
 	if (ret != 0)
 		return;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
 #ifdef RTE_NET_IXGBE
 		ret = rte_pmd_ixgbe_macsec_enable(port_id, en, rp);
 #endif
@@ -11870,7 +11870,7 @@ cmd_set_macsec_offload_on_parsed(
 	switch (ret) {
 	case 0:
 		ports[port_id].dev_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_MACSEC_INSERT;
+						RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 		cmd_reconfig_device_queue(port_id, 1, 1);
 		break;
 	case -ENODEV:
@@ -11956,7 +11956,7 @@ cmd_set_macsec_offload_off_parsed(
 	if (ret != 0)
 		return;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
 #ifdef RTE_NET_IXGBE
 		ret = rte_pmd_ixgbe_macsec_disable(port_id);
 #endif
@@ -11964,7 +11964,7 @@ cmd_set_macsec_offload_off_parsed(
 	switch (ret) {
 	case 0:
 		ports[port_id].dev_conf.txmode.offloads &=
-						~DEV_TX_OFFLOAD_MACSEC_INSERT;
+						~RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 		cmd_reconfig_device_queue(port_id, 1, 1);
 		break;
 	case -ENODEV:
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index cad78350dcc9..a18871d461c4 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -86,62 +86,62 @@ static const struct {
 };
 
 const struct rss_type_info rss_type_table[] = {
-	{ "all", ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP | ETH_RSS_TCP |
-		ETH_RSS_UDP | ETH_RSS_SCTP | ETH_RSS_L2_PAYLOAD |
-		ETH_RSS_L2TPV3 | ETH_RSS_ESP | ETH_RSS_AH | ETH_RSS_PFCP |
-		ETH_RSS_GTPU | ETH_RSS_ECPRI | ETH_RSS_MPLS},
+	{ "all", RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP |
+		RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_L2_PAYLOAD |
+		RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP | RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP |
+		RTE_ETH_RSS_GTPU | RTE_ETH_RSS_ECPRI | RTE_ETH_RSS_MPLS},
 	{ "none", 0 },
-	{ "eth", ETH_RSS_ETH },
-	{ "l2-src-only", ETH_RSS_L2_SRC_ONLY },
-	{ "l2-dst-only", ETH_RSS_L2_DST_ONLY },
-	{ "vlan", ETH_RSS_VLAN },
-	{ "s-vlan", ETH_RSS_S_VLAN },
-	{ "c-vlan", ETH_RSS_C_VLAN },
-	{ "ipv4", ETH_RSS_IPV4 },
-	{ "ipv4-frag", ETH_RSS_FRAG_IPV4 },
-	{ "ipv4-tcp", ETH_RSS_NONFRAG_IPV4_TCP },
-	{ "ipv4-udp", ETH_RSS_NONFRAG_IPV4_UDP },
-	{ "ipv4-sctp", ETH_RSS_NONFRAG_IPV4_SCTP },
-	{ "ipv4-other", ETH_RSS_NONFRAG_IPV4_OTHER },
-	{ "ipv6", ETH_RSS_IPV6 },
-	{ "ipv6-frag", ETH_RSS_FRAG_IPV6 },
-	{ "ipv6-tcp", ETH_RSS_NONFRAG_IPV6_TCP },
-	{ "ipv6-udp", ETH_RSS_NONFRAG_IPV6_UDP },
-	{ "ipv6-sctp", ETH_RSS_NONFRAG_IPV6_SCTP },
-	{ "ipv6-other", ETH_RSS_NONFRAG_IPV6_OTHER },
-	{ "l2-payload", ETH_RSS_L2_PAYLOAD },
-	{ "ipv6-ex", ETH_RSS_IPV6_EX },
-	{ "ipv6-tcp-ex", ETH_RSS_IPV6_TCP_EX },
-	{ "ipv6-udp-ex", ETH_RSS_IPV6_UDP_EX },
-	{ "port", ETH_RSS_PORT },
-	{ "vxlan", ETH_RSS_VXLAN },
-	{ "geneve", ETH_RSS_GENEVE },
-	{ "nvgre", ETH_RSS_NVGRE },
-	{ "ip", ETH_RSS_IP },
-	{ "udp", ETH_RSS_UDP },
-	{ "tcp", ETH_RSS_TCP },
-	{ "sctp", ETH_RSS_SCTP },
-	{ "tunnel", ETH_RSS_TUNNEL },
+	{ "eth", RTE_ETH_RSS_ETH },
+	{ "l2-src-only", RTE_ETH_RSS_L2_SRC_ONLY },
+	{ "l2-dst-only", RTE_ETH_RSS_L2_DST_ONLY },
+	{ "vlan", RTE_ETH_RSS_VLAN },
+	{ "s-vlan", RTE_ETH_RSS_S_VLAN },
+	{ "c-vlan", RTE_ETH_RSS_C_VLAN },
+	{ "ipv4", RTE_ETH_RSS_IPV4 },
+	{ "ipv4-frag", RTE_ETH_RSS_FRAG_IPV4 },
+	{ "ipv4-tcp", RTE_ETH_RSS_NONFRAG_IPV4_TCP },
+	{ "ipv4-udp", RTE_ETH_RSS_NONFRAG_IPV4_UDP },
+	{ "ipv4-sctp", RTE_ETH_RSS_NONFRAG_IPV4_SCTP },
+	{ "ipv4-other", RTE_ETH_RSS_NONFRAG_IPV4_OTHER },
+	{ "ipv6", RTE_ETH_RSS_IPV6 },
+	{ "ipv6-frag", RTE_ETH_RSS_FRAG_IPV6 },
+	{ "ipv6-tcp", RTE_ETH_RSS_NONFRAG_IPV6_TCP },
+	{ "ipv6-udp", RTE_ETH_RSS_NONFRAG_IPV6_UDP },
+	{ "ipv6-sctp", RTE_ETH_RSS_NONFRAG_IPV6_SCTP },
+	{ "ipv6-other", RTE_ETH_RSS_NONFRAG_IPV6_OTHER },
+	{ "l2-payload", RTE_ETH_RSS_L2_PAYLOAD },
+	{ "ipv6-ex", RTE_ETH_RSS_IPV6_EX },
+	{ "ipv6-tcp-ex", RTE_ETH_RSS_IPV6_TCP_EX },
+	{ "ipv6-udp-ex", RTE_ETH_RSS_IPV6_UDP_EX },
+	{ "port", RTE_ETH_RSS_PORT },
+	{ "vxlan", RTE_ETH_RSS_VXLAN },
+	{ "geneve", RTE_ETH_RSS_GENEVE },
+	{ "nvgre", RTE_ETH_RSS_NVGRE },
+	{ "ip", RTE_ETH_RSS_IP },
+	{ "udp", RTE_ETH_RSS_UDP },
+	{ "tcp", RTE_ETH_RSS_TCP },
+	{ "sctp", RTE_ETH_RSS_SCTP },
+	{ "tunnel", RTE_ETH_RSS_TUNNEL },
 	{ "l3-pre32", RTE_ETH_RSS_L3_PRE32 },
 	{ "l3-pre40", RTE_ETH_RSS_L3_PRE40 },
 	{ "l3-pre48", RTE_ETH_RSS_L3_PRE48 },
 	{ "l3-pre56", RTE_ETH_RSS_L3_PRE56 },
 	{ "l3-pre64", RTE_ETH_RSS_L3_PRE64 },
 	{ "l3-pre96", RTE_ETH_RSS_L3_PRE96 },
-	{ "l3-src-only", ETH_RSS_L3_SRC_ONLY },
-	{ "l3-dst-only", ETH_RSS_L3_DST_ONLY },
-	{ "l4-src-only", ETH_RSS_L4_SRC_ONLY },
-	{ "l4-dst-only", ETH_RSS_L4_DST_ONLY },
-	{ "esp", ETH_RSS_ESP },
-	{ "ah", ETH_RSS_AH },
-	{ "l2tpv3", ETH_RSS_L2TPV3 },
-	{ "pfcp", ETH_RSS_PFCP },
-	{ "pppoe", ETH_RSS_PPPOE },
-	{ "gtpu", ETH_RSS_GTPU },
-	{ "ecpri", ETH_RSS_ECPRI },
-	{ "mpls", ETH_RSS_MPLS },
-	{ "ipv4-chksum", ETH_RSS_IPV4_CHKSUM },
-	{ "l4-chksum", ETH_RSS_L4_CHKSUM },
+	{ "l3-src-only", RTE_ETH_RSS_L3_SRC_ONLY },
+	{ "l3-dst-only", RTE_ETH_RSS_L3_DST_ONLY },
+	{ "l4-src-only", RTE_ETH_RSS_L4_SRC_ONLY },
+	{ "l4-dst-only", RTE_ETH_RSS_L4_DST_ONLY },
+	{ "esp", RTE_ETH_RSS_ESP },
+	{ "ah", RTE_ETH_RSS_AH },
+	{ "l2tpv3", RTE_ETH_RSS_L2TPV3 },
+	{ "pfcp", RTE_ETH_RSS_PFCP },
+	{ "pppoe", RTE_ETH_RSS_PPPOE },
+	{ "gtpu", RTE_ETH_RSS_GTPU },
+	{ "ecpri", RTE_ETH_RSS_ECPRI },
+	{ "mpls", RTE_ETH_RSS_MPLS },
+	{ "ipv4-chksum", RTE_ETH_RSS_IPV4_CHKSUM },
+	{ "l4-chksum", RTE_ETH_RSS_L4_CHKSUM },
 	{ NULL, 0 },
 };
 
@@ -538,39 +538,39 @@ static void
 device_infos_display_speeds(uint32_t speed_capa)
 {
 	printf("\n\tDevice speed capability:");
-	if (speed_capa == ETH_LINK_SPEED_AUTONEG)
+	if (speed_capa == RTE_ETH_LINK_SPEED_AUTONEG)
 		printf(" Autonegotiate (all speeds)");
-	if (speed_capa & ETH_LINK_SPEED_FIXED)
+	if (speed_capa & RTE_ETH_LINK_SPEED_FIXED)
 		printf(" Disable autonegotiate (fixed speed)  ");
-	if (speed_capa & ETH_LINK_SPEED_10M_HD)
+	if (speed_capa & RTE_ETH_LINK_SPEED_10M_HD)
 		printf(" 10 Mbps half-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_10M)
+	if (speed_capa & RTE_ETH_LINK_SPEED_10M)
 		printf(" 10 Mbps full-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_100M_HD)
+	if (speed_capa & RTE_ETH_LINK_SPEED_100M_HD)
 		printf(" 100 Mbps half-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_100M)
+	if (speed_capa & RTE_ETH_LINK_SPEED_100M)
 		printf(" 100 Mbps full-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_1G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_1G)
 		printf(" 1 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_2_5G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_2_5G)
 		printf(" 2.5 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_5G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_5G)
 		printf(" 5 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_10G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_10G)
 		printf(" 10 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_20G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_20G)
 		printf(" 20 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_25G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_25G)
 		printf(" 25 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_40G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_40G)
 		printf(" 40 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_50G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_50G)
 		printf(" 50 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_56G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_56G)
 		printf(" 56 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_100G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_100G)
 		printf(" 100 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_200G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_200G)
 		printf(" 200 Gbps  ");
 }
 
@@ -723,9 +723,9 @@ port_infos_display(portid_t port_id)
 
 	printf("\nLink status: %s\n", (link.link_status) ? ("up") : ("down"));
 	printf("Link speed: %s\n", rte_eth_link_speed_to_str(link.link_speed));
-	printf("Link duplex: %s\n", (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+	printf("Link duplex: %s\n", (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 	       ("full-duplex") : ("half-duplex"));
-	printf("Autoneg status: %s\n", (link.link_autoneg == ETH_LINK_AUTONEG) ?
+	printf("Autoneg status: %s\n", (link.link_autoneg == RTE_ETH_LINK_AUTONEG) ?
 	       ("On") : ("Off"));
 
 	if (!rte_eth_dev_get_mtu(port_id, &mtu))
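
RTE_ETH_SPEED_NUM_UNKNOWN keeps its role as the "cannot be reported"
sentinel after the rename; the matching query, sketched for a started port:

    struct rte_eth_link link;

    if (rte_eth_link_get_nowait(port_id, &link) < 0)
        return;
    if (link.link_status == RTE_ETH_LINK_UP &&
        link.link_speed != RTE_ETH_SPEED_NUM_UNKNOWN)
        printf("link up at %s\n",
               rte_eth_link_speed_to_str(link.link_speed));
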
@@ -743,22 +743,22 @@ port_infos_display(portid_t port_id)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 	if (vlan_offload >= 0){
 		printf("VLAN offload: \n");
-		if (vlan_offload & ETH_VLAN_STRIP_OFFLOAD)
+		if (vlan_offload & RTE_ETH_VLAN_STRIP_OFFLOAD)
 			printf("  strip on, ");
 		else
 			printf("  strip off, ");
 
-		if (vlan_offload & ETH_VLAN_FILTER_OFFLOAD)
+		if (vlan_offload & RTE_ETH_VLAN_FILTER_OFFLOAD)
 			printf("filter on, ");
 		else
 			printf("filter off, ");
 
-		if (vlan_offload & ETH_VLAN_EXTEND_OFFLOAD)
+		if (vlan_offload & RTE_ETH_VLAN_EXTEND_OFFLOAD)
 			printf("extend on, ");
 		else
 			printf("extend off, ");
 
-		if (vlan_offload & ETH_QINQ_STRIP_OFFLOAD)
+		if (vlan_offload & RTE_ETH_QINQ_STRIP_OFFLOAD)
 			printf("qinq strip on\n");
 		else
 			printf("qinq strip off\n");
@@ -2953,8 +2953,8 @@ port_rss_reta_info(portid_t port_id,
 	}
 
 	for (i = 0; i < nb_entries; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (!(reta_conf[idx].mask & (1ULL << shift)))
 			continue;
 		printf("RSS RETA configuration: hash index=%u, queue=%u\n",
@@ -3427,7 +3427,7 @@ dcb_fwd_config_setup(void)
 	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
 		fwd_lcores[lc_id]->stream_nb = 0;
 		fwd_lcores[lc_id]->stream_idx = sm_id;
-		for (i = 0; i < ETH_MAX_VMDQ_POOL; i++) {
+		for (i = 0; i < RTE_ETH_MAX_VMDQ_POOL; i++) {
 			/* if the nb_queue is zero, means this tc is
 			 * not enabled on the POOL
 			 */
@@ -4490,11 +4490,11 @@ vlan_extend_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_VLAN_EXTEND_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+		vlan_offload |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	} else {
-		vlan_offload &= ~ETH_VLAN_EXTEND_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
+		vlan_offload &= ~RTE_ETH_VLAN_EXTEND_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4520,11 +4520,11 @@ rx_vlan_strip_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
-		vlan_offload &= ~ETH_VLAN_STRIP_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		vlan_offload &= ~RTE_ETH_VLAN_STRIP_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4565,11 +4565,11 @@ rx_vlan_filter_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_VLAN_FILTER_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+		vlan_offload |= RTE_ETH_VLAN_FILTER_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	} else {
-		vlan_offload &= ~ETH_VLAN_FILTER_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+		vlan_offload &= ~RTE_ETH_VLAN_FILTER_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4595,11 +4595,11 @@ rx_vlan_qinq_strip_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_QINQ_STRIP_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+		vlan_offload |= RTE_ETH_QINQ_STRIP_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 	} else {
-		vlan_offload &= ~ETH_QINQ_STRIP_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+		vlan_offload &= ~RTE_ETH_QINQ_STRIP_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
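
rte_eth_dev_get_vlan_offload() still returns the RTE_ETH_*_OFFLOAD bits as
a mask (negative errno on failure), so each helper above is a
read-modify-write; condensed to a single offload:

    int mask = rte_eth_dev_get_vlan_offload(port_id);

    if (mask < 0)
        return mask;                    /* query failed */
    mask |= RTE_ETH_VLAN_STRIP_OFFLOAD; /* or &= ~... to disable */
    return rte_eth_dev_set_vlan_offload(port_id, mask);
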
@@ -4669,7 +4669,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
 		return;
 
 	if (ports[port_id].dev_conf.txmode.offloads &
-	    DEV_TX_OFFLOAD_QINQ_INSERT) {
+	    RTE_ETH_TX_OFFLOAD_QINQ_INSERT) {
 		fprintf(stderr, "Error, as QinQ has been enabled.\n");
 		return;
 	}
@@ -4678,7 +4678,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
 	if (ret != 0)
 		return;
 
-	if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VLAN_INSERT) == 0) {
+	if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) == 0) {
 		fprintf(stderr,
 			"Error: vlan insert is not supported by port %d\n",
 			port_id);
@@ -4686,7 +4686,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
 	}
 
 	tx_vlan_reset(port_id);
-	ports[port_id].dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
+	ports[port_id].dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	ports[port_id].tx_vlan_id = vlan_id;
 }
 
@@ -4705,7 +4705,7 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
 	if (ret != 0)
 		return;
 
-	if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_QINQ_INSERT) == 0) {
+	if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) == 0) {
 		fprintf(stderr,
 			"Error: qinq insert not supported by port %d\n",
 			port_id);
@@ -4713,8 +4713,8 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
 	}
 
 	tx_vlan_reset(port_id);
-	ports[port_id].dev_conf.txmode.offloads |= (DEV_TX_OFFLOAD_VLAN_INSERT |
-						    DEV_TX_OFFLOAD_QINQ_INSERT);
+	ports[port_id].dev_conf.txmode.offloads |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+						    RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
 	ports[port_id].tx_vlan_id = vlan_id;
 	ports[port_id].tx_vlan_id_outer = vlan_id_outer;
 }
@@ -4723,8 +4723,8 @@ void
 tx_vlan_reset(portid_t port_id)
 {
 	ports[port_id].dev_conf.txmode.offloads &=
-				~(DEV_TX_OFFLOAD_VLAN_INSERT |
-				  DEV_TX_OFFLOAD_QINQ_INSERT);
+				~(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				  RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
 	ports[port_id].tx_vlan_id = 0;
 	ports[port_id].tx_vlan_id_outer = 0;
 }
@@ -5130,7 +5130,7 @@ set_queue_rate_limit(portid_t port_id, uint16_t queue_idx, uint16_t rate)
 	ret = eth_link_get_nowait_print_err(port_id, &link);
 	if (ret < 0)
 		return 1;
-	if (link.link_speed != ETH_SPEED_NUM_UNKNOWN &&
+	if (link.link_speed != RTE_ETH_SPEED_NUM_UNKNOWN &&
 	    rate > link.link_speed) {
 		fprintf(stderr,
 			"Invalid rate value:%u bigger than link speed: %u\n",
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 090797318a35..75b24487e72e 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -485,7 +485,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		if (info->l4_proto == IPPROTO_TCP && tso_segsz) {
 			ol_flags |= PKT_TX_IP_CKSUM;
 		} else {
-			if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+			if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
 				ol_flags |= PKT_TX_IP_CKSUM;
 			} else {
 				ipv4_hdr->hdr_checksum = 0;
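
Every protocol branch in this file follows the same hw-else-sw shape;
reduced to the IPv4 header case, a sketch assuming m->l2_len is already
parsed and tx_offloads holds the port's Tx offload mask:

    struct rte_ipv4_hdr *ip = rte_pktmbuf_mtod_offset(m,
            struct rte_ipv4_hdr *, m->l2_len);

    ip->hdr_checksum = 0;
    if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
        m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM; /* NIC fills it */
    } else {
        m->ol_flags |= PKT_TX_IPV4;
        ip->hdr_checksum = rte_ipv4_cksum(ip); /* software fallback */
    }
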
@@ -502,7 +502,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		udp_hdr = (struct rte_udp_hdr *)((char *)l3_hdr + info->l3_len);
 		/* do not recalculate udp cksum if it was 0 */
 		if (udp_hdr->dgram_cksum != 0) {
-			if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+			if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
 				ol_flags |= PKT_TX_UDP_CKSUM;
 			} else {
 				udp_hdr->dgram_cksum = 0;
@@ -517,7 +517,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		tcp_hdr = (struct rte_tcp_hdr *)((char *)l3_hdr + info->l3_len);
 		if (tso_segsz)
 			ol_flags |= PKT_TX_TCP_SEG;
-		else if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+		else if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
 			ol_flags |= PKT_TX_TCP_CKSUM;
 		} else {
 			tcp_hdr->cksum = 0;
@@ -532,7 +532,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 			((char *)l3_hdr + info->l3_len);
 		/* sctp payload must be a multiple of 4 to be
 		 * offloaded */
-		if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
+		if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
 			((ipv4_hdr->total_length & 0x3) == 0)) {
 			ol_flags |= PKT_TX_SCTP_CKSUM;
 		} else {
@@ -559,7 +559,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 		ipv4_hdr->hdr_checksum = 0;
 		ol_flags |= PKT_TX_OUTER_IPV4;
 
-		if (tx_offloads	& DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+		if (tx_offloads	& RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 			ol_flags |= PKT_TX_OUTER_IP_CKSUM;
 		else
 			ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
@@ -576,7 +576,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 		ol_flags |= PKT_TX_TCP_SEG;
 
 	/* Skip SW outer UDP checksum generation if HW supports it */
-	if (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) {
 		if (info->outer_ethertype == _htons(RTE_ETHER_TYPE_IPV4))
 			udp_hdr->dgram_cksum
 				= rte_ipv4_phdr_cksum(ipv4_hdr, ol_flags);
@@ -959,9 +959,9 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		if (info.is_tunnel == 1) {
 			if (info.tunnel_tso_segsz ||
 			    (tx_offloads &
-			     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+			     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
 			    (tx_offloads &
-			     DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+			     RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
 				m->outer_l2_len = info.outer_l2_len;
 				m->outer_l3_len = info.outer_l3_len;
 				m->l2_len = info.l2_len;
@@ -1022,19 +1022,19 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 					rte_be_to_cpu_16(info.outer_ethertype),
 					info.outer_l3_len);
 			/* dump tx packet info */
-			if ((tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-					    DEV_TX_OFFLOAD_UDP_CKSUM |
-					    DEV_TX_OFFLOAD_TCP_CKSUM |
-					    DEV_TX_OFFLOAD_SCTP_CKSUM)) ||
+			if ((tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+					    RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+					    RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+					    RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) ||
 				info.tso_segsz != 0)
 				printf("tx: m->l2_len=%d m->l3_len=%d "
 					"m->l4_len=%d\n",
 					m->l2_len, m->l3_len, m->l4_len);
 			if (info.is_tunnel == 1) {
 				if ((tx_offloads &
-				    DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+				    RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
 				    (tx_offloads &
-				    DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
+				    RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
 				    (tx_ol_flags & PKT_TX_OUTER_IPV6))
 					printf("tx: m->outer_l2_len=%d "
 						"m->outer_l3_len=%d\n",
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 7ebed9fed334..03d026dec169 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -99,11 +99,11 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 	vlan_tci_outer = ports[fs->tx_port].tx_vlan_id_outer;
 
 	tx_offloads = ports[fs->tx_port].dev_conf.txmode.offloads;
-	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ol_flags |= PKT_TX_VLAN_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		ol_flags |= PKT_TX_QINQ_PKT;
-	if (tx_offloads	& DEV_TX_OFFLOAD_MACSEC_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 
 	for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index ee76df7f0323..57e00bca20e7 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -72,11 +72,11 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 	fs->rx_packets += nb_rx;
 	txp = &ports[fs->tx_port];
 	tx_offloads = txp->dev_conf.txmode.offloads;
-	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ol_flags = PKT_TX_VLAN_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		ol_flags |= PKT_TX_QINQ_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 	for (i = 0; i < nb_rx; i++) {
 		if (likely(i < nb_rx - 1))
diff --git a/app/test-pmd/macswap_common.h b/app/test-pmd/macswap_common.h
index 7e9a3590a436..7ade9a686b7c 100644
--- a/app/test-pmd/macswap_common.h
+++ b/app/test-pmd/macswap_common.h
@@ -10,11 +10,11 @@ ol_flags_init(uint64_t tx_offload)
 {
 	uint64_t ol_flags = 0;
 
-	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_VLAN_INSERT) ?
+	ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) ?
 			PKT_TX_VLAN : 0;
-	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_QINQ_INSERT) ?
+	ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) ?
 			PKT_TX_QINQ : 0;
-	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_MACSEC_INSERT) ?
+	ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) ?
 			PKT_TX_MACSEC : 0;
 
 	return ol_flags;
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index afc75f6bd213..cb40917077ea 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -547,29 +547,29 @@ parse_xstats_list(const char *in_str, struct rte_eth_xstat_name **xstats,
 static int
 parse_link_speed(int n)
 {
-	uint32_t speed = ETH_LINK_SPEED_FIXED;
+	uint32_t speed = RTE_ETH_LINK_SPEED_FIXED;
 
 	switch (n) {
 	case 1000:
-		speed |= ETH_LINK_SPEED_1G;
+		speed |= RTE_ETH_LINK_SPEED_1G;
 		break;
 	case 10000:
-		speed |= ETH_LINK_SPEED_10G;
+		speed |= RTE_ETH_LINK_SPEED_10G;
 		break;
 	case 25000:
-		speed |= ETH_LINK_SPEED_25G;
+		speed |= RTE_ETH_LINK_SPEED_25G;
 		break;
 	case 40000:
-		speed |= ETH_LINK_SPEED_40G;
+		speed |= RTE_ETH_LINK_SPEED_40G;
 		break;
 	case 50000:
-		speed |= ETH_LINK_SPEED_50G;
+		speed |= RTE_ETH_LINK_SPEED_50G;
 		break;
 	case 100000:
-		speed |= ETH_LINK_SPEED_100G;
+		speed |= RTE_ETH_LINK_SPEED_100G;
 		break;
 	case 200000:
-		speed |= ETH_LINK_SPEED_200G;
+		speed |= RTE_ETH_LINK_SPEED_200G;
 		break;
 	case 100:
 	case 10:
@@ -1002,13 +1002,13 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "pkt-filter-size")) {
 				if (!strcmp(optarg, "64K"))
 					fdir_conf.pballoc =
-						RTE_FDIR_PBALLOC_64K;
+						RTE_ETH_FDIR_PBALLOC_64K;
 				else if (!strcmp(optarg, "128K"))
 					fdir_conf.pballoc =
-						RTE_FDIR_PBALLOC_128K;
+						RTE_ETH_FDIR_PBALLOC_128K;
 				else if (!strcmp(optarg, "256K"))
 					fdir_conf.pballoc =
-						RTE_FDIR_PBALLOC_256K;
+						RTE_ETH_FDIR_PBALLOC_256K;
 				else
 					rte_exit(EXIT_FAILURE, "pkt-filter-size %s invalid -"
 						 " must be: 64K or 128K or 256K\n",
@@ -1050,34 +1050,34 @@ launch_args_parse(int argc, char** argv)
 			}
 #endif
 			if (!strcmp(lgopts[opt_idx].name, "disable-crc-strip"))
-				rx_offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 			if (!strcmp(lgopts[opt_idx].name, "enable-lro"))
-				rx_offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 			if (!strcmp(lgopts[opt_idx].name, "enable-scatter"))
-				rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 			if (!strcmp(lgopts[opt_idx].name, "enable-rx-cksum"))
-				rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-rx-timestamp"))
-				rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 			if (!strcmp(lgopts[opt_idx].name, "enable-hw-vlan"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-vlan-filter"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-vlan-strip"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-vlan-extend"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-qinq-strip"))
-				rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 
 			if (!strcmp(lgopts[opt_idx].name, "enable-drop-en"))
 				rx_drop_en = 1;
@@ -1099,13 +1099,13 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "forward-mode"))
 				set_pkt_forwarding_mode(optarg);
 			if (!strcmp(lgopts[opt_idx].name, "rss-ip"))
-				rss_hf = ETH_RSS_IP;
+				rss_hf = RTE_ETH_RSS_IP;
 			if (!strcmp(lgopts[opt_idx].name, "rss-udp"))
-				rss_hf = ETH_RSS_UDP;
+				rss_hf = RTE_ETH_RSS_UDP;
 			if (!strcmp(lgopts[opt_idx].name, "rss-level-inner"))
-				rss_hf |= ETH_RSS_LEVEL_INNERMOST;
+				rss_hf |= RTE_ETH_RSS_LEVEL_INNERMOST;
 			if (!strcmp(lgopts[opt_idx].name, "rss-level-outer"))
-				rss_hf |= ETH_RSS_LEVEL_OUTERMOST;
+				rss_hf |= RTE_ETH_RSS_LEVEL_OUTERMOST;
 			if (!strcmp(lgopts[opt_idx].name, "rxq")) {
 				n = atoi(optarg);
 				if (n >= 0 && check_nb_rxq((queueid_t)n) == 0)
@@ -1495,12 +1495,12 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "rx-mq-mode")) {
 				char *end = NULL;
 				n = strtoul(optarg, &end, 16);
-				if (n >= 0 && n <= ETH_MQ_RX_VMDQ_DCB_RSS)
+				if (n >= 0 && n <= RTE_ETH_MQ_RX_VMDQ_DCB_RSS)
 					rx_mq_mode = (enum rte_eth_rx_mq_mode)n;
 				else
 					rte_exit(EXIT_FAILURE,
 						 "rx-mq-mode must be >= 0 and <= %d\n",
-						 ETH_MQ_RX_VMDQ_DCB_RSS);
+						 RTE_ETH_MQ_RX_VMDQ_DCB_RSS);
 			}
 			if (!strcmp(lgopts[opt_idx].name, "record-core-cycles"))
 				record_core_cycles = 1;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 6d5bbc82404e..abfa8395ccdc 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -349,7 +349,7 @@ uint64_t noisy_lkup_num_reads_writes;
 /*
  * Receive Side Scaling (RSS) configuration.
  */
-uint64_t rss_hf = ETH_RSS_IP; /* RSS IP by default. */
+uint64_t rss_hf = RTE_ETH_RSS_IP; /* RSS IP by default. */
 
 /*
  * Port topology configuration
@@ -460,12 +460,12 @@ lcoreid_t latencystats_lcore_id = -1;
 struct rte_eth_rxmode rx_mode;
 
 struct rte_eth_txmode tx_mode = {
-	.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
+	.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
 };
 
-struct rte_fdir_conf fdir_conf = {
+struct rte_eth_fdir_conf fdir_conf = {
 	.mode = RTE_FDIR_MODE_NONE,
-	.pballoc = RTE_FDIR_PBALLOC_64K,
+	.pballoc = RTE_ETH_FDIR_PBALLOC_64K,
 	.status = RTE_FDIR_REPORT_STATUS,
 	.mask = {
 		.vlan_tci_mask = 0xFFEF,
@@ -524,7 +524,7 @@ uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
 /*
  * hexadecimal bitmask of RX mq mode can be enabled.
  */
-enum rte_eth_rx_mq_mode rx_mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
+enum rte_eth_rx_mq_mode rx_mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
 
 /*
  * Used to set forced link speed
@@ -1578,9 +1578,9 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
 	if (ret != 0)
 		rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
 
-	if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(port->dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		port->dev_conf.txmode.offloads &=
-			~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Apply Rx offloads configuration */
 	for (i = 0; i < port->dev_info.max_rx_queues; i++)
@@ -1717,8 +1717,8 @@ init_config(void)
 
 	init_port_config();
 
-	gso_types = DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_UDP_TSO;
+	gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO;
 	/*
 	 * Records which Mbuf pool to use by each logical core, if needed.
 	 */
@@ -3466,7 +3466,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -3769,17 +3769,17 @@ init_port_config(void)
 			if (port->dev_conf.rx_adv_conf.rss_conf.rss_hf != 0) {
 				port->dev_conf.rxmode.mq_mode =
 					(enum rte_eth_rx_mq_mode)
-						(rx_mq_mode & ETH_MQ_RX_RSS);
+						(rx_mq_mode & RTE_ETH_MQ_RX_RSS);
 			} else {
-				port->dev_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+				port->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
 				port->dev_conf.rxmode.offloads &=
-						~DEV_RX_OFFLOAD_RSS_HASH;
+						~RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 				for (i = 0;
 				     i < port->dev_info.nb_rx_queues;
 				     i++)
 					port->rx_conf[i].offloads &=
-						~DEV_RX_OFFLOAD_RSS_HASH;
+						~RTE_ETH_RX_OFFLOAD_RSS_HASH;
 			}
 		}
 
@@ -3867,9 +3867,9 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 		vmdq_rx_conf->enable_default_pool = 0;
 		vmdq_rx_conf->default_pool = 0;
 		vmdq_rx_conf->nb_queue_pools =
-			(num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+			(num_tcs ==  RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
 		vmdq_tx_conf->nb_queue_pools =
-			(num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+			(num_tcs ==  RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
 
 		vmdq_rx_conf->nb_pool_maps = vmdq_rx_conf->nb_queue_pools;
 		for (i = 0; i < vmdq_rx_conf->nb_pool_maps; i++) {
@@ -3877,7 +3877,7 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 			vmdq_rx_conf->pool_map[i].pools =
 				1 << (i % vmdq_rx_conf->nb_queue_pools);
 		}
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			vmdq_rx_conf->dcb_tc[i] = i % num_tcs;
 			vmdq_tx_conf->dcb_tc[i] = i % num_tcs;
 		}
@@ -3885,8 +3885,8 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 		/* set DCB mode of RX and TX of multiple queues */
 		eth_conf->rxmode.mq_mode =
 				(enum rte_eth_rx_mq_mode)
-					(rx_mq_mode & ETH_MQ_RX_VMDQ_DCB);
-		eth_conf->txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+					(rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB);
+		eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
 	} else {
 		struct rte_eth_dcb_rx_conf *rx_conf =
 				&eth_conf->rx_adv_conf.dcb_rx_conf;
@@ -3902,23 +3902,23 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 		rx_conf->nb_tcs = num_tcs;
 		tx_conf->nb_tcs = num_tcs;
 
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			rx_conf->dcb_tc[i] = i % num_tcs;
 			tx_conf->dcb_tc[i] = i % num_tcs;
 		}
 
 		eth_conf->rxmode.mq_mode =
 				(enum rte_eth_rx_mq_mode)
-					(rx_mq_mode & ETH_MQ_RX_DCB_RSS);
+					(rx_mq_mode & RTE_ETH_MQ_RX_DCB_RSS);
 		eth_conf->rx_adv_conf.rss_conf = rss_conf;
-		eth_conf->txmode.mq_mode = ETH_MQ_TX_DCB;
+		eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_DCB;
 	}
 
 	if (pfc_en)
 		eth_conf->dcb_capability_en =
-				ETH_DCB_PG_SUPPORT | ETH_DCB_PFC_SUPPORT;
+				RTE_ETH_DCB_PG_SUPPORT | RTE_ETH_DCB_PFC_SUPPORT;
 	else
-		eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
+		eth_conf->dcb_capability_en = RTE_ETH_DCB_PG_SUPPORT;
 
 	return 0;
 }
@@ -3947,7 +3947,7 @@ init_port_dcb_config(portid_t pid,
 	retval = get_eth_dcb_conf(pid, &port_conf, dcb_mode, num_tcs, pfc_en);
 	if (retval < 0)
 		return retval;
-	port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	/* re-configure the device . */
 	retval = rte_eth_dev_configure(pid, nb_rxq, nb_rxq, &port_conf);
@@ -3997,7 +3997,7 @@ init_port_dcb_config(portid_t pid,
 
 	rxtx_port_config(pid);
 	/* VLAN filter */
-	rte_port->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	rte_port->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	for (i = 0; i < RTE_DIM(vlan_tags); i++)
 		rx_vft_set(pid, vlan_tags[i], 1);
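
For reference, the capability-check pattern the testpmd hunks above rely on, as a minimal standalone sketch with the renamed constants (port id, queue counts and surrounding init code are assumed; on real hardware rss_hf should also be masked with dev_info.flow_type_rss_offloads):

    #include <stdlib.h>
    #include <rte_ethdev.h>
    #include <rte_debug.h>

    static void
    port_configure(uint16_t port_id)
    {
            struct rte_eth_dev_info dev_info;
            struct rte_eth_conf conf = {
                    .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
                    .rx_adv_conf = { .rss_conf = { .rss_hf = RTE_ETH_RSS_IP } },
                    .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE },
            };

            if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
                    rte_exit(EXIT_FAILURE, "cannot get device info\n");
            /* Request the offload only if the port advertises it. */
            if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
                    conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
            if (rte_eth_dev_configure(port_id, 1, 1, &conf) != 0)
                    rte_exit(EXIT_FAILURE, "cannot configure port %u\n", port_id);
    }
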
 
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index bf3669134aa0..cd1e623ad67a 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -493,7 +493,7 @@ extern lcoreid_t bitrate_lcore_id;
 extern uint8_t bitrate_enabled;
 #endif
 
-extern struct rte_fdir_conf fdir_conf;
+extern struct rte_eth_fdir_conf fdir_conf;
 
 extern uint32_t max_rx_pkt_len;
 
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index e45f8840c91c..9eb7992815e8 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -354,11 +354,11 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	tx_offloads = txp->dev_conf.txmode.offloads;
 	vlan_tci = txp->tx_vlan_id;
 	vlan_tci_outer = txp->tx_vlan_id_outer;
-	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ol_flags = PKT_TX_VLAN_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		ol_flags |= PKT_TX_QINQ_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 
 	/*
diff --git a/app/test/test_ethdev_link.c b/app/test/test_ethdev_link.c
index ee11987bae28..6248aea49abd 100644
--- a/app/test/test_ethdev_link.c
+++ b/app/test/test_ethdev_link.c
@@ -14,10 +14,10 @@ test_link_status_up_default(void)
 {
 	int ret = 0;
 	struct rte_eth_link link_status = {
-		.link_speed = ETH_SPEED_NUM_2_5G,
-		.link_status = ETH_LINK_UP,
-		.link_autoneg = ETH_LINK_AUTONEG,
-		.link_duplex = ETH_LINK_FULL_DUPLEX
+		.link_speed = RTE_ETH_SPEED_NUM_2_5G,
+		.link_status = RTE_ETH_LINK_UP,
+		.link_autoneg = RTE_ETH_LINK_AUTONEG,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -27,9 +27,9 @@ test_link_status_up_default(void)
 	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg",
 		text, strlen(text), "Invalid default link status string");
 
-	link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link_status.link_autoneg = ETH_LINK_FIXED;
-	link_status.link_speed = ETH_SPEED_NUM_10M,
+	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link_status.link_autoneg = RTE_ETH_LINK_FIXED;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_10M;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #2: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -37,7 +37,7 @@ test_link_status_up_default(void)
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
-	link_status.link_speed = ETH_SPEED_NUM_UNKNOWN;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -45,7 +45,7 @@ test_link_status_up_default(void)
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
-	link_status.link_speed = ETH_SPEED_NUM_NONE;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -54,9 +54,9 @@ test_link_status_up_default(void)
 		"string with HDX");
 
 	/* test max str len */
-	link_status.link_speed = ETH_SPEED_NUM_200G;
-	link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link_status.link_autoneg = ETH_LINK_AUTONEG;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_200G;
+	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link_status.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #4:len = %d, %s\n", ret, text);
 	RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
@@ -69,10 +69,10 @@ test_link_status_down_default(void)
 {
 	int ret = 0;
 	struct rte_eth_link link_status = {
-		.link_speed = ETH_SPEED_NUM_2_5G,
-		.link_status = ETH_LINK_DOWN,
-		.link_autoneg = ETH_LINK_AUTONEG,
-		.link_duplex = ETH_LINK_FULL_DUPLEX
+		.link_speed = RTE_ETH_SPEED_NUM_2_5G,
+		.link_status = RTE_ETH_LINK_DOWN,
+		.link_autoneg = RTE_ETH_LINK_AUTONEG,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -90,9 +90,9 @@ test_link_status_invalid(void)
 	int ret = 0;
 	struct rte_eth_link link_status = {
 		.link_speed = 55555,
-		.link_status = ETH_LINK_UP,
-		.link_autoneg = ETH_LINK_AUTONEG,
-		.link_duplex = ETH_LINK_FULL_DUPLEX
+		.link_status = RTE_ETH_LINK_UP,
+		.link_autoneg = RTE_ETH_LINK_AUTONEG,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -116,21 +116,21 @@ test_link_speed_all_values(void)
 		const char *value;
 		uint32_t link_speed;
 	} speed_str_map[] = {
-		{ "None",   ETH_SPEED_NUM_NONE },
-		{ "10 Mbps",  ETH_SPEED_NUM_10M },
-		{ "100 Mbps", ETH_SPEED_NUM_100M },
-		{ "1 Gbps",   ETH_SPEED_NUM_1G },
-		{ "2.5 Gbps", ETH_SPEED_NUM_2_5G },
-		{ "5 Gbps",   ETH_SPEED_NUM_5G },
-		{ "10 Gbps",  ETH_SPEED_NUM_10G },
-		{ "20 Gbps",  ETH_SPEED_NUM_20G },
-		{ "25 Gbps",  ETH_SPEED_NUM_25G },
-		{ "40 Gbps",  ETH_SPEED_NUM_40G },
-		{ "50 Gbps",  ETH_SPEED_NUM_50G },
-		{ "56 Gbps",  ETH_SPEED_NUM_56G },
-		{ "100 Gbps", ETH_SPEED_NUM_100G },
-		{ "200 Gbps", ETH_SPEED_NUM_200G },
-		{ "Unknown",  ETH_SPEED_NUM_UNKNOWN },
+		{ "None",   RTE_ETH_SPEED_NUM_NONE },
+		{ "10 Mbps",  RTE_ETH_SPEED_NUM_10M },
+		{ "100 Mbps", RTE_ETH_SPEED_NUM_100M },
+		{ "1 Gbps",   RTE_ETH_SPEED_NUM_1G },
+		{ "2.5 Gbps", RTE_ETH_SPEED_NUM_2_5G },
+		{ "5 Gbps",   RTE_ETH_SPEED_NUM_5G },
+		{ "10 Gbps",  RTE_ETH_SPEED_NUM_10G },
+		{ "20 Gbps",  RTE_ETH_SPEED_NUM_20G },
+		{ "25 Gbps",  RTE_ETH_SPEED_NUM_25G },
+		{ "40 Gbps",  RTE_ETH_SPEED_NUM_40G },
+		{ "50 Gbps",  RTE_ETH_SPEED_NUM_50G },
+		{ "56 Gbps",  RTE_ETH_SPEED_NUM_56G },
+		{ "100 Gbps", RTE_ETH_SPEED_NUM_100G },
+		{ "200 Gbps", RTE_ETH_SPEED_NUM_200G },
+		{ "Unknown",  RTE_ETH_SPEED_NUM_UNKNOWN },
 		{ "Invalid",   50505 }
 	};
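
Outside the test harness the same formatting API is only a couple of lines; a minimal sketch with illustrative link values (usual includes assumed):

    char text[RTE_ETH_LINK_MAX_STR_LEN];
    struct rte_eth_link link = {
            .link_speed = RTE_ETH_SPEED_NUM_25G,
            .link_status = RTE_ETH_LINK_UP,
            .link_autoneg = RTE_ETH_LINK_AUTONEG,
            .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
    };

    /* Returns the number of characters written, negative on error. */
    if (rte_eth_link_to_str(text, sizeof(text), &link) > 0)
            printf("%s\n", text);   /* "Link up at 25 Gbps FDX Autoneg" */
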
 
diff --git a/app/test/test_event_eth_rx_adapter.c b/app/test/test_event_eth_rx_adapter.c
index add4d8a67821..a09253e91814 100644
--- a/app/test/test_event_eth_rx_adapter.c
+++ b/app/test/test_event_eth_rx_adapter.c
@@ -103,7 +103,7 @@ port_init_rx_intr(uint16_t port, struct rte_mempool *mp)
 {
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_NONE,
+			.mq_mode = RTE_ETH_MQ_RX_NONE,
 		},
 		.intr_conf = {
 			.rxq = 1,
@@ -118,7 +118,7 @@ port_init(uint16_t port, struct rte_mempool *mp)
 {
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_NONE,
+			.mq_mode = RTE_ETH_MQ_RX_NONE,
 		},
 	};
 
diff --git a/app/test/test_kni.c b/app/test/test_kni.c
index 96733554b6c4..40ab0d5c4ca4 100644
--- a/app/test/test_kni.c
+++ b/app/test/test_kni.c
@@ -74,7 +74,7 @@ static const struct rte_eth_txconf tx_conf = {
 
 static const struct rte_eth_conf port_conf = {
 	.txmode = {
-		.mq_mode = ETH_DCB_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 5388d18125a6..8a9ef851789f 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -134,11 +134,11 @@ static uint16_t vlan_id = 0x100;
 
 static struct rte_eth_conf default_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 189d2430f27e..351129de2f9b 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -107,11 +107,11 @@ static struct link_bonding_unittest_params test_params  = {
 
 static struct rte_eth_conf default_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index e7bb0497b663..f9eae9397386 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -52,7 +52,7 @@ struct slave_conf {
 
 	struct rte_eth_rss_conf rss_conf;
 	uint8_t rss_key[40];
-	struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
 
 	uint8_t is_slave;
 	struct rte_ring *rxtx_queue[RXTX_QUEUE_COUNT];
@@ -61,7 +61,7 @@ struct slave_conf {
 struct link_bonding_rssconf_unittest_params {
 	uint8_t bond_port_id;
 	struct rte_eth_dev_info bond_dev_info;
-	struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
 	struct slave_conf slave_ports[SLAVE_COUNT];
 
 	struct rte_mempool *mbuf_pool;
@@ -80,27 +80,27 @@ static struct link_bonding_rssconf_unittest_params test_params  = {
  */
 static struct rte_eth_conf default_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
 
 static struct rte_eth_conf rss_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IPV6,
+			.rss_hf = RTE_ETH_RSS_IPV6,
 		},
 	},
 	.lpbk_mode = 0,
@@ -207,13 +207,13 @@ bond_slaves(void)
 static int
 reta_set(uint16_t port_id, uint8_t value, int reta_size)
 {
-	struct rte_eth_rss_reta_entry64 reta_conf[512/RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[512/RTE_ETH_RETA_GROUP_SIZE];
 	int i, j;
 
-	for (i = 0; i < reta_size / RTE_RETA_GROUP_SIZE; i++) {
+	for (i = 0; i < reta_size / RTE_ETH_RETA_GROUP_SIZE; i++) {
 		/* select all fields to set */
 		reta_conf[i].mask = ~0LL;
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			reta_conf[i].reta[j] = value;
 	}
 
@@ -232,8 +232,8 @@ reta_check_synced(struct slave_conf *port)
 	for (i = 0; i < test_params.bond_dev_info.reta_size;
 			i++) {
 
-		int index = i / RTE_RETA_GROUP_SIZE;
-		int shift = i % RTE_RETA_GROUP_SIZE;
+		int index = i / RTE_ETH_RETA_GROUP_SIZE;
+		int shift = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (port->reta_conf[index].reta[shift] !=
 				test_params.bond_reta_conf[index].reta[shift])
@@ -251,7 +251,7 @@ static int
 bond_reta_fetch(void) {
 	unsigned j;
 
-	for (j = 0; j < test_params.bond_dev_info.reta_size / RTE_RETA_GROUP_SIZE;
+	for (j = 0; j < test_params.bond_dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE;
 			j++)
 		test_params.bond_reta_conf[j].mask = ~0LL;
 
@@ -268,7 +268,7 @@ static int
 slave_reta_fetch(struct slave_conf *port) {
 	unsigned j;
 
-	for (j = 0; j < port->dev_info.reta_size / RTE_RETA_GROUP_SIZE; j++)
+	for (j = 0; j < port->dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE; j++)
 		port->reta_conf[j].mask = ~0LL;
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_rss_reta_query(port->port_id,
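
All of the helpers above locate RETA entry i in group i / RTE_ETH_RETA_GROUP_SIZE at slot i % RTE_ETH_RETA_GROUP_SIZE; a minimal standalone query under the same convention (port id 0 and a 128-entry table are assumed):

    struct rte_eth_rss_reta_entry64 reta[128 / RTE_ETH_RETA_GROUP_SIZE];
    int i;

    for (i = 0; i < 128 / RTE_ETH_RETA_GROUP_SIZE; i++)
            reta[i].mask = ~0ULL;   /* select every slot of the group */
    if (rte_eth_dev_rss_reta_query(0, reta, 128) == 0)
            printf("entry 5 -> queue %u\n",
                   reta[5 / RTE_ETH_RETA_GROUP_SIZE].reta[5 % RTE_ETH_RETA_GROUP_SIZE]);
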
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index a3b4f52c65e6..1df86ce080e5 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -62,11 +62,11 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 1,  /* enable loopback */
 };
@@ -155,7 +155,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -822,7 +822,7 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
 		/* bulk alloc rx, full-featured tx */
 		tx_conf.tx_rs_thresh = 32;
 		tx_conf.tx_free_thresh = 32;
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 		return 0;
 	} else if (!strcmp(mode, "hybrid")) {
 		/* bulk alloc rx, vector tx
@@ -831,13 +831,13 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
 		 */
 		tx_conf.tx_rs_thresh = 32;
 		tx_conf.tx_free_thresh = 32;
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 		return 0;
 	} else if (!strcmp(mode, "full")) {
 		/* full feature rx,tx pair */
 		tx_conf.tx_rs_thresh = 32;
 		tx_conf.tx_free_thresh = 32;
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 		return 0;
 	}
 
diff --git a/app/test/virtual_pmd.c b/app/test/virtual_pmd.c
index 7e15b47eb0fb..d9f2e4f66bde 100644
--- a/app/test/virtual_pmd.c
+++ b/app/test/virtual_pmd.c
@@ -53,7 +53,7 @@ static int  virtual_ethdev_stop(struct rte_eth_dev *eth_dev __rte_unused)
 	void *pkt = NULL;
 	struct virtual_ethdev_private *prv = eth_dev->data->dev_private;
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 0;
 	while (rte_ring_dequeue(prv->rx_queue, &pkt) != -ENOENT)
 		rte_pktmbuf_free(pkt);
@@ -168,7 +168,7 @@ virtual_ethdev_link_update_success(struct rte_eth_dev *bonded_eth_dev,
 		int wait_to_complete __rte_unused)
 {
 	if (!bonded_eth_dev->data->dev_started)
-		bonded_eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		bonded_eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -562,9 +562,9 @@ virtual_ethdev_create(const char *name, struct rte_ether_addr *mac_addr,
 	eth_dev->data->nb_rx_queues = (uint16_t)1;
 	eth_dev->data->nb_tx_queues = (uint16_t)1;
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
-	eth_dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
-	eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	eth_dev->data->mac_addrs = rte_zmalloc(name, RTE_ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 53560d3830d7..1c0ea988f239 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -42,7 +42,7 @@ Features of the OCTEON cnxk SSO PMD are:
 - HW managed packets enqueued from ethdev to eventdev exposed through event eth
   RX adapter.
 - N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
   capability while maintaining receive packet order.
 - Full Rx/Tx offload support defined through ethdev queue configuration.
 - HW managed event vectorization on CN10K for packets enqueued from ethdev to
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 11fbebfcd243..0fa57abfa3e0 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -35,7 +35,7 @@ Features of the OCTEON TX2 SSO PMD are:
 - HW managed packets enqueued from ethdev to eventdev exposed through event eth
   RX adapter.
 - N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
   capability while maintaining receive packet order.
 - Full Rx/Tx offload support defined through ethdev queue config.
 
diff --git a/doc/guides/nics/af_packet.rst b/doc/guides/nics/af_packet.rst
index bdd6e7263c85..54feffdef4bd 100644
--- a/doc/guides/nics/af_packet.rst
+++ b/doc/guides/nics/af_packet.rst
@@ -70,5 +70,5 @@ Features and Limitations
 ------------------------
 
 The PMD will re-insert the VLAN tag transparently to the packet if the kernel
-strips it, as long as the ``DEV_RX_OFFLOAD_VLAN_STRIP`` is not enabled by the
+strips it, as long as the ``RTE_ETH_RX_OFFLOAD_VLAN_STRIP`` is not enabled by the
 application.
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index aa6032889a55..b3d10f30dc77 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -877,21 +877,21 @@ processing. This improved performance is derived from a number of optimizations:
     * TX: only the following reduced set of transmit offloads is supported in
       vector mode::
 
-       DEV_TX_OFFLOAD_MBUF_FAST_FREE
+       RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
 
     * RX: only the following reduced set of receive offloads is supported in
       vector mode (note that jumbo MTU is allowed only when the MTU setting
-      does not require `DEV_RX_OFFLOAD_SCATTER` to be enabled)::
-
-       DEV_RX_OFFLOAD_VLAN_STRIP
-       DEV_RX_OFFLOAD_KEEP_CRC
-       DEV_RX_OFFLOAD_IPV4_CKSUM
-       DEV_RX_OFFLOAD_UDP_CKSUM
-       DEV_RX_OFFLOAD_TCP_CKSUM
-       DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
-       DEV_RX_OFFLOAD_OUTER_UDP_CKSUM
-       DEV_RX_OFFLOAD_RSS_HASH
-       DEV_RX_OFFLOAD_VLAN_FILTER
+      does not require `RTE_ETH_RX_OFFLOAD_SCATTER` to be enabled)::
+
+       RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+       RTE_ETH_RX_OFFLOAD_KEEP_CRC
+       RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+       RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+       RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+       RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+       RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+       RTE_ETH_RX_OFFLOAD_RSS_HASH
+       RTE_ETH_RX_OFFLOAD_VLAN_FILTER
 
 The BNXT Vector PMD is enabled in DPDK builds by default. The decision to enable
 vector processing is made at run-time when the port is started; if no transmit
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index 91bdcd065a95..0209730b904a 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -432,7 +432,7 @@ Limitations
 .. code-block:: console
 
      vlan_offload = rte_eth_dev_get_vlan_offload(port);
-     vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
+     vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
      rte_eth_dev_set_vlan_offload(port, vlan_offload);
 
 Another alternative is modify the adapter's ingress VLAN rewrite mode so that
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index d35751d5b5a7..594e98a6b803 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -30,7 +30,7 @@ Speed capabilities
 
 Supports getting the speed capabilities that the current device is capable of.
 
-* **[provides] rte_eth_dev_info**: ``speed_capa:ETH_LINK_SPEED_*``.
+* **[provides] rte_eth_dev_info**: ``speed_capa:RTE_ETH_LINK_SPEED_*``.
 * **[related]  API**: ``rte_eth_dev_info_get()``.
 
 
@@ -101,11 +101,11 @@ Supports Rx interrupts.
 Lock-free Tx queue
 ------------------
 
-If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+If a PMD advertises RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
 invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
 
-* **[uses]    rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
+* **[uses]    rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
 * **[related]  API**: ``rte_eth_tx_burst()``.
 
 
@@ -117,8 +117,8 @@ Fast mbuf free
 Supports optimization for fast release of mbufs following successful Tx.
 Requires that per queue, all mbufs come from the same mempool and has refcnt = 1.
 
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
-* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
 
 
 .. _nic_features_free_tx_mbuf_on_demand:
@@ -177,7 +177,7 @@ Scattered Rx
 
 Supports receiving segmented mbufs.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SCATTER``.
 * **[implements] datapath**: ``Scattered Rx function``.
 * **[implements] rte_eth_dev_data**: ``scattered_rx``.
 * **[provides]   eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -205,12 +205,12 @@ LRO
 
 Supports Large Receive Offload.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
   ``dev_conf.rxmode.max_lro_pkt_size``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
 * **[provides]   rte_eth_dev_info**: ``max_lro_pkt_size``.
 
 
@@ -221,12 +221,12 @@ TSO
 
 Supports TCP Segmentation Offloading.
 
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_TCP_TSO``.
 * **[uses]       rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:`` ``PKT_TX_TCP_SEG``, ``PKT_TX_IPV4``, ``PKT_TX_IPV6``, ``PKT_TX_IP_CKSUM``.
 * **[uses]       mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
 * **[implements] datapath**: ``TSO functionality``.
-* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_TCP_TSO,RTE_ETH_TX_OFFLOAD_UDP_TSO``.
 
 
 .. _nic_features_promiscuous_mode:
@@ -287,9 +287,9 @@ RSS hash
 
 Supports RSS hashing on RX.
 
-* **[uses]     user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_RSS_FLAG``.
+* **[uses]     user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_RSS_FLAG``.
 * **[uses]     user config**: ``dev_conf.rx_adv_conf.rss_conf``.
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
 * **[provides] rte_eth_dev_info**: ``flow_type_rss_offloads``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
 
@@ -302,7 +302,7 @@ Inner RSS
 Supports RX RSS hashing on Inner headers.
 
 * **[uses]    rte_flow_action_rss**: ``level``.
-* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
 
 
@@ -339,7 +339,7 @@ VMDq
 
 Supports Virtual Machine Device Queues (VMDq).
 
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_VMDQ_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_VMDQ_FLAG``.
 * **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
 * **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_rx_conf``.
 * **[uses] user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -362,7 +362,7 @@ DCB
 
 Supports Data Center Bridging (DCB).
 
-* **[uses]       user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_DCB_FLAG``.
+* **[uses]       user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_DCB_FLAG``.
 * **[uses]       user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
 * **[uses]       user config**: ``dev_conf.rx_adv_conf.dcb_rx_conf``.
 * **[uses]       user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -378,7 +378,7 @@ VLAN filter
 
 Supports filtering of a VLAN Tag identifier.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_FILTER``.
 * **[implements] eth_dev_ops**: ``vlan_filter_set``.
 * **[related]    API**: ``rte_eth_dev_vlan_filter()``.
 
@@ -416,13 +416,13 @@ Supports inline crypto processing defined by rte_security library to perform cry
 operations of security protocol while packet is received in NIC. NIC is not aware
 of protocol operations. See Security library and PMD documentation for more details.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[uses]       mbuf**: ``mbuf.l2_len``.
 * **[implements] rte_security_ops**: ``session_create``, ``session_update``,
   ``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
   ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO``
@@ -438,14 +438,14 @@ protocol processing for the security protocol (e.g. IPsec, MACSEC) while the
 packet is received at NIC. The NIC is capable of understanding the security
 protocol operations. See security library and PMD documentation for more details.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[uses]       mbuf**: ``mbuf.l2_len``.
 * **[implements] rte_security_ops**: ``session_create``, ``session_update``,
   ``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``get_userdata``,
   ``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
   ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL``
@@ -459,7 +459,7 @@ CRC offload
 Supports CRC stripping by hardware.
 A PMD assumed to support CRC stripping by default. PMD should advertise if it supports keeping CRC.
 
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_KEEP_CRC``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_KEEP_CRC``.
 
 
 .. _nic_features_vlan_offload:
@@ -469,13 +469,13 @@ VLAN offload
 
 Supports VLAN offload to hardware.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_STRIP,RTE_ETH_RX_OFFLOAD_VLAN_FILTER,RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:PKT_TX_VLAN``, ``mbuf.vlan_tci``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN`` ``mbuf.vlan_tci``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_VLAN_STRIP``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
   ``rte_eth_dev_get_vlan_offload()``.
 
@@ -487,14 +487,14 @@ QinQ offload
 
 Supports QinQ (queue in queue) offload.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ``, ``mbuf.vlan_tci_outer``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.ol_flags:PKT_RX_QINQ``,
   ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN``
   ``mbuf.vlan_tci``, ``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
 
 
 .. _nic_features_fec:
@@ -508,7 +508,7 @@ information to correct the bit errors generated during data packet transmission
 improves signal quality but also brings a delay to signals. This function can be enabled or disabled as required.
 
 * **[implements] eth_dev_ops**: ``fec_get_capability``, ``fec_get``, ``fec_set``.
-* **[provides]   rte_eth_fec_capa**: ``speed:ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
+* **[provides]   rte_eth_fec_capa**: ``speed:RTE_ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
 * **[related]    API**: ``rte_eth_fec_get_capability()``, ``rte_eth_fec_get()``, ``rte_eth_fec_set()``.
 
 
@@ -519,16 +519,16 @@ L3 checksum offload
 
 Supports L3 checksum offload.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[uses]     mbuf**: ``mbuf.l2_len``, ``mbuf.l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
   ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
   ``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
 
 
 .. _nic_features_l4_checksum_offload:
@@ -538,8 +538,8 @@ L4 checksum offload
 
 Supports L4 checksum offload.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -547,8 +547,8 @@ Supports L4 checksum offload.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
   ``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
 
 .. _nic_features_hw_timestamp:
 
@@ -557,10 +557,10 @@ Timestamp offload
 
 Supports Timestamp.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_TIMESTAMP``.
 * **[provides] mbuf**: ``mbuf.timestamp``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
 * **[related] eth_dev_ops**: ``read_clock``.
 
 .. _nic_features_macsec_offload:
@@ -570,11 +570,11 @@ MACsec offload
 
 Supports MACsec.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
 
 
 .. _nic_features_inner_l3_checksum:
@@ -584,16 +584,16 @@ Inner L3 checksum
 
 Supports inner packet L3 checksum.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_IP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
 
 .. _nic_features_inner_l4_checksum:
@@ -603,15 +603,15 @@ Inner L4 checksum
 
 Supports inner packet L4 checksum.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_OUTER_L4_CKSUM_BAD`` | ``PKT_RX_OUTER_L4_CKSUM_GOOD`` | ``PKT_RX_OUTER_L4_CKSUM_INVALID``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
   ``mbuf.ol_flags:PKT_TX_OUTER_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
 
 
 .. _nic_features_shared_rx_queue:
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index ed6afd62703d..bba53f5a64ee 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -78,11 +78,11 @@ To enable via ``RX_OLFLAGS`` use ``RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y``.
 To guarantee the constraint, the following capabilities in ``dev_conf.rxmode.offloads``
 will be checked:
 
-*   ``DEV_RX_OFFLOAD_VLAN_EXTEND``
+*   ``RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``
 
-*   ``DEV_RX_OFFLOAD_CHECKSUM``
+*   ``RTE_ETH_RX_OFFLOAD_CHECKSUM``
 
-*   ``DEV_RX_OFFLOAD_HEADER_SPLIT``
+*   ``RTE_ETH_RX_OFFLOAD_HEADER_SPLIT``
 
 *   ``fdir_conf->mode``
 
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 2efdd1a41bb4..a1e236ad75e5 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -216,21 +216,21 @@ For example,
     *   If the max number of VFs (max_vfs) is set in the range of 1 to 32:
 
         If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then there are totally 32
-        pools (ETH_32_POOLS), and each VF could have 4 Rx queues;
+        pools (RTE_ETH_32_POOLS), and each VF could have 4 Rx queues;
 
         If the number of Rx queues is specified as 2 (``--rxq=2`` in testpmd), then there are totally 32
-        pools (ETH_32_POOLS), and each VF could have 2 Rx queues;
+        pools (RTE_ETH_32_POOLS), and each VF could have 2 Rx queues;
 
     *   If the max number of VFs (max_vfs) is in the range of 33 to 64:
 
         If the number of Rx queues in specified as 4 (``--rxq=4`` in testpmd), then error message is expected
         as ``rxq`` is not correct at this case;
 
-        If the number of rxq is 2 (``--rxq=2`` in testpmd), then there is totally 64 pools (ETH_64_POOLS),
+        If the number of rxq is 2 (``--rxq=2`` in testpmd), then there is totally 64 pools (RTE_ETH_64_POOLS),
         and each VF have 2 Rx queues;
 
-    On host, to enable VF RSS functionality, rx mq mode should be set as ETH_MQ_RX_VMDQ_RSS
-    or ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
+    On host, to enable VF RSS functionality, rx mq mode should be set as RTE_ETH_MQ_RX_VMDQ_RSS
+    or RTE_ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
     It also needs config VF RSS information like hash function, RSS key, RSS key length.
 
 .. note::
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index 20a74b9b5bcd..148d2f5fc2be 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -89,13 +89,13 @@ Other features are supported using optional MACRO configuration. They include:
 
 To guarantee the constraint, capabilities in dev_conf.rxmode.offloads will be checked:
 
-*   DEV_RX_OFFLOAD_VLAN_STRIP
+*   RTE_ETH_RX_OFFLOAD_VLAN_STRIP
 
-*   DEV_RX_OFFLOAD_VLAN_EXTEND
+*   RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
 
-*   DEV_RX_OFFLOAD_CHECKSUM
+*   RTE_ETH_RX_OFFLOAD_CHECKSUM
 
-*   DEV_RX_OFFLOAD_HEADER_SPLIT
+*   RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
 
 *   dev_conf
 
@@ -163,13 +163,13 @@ l3fwd
 ~~~~~
 
 When running l3fwd with vPMD, there is one thing to note.
-In the configuration, ensure that DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
+In the configuration, ensure that RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
 Otherwise, by default, RX vPMD is disabled.
 
 load_balancer
 ~~~~~~~~~~~~~
 
-As in the case of l3fwd, to enable vPMD, do NOT set DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
+As in the case of l3fwd, to enable vPMD, do NOT set RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
 In addition, for improved performance, use -bsz "(32,32),(64,64),(32,32)" in load_balancer to avoid using the default burst size of 144.
 
 
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index dd059b227d8e..86927a0b56b0 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -371,7 +371,7 @@ Limitations
 
 - CRC:
 
-  - ``DEV_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
+  - ``RTE_ETH_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
     for some NICs (such as ConnectX-6 Dx, ConnectX-6 Lx, and BlueField-2).
     The capability bit ``scatter_fcs_w_decap_disable`` shows NIC support.
 
@@ -611,7 +611,7 @@ Driver options
   small-packet traffic.
 
   When MPRQ is enabled, MTU can be larger than the size of
-  user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
+  user-provided mbuf even if RTE_ETH_RX_OFFLOAD_SCATTER isn't enabled. PMD will
   configure large stride size enough to accommodate MTU as long as
   device allows. Note that this can waste system memory compared to enabling Rx
   scatter and multi-segment packet.
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 3ce696b605d1..681010d9ed7d 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -275,7 +275,7 @@ An example utility for eBPF instruction generation in the format of C arrays wil
 be added in next releases
 
 TAP reports on supported RSS functions as part of dev_infos_get callback:
-``ETH_RSS_IP``, ``ETH_RSS_UDP`` and ``ETH_RSS_TCP``.
+``RTE_ETH_RSS_IP``, ``RTE_ETH_RSS_UDP`` and ``RTE_ETH_RSS_TCP``.
 **Known limitation:** TAP supports all of the above hash functions together
 and not in partial combinations.
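
Since the hash types are only honoured as a whole, an application configuring a TAP port would request all three together; a minimal sketch (names and values assumed):

    struct rte_eth_conf conf = {
            .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
            .rx_adv_conf = {
                    .rss_conf = {
                            .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
                                      RTE_ETH_RSS_TCP,
                    },
            },
    };
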
 
diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
index 7bff0aef0b74..9b2c31a2f0bc 100644
--- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
@@ -194,11 +194,11 @@ To segment an outgoing packet, an application must:
 
    - the bit mask of required GSO types. The GSO library uses the same macros as
      those that describe a physical device's TX offloading capabilities (i.e.
-     ``DEV_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
+     ``RTE_ETH_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
      wants to segment TCP/IPv4 packets, it should set gso_types to
-     ``DEV_TX_OFFLOAD_TCP_TSO``. The only other supported values currently
-     supported for gso_types are ``DEV_TX_OFFLOAD_VXLAN_TNL_TSO``, and
-     ``DEV_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
+     ``RTE_ETH_TX_OFFLOAD_TCP_TSO``. The only other supported values currently
+     supported for gso_types are ``RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO``, and
+     ``RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
      allowed.
 
    - a flag, that indicates whether the IPv4 headers of output segments should
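
Translating the gso_types item above into code, the mask is built from the same renamed flags and stored in the GSO context from <rte_gso.h>; a sketch assuming the pools, flag and segment size are set elsewhere:

    struct rte_gso_ctx ctx;   /* direct/indirect pools etc. set elsewhere */

    ctx.gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO |
                    RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
                    RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO;
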
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 2f190b40e43a..dc6186a44ae2 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -137,7 +137,7 @@ a vxlan-encapsulated tcp packet:
     mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM
     set out_ip checksum to 0 in the packet
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
 
 - calculate checksum of out_ip and out_udp::
 
@@ -147,8 +147,8 @@ a vxlan-encapsulated tcp packet:
     set out_ip checksum to 0 in the packet
     set out_udp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM
-  and DEV_TX_OFFLOAD_UDP_CKSUM.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+  and RTE_ETH_TX_OFFLOAD_UDP_CKSUM.
 
 - calculate checksum of in_ip::
 
@@ -158,7 +158,7 @@ a vxlan-encapsulated tcp packet:
     set in_ip checksum to 0 in the packet
 
   This is similar to case 1), but l2_len is different. It is supported
-  on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+  on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
   Note that it can only work if outer L4 checksum is 0.
 
 - calculate checksum of in_ip and in_tcp::
@@ -170,8 +170,8 @@ a vxlan-encapsulated tcp packet:
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
   This is similar to case 2), but l2_len is different. It is supported
-  on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM and
-  DEV_TX_OFFLOAD_TCP_CKSUM.
+  on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM and
+  RTE_ETH_TX_OFFLOAD_TCP_CKSUM.
   Note that it can only work if outer L4 checksum is 0.
 
 - segment inner TCP::
@@ -185,7 +185,7 @@ a vxlan-encapsulated tcp packet:
     set in_tcp checksum to pseudo header without including the IP
       payload length using rte_ipv4_phdr_cksum()
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_TCP_TSO.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_TCP_TSO.
   Note that it can only work if outer L4 checksum is 0.
 
 - calculate checksum of out_ip, in_ip, in_tcp::
@@ -200,8 +200,8 @@ a vxlan-encapsulated tcp packet:
     set in_ip checksum to 0 in the packet
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM,
-  DEV_TX_OFFLOAD_UDP_CKSUM and DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
+  RTE_ETH_TX_OFFLOAD_UDP_CKSUM and RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM.
 
 The list of flags and their precise meaning is described in the mbuf API
 documentation (rte_mbuf.h). Also refer to the testpmd source code
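
As a concrete sketch of the first case in the list above (outer IP checksum only; mb is an assumed mbuf already carrying the outer Ethernet and IPv4 headers):

    struct rte_ipv4_hdr *out_ip;

    mb->l2_len = sizeof(struct rte_ether_hdr);
    mb->l3_len = sizeof(struct rte_ipv4_hdr);
    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM;
    out_ip = rte_pktmbuf_mtod_offset(mb, struct rte_ipv4_hdr *, mb->l2_len);
    out_ip->hdr_checksum = 0;   /* hardware advertising
                                 * RTE_ETH_TX_OFFLOAD_IPV4_CKSUM fills it */
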
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 0d4ac77a7ccf..68312898448c 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -57,7 +57,7 @@ Whenever needed and appropriate, asynchronous communication should be introduced
 
 Avoiding lock contention is a key issue in a multi-core environment.
 To address this issue, PMDs are designed to work with per-core private resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable.
+For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
 In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
 
 To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
@@ -119,7 +119,7 @@ This is also true for the pipe-line model provided all logical cores used are lo
 
 Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
 
-If the PMD is ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
+If the PMD is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
 concurrently on the same tx queue without SW lock. This PMD feature found in some NICs and useful in the following use cases:
 
 *  Remove explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
@@ -127,7 +127,7 @@ concurrently on the same tx queue without SW lock. This PMD feature found in som
 *  In the eventdev use case, avoid dedicating a separate TX core for transmitting and thus
    enables more scaling as all workers can send the packets.
 
-See `Hardware Offload`_ for ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
+See `Hardware Offload`_ for ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
 
 Device Identification, Ownership and Configuration
 --------------------------------------------------
@@ -311,7 +311,7 @@ The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get(
 The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offloading capabilities.
 Supported offloads can be either per-port or per-queue.
 
-Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
+Offloads are enabled using the existing ``RTE_ETH_TX_OFFLOAD_*`` or ``RTE_ETH_RX_OFFLOAD_*`` flags.
 Any requested offloading by an application must be within the device capabilities.
 Any offloading is disabled by default if it is not set in the parameter
 ``dev_conf->[rt]xmode.offloads`` to ``rte_eth_dev_configure()`` and
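
A minimal probe of one such capability, the lock-free Tx flag discussed earlier (port_id assumed valid, usual includes assumed):

    struct rte_eth_dev_info dev_info;

    if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
            rte_exit(EXIT_FAILURE, "cannot get device info\n");
    if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MT_LOCKFREE)
            printf("port %u: concurrent rte_eth_tx_burst() on one queue is safe\n",
                   port_id);
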
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index a2169517c3f9..d798adb83e1d 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1993,23 +1993,23 @@ only matching traffic goes through.
 
 .. table:: RSS
 
-   +---------------+---------------------------------------------+
-   | Field         | Value                                       |
-   +===============+=============================================+
-   | ``func``      | RSS hash function to apply                  |
-   +---------------+---------------------------------------------+
-   | ``level``     | encapsulation level for ``types``           |
-   +---------------+---------------------------------------------+
-   | ``types``     | specific RSS hash types (see ``ETH_RSS_*``) |
-   +---------------+---------------------------------------------+
-   | ``key_len``   | hash key length in bytes                    |
-   +---------------+---------------------------------------------+
-   | ``queue_num`` | number of entries in ``queue``              |
-   +---------------+---------------------------------------------+
-   | ``key``       | hash key                                    |
-   +---------------+---------------------------------------------+
-   | ``queue``     | queue indices to use                        |
-   +---------------+---------------------------------------------+
+   +---------------+-------------------------------------------------+
+   | Field         | Value                                           |
+   +===============+=================================================+
+   | ``func``      | RSS hash function to apply                      |
+   +---------------+-------------------------------------------------+
+   | ``level``     | encapsulation level for ``types``               |
+   +---------------+-------------------------------------------------+
+   | ``types``     | specific RSS hash types (see ``RTE_ETH_RSS_*``) |
+   +---------------+-------------------------------------------------+
+   | ``key_len``   | hash key length in bytes                        |
+   +---------------+-------------------------------------------------+
+   | ``queue_num`` | number of entries in ``queue``                  |
+   +---------------+-------------------------------------------------+
+   | ``key``       | hash key                                        |
+   +---------------+-------------------------------------------------+
+   | ``queue``     | queue indices to use                            |
+   +---------------+-------------------------------------------------+
 
 Action: ``PF``
 ^^^^^^^^^^^^^^
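
For reference, a minimal sketch of filling these fields with the renamed
RTE_ETH_RSS_* types (the queue list and hash types are illustrative):

    #include <rte_common.h>
    #include <rte_flow.h>

    /* Spread matching traffic over queues 0 and 1 with the default hash
     * function; a zero key_len plus NULL key keeps the current RSS key. */
    static const uint16_t rss_queues[] = { 0, 1 };

    static const struct rte_flow_action_rss rss_action = {
        .func = RTE_ETH_HASH_FUNCTION_DEFAULT,
        .level = 0, /* outermost encapsulation */
        .types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
        .key_len = 0,
        .key = NULL,
        .queue_num = RTE_DIM(rss_queues),
        .queue = rss_queues,
    };
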
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index ad92c16868c1..46c9b51d1bf9 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -569,7 +569,7 @@ created by the application is attached to the security session by the API
 
 For Inline Crypto and Inline protocol offload, device specific defined metadata is
 updated in the mbuf using ``rte_security_set_pkt_metadata()`` if
-``DEV_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
+``RTE_ETH_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
 
 For inline protocol offloaded ingress traffic, the application can register a
 pointer, ``userdata`` , in the security session. When the packet is received,
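
For illustration, a sketch of the metadata path described above (the helper
name is hypothetical; 'sess' is assumed to be an already established inline
security session):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_security.h>

    static inline void
    tx_prepare_inline(uint16_t port_id,
                      const struct rte_eth_dev_info *dev_info,
                      struct rte_security_session *sess, struct rte_mbuf *m)
    {
        /* Mark the packet for inline security processing. */
        m->ol_flags |= PKT_TX_SEC_OFFLOAD;

        /* Attach device specific metadata only when the PMD asks for it. */
        if (dev_info->tx_offload_capa & RTE_ETH_TX_OFFLOAD_SEC_NEED_MDATA) {
            struct rte_security_ctx *ctx =
                rte_eth_dev_get_sec_ctx(port_id);

            rte_security_set_pkt_metadata(ctx, sess, m, NULL);
        }
    }
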
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index cc2b89850b07..f11550dc78ac 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -69,22 +69,16 @@ Deprecation Notices
  ``RTE_ETH_FLOW_MAX`` is one sample of the mentioned case; adding a new flow
  type will break the ABI because of the ``flex_mask[RTE_ETH_FLOW_MAX]`` array
  usage in the following public struct hierarchy:
-  ``rte_eth_fdir_flex_conf -> rte_fdir_conf -> rte_eth_conf (in the middle)``.
+  ``rte_eth_fdir_flex_conf -> rte_eth_fdir_conf -> rte_eth_conf (in the middle)``.
  Need to identify this kind of usage and fix it in 20.11; otherwise, it blocks
  us from extending the existing enums/defines.
  One solution can be to use a fixed-size array instead of a ``.*MAX.*`` value.
 
-* ethdev: Will add ``RTE_ETH_`` prefix to all ethdev macros/enums in v21.11.
-  Macros will be added for backward compatibility.
-  Backward compatibility macros will be removed on v22.11.
-  A few old backward compatibility macros from 2013 that does not have
-  proper prefix will be removed on v21.11.
-
 * ethdev: The flow director API, including ``rte_eth_conf.fdir_conf`` field,
   and the related structures (``rte_fdir_*`` and ``rte_eth_fdir_*``),
   will be removed in DPDK 20.11.
 
-* ethdev: New offload flags ``DEV_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
+* ethdev: New offload flags ``RTE_ETH_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
  This will allow applications to enable or disable PMD updates to
  ``rte_mbuf::hash::fdir``.
   This scheme will allow PMDs to avoid writes to ``rte_mbuf`` fields on Rx and
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 569d3c00b9ee..b327c2bfca1c 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -446,6 +446,9 @@ ABI Changes
 * bbdev: Added capability related to more comprehensive CRC options,
   shifting values of the ``enum rte_bbdev_op_ldpcdec_flag_bitmasks``.
 
+* ethdev: All enums & macros updated to have ``RTE_ETH`` prefix and structures
+  updated to have ``rte_eth`` prefix. DPDK components updated to use new names.
+
 
 Known Issues
 ------------
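
As a side note on the ABI change above, applications that must build against
both older and newer DPDK releases can map the new names onto the old ones
with a small version-guarded shim (illustrative, covering only the flags an
application actually uses):

    #include <rte_version.h>

    #if RTE_VERSION < RTE_VERSION_NUM(21, 11, 0, 0)
    #define RTE_ETH_RX_OFFLOAD_VLAN_STRIP  DEV_RX_OFFLOAD_VLAN_STRIP
    #define RTE_ETH_TX_OFFLOAD_VLAN_INSERT DEV_TX_OFFLOAD_VLAN_INSERT
    #define RTE_ETH_LINK_UP                ETH_LINK_UP
    #define RTE_ETH_LINK_DOWN              ETH_LINK_DOWN
    #endif
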
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 78171b25f96e..782574dd39d5 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -209,12 +209,12 @@ Where:
     device will ensure the ordering. Ordering will be lost when tried in PARALLEL.
 
 *   ``--rxoffload MASK``: RX HW offload capabilities to enable/use on this port
-    (bitmask of DEV_RX_OFFLOAD_* values). It is an optional parameter and
+    (bitmask of RTE_ETH_RX_OFFLOAD_* values). It is an optional parameter and
     allows the user to disable some of the RX HW offload capabilities.
     By default all HW RX offloads are enabled.
 
 *   ``--txoffload MASK``: TX HW offload capabilities to enable/use on this port
-    (bitmask of DEV_TX_OFFLOAD_* values). It is an optional parameter and
+    (bitmask of RTE_ETH_TX_OFFLOAD_* values). It is an optional parameter and
     allows the user to disable some of the TX HW offload capabilities.
     By default all HW TX offloads are enabled.
 
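A minimal sketch of how such masks are typically applied on top of the device
capabilities (function and variable names are illustrative):

    #include <rte_ethdev.h>

    /* All offloads are kept when the user mask is all-ones (the default). */
    static void
    apply_offload_masks(const struct rte_eth_dev_info *dev_info,
                        struct rte_eth_conf *port_conf,
                        uint64_t rx_mask, uint64_t tx_mask)
    {
        port_conf->rxmode.offloads = dev_info->rx_offload_capa & rx_mask;
        port_conf->txmode.offloads = dev_info->tx_offload_capa & tx_mask;
    }
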
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index d23e0b6a7a2e..30edef07ea20 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -546,7 +546,7 @@ The command line options are:
     Set the hexadecimal bitmask of the RX multi-queue modes which can be enabled.
     The default value is 0x7::
 
-       ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG | ETH_MQ_RX_VMDQ_FLAG
+       RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG
 
 *   ``--record-core-cycles``
 
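The default 0x7 is simply the OR of the three mode flags; a sketch of how a
configured mode could be limited by such a mask (the helper name is
illustrative):

    #include <rte_ethdev.h>

    /* 0x7 == RTE_ETH_MQ_RX_RSS_FLAG (0x1) | RTE_ETH_MQ_RX_DCB_FLAG (0x2) |
     *        RTE_ETH_MQ_RX_VMDQ_FLAG (0x4) */
    static enum rte_eth_rx_mq_mode
    limit_rx_mq_mode(enum rte_eth_rx_mq_mode requested, uint32_t allowed_mask)
    {
        return (enum rte_eth_rx_mq_mode)(requested & allowed_mask);
    }
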
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index be52e6f72dab..a922988607ef 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -90,20 +90,20 @@ int dpaa_intr_disable(char *if_name);
 struct usdpaa_ioctl_link_status_args_old {
 	/* network device node name */
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link status(ETH_LINK_UP/DOWN) */
+	/* link status(RTE_ETH_LINK_UP/DOWN) */
 	int     link_status;
 };
 
 struct usdpaa_ioctl_link_status_args {
 	/* network device node name */
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link status(ETH_LINK_UP/DOWN) */
+	/* link status(RTE_ETH_LINK_UP/DOWN) */
 	int     link_status;
-	/* link speed (ETH_SPEED_NUM_)*/
+	/* link speed (RTE_ETH_SPEED_NUM_)*/
 	int     link_speed;
-	/* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+	/* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
 	int     link_duplex;
-	/* link autoneg (ETH_LINK_AUTONEG/FIXED)*/
+	/* link autoneg (RTE_ETH_LINK_AUTONEG/FIXED)*/
 	int     link_autoneg;
 
 };
@@ -111,16 +111,16 @@ struct usdpaa_ioctl_link_status_args {
 struct usdpaa_ioctl_update_link_status_args {
 	/* network device node name */
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link status(ETH_LINK_UP/DOWN) */
+	/* link status(RTE_ETH_LINK_UP/DOWN) */
 	int     link_status;
 };
 
 struct usdpaa_ioctl_update_link_speed {
 	/* network device node name*/
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link speed (ETH_SPEED_NUM_)*/
+	/* link speed (RTE_ETH_SPEED_NUM_)*/
 	int     link_speed;
-	/* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+	/* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
 	int     link_duplex;
 };
 
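Since these fields deliberately mirror struct rte_eth_link, a hypothetical
translation helper (not part of the driver; assumes the definitions above are
in scope) could look like:

    #include <rte_ethdev.h>

    static void
    usdpaa_to_eth_link(const struct usdpaa_ioctl_link_status_args *args,
                       struct rte_eth_link *link)
    {
        link->link_status  = args->link_status;   /* RTE_ETH_LINK_UP/DOWN */
        link->link_speed   = args->link_speed;    /* RTE_ETH_SPEED_NUM_* */
        link->link_duplex  = args->link_duplex;   /* RTE_ETH_LINK_*_DUPLEX */
        link->link_autoneg = args->link_autoneg;  /* RTE_ETH_LINK_AUTONEG/FIXED */
    }
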
diff --git a/drivers/common/cnxk/roc_npc.h b/drivers/common/cnxk/roc_npc.h
index ef85073b17e1..e13d55713625 100644
--- a/drivers/common/cnxk/roc_npc.h
+++ b/drivers/common/cnxk/roc_npc.h
@@ -167,7 +167,7 @@ enum roc_npc_rss_hash_function {
 struct roc_npc_action_rss {
 	enum roc_npc_rss_hash_function func;
 	uint32_t level;
-	uint64_t types;	       /**< Specific RSS hash types (see ETH_RSS_*). */
+	uint64_t types;	       /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
 	uint32_t key_len;      /**< Hash key length in bytes. */
 	uint32_t queue_num;    /**< Number of entries in @p queue. */
 	const uint8_t *key;    /**< Hash key. */
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index a077376dc0fb..8f778f0c2419 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -93,10 +93,10 @@ static const char *valid_arguments[] = {
 };
 
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(af_packet_logtype, NOTICE);
@@ -290,7 +290,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 static int
 eth_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -320,7 +320,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
 		internals->tx_queue[i].sockfd = -1;
 	}
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
@@ -331,7 +331,7 @@ eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 	const struct rte_eth_rxmode *rxmode = &dev_conf->rxmode;
 	struct pmd_internals *internals = dev->data->dev_private;
 
-	internals->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	internals->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 	return 0;
 }
 
@@ -346,9 +346,9 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_queues = (uint16_t)internals->nb_queues;
 	dev_info->max_tx_queues = (uint16_t)internals->nb_queues;
 	dev_info->min_rx_bufsize = 0;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_VLAN_INSERT;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	return 0;
 }
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index b362ccdcd38c..e156246f24df 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -163,10 +163,10 @@ static const char * const valid_arguments[] = {
 };
 
 static const struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_AUTONEG
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_AUTONEG
 };
 
 /* List which tracks PMDs to facilitate sharing UMEMs across them. */
@@ -652,7 +652,7 @@ eth_af_xdp_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 static int
 eth_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -661,7 +661,7 @@ eth_dev_start(struct rte_eth_dev *dev)
 static int
 eth_dev_stop(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index 377299b14c7a..b618cba3f023 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -736,14 +736,14 @@ eth_ark_dev_info_get(struct rte_eth_dev *dev,
 		.nb_align = ARK_TX_MIN_QUEUE}; /* power of 2 */
 
 	/* ARK PMD supports all line rates, how do we indicate that here ?? */
-	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
-				ETH_LINK_SPEED_10G |
-				ETH_LINK_SPEED_25G |
-				ETH_LINK_SPEED_40G |
-				ETH_LINK_SPEED_50G |
-				ETH_LINK_SPEED_100G);
-
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_TIMESTAMP;
+	dev_info->speed_capa = (RTE_ETH_LINK_SPEED_1G |
+				RTE_ETH_LINK_SPEED_10G |
+				RTE_ETH_LINK_SPEED_25G |
+				RTE_ETH_LINK_SPEED_40G |
+				RTE_ETH_LINK_SPEED_50G |
+				RTE_ETH_LINK_SPEED_100G);
+
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	return 0;
 }
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 5a198f53fce7..f7bfac796c07 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -154,20 +154,20 @@ static struct rte_pci_driver rte_atl_pmd = {
 	.remove = eth_atl_pci_remove,
 };
 
-#define ATL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP \
-			| DEV_RX_OFFLOAD_IPV4_CKSUM \
-			| DEV_RX_OFFLOAD_UDP_CKSUM \
-			| DEV_RX_OFFLOAD_TCP_CKSUM \
-			| DEV_RX_OFFLOAD_MACSEC_STRIP \
-			| DEV_RX_OFFLOAD_VLAN_FILTER)
-
-#define ATL_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT \
-			| DEV_TX_OFFLOAD_IPV4_CKSUM \
-			| DEV_TX_OFFLOAD_UDP_CKSUM \
-			| DEV_TX_OFFLOAD_TCP_CKSUM \
-			| DEV_TX_OFFLOAD_TCP_TSO \
-			| DEV_TX_OFFLOAD_MACSEC_INSERT \
-			| DEV_TX_OFFLOAD_MULTI_SEGS)
+#define ATL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP \
+			| RTE_ETH_RX_OFFLOAD_IPV4_CKSUM \
+			| RTE_ETH_RX_OFFLOAD_UDP_CKSUM \
+			| RTE_ETH_RX_OFFLOAD_TCP_CKSUM \
+			| RTE_ETH_RX_OFFLOAD_MACSEC_STRIP \
+			| RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+
+#define ATL_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT \
+			| RTE_ETH_TX_OFFLOAD_IPV4_CKSUM \
+			| RTE_ETH_TX_OFFLOAD_UDP_CKSUM \
+			| RTE_ETH_TX_OFFLOAD_TCP_CKSUM \
+			| RTE_ETH_TX_OFFLOAD_TCP_TSO \
+			| RTE_ETH_TX_OFFLOAD_MACSEC_INSERT \
+			| RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define SFP_EEPROM_SIZE 0x100
 
@@ -488,7 +488,7 @@ atl_dev_start(struct rte_eth_dev *dev)
 	/* set adapter started */
 	hw->adapter_stopped = 0;
 
-	if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(ERR,
 		"Invalid link_speeds for port %u, fix speed not supported",
 				dev->data->port_id);
@@ -655,18 +655,18 @@ atl_dev_set_link_up(struct rte_eth_dev *dev)
 	uint32_t link_speeds = dev->data->dev_conf.link_speeds;
 	uint32_t speed_mask = 0;
 
-	if (link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		speed_mask = hw->aq_nic_cfg->link_speed_msk;
 	} else {
-		if (link_speeds & ETH_LINK_SPEED_10G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 			speed_mask |= AQ_NIC_RATE_10G;
-		if (link_speeds & ETH_LINK_SPEED_5G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_5G)
 			speed_mask |= AQ_NIC_RATE_5G;
-		if (link_speeds & ETH_LINK_SPEED_1G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed_mask |= AQ_NIC_RATE_1G;
-		if (link_speeds & ETH_LINK_SPEED_2_5G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_2_5G)
 			speed_mask |=  AQ_NIC_RATE_2G5;
-		if (link_speeds & ETH_LINK_SPEED_100M)
+		if (link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed_mask |= AQ_NIC_RATE_100M;
 	}
 
@@ -1127,10 +1127,10 @@ atl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->reta_size = HW_ATL_B0_RSS_REDIRECTION_MAX;
 	dev_info->flow_type_rss_offloads = ATL_RSS_OFFLOAD_ALL;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
-	dev_info->speed_capa |= ETH_LINK_SPEED_100M;
-	dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
-	dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
 
 	return 0;
 }
@@ -1175,10 +1175,10 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
 	u32 fc = AQ_NIC_FC_OFF;
 	int err = 0;
 
-	link.link_status = ETH_LINK_DOWN;
+	link.link_status = RTE_ETH_LINK_DOWN;
 	link.link_speed = 0;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_autoneg = hw->is_autoneg ? ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_autoneg = hw->is_autoneg ? RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
 	memset(&old, 0, sizeof(old));
 
 	/* load old link status */
@@ -1198,8 +1198,8 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
 		return 0;
 	}
 
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_speed = hw->aq_link_status.mbps;
 
 	rte_eth_linkstatus_set(dev, &link);
@@ -1333,7 +1333,7 @@ atl_dev_link_status_print(struct rte_eth_dev *dev)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned int)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -1532,13 +1532,13 @@ atl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	hw->aq_fw_ops->get_flow_control(hw, &fc);
 
 	if (fc == AQ_NIC_FC_OFF)
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	else if ((fc & AQ_NIC_FC_RX) && (fc & AQ_NIC_FC_TX))
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (fc & AQ_NIC_FC_RX)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (fc & AQ_NIC_FC_TX)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 
 	return 0;
 }
@@ -1553,13 +1553,13 @@ atl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	if (hw->aq_fw_ops->set_flow_control == NULL)
 		return -ENOTSUP;
 
-	if (fc_conf->mode == RTE_FC_NONE)
+	if (fc_conf->mode == RTE_ETH_FC_NONE)
 		hw->aq_nic_cfg->flow_control = AQ_NIC_FC_OFF;
-	else if (fc_conf->mode == RTE_FC_RX_PAUSE)
+	else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
 		hw->aq_nic_cfg->flow_control = AQ_NIC_FC_RX;
-	else if (fc_conf->mode == RTE_FC_TX_PAUSE)
+	else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
 		hw->aq_nic_cfg->flow_control = AQ_NIC_FC_TX;
-	else if (fc_conf->mode == RTE_FC_FULL)
+	else if (fc_conf->mode == RTE_ETH_FC_FULL)
 		hw->aq_nic_cfg->flow_control = (AQ_NIC_FC_RX | AQ_NIC_FC_TX);
 
 	if (old_flow_control != hw->aq_nic_cfg->flow_control)
@@ -1727,14 +1727,14 @@ atl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	PMD_INIT_FUNC_TRACE();
 
-	ret = atl_enable_vlan_filter(dev, mask & ETH_VLAN_FILTER_MASK);
+	ret = atl_enable_vlan_filter(dev, mask & RTE_ETH_VLAN_FILTER_MASK);
 
-	cfg->vlan_strip = !!(mask & ETH_VLAN_STRIP_MASK);
+	cfg->vlan_strip = !!(mask & RTE_ETH_VLAN_STRIP_MASK);
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++)
 		hw_atl_rpo_rx_desc_vlan_stripping_set(hw, cfg->vlan_strip, i);
 
-	if (mask & ETH_VLAN_EXTEND_MASK)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK)
 		ret = -ENOTSUP;
 
 	return ret;
@@ -1750,10 +1750,10 @@ atl_vlan_tpid_set(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 	PMD_INIT_FUNC_TRACE();
 
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
+	case RTE_ETH_VLAN_TYPE_INNER:
 		hw_atl_rpf_vlan_inner_etht_set(hw, tpid);
 		break;
-	case ETH_VLAN_TYPE_OUTER:
+	case RTE_ETH_VLAN_TYPE_OUTER:
 		hw_atl_rpf_vlan_outer_etht_set(hw, tpid);
 		break;
 	default:
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index fbc9917ed30d..ed9ef9f0cc52 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -11,15 +11,15 @@
 #include "hw_atl/hw_atl_utils.h"
 
 #define ATL_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define ATL_DEV_PRIVATE_TO_HW(adapter) \
 	(&((struct atl_adapter *)adapter)->hw)
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 0d3460383a50..2ff426892df2 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -145,10 +145,10 @@ atl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
 
 	rxq->l3_csum_enabled = dev->data->dev_conf.rxmode.offloads &
-		DEV_RX_OFFLOAD_IPV4_CKSUM;
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 	rxq->l4_csum_enabled = dev->data->dev_conf.rxmode.offloads &
-		(DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM);
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		(RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		PMD_DRV_LOG(ERR, "PMD does not support KEEP_CRC offload");
 
 	/* allocate memory for the software ring */
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 932ec90265cf..5d94db02c506 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1998,9 +1998,9 @@ avp_dev_configure(struct rte_eth_dev *eth_dev)
 	/* Setup required number of queues */
 	_avp_set_queue_counts(eth_dev);
 
-	mask = (ETH_VLAN_STRIP_MASK |
-		ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK);
+	mask = (RTE_ETH_VLAN_STRIP_MASK |
+		RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK);
 	ret = avp_vlan_offload_set(eth_dev, mask);
 	if (ret < 0) {
 		PMD_DRV_LOG(ERR, "VLAN offload set failed by host, ret=%d\n",
@@ -2140,8 +2140,8 @@ avp_dev_link_update(struct rte_eth_dev *eth_dev,
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	struct rte_eth_link *link = &eth_dev->data->dev_link;
 
-	link->link_speed = ETH_SPEED_NUM_10G;
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_speed = RTE_ETH_SPEED_NUM_10G;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link->link_status = !!(avp->flags & AVP_F_LINKUP);
 
 	return -1;
@@ -2191,8 +2191,8 @@ avp_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->max_rx_pktlen = avp->max_rx_pkt_len;
 	dev_info->max_mac_addrs = AVP_MAX_MAC_ADDRS;
 	if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
-		dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
-		dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+		dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	}
 
 	return 0;
@@ -2205,9 +2205,9 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 	struct rte_eth_conf *dev_conf = &eth_dev->data->dev_conf;
 	uint64_t offloads = dev_conf->rxmode.offloads;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
-			if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 				avp->features |= RTE_AVP_FEATURE_VLAN_OFFLOAD;
 			else
 				avp->features &= ~RTE_AVP_FEATURE_VLAN_OFFLOAD;
@@ -2216,13 +2216,13 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 		}
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			PMD_DRV_LOG(ERR, "VLAN filter offload not supported\n");
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			PMD_DRV_LOG(ERR, "VLAN extend offload not supported\n");
 	}
 
diff --git a/drivers/net/axgbe/axgbe_dev.c b/drivers/net/axgbe/axgbe_dev.c
index ca32ad641873..3aaa2193272f 100644
--- a/drivers/net/axgbe/axgbe_dev.c
+++ b/drivers/net/axgbe/axgbe_dev.c
@@ -840,11 +840,11 @@ static void axgbe_rss_options(struct axgbe_port *pdata)
 	pdata->rss_hf = rss_conf->rss_hf;
 	rss_hf = rss_conf->rss_hf;
 
-	if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+	if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
-	if (rss_hf & (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+	if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
-	if (rss_hf & (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+	if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
 }
 
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 0250256830ac..dab0c6775d1d 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -326,7 +326,7 @@ axgbe_dev_configure(struct rte_eth_dev *dev)
 	struct axgbe_port *pdata =  dev->data->dev_private;
 	/* Checksum offload to hardware */
 	pdata->rx_csum_enable = dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_CHECKSUM;
+				RTE_ETH_RX_OFFLOAD_CHECKSUM;
 	return 0;
 }
 
@@ -335,9 +335,9 @@ axgbe_dev_rx_mq_config(struct rte_eth_dev *dev)
 {
 	struct axgbe_port *pdata = dev->data->dev_private;
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 		pdata->rss_enable = 1;
-	else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+	else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
 		pdata->rss_enable = 0;
 	else
 		return  -1;
@@ -385,7 +385,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
 	rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
 
 	max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 				max_pkt_len > pdata->rx_buf_size)
 		dev_data->scattered_rx = 1;
 
@@ -521,8 +521,8 @@ axgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & (1ULL << shift)) == 0)
 			continue;
 		pdata->rss_table[i] = reta_conf[idx].reta[shift];
@@ -552,8 +552,8 @@ axgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & (1ULL << shift)) == 0)
 			continue;
 		reta_conf[idx].reta[shift] = pdata->rss_table[i];
@@ -590,13 +590,13 @@ axgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 
 	pdata->rss_hf = rss_conf->rss_hf & AXGBE_RSS_OFFLOAD;
 
-	if (pdata->rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+	if (pdata->rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
 	if (pdata->rss_hf &
-	    (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+	    (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
 	if (pdata->rss_hf &
-	    (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+	    (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
 
 	/* Set the RSS options */
@@ -765,7 +765,7 @@ axgbe_dev_link_update(struct rte_eth_dev *dev,
 	link.link_status = pdata->phy_link;
 	link.link_speed = pdata->phy_speed;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			      ETH_LINK_SPEED_FIXED);
+			      RTE_ETH_LINK_SPEED_FIXED);
 	ret = rte_eth_linkstatus_set(dev, &link);
 	if (ret == -1)
 		PMD_DRV_LOG(ERR, "No change in link status\n");
@@ -1208,24 +1208,24 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_pktlen = AXGBE_RX_MAX_BUF_SIZE;
 	dev_info->max_mac_addrs = pdata->hw_feat.addn_mac + 1;
 	dev_info->max_hash_mac_addrs = pdata->hw_feat.hash_table_size;
-	dev_info->speed_capa =  ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM  |
-		DEV_RX_OFFLOAD_TCP_CKSUM  |
-		DEV_RX_OFFLOAD_SCATTER	  |
-		DEV_RX_OFFLOAD_KEEP_CRC;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_SCATTER	  |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if (pdata->hw_feat.rss) {
 		dev_info->flow_type_rss_offloads = AXGBE_RSS_OFFLOAD;
@@ -1262,13 +1262,13 @@ axgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	fc.autoneg = pdata->pause_autoneg;
 
 	if (pdata->rx_pause && pdata->tx_pause)
-		fc.mode = RTE_FC_FULL;
+		fc.mode = RTE_ETH_FC_FULL;
 	else if (pdata->rx_pause)
-		fc.mode = RTE_FC_RX_PAUSE;
+		fc.mode = RTE_ETH_FC_RX_PAUSE;
 	else if (pdata->tx_pause)
-		fc.mode = RTE_FC_TX_PAUSE;
+		fc.mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc.mode = RTE_FC_NONE;
+		fc.mode = RTE_ETH_FC_NONE;
 
 	fc_conf->high_water =  (1024 + (fc.low_water[0] << 9)) / 1024;
 	fc_conf->low_water =  (1024 + (fc.high_water[0] << 9)) / 1024;
@@ -1298,13 +1298,13 @@ axgbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	AXGMAC_IOWRITE(pdata, reg, reg_val);
 	fc.mode = fc_conf->mode;
 
-	if (fc.mode == RTE_FC_FULL) {
+	if (fc.mode == RTE_ETH_FC_FULL) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 1;
-	} else if (fc.mode == RTE_FC_RX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
 		pdata->tx_pause = 0;
 		pdata->rx_pause = 1;
-	} else if (fc.mode == RTE_FC_TX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 0;
 	} else {
@@ -1386,15 +1386,15 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
 
 	fc.mode = pfc_conf->fc.mode;
 
-	if (fc.mode == RTE_FC_FULL) {
+	if (fc.mode == RTE_ETH_FC_FULL) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 1;
 		AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
-	} else if (fc.mode == RTE_FC_RX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
 		pdata->tx_pause = 0;
 		pdata->rx_pause = 1;
 		AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
-	} else if (fc.mode == RTE_FC_TX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 0;
 		AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 0);
@@ -1830,8 +1830,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 	PMD_DRV_LOG(DEBUG, "EDVLP: qinq = 0x%x\n", qinq);
 
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
-		PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_INNER\n");
+	case RTE_ETH_VLAN_TYPE_INNER:
+		PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_INNER\n");
 		if (qinq) {
 			if (tpid != 0x8100 && tpid != 0x88a8)
 				PMD_DRV_LOG(ERR,
@@ -1848,8 +1848,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 				    "Inner type not supported in single tag\n");
 		}
 		break;
-	case ETH_VLAN_TYPE_OUTER:
-		PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_OUTER\n");
+	case RTE_ETH_VLAN_TYPE_OUTER:
+		PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_OUTER\n");
 		if (qinq) {
 			PMD_DRV_LOG(DEBUG, "double tagging is enabled\n");
 			/*Enable outer VLAN tag*/
@@ -1866,11 +1866,11 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 					    "tag supported 0x8100/0x88A8\n");
 		}
 		break;
-	case ETH_VLAN_TYPE_MAX:
-		PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_MAX\n");
+	case RTE_ETH_VLAN_TYPE_MAX:
+		PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_MAX\n");
 		break;
-	case ETH_VLAN_TYPE_UNKNOWN:
-		PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_UNKNOWN\n");
+	case RTE_ETH_VLAN_TYPE_UNKNOWN:
+		PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_UNKNOWN\n");
 		break;
 	}
 	return 0;
@@ -1904,8 +1904,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, CSVL, 0);
 	AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, VLTI, 1);
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 			PMD_DRV_LOG(DEBUG, "Strip ON for device = %s\n",
 				    pdata->eth_dev->device->name);
 			pdata->hw_if.enable_rx_vlan_stripping(pdata);
@@ -1915,8 +1915,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			pdata->hw_if.disable_rx_vlan_stripping(pdata);
 		}
 	}
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 			PMD_DRV_LOG(DEBUG, "Filter ON for device = %s\n",
 				    pdata->eth_dev->device->name);
 			pdata->hw_if.enable_rx_vlan_filtering(pdata);
@@ -1926,14 +1926,14 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			pdata->hw_if.disable_rx_vlan_filtering(pdata);
 		}
 	}
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
 			PMD_DRV_LOG(DEBUG, "enabling vlan extended mode\n");
 			axgbe_vlan_extend_enable(pdata);
 			/* Set global registers with default ethertype*/
-			axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+			axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					    RTE_ETHER_TYPE_VLAN);
-			axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+			axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
 					    RTE_ETHER_TYPE_VLAN);
 		} else {
 			PMD_DRV_LOG(DEBUG, "disabling vlan extended mode\n");
diff --git a/drivers/net/axgbe/axgbe_ethdev.h b/drivers/net/axgbe/axgbe_ethdev.h
index a6226729fe4d..0a3e1c59df1a 100644
--- a/drivers/net/axgbe/axgbe_ethdev.h
+++ b/drivers/net/axgbe/axgbe_ethdev.h
@@ -97,12 +97,12 @@
 
 /* Receive Side Scaling */
 #define AXGBE_RSS_OFFLOAD  ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define AXGBE_RSS_HASH_KEY_SIZE		40
 #define AXGBE_RSS_MAX_TABLE_SIZE	256
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 4f98e695ae74..59fa9175aded 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -597,7 +597,7 @@ static void axgbe_an73_state_machine(struct axgbe_port *pdata)
 		pdata->an_int = 0;
 		axgbe_an73_clear_interrupts(pdata);
 		pdata->eth_dev->data->dev_link.link_status =
-			ETH_LINK_DOWN;
+			RTE_ETH_LINK_DOWN;
 	} else if (pdata->an_state == AXGBE_AN_ERROR) {
 		PMD_DRV_LOG(ERR, "error during auto-negotiation, state=%u\n",
 			    cur_state);
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index c8618d2d6daa..aa2c27ebaa49 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -75,7 +75,7 @@ int axgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		(DMA_CH_INC * rxq->queue_id));
 	rxq->dma_tail_reg = (volatile uint32_t *)((uint8_t *)rxq->dma_regs +
 						  DMA_CH_RDTR_LO);
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -286,7 +286,7 @@ axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
-				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+				if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
 				else
 					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
@@ -430,7 +430,7 @@ uint16_t eth_axgbe_recv_scattered_pkts(void *rx_queue,
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
-				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+				if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
 				else
 					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 567ea2382864..78fc717ec44a 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -94,14 +94,14 @@ bnx2x_link_update(struct rte_eth_dev *dev)
 	link.link_speed = sc->link_vars.line_speed;
 	switch (sc->link_vars.duplex) {
 		case DUPLEX_FULL:
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			break;
 		case DUPLEX_HALF:
-			link.link_duplex = ETH_LINK_HALF_DUPLEX;
+			link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 			break;
 	}
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 	link.link_status = sc->link_vars.link_up;
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -408,7 +408,7 @@ bnx2xvf_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_comple
 	if (sc->old_bulletin.valid_bitmap & (1 << CHANNEL_DOWN)) {
 		PMD_DRV_LOG(ERR, sc, "PF indicated channel is down. "
 				"VF device is no longer operational");
-		dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	}
 
 	return ret;
@@ -534,7 +534,7 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_rx_bufsize = BNX2X_MIN_RX_BUF_SIZE;
 	dev_info->max_rx_pktlen  = BNX2X_MAX_RX_PKT_LEN;
 	dev_info->max_mac_addrs  = BNX2X_MAX_MAC_ADDRS;
-	dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G;
 
 	dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
 	dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
@@ -669,7 +669,7 @@ bnx2x_common_dev_init(struct rte_eth_dev *eth_dev, int is_vf)
 	bnx2x_load_firmware(sc);
 	assert(sc->firmware);
 
-	if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		sc->udp_rss = 1;
 
 	sc->rx_budget = BNX2X_RX_BUDGET;
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 6743cf92b0e6..39bd739c7bc9 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -569,37 +569,37 @@ struct bnxt_rep_info {
 #define BNXT_FW_STATUS_SHUTDOWN		0x100000
 
 #define BNXT_ETH_RSS_SUPPORT (	\
-	ETH_RSS_IPV4 |		\
-	ETH_RSS_NONFRAG_IPV4_TCP |	\
-	ETH_RSS_NONFRAG_IPV4_UDP |	\
-	ETH_RSS_IPV6 |		\
-	ETH_RSS_NONFRAG_IPV6_TCP |	\
-	ETH_RSS_NONFRAG_IPV6_UDP |	\
-	ETH_RSS_LEVEL_MASK)
-
-#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_CKSUM | \
-				     DEV_TX_OFFLOAD_UDP_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_TSO | \
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GRE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_QINQ_INSERT | \
-				     DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
-				     DEV_RX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_UDP_CKSUM | \
-				     DEV_RX_OFFLOAD_TCP_CKSUM | \
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
-				     DEV_RX_OFFLOAD_KEEP_CRC | \
-				     DEV_RX_OFFLOAD_VLAN_EXTEND | \
-				     DEV_RX_OFFLOAD_TCP_LRO | \
-				     DEV_RX_OFFLOAD_SCATTER | \
-				     DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RSS_IPV4 |		\
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP |	\
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP |	\
+	RTE_ETH_RSS_IPV6 |		\
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP |	\
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP |	\
+	RTE_ETH_RSS_LEVEL_MASK)
+
+#define BNXT_DEV_TX_OFFLOAD_SUPPORT (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+				     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+				     RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define BNXT_DEV_RX_OFFLOAD_SUPPORT (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+				     RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_KEEP_CRC | \
+				     RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+				     RTE_ETH_RX_OFFLOAD_TCP_LRO | \
+				     RTE_ETH_RX_OFFLOAD_SCATTER | \
+				     RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index f385723a9f65..2791a5c62db1 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -426,7 +426,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 		goto err_out;
 
 	/* Alloc RSS context only if RSS mode is enabled */
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
 		int j, nr_ctxs = bnxt_rss_ctxts(bp);
 
 		/* RSS table size in Thor is 512.
@@ -458,7 +458,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 	 * setting is not available at this time, it will not be
 	 * configured correctly in the CFA.
 	 */
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 		vnic->vlan_strip = true;
 	else
 		vnic->vlan_strip = false;
@@ -493,7 +493,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 	bnxt_hwrm_vnic_plcmode_cfg(bp, vnic);
 
 	rc = bnxt_hwrm_vnic_tpa_cfg(bp, vnic,
-				    (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) ?
+				    (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) ?
 				    true : false);
 	if (rc)
 		goto err_out;
@@ -923,35 +923,35 @@ uint32_t bnxt_get_speed_capabilities(struct bnxt *bp)
 		link_speed = bp->link_info->support_pam4_speeds;
 
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB)
-		speed_capa |= ETH_LINK_SPEED_100M;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100MBHD)
-		speed_capa |= ETH_LINK_SPEED_100M_HD;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_1GB)
-		speed_capa |= ETH_LINK_SPEED_1G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_2_5GB)
-		speed_capa |= ETH_LINK_SPEED_2_5G;
+		speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_10GB)
-		speed_capa |= ETH_LINK_SPEED_10G;
+		speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_20GB)
-		speed_capa |= ETH_LINK_SPEED_20G;
+		speed_capa |= RTE_ETH_LINK_SPEED_20G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_25GB)
-		speed_capa |= ETH_LINK_SPEED_25G;
+		speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_40GB)
-		speed_capa |= ETH_LINK_SPEED_40G;
+		speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_50GB)
-		speed_capa |= ETH_LINK_SPEED_50G;
+		speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100GB)
-		speed_capa |= ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_50G)
-		speed_capa |= ETH_LINK_SPEED_50G;
+		speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_100G)
-		speed_capa |= ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_200G)
-		speed_capa |= ETH_LINK_SPEED_200G;
+		speed_capa |= RTE_ETH_LINK_SPEED_200G;
 
 	if (bp->link_info->auto_mode ==
 	    HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE)
-		speed_capa |= ETH_LINK_SPEED_FIXED;
+		speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
 
 	return speed_capa;
 }
@@ -995,14 +995,14 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 
 	dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
 	if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	if (bp->vnic_cap_flags & BNXT_VNIC_CAP_VLAN_RX_STRIP)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_STRIP;
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT |
 				    dev_info->tx_queue_offload_capa;
 	if (bp->fw_cap & BNXT_FW_CAP_VLAN_TX_INSERT)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_VLAN_INSERT;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
 
 	dev_info->speed_capa = bnxt_get_speed_capabilities(bp);
@@ -1049,8 +1049,8 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	 */
 
 	/* VMDq resources */
-	vpool = 64; /* ETH_64_POOLS */
-	vrxq = 128; /* ETH_VMDQ_DCB_NUM_QUEUES */
+	vpool = 64; /* RTE_ETH_64_POOLS */
+	vrxq = 128; /* RTE_ETH_VMDQ_DCB_NUM_QUEUES */
 	for (i = 0; i < 4; vpool >>= 1, i++) {
 		if (max_vnics > vpool) {
 			for (j = 0; j < 5; vrxq >>= 1, j++) {
@@ -1145,15 +1145,15 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
 	    (uint32_t)(eth_dev->data->nb_rx_queues) > bp->max_ring_grps)
 		goto resource_error;
 
-	if (!(eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) &&
+	if (!(eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) &&
 	    bp->max_vnics < eth_dev->data->nb_rx_queues)
 		goto resource_error;
 
 	bp->rx_cp_nr_rings = bp->rx_nr_rings;
 	bp->tx_cp_nr_rings = bp->tx_nr_rings;
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rx_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
 
 	bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
@@ -1182,7 +1182,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
 		PMD_DRV_LOG(INFO, "Port %d Link Up - speed %u Mbps - %s\n",
 			eth_dev->data->port_id,
 			(uint32_t)link->link_speed,
-			(link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+			(link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 			("full-duplex") : ("half-duplex\n"));
 	else
 		PMD_DRV_LOG(INFO, "Port %d Link Down\n",
@@ -1199,10 +1199,10 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
 	uint16_t buf_size;
 	int i;
 
-	if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		return 1;
 
-	if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO)
+	if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		return 1;
 
 	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1247,15 +1247,15 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
 	 * a limited subset have been enabled.
 	 */
 	if (eth_dev->data->dev_conf.rxmode.offloads &
-		~(DEV_RX_OFFLOAD_VLAN_STRIP |
-		  DEV_RX_OFFLOAD_KEEP_CRC |
-		  DEV_RX_OFFLOAD_IPV4_CKSUM |
-		  DEV_RX_OFFLOAD_UDP_CKSUM |
-		  DEV_RX_OFFLOAD_TCP_CKSUM |
-		  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		  DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-		  DEV_RX_OFFLOAD_RSS_HASH |
-		  DEV_RX_OFFLOAD_VLAN_FILTER))
+		~(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		  RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		  RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_RSS_HASH |
+		  RTE_ETH_RX_OFFLOAD_VLAN_FILTER))
 		goto use_scalar_rx;
 
 #if defined(RTE_ARCH_X86) && defined(CC_AVX2_SUPPORT)
@@ -1307,7 +1307,7 @@ bnxt_transmit_function(struct rte_eth_dev *eth_dev)
 	 * or tx offloads.
 	 */
 	if (eth_dev->data->scattered_rx ||
-	    (offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) ||
+	    (offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) ||
 	    BNXT_TRUFLOW_EN(bp))
 		goto use_scalar_tx;
 
@@ -1608,10 +1608,10 @@ static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
 
 	bnxt_link_update_op(eth_dev, 1);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
-		vlan_mask |= ETH_VLAN_FILTER_MASK;
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-		vlan_mask |= ETH_VLAN_STRIP_MASK;
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+		vlan_mask |= RTE_ETH_VLAN_FILTER_MASK;
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+		vlan_mask |= RTE_ETH_VLAN_STRIP_MASK;
 	rc = bnxt_vlan_offload_set_op(eth_dev, vlan_mask);
 	if (rc)
 		goto error;
@@ -1833,8 +1833,8 @@ int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete)
 		/* Retrieve link info from hardware */
 		rc = bnxt_get_hwrm_link_config(bp, &new);
 		if (rc) {
-			new.link_speed = ETH_LINK_SPEED_100M;
-			new.link_duplex = ETH_LINK_FULL_DUPLEX;
+			new.link_speed = RTE_ETH_LINK_SPEED_100M;
+			new.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR,
 				"Failed to retrieve link rc = 0x%x!\n", rc);
 			goto out;
@@ -2028,7 +2028,7 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
 	if (!vnic->rss_table)
 		return -EINVAL;
 
-	if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+	if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 		return -EINVAL;
 
 	if (reta_size != tbl_size) {
@@ -2041,8 +2041,8 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
 	for (i = 0; i < reta_size; i++) {
 		struct bnxt_rx_queue *rxq;
 
-		idx = i / RTE_RETA_GROUP_SIZE;
-		sft = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		sft = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (!(reta_conf[idx].mask & (1ULL << sft)))
 			continue;
@@ -2095,8 +2095,8 @@ static int bnxt_reta_query_op(struct rte_eth_dev *eth_dev,
 	}
 
 	for (idx = 0, i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		sft = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		sft = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (reta_conf[idx].mask & (1ULL << sft)) {
 			uint16_t qid;
@@ -2134,7 +2134,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
 	 * If RSS enablement were different than dev_configure,
 	 * then return -EINVAL
 	 */
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (!rss_conf->rss_hf)
 			PMD_DRV_LOG(ERR, "Hash type NONE\n");
 	} else {
@@ -2152,7 +2152,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
 	vnic->hash_type = bnxt_rte_to_hwrm_hash_types(rss_conf->rss_hf);
 	vnic->hash_mode =
 		bnxt_rte_to_hwrm_hash_level(bp, rss_conf->rss_hf,
-					    ETH_RSS_LEVEL(rss_conf->rss_hf));
+					    RTE_ETH_RSS_LEVEL(rss_conf->rss_hf));
 
 	/*
 	 * If hashkey is not specified, use the previously configured
@@ -2197,30 +2197,30 @@ static int bnxt_rss_hash_conf_get_op(struct rte_eth_dev *eth_dev,
 		hash_types = vnic->hash_type;
 		rss_conf->rss_hf = 0;
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4) {
-			rss_conf->rss_hf |= ETH_RSS_IPV4;
+			rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
 			hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6) {
-			rss_conf->rss_hf |= ETH_RSS_IPV6;
+			rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
 			hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
 		}
@@ -2260,17 +2260,17 @@ static int bnxt_flow_ctrl_get_op(struct rte_eth_dev *dev,
 		fc_conf->autoneg = 1;
 	switch (bp->link_info->pause) {
 	case 0:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case (HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX |
 			HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX):
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	}
 	return 0;
@@ -2293,11 +2293,11 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		bp->link_info->auto_pause = 0;
 		bp->link_info->force_pause = 0;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		if (fc_conf->autoneg) {
 			bp->link_info->auto_pause =
 					HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_RX;
@@ -2308,7 +2308,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
 					HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_RX;
 		}
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		if (fc_conf->autoneg) {
 			bp->link_info->auto_pause =
 					HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX;
@@ -2319,7 +2319,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
 					HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_TX;
 		}
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		if (fc_conf->autoneg) {
 			bp->link_info->auto_pause =
 					HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX |
@@ -2350,7 +2350,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
 		return rc;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (bp->vxlan_port_cnt) {
 			PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
 				udp_tunnel->udp_port);
@@ -2364,7 +2364,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
 		tunnel_type =
 			HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_VXLAN;
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (bp->geneve_port_cnt) {
 			PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
 				udp_tunnel->udp_port);
@@ -2413,7 +2413,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
 		return rc;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (!bp->vxlan_port_cnt) {
 			PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
 			return -EINVAL;
@@ -2430,7 +2430,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
 			HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_VXLAN;
 		port = bp->vxlan_fw_dst_port_id;
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (!bp->geneve_port_cnt) {
 			PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
 			return -EINVAL;
@@ -2608,7 +2608,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
 	int rc;
 
 	vnic = BNXT_GET_DEFAULT_VNIC(bp);
-	if (!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)) {
+	if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
 		/* Remove any VLAN filters programmed */
 		for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
 			bnxt_del_vlan_filter(bp, i);
@@ -2628,7 +2628,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
 		bnxt_add_vlan_filter(bp, 0);
 	}
 	PMD_DRV_LOG(DEBUG, "VLAN Filtering: %d\n",
-		    !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER));
+		    !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER));
 
 	return 0;
 }
@@ -2641,7 +2641,7 @@ static int bnxt_free_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 
 	/* Destroy vnic filters and vnic */
 	if (bp->eth_dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_VLAN_FILTER) {
+	    RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
 			bnxt_del_vlan_filter(bp, i);
 	}
@@ -2680,7 +2680,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
 		return rc;
 
 	if (bp->eth_dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_VLAN_FILTER) {
+	    RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		rc = bnxt_add_vlan_filter(bp, 0);
 		if (rc)
 			return rc;
@@ -2698,7 +2698,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
 		return rc;
 
 	PMD_DRV_LOG(DEBUG, "VLAN Strip Offload: %d\n",
-		    !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP));
+		    !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP));
 
 	return rc;
 }
@@ -2718,22 +2718,22 @@ bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask)
 	if (!dev->data->dev_started)
 		return 0;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* Enable or disable VLAN filtering */
 		rc = bnxt_config_vlan_hw_filter(bp, rx_offloads);
 		if (rc)
 			return rc;
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
 		rc = bnxt_config_vlan_hw_stripping(bp, rx_offloads);
 		if (rc)
 			return rc;
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			PMD_DRV_LOG(DEBUG, "Extend VLAN supported\n");
 		else
 			PMD_DRV_LOG(INFO, "Extend VLAN unsupported\n");
@@ -2748,10 +2748,10 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 {
 	struct bnxt *bp = dev->data->dev_private;
 	int qinq = dev->data->dev_conf.rxmode.offloads &
-		   DEV_RX_OFFLOAD_VLAN_EXTEND;
+		   RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
-	if (vlan_type != ETH_VLAN_TYPE_INNER &&
-	    vlan_type != ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+	    vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
 		PMD_DRV_LOG(ERR,
 			    "Unsupported vlan type.");
 		return -EINVAL;
@@ -2763,7 +2763,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 		return -EINVAL;
 	}
 
-	if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		switch (tpid) {
 		case RTE_ETHER_TYPE_QINQ:
 			bp->outer_tpid_bd =
@@ -2791,7 +2791,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 		}
 		bp->outer_tpid_bd |= tpid;
 		PMD_DRV_LOG(INFO, "outer_tpid_bd = %x\n", bp->outer_tpid_bd);
-	} else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+	} else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
 		PMD_DRV_LOG(ERR,
 			    "Can accelerate only outer vlan in QinQ\n");
 		return -EINVAL;
@@ -2831,7 +2831,7 @@ bnxt_set_default_mac_addr_op(struct rte_eth_dev *dev,
 	bnxt_del_dflt_mac_filter(bp, vnic);
 
 	memcpy(bp->mac_addr, addr, RTE_ETHER_ADDR_LEN);
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		/* This filter will allow only untagged packets */
 		rc = bnxt_add_vlan_filter(bp, 0);
 	} else {
@@ -6556,4 +6556,4 @@ bool is_bnxt_supported(struct rte_eth_dev *dev)
 RTE_LOG_REGISTER_SUFFIX(bnxt_logtype_driver, driver, NOTICE);
 RTE_PMD_REGISTER_PCI(net_bnxt, bnxt_rte_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_bnxt, bnxt_pci_id_map);
-RTE_PMD_REGISTER_KMOD_DEP(net_bnxt, "* igb_uio | uio_pci_generic | vfio-pci");
+
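
The hunks above only rename constants; behaviour is unchanged. For readers mapping the old ETH_FC_* names, a minimal application-side sketch (not part of this patch; port_id is assumed to be an already configured port) that reads back the pause configuration with the new spellings:

	#include <stdio.h>
	#include <rte_ethdev.h>

	/* Illustrative only: print the pause configuration of a
	 * configured port using the renamed RTE_ETH_FC_* values. */
	static void
	show_flow_ctrl(uint16_t port_id)
	{
		struct rte_eth_fc_conf fc_conf = {0};

		if (rte_eth_dev_flow_ctrl_get(port_id, &fc_conf) != 0)
			return;

		switch (fc_conf.mode) {
		case RTE_ETH_FC_FULL:
			printf("port %u: Rx and Tx pause\n", port_id);
			break;
		case RTE_ETH_FC_RX_PAUSE:
			printf("port %u: Rx pause only\n", port_id);
			break;
		case RTE_ETH_FC_TX_PAUSE:
			printf("port %u: Tx pause only\n", port_id);
			break;
		default:
			printf("port %u: no flow control\n", port_id);
		}
	}

rte_eth_dev_flow_ctrl_set() accepts the same RTE_ETH_FC_* values, which is what bnxt_flow_ctrl_set_op() above services.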
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index b2ebb5634e3a..ced697a73980 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -978,7 +978,7 @@ static int bnxt_vnic_prep(struct bnxt *bp, struct bnxt_vnic_info *vnic,
 		}
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 		vnic->vlan_strip = true;
 	else
 		vnic->vlan_strip = false;
@@ -1177,7 +1177,7 @@ bnxt_vnic_rss_cfg_update(struct bnxt *bp,
 	}
 
 	/* If RSS types is 0, use a best effort configuration */
-	types = rss->types ? rss->types : ETH_RSS_IPV4;
+	types = rss->types ? rss->types : RTE_ETH_RSS_IPV4;
 
 	hash_type = bnxt_rte_to_hwrm_hash_types(types);
 
@@ -1322,7 +1322,7 @@ bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
 
 		rxq = bp->rx_queues[act_q->index];
 
-		if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) && rxq &&
+		if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) && rxq &&
 		    vnic->fw_vnic_id != INVALID_HW_RING_ID)
 			goto use_vnic;
 
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 181e607d7bf8..82e89b7c8af7 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -628,7 +628,7 @@ int bnxt_hwrm_set_l2_filter(struct bnxt *bp,
 	uint16_t j = dst_id - 1;
 
 	//TODO: Is there a better way to add VLANs to each VNIC in case of VMDQ
-	if ((dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) &&
+	if ((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) &&
 	    conf->pool_map[j].pools & (1UL << j)) {
 		PMD_DRV_LOG(DEBUG,
 			"Add vlan %u to vmdq pool %u\n",
@@ -2979,12 +2979,12 @@ static uint16_t bnxt_parse_eth_link_duplex(uint32_t conf_link_speed)
 {
 	uint8_t hw_link_duplex = HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
 
-	if ((conf_link_speed & ETH_LINK_SPEED_FIXED) == ETH_LINK_SPEED_AUTONEG)
+	if ((conf_link_speed & RTE_ETH_LINK_SPEED_FIXED) == RTE_ETH_LINK_SPEED_AUTONEG)
 		return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
 
 	switch (conf_link_speed) {
-	case ETH_LINK_SPEED_10M_HD:
-	case ETH_LINK_SPEED_100M_HD:
+	case RTE_ETH_LINK_SPEED_10M_HD:
+	case RTE_ETH_LINK_SPEED_100M_HD:
 		/* FALLTHROUGH */
 		return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF;
 	}
@@ -3001,51 +3001,51 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
 {
 	uint16_t eth_link_speed = 0;
 
-	if (conf_link_speed == ETH_LINK_SPEED_AUTONEG)
-		return ETH_LINK_SPEED_AUTONEG;
+	if (conf_link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
+		return RTE_ETH_LINK_SPEED_AUTONEG;
 
-	switch (conf_link_speed & ~ETH_LINK_SPEED_FIXED) {
-	case ETH_LINK_SPEED_100M:
-	case ETH_LINK_SPEED_100M_HD:
+	switch (conf_link_speed & ~RTE_ETH_LINK_SPEED_FIXED) {
+	case RTE_ETH_LINK_SPEED_100M:
+	case RTE_ETH_LINK_SPEED_100M_HD:
 		/* FALLTHROUGH */
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_100MB;
 		break;
-	case ETH_LINK_SPEED_1G:
+	case RTE_ETH_LINK_SPEED_1G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_1GB;
 		break;
-	case ETH_LINK_SPEED_2_5G:
+	case RTE_ETH_LINK_SPEED_2_5G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_2_5GB;
 		break;
-	case ETH_LINK_SPEED_10G:
+	case RTE_ETH_LINK_SPEED_10G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_10GB;
 		break;
-	case ETH_LINK_SPEED_20G:
+	case RTE_ETH_LINK_SPEED_20G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_20GB;
 		break;
-	case ETH_LINK_SPEED_25G:
+	case RTE_ETH_LINK_SPEED_25G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_25GB;
 		break;
-	case ETH_LINK_SPEED_40G:
+	case RTE_ETH_LINK_SPEED_40G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_40GB;
 		break;
-	case ETH_LINK_SPEED_50G:
+	case RTE_ETH_LINK_SPEED_50G:
 		eth_link_speed = pam4_link ?
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_50GB :
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_50GB;
 		break;
-	case ETH_LINK_SPEED_100G:
+	case RTE_ETH_LINK_SPEED_100G:
 		eth_link_speed = pam4_link ?
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_100GB :
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_100GB;
 		break;
-	case ETH_LINK_SPEED_200G:
+	case RTE_ETH_LINK_SPEED_200G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
 		break;
@@ -3058,11 +3058,11 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
 	return eth_link_speed;
 }
 
-#define BNXT_SUPPORTED_SPEEDS (ETH_LINK_SPEED_100M | ETH_LINK_SPEED_100M_HD | \
-		ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G | \
-		ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G | ETH_LINK_SPEED_25G | \
-		ETH_LINK_SPEED_40G | ETH_LINK_SPEED_50G | \
-		ETH_LINK_SPEED_100G | ETH_LINK_SPEED_200G)
+#define BNXT_SUPPORTED_SPEEDS (RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_100M_HD | \
+		RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G | \
+		RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G | RTE_ETH_LINK_SPEED_25G | \
+		RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_50G | \
+		RTE_ETH_LINK_SPEED_100G | RTE_ETH_LINK_SPEED_200G)
 
 static int bnxt_validate_link_speed(struct bnxt *bp)
 {
@@ -3071,13 +3071,13 @@ static int bnxt_validate_link_speed(struct bnxt *bp)
 	uint32_t link_speed_capa;
 	uint32_t one_speed;
 
-	if (link_speed == ETH_LINK_SPEED_AUTONEG)
+	if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
 		return 0;
 
 	link_speed_capa = bnxt_get_speed_capabilities(bp);
 
-	if (link_speed & ETH_LINK_SPEED_FIXED) {
-		one_speed = link_speed & ~ETH_LINK_SPEED_FIXED;
+	if (link_speed & RTE_ETH_LINK_SPEED_FIXED) {
+		one_speed = link_speed & ~RTE_ETH_LINK_SPEED_FIXED;
 
 		if (one_speed & (one_speed - 1)) {
 			PMD_DRV_LOG(ERR,
@@ -3107,71 +3107,71 @@ bnxt_parse_eth_link_speed_mask(struct bnxt *bp, uint32_t link_speed)
 {
 	uint16_t ret = 0;
 
-	if (link_speed == ETH_LINK_SPEED_AUTONEG) {
+	if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG) {
 		if (bp->link_info->support_speeds)
 			return bp->link_info->support_speeds;
 		link_speed = BNXT_SUPPORTED_SPEEDS;
 	}
 
-	if (link_speed & ETH_LINK_SPEED_100M)
+	if (link_speed & RTE_ETH_LINK_SPEED_100M)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
-	if (link_speed & ETH_LINK_SPEED_100M_HD)
+	if (link_speed & RTE_ETH_LINK_SPEED_100M_HD)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
-	if (link_speed & ETH_LINK_SPEED_1G)
+	if (link_speed & RTE_ETH_LINK_SPEED_1G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_1GB;
-	if (link_speed & ETH_LINK_SPEED_2_5G)
+	if (link_speed & RTE_ETH_LINK_SPEED_2_5G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_2_5GB;
-	if (link_speed & ETH_LINK_SPEED_10G)
+	if (link_speed & RTE_ETH_LINK_SPEED_10G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_10GB;
-	if (link_speed & ETH_LINK_SPEED_20G)
+	if (link_speed & RTE_ETH_LINK_SPEED_20G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_20GB;
-	if (link_speed & ETH_LINK_SPEED_25G)
+	if (link_speed & RTE_ETH_LINK_SPEED_25G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_25GB;
-	if (link_speed & ETH_LINK_SPEED_40G)
+	if (link_speed & RTE_ETH_LINK_SPEED_40G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_40GB;
-	if (link_speed & ETH_LINK_SPEED_50G)
+	if (link_speed & RTE_ETH_LINK_SPEED_50G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_50GB;
-	if (link_speed & ETH_LINK_SPEED_100G)
+	if (link_speed & RTE_ETH_LINK_SPEED_100G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100GB;
-	if (link_speed & ETH_LINK_SPEED_200G)
+	if (link_speed & RTE_ETH_LINK_SPEED_200G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
 	return ret;
 }
 
 static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
 {
-	uint32_t eth_link_speed = ETH_SPEED_NUM_NONE;
+	uint32_t eth_link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	switch (hw_link_speed) {
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB:
-		eth_link_speed = ETH_SPEED_NUM_100M;
+		eth_link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_1GB:
-		eth_link_speed = ETH_SPEED_NUM_1G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2_5GB:
-		eth_link_speed = ETH_SPEED_NUM_2_5G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_10GB:
-		eth_link_speed = ETH_SPEED_NUM_10G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_20GB:
-		eth_link_speed = ETH_SPEED_NUM_20G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_25GB:
-		eth_link_speed = ETH_SPEED_NUM_25G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_40GB:
-		eth_link_speed = ETH_SPEED_NUM_40G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_50GB:
-		eth_link_speed = ETH_SPEED_NUM_50G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100GB:
-		eth_link_speed = ETH_SPEED_NUM_100G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_200GB:
-		eth_link_speed = ETH_SPEED_NUM_200G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_200G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2GB:
 	default:
@@ -3184,16 +3184,16 @@ static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
 
 static uint16_t bnxt_parse_hw_link_duplex(uint16_t hw_link_duplex)
 {
-	uint16_t eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+	uint16_t eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (hw_link_duplex) {
 	case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH:
 	case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_FULL:
 		/* FALLTHROUGH */
-		eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+		eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF:
-		eth_link_duplex = ETH_LINK_HALF_DUPLEX;
+		eth_link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "HWRM link duplex %d not defined\n",
@@ -3222,12 +3222,12 @@ int bnxt_get_hwrm_link_config(struct bnxt *bp, struct rte_eth_link *link)
 		link->link_speed =
 			bnxt_parse_hw_link_speed(link_info->link_speed);
 	else
-		link->link_speed = ETH_SPEED_NUM_NONE;
+		link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 	link->link_duplex = bnxt_parse_hw_link_duplex(link_info->duplex);
 	link->link_status = link_info->link_up;
 	link->link_autoneg = link_info->auto_mode ==
 		HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE ?
-		ETH_LINK_FIXED : ETH_LINK_AUTONEG;
+		RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
 exit:
 	return rc;
 }
@@ -3253,7 +3253,7 @@ int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up)
 
 	autoneg = bnxt_check_eth_link_autoneg(dev_conf->link_speeds);
 	if (BNXT_CHIP_P5(bp) &&
-	    dev_conf->link_speeds == ETH_LINK_SPEED_40G) {
+	    dev_conf->link_speeds == RTE_ETH_LINK_SPEED_40G) {
 		/* 40G is not supported as part of media auto detect.
 		 * The speed should be forced and autoneg disabled
 		 * to configure 40G speed.
@@ -3344,7 +3344,7 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 
 	HWRM_CHECK_RESULT();
 
-	bp->vlan = rte_le_to_cpu_16(resp->vlan) & ETH_VLAN_ID_MAX;
+	bp->vlan = rte_le_to_cpu_16(resp->vlan) & RTE_ETH_VLAN_ID_MAX;
 
 	svif_info = rte_le_to_cpu_16(resp->svif_info);
 	if (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID)
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index b7e88e013a84..1c07db3ca9c5 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -537,7 +537,7 @@ int bnxt_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
 
 	dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
 	if (parent_bp->flags & BNXT_FLAG_PTP_SUPPORTED)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT;
 	dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
 
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 08cefa1baaef..7940d489a102 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -187,7 +187,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 			rx_ring_info->rx_ring_struct->ring_size *
 			AGG_RING_SIZE_FACTOR)) : 0;
 
-		if (rx_ring_info && (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+		if (rx_ring_info && (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 			int tpa_max = BNXT_TPA_MAX_AGGS(bp);
 
 			tpa_info_len = tpa_max * sizeof(struct bnxt_tpa_info);
@@ -283,7 +283,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 					    ag_bitmap_start, ag_bitmap_len);
 
 			/* TPA info */
-			if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+			if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 				rx_ring_info->tpa_info =
 					((struct bnxt_tpa_info *)
 					 ((char *)mz->addr + tpa_info_start));
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 38ec4aa14b77..1456f8b54ffa 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -52,13 +52,13 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 	bp->nr_vnics = 0;
 
 	/* Multi-queue mode */
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB_RSS) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
 		/* VMDq ONLY, VMDq+RSS, VMDq+DCB, VMDq+DCB+RSS */
 
 		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_RSS:
-		case ETH_MQ_RX_VMDQ_ONLY:
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
 			/* FALLTHROUGH */
 			/* ETH_8/64_POOLs */
 			pools = conf->nb_queue_pools;
@@ -66,14 +66,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 			max_pools = RTE_MIN(bp->max_vnics,
 					    RTE_MIN(bp->max_l2_ctx,
 					    RTE_MIN(bp->max_rsscos_ctx,
-						    ETH_64_POOLS)));
+						    RTE_ETH_64_POOLS)));
 			PMD_DRV_LOG(DEBUG,
 				    "pools = %u max_pools = %u\n",
 				    pools, max_pools);
 			if (pools > max_pools)
 				pools = max_pools;
 			break;
-		case ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_RSS:
 			pools = bp->rx_cosq_cnt ? bp->rx_cosq_cnt : 1;
 			break;
 		default:
@@ -111,7 +111,7 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 				    ring_idx, rxq, i, vnic);
 		}
 		if (i == 0) {
-			if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB) {
+			if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB) {
 				bp->eth_dev->data->promiscuous = 1;
 				vnic->flags |= BNXT_VNIC_INFO_PROMISC;
 			}
@@ -121,8 +121,8 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 		vnic->end_grp_id = end_grp_id;
 
 		if (i) {
-			if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB ||
-			    !(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS))
+			if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB ||
+			    !(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS))
 				vnic->rss_dflt_cr = true;
 			goto skip_filter_allocation;
 		}
@@ -147,14 +147,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 
 	bp->rx_num_qs_per_vnic = nb_q_per_grp;
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		struct rte_eth_rss_conf *rss = &dev_conf->rx_adv_conf.rss_conf;
 
 		if (bp->flags & BNXT_FLAG_UPDATE_HASH)
 			bp->flags &= ~BNXT_FLAG_UPDATE_HASH;
 
 		for (i = 0; i < bp->nr_vnics; i++) {
-			uint32_t lvl = ETH_RSS_LEVEL(rss->rss_hf);
+			uint32_t lvl = RTE_ETH_RSS_LEVEL(rss->rss_hf);
 
 			vnic = &bp->vnic_info[i];
 			vnic->hash_type =
@@ -363,7 +363,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 	PMD_DRV_LOG(DEBUG, "RX Buf size is %d\n", rxq->rx_buf_size);
 	rxq->queue_id = queue_idx;
 	rxq->port_id = eth_dev->data->port_id;
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -478,7 +478,7 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 	PMD_DRV_LOG(INFO, "Rx queue started %d\n", rx_queue_id);
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		vnic = rxq->vnic;
 
 		if (BNXT_HAS_RING_GRPS(bp)) {
@@ -549,7 +549,7 @@ int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	rxq->rx_started = false;
 	PMD_DRV_LOG(DEBUG, "Rx queue stopped\n");
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (BNXT_HAS_RING_GRPS(bp))
 			vnic->fw_grp_ids[rx_queue_id] = INVALID_HW_RING_ID;
 
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index aeacc60a0127..eb555c4545e6 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -566,8 +566,8 @@ bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
 	dev_conf = &rxq->bp->eth_dev->data->dev_conf;
 	offloads = dev_conf->rxmode.offloads;
 
-	outer_cksum_enabled = !!(offloads & (DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-					     DEV_RX_OFFLOAD_OUTER_UDP_CKSUM));
+	outer_cksum_enabled = !!(offloads & (RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+					     RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM));
 
 	/* Initialize ol_flags table. */
 	pt = rxr->ol_flags_table;
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
index d08854ff61e2..e4905b4fd169 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
@@ -416,7 +416,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_common.h b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
index 9b9489a695a2..0627fd212d0a 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_common.h
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
@@ -96,7 +96,7 @@ bnxt_rxq_rearm(struct bnxt_rx_queue *rxq, struct bnxt_rx_ring_info *rxr)
 }
 
 /*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
  * is enabled.
  */
 static inline void
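
The fast-free completion path named in this comment is chosen per Tx queue. A hedged sketch of how an application opts in (illustrative, not part of this patch; it assumes the port advertises the capability and that all mbufs sent on the queue come from one mempool with refcnt 1, as the offload requires):

	#include <rte_ethdev.h>

	/* Illustrative only: request fast mbuf free on one Tx queue
	 * when the device reports RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE. */
	static int
	setup_txq_fast_free(uint16_t port_id, uint16_t queue_id,
			    uint16_t nb_desc)
	{
		struct rte_eth_dev_info dev_info;
		struct rte_eth_txconf txconf;
		int ret;

		ret = rte_eth_dev_info_get(port_id, &dev_info);
		if (ret != 0)
			return ret;

		txconf = dev_info.default_txconf;
		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
			txconf.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;

		return rte_eth_tx_queue_setup(port_id, queue_id, nb_desc,
					      rte_eth_dev_socket_id(port_id),
					      &txconf);
	}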
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
index 13211060cf0e..f15e2d3b4ed4 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
@@ -352,7 +352,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
index 6e563053260a..ffd560166cac 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
@@ -333,7 +333,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 9e45ddd7a82e..f2fcaf53021c 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -353,7 +353,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 }
 
 /*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
  * is enabled.
  */
 static void bnxt_tx_cmp_fast(struct bnxt_tx_queue *txq, int nr_pkts)
@@ -479,7 +479,7 @@ static int bnxt_handle_tx_cp(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index 26253a7e17f2..c63cf4b943fa 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -239,17 +239,17 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
 {
 	uint16_t hwrm_type = 0;
 
-	if (rte_type & ETH_RSS_IPV4)
+	if (rte_type & RTE_ETH_RSS_IPV4)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
-	if (rte_type & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
-	if (rte_type & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
-	if (rte_type & ETH_RSS_IPV6)
+	if (rte_type & RTE_ETH_RSS_IPV6)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
-	if (rte_type & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
-	if (rte_type & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
 
 	return hwrm_type;
@@ -258,11 +258,11 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
 int bnxt_rte_to_hwrm_hash_level(struct bnxt *bp, uint64_t hash_f, uint32_t lvl)
 {
 	uint32_t mode = HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_DEFAULT;
-	bool l3 = (hash_f & (ETH_RSS_IPV4 | ETH_RSS_IPV6));
-	bool l4 = (hash_f & (ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV6_UDP |
-			     ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV6_TCP));
+	bool l3 = (hash_f & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6));
+	bool l4 = (hash_f & (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_TCP));
 	bool l3_only = l3 && !l4;
 	bool l3_and_l4 = l3 && l4;
 
@@ -307,16 +307,16 @@ uint64_t bnxt_hwrm_to_rte_rss_level(struct bnxt *bp, uint32_t mode)
 	 * return default hash mode.
 	 */
 	if (!(bp->vnic_cap_flags & BNXT_VNIC_CAP_OUTER_RSS))
-		return ETH_RSS_LEVEL_PMD_DEFAULT;
+		return RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
 
 	if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_2 ||
 	    mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_4)
-		rss_level |= ETH_RSS_LEVEL_OUTERMOST;
+		rss_level |= RTE_ETH_RSS_LEVEL_OUTERMOST;
 	else if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_2 ||
 		 mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_4)
-		rss_level |= ETH_RSS_LEVEL_INNERMOST;
+		rss_level |= RTE_ETH_RSS_LEVEL_INNERMOST;
 	else
-		rss_level |= ETH_RSS_LEVEL_PMD_DEFAULT;
+		rss_level |= RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
 
 	return rss_level;
 }
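
bnxt_hwrm_to_rte_rss_level() above is the inverse of what an application requests through rss_hf. A small illustrative fragment (example values, not part of this patch) selecting inner-header hashing with the renamed flags; RTE_ETH_RSS_LEVEL() extracts the level back, as bnxt_rte_to_hwrm_hash_level() does:

	#include <rte_ethdev.h>

	/* Illustrative only: hash on the inner IPv4/TCP headers of
	 * tunnelled traffic by combining a level with the type flags. */
	static void
	set_inner_rss(struct rte_eth_conf *port_conf)
	{
		port_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
		port_conf->rx_adv_conf.rss_conf.rss_hf =
			RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP |
			RTE_ETH_RSS_LEVEL_INNERMOST;
	}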
diff --git a/drivers/net/bnxt/rte_pmd_bnxt.c b/drivers/net/bnxt/rte_pmd_bnxt.c
index f71543810970..77ecbef04c3d 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt.c
+++ b/drivers/net/bnxt/rte_pmd_bnxt.c
@@ -421,18 +421,18 @@ int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf,
 	if (vf >= bp->pdev->max_vfs)
 		return -EINVAL;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG) {
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG) {
 		PMD_DRV_LOG(ERR, "Currently cannot toggle this setting\n");
 		return -ENOTSUP;
 	}
 
 	/* Is this really the correct mapping?  VFd seems to think it is. */
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 		flag |= BNXT_VNIC_INFO_PROMISC;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 		flag |= BNXT_VNIC_INFO_BCAST;
-	if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 		flag |= BNXT_VNIC_INFO_ALLMULTI | BNXT_VNIC_INFO_MCAST;
 
 	if (on)
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index fc179a2732ac..8b104b639184 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -167,8 +167,8 @@ struct bond_dev_private {
 	struct rte_eth_desc_lim tx_desc_lim;	/**< Tx descriptor limits */
 
 	uint16_t reta_size;
-	struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_512 /
-			RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
+			RTE_ETH_RETA_GROUP_SIZE];
 
 	uint8_t rss_key[52];				/**< 52-byte hash key buffer. */
 	uint8_t rss_key_len;				/**< hash key length in bytes. */
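
For reference, RTE_ETH_RSS_RETA_SIZE_512 is 512 and RTE_ETH_RETA_GROUP_SIZE is 64, so this array has 8 entries; each rte_eth_rss_reta_entry64 carries a 64-bit validity mask plus 64 uint16_t table slots, exactly the sizing the old RTE_RETA_GROUP_SIZE spelling produced.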
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 2029955c1092..ca50583d62d8 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -770,25 +770,25 @@ link_speed_key(uint16_t speed) {
 	uint16_t key_speed;
 
 	switch (speed) {
-	case ETH_SPEED_NUM_NONE:
+	case RTE_ETH_SPEED_NUM_NONE:
 		key_speed = 0x00;
 		break;
-	case ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_10M:
 		key_speed = BOND_LINK_SPEED_KEY_10M;
 		break;
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		key_speed = BOND_LINK_SPEED_KEY_100M;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		key_speed = BOND_LINK_SPEED_KEY_1000M;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		key_speed = BOND_LINK_SPEED_KEY_10G;
 		break;
-	case ETH_SPEED_NUM_20G:
+	case RTE_ETH_SPEED_NUM_20G:
 		key_speed = BOND_LINK_SPEED_KEY_20G;
 		break;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		key_speed = BOND_LINK_SPEED_KEY_40G;
 		break;
 	default:
@@ -887,7 +887,7 @@ bond_mode_8023ad_periodic_cb(void *arg)
 
 		if (ret >= 0 && link_info.link_status != 0) {
 			key = link_speed_key(link_info.link_speed) << 1;
-			if (link_info.link_duplex == ETH_LINK_FULL_DUPLEX)
+			if (link_info.link_duplex == RTE_ETH_LINK_FULL_DUPLEX)
 				key |= BOND_LINK_FULL_DUPLEX_KEY;
 		} else {
 			key = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 5140ef14c2ee..84943cffe2bb 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -204,7 +204,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
 
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	if ((bonded_eth_dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_VLAN_FILTER) == 0)
+			RTE_ETH_RX_OFFLOAD_VLAN_FILTER) == 0)
 		return 0;
 
 	internals = bonded_eth_dev->data->dev_private;
@@ -592,7 +592,7 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
 			return -1;
 		}
 
-		 if (link_props.link_status == ETH_LINK_UP) {
+		if (link_props.link_status == RTE_ETH_LINK_UP) {
 			if (internals->active_slave_count == 0 &&
 			    !internals->user_defined_primary_port)
 				bond_ethdev_primary_set(internals,
@@ -727,7 +727,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
 		internals->tx_offload_capa = 0;
 		internals->rx_queue_offload_capa = 0;
 		internals->tx_queue_offload_capa = 0;
-		internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+		internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
 		internals->reta_size = 0;
 		internals->candidate_max_rx_pktlen = 0;
 		internals->max_rx_pktlen = 0;
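
The slave-add path above keys off RTE_ETH_LINK_UP. An application polls the same state like this (illustrative helper, not part of this patch):

	#include <rte_ethdev.h>

	/* Illustrative only: report whether a port's link is up and
	 * carries a known speed, using the renamed constants. */
	static int
	port_link_is_up(uint16_t port_id)
	{
		struct rte_eth_link link;

		if (rte_eth_link_get_nowait(port_id, &link) != 0)
			return 0;
		return link.link_status == RTE_ETH_LINK_UP &&
		       link.link_speed != RTE_ETH_SPEED_NUM_NONE;
	}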
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 8d038ba6b6c4..834a5937b3aa 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1369,8 +1369,8 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
 		 * In any other mode the link properties are set to default
 		 * values of AUTONEG/DUPLEX
 		 */
-		ethdev->data->dev_link.link_autoneg = ETH_LINK_AUTONEG;
-		ethdev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		ethdev->data->dev_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
+		ethdev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	}
 }
 
@@ -1700,7 +1700,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 		slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
 
 	/* If RSS is enabled for bonding, try to enable it for slaves  */
-	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		/* rss_key won't be empty if RSS is configured in bonded dev */
 		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
 					internals->rss_key_len;
@@ -1714,12 +1714,12 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 	}
 
 	if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_VLAN_FILTER)
+			RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 		slave_eth_dev->data->dev_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_VLAN_FILTER;
+				RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	else
 		slave_eth_dev->data->dev_conf.rxmode.offloads &=
-				~DEV_RX_OFFLOAD_VLAN_FILTER;
+				~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	slave_eth_dev->data->dev_conf.rxmode.mtu =
 			bonded_eth_dev->data->dev_conf.rxmode.mtu;
@@ -1823,7 +1823,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 	}
 
 	/* If RSS is enabled for bonding, synchronize RETA */
-	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
 		int i;
 		struct bond_dev_private *internals;
 
@@ -1946,7 +1946,7 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 		return -1;
 	}
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 1;
 
 	internals = eth_dev->data->dev_private;
@@ -2086,7 +2086,7 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 			tlb_last_obytets[internals->active_slaves[i]] = 0;
 	}
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 0;
 
 	internals->link_status_polling_enabled = 0;
@@ -2416,15 +2416,15 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 
 	bond_ctx = ethdev->data->dev_private;
 
-	ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+	ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	if (ethdev->data->dev_started == 0 ||
 			bond_ctx->active_slave_count == 0) {
-		ethdev->data->dev_link.link_status = ETH_LINK_DOWN;
+		ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 		return 0;
 	}
 
-	ethdev->data->dev_link.link_status = ETH_LINK_UP;
+	ethdev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	if (wait_to_complete)
 		link_update = rte_eth_link_get;
@@ -2449,7 +2449,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 					  &slave_link);
 			if (ret < 0) {
 				ethdev->data->dev_link.link_speed =
-					ETH_SPEED_NUM_NONE;
+					RTE_ETH_SPEED_NUM_NONE;
 				RTE_BOND_LOG(ERR,
 					"Slave (port %u) link get failed: %s",
 					bond_ctx->active_slaves[idx],
@@ -2491,7 +2491,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 		 * In these modes the maximum theoretical link speed is the sum
 		 * of all the slaves
 		 */
-		ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		one_link_update_succeeded = false;
 
 		for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
@@ -2865,7 +2865,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 			goto link_update;
 
 		/* check link state properties if bonded link is up*/
-		if (bonded_eth_dev->data->dev_link.link_status == ETH_LINK_UP) {
+		if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
 			if (link_properties_valid(bonded_eth_dev, &link) != 0)
 				RTE_BOND_LOG(ERR, "Invalid link properties "
 					     "for slave %d in bonding mode %d",
@@ -2881,7 +2881,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 		if (internals->active_slave_count < 1) {
 			/* If first active slave, then change link status */
 			bonded_eth_dev->data->dev_link.link_status =
-								ETH_LINK_UP;
+								RTE_ETH_LINK_UP;
 			internals->current_primary_port = port_id;
 			lsc_flag = 1;
 
@@ -2973,12 +2973,12 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	 /* Copy RETA table */
-	reta_count = (reta_size + RTE_RETA_GROUP_SIZE - 1) /
-			RTE_RETA_GROUP_SIZE;
+	reta_count = (reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) /
+			RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < reta_count; i++) {
 		internals->reta_conf[i].mask = reta_conf[i].mask;
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				internals->reta_conf[i].reta[j] = reta_conf[i].reta[j];
 	}
@@ -3011,8 +3011,8 @@ bond_ethdev_rss_reta_query(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	 /* Copy RETA table */
-	for (i = 0; i < reta_size / RTE_RETA_GROUP_SIZE; i++)
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < reta_size / RTE_ETH_RETA_GROUP_SIZE; i++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = internals->reta_conf[i].reta[j];
 
@@ -3274,7 +3274,7 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
 	internals->max_rx_pktlen = 0;
 
 	/* Initially allow to choose any offload type */
-	internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+	internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
 
 	memset(&internals->default_rxconf, 0,
 	       sizeof(internals->default_rxconf));
@@ -3501,7 +3501,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 	 * set key to the value specified in port RSS configuration.
 	 * Fall back to default RSS key if the key is not specified
 	 */
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
 		struct rte_eth_rss_conf *rss_conf =
 			&dev->data->dev_conf.rx_adv_conf.rss_conf;
 		if (rss_conf->rss_key != NULL) {
@@ -3526,9 +3526,9 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 
 		for (i = 0; i < RTE_DIM(internals->reta_conf); i++) {
 			internals->reta_conf[i].mask = ~0LL;
-			for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+			for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 				internals->reta_conf[i].reta[j] =
-						(i * RTE_RETA_GROUP_SIZE + j) %
+						(i * RTE_ETH_RETA_GROUP_SIZE + j) %
 						dev->data->nb_rx_queues;
 		}
 	}
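
The default RETA fill above spreads table slots round-robin across the Rx queues: with 4 queues the entries cycle 0, 1, 2, 3, 0, ... An equivalent application-side sketch with the renamed macros (illustrative; assumes the device reports a 512-entry indirection table):

	#include <rte_ethdev.h>

	/* Illustrative only: program a 512-entry RETA round-robin
	 * over nb_rxq queues; 512 / 64 gives 8 mask-guarded groups. */
	static int
	fill_reta(uint16_t port_id, uint16_t nb_rxq)
	{
		struct rte_eth_rss_reta_entry64 reta[RTE_ETH_RSS_RETA_SIZE_512 /
						     RTE_ETH_RETA_GROUP_SIZE];
		unsigned int i, j;

		for (i = 0; i < RTE_DIM(reta); i++) {
			reta[i].mask = UINT64_MAX;
			for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
				reta[i].reta[j] =
					(i * RTE_ETH_RETA_GROUP_SIZE + j) %
					nb_rxq;
		}
		return rte_eth_dev_rss_reta_update(port_id, reta,
						   RTE_ETH_RSS_RETA_SIZE_512);
	}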
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 25da5f6691d0..f7eb0f437b77 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -15,28 +15,28 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rxmode *rxmode = &conf->rxmode;
 	uint16_t flags = 0;
 
-	if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
-	    (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+	    (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		flags |= NIX_RX_OFFLOAD_RSS_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		flags |= NIX_RX_MULTI_SEG_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 
 	if (!dev->ptype_disable)
 		flags |= NIX_RX_OFFLOAD_PTYPE_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		flags |= NIX_RX_OFFLOAD_SECURITY_F;
 
 	return flags;
@@ -72,39 +72,39 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
 			 offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
 
-	if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
-	    conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+	    conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
 
-	if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= NIX_TX_MULTI_SEG_F;
 
 	/* Enable Inner checksum for TSO */
-	if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+	if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
 	/* Enable Inner and Outer checksum for Tunnel TSO */
-	if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		    DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
-	if (conf & DEV_TX_OFFLOAD_SECURITY)
+	if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
 		flags |= NIX_TX_OFFLOAD_SECURITY_F;
 
 	return flags;
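
nix_tx_offload_flags() above derives the driver's fast-path flags from the configured Tx offloads. A hedged fragment of the application side (illustrative; a real application should first verify these bits against dev_info.tx_offload_capa):

	#include <rte_ethdev.h>

	/* Illustrative only: request inner+outer checksum and
	 * VXLAN tunnel TSO with the renamed offload flags. */
	static void
	enable_tunnel_tso(struct rte_eth_conf *port_conf)
	{
		port_conf->txmode.offloads |=
			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
			RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
			RTE_ETH_TX_OFFLOAD_TCP_TSO |
			RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
	}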
diff --git a/drivers/net/cnxk/cn10k_rte_flow.c b/drivers/net/cnxk/cn10k_rte_flow.c
index 8c87452934eb..dff4c7746cf5 100644
--- a/drivers/net/cnxk/cn10k_rte_flow.c
+++ b/drivers/net/cnxk/cn10k_rte_flow.c
@@ -98,7 +98,7 @@ cn10k_rss_action_validate(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		plt_err("multi-queue mode is disabled");
 		return -ENOTSUP;
 	}
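
For context, the RSS flow action that the validation above rejects when multi-queue is off looks like this on the application side (illustrative values, not part of this patch; key/key_len left zero requests the default key):

	#include <rte_ethdev.h>
	#include <rte_flow.h>

	static const uint16_t rss_queues[] = {0, 1, 2, 3};
	static const struct rte_flow_action_rss rss_action = {
		.func = RTE_ETH_HASH_FUNCTION_DEFAULT,
		.level = 0,
		.types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
		.queue_num = RTE_DIM(rss_queues),
		.queue = rss_queues,
	};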
diff --git a/drivers/net/cnxk/cn10k_rx.c b/drivers/net/cnxk/cn10k_rx.c
index d6af54b56de6..5d603514c045 100644
--- a/drivers/net/cnxk/cn10k_rx.c
+++ b/drivers/net/cnxk/cn10k_rx.c
@@ -77,12 +77,12 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 			nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
 
 	if (dev->scalar_ena) {
-		if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 			return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
 		return pick_rx_func(eth_dev, nix_eth_rx_burst);
 	}
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
 	return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
 }
diff --git a/drivers/net/cnxk/cn10k_tx.c b/drivers/net/cnxk/cn10k_tx.c
index eb962ef08cab..5e6c5ee11188 100644
--- a/drivers/net/cnxk/cn10k_tx.c
+++ b/drivers/net/cnxk/cn10k_tx.c
@@ -78,11 +78,11 @@ cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 
 	if (dev->scalar_ena) {
 		pick_tx_func(eth_dev, nix_eth_tx_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
 	} else {
 		pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
 	}
 
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 08c86f9e6b7b..17f8f6debbc8 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -15,28 +15,28 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rxmode *rxmode = &conf->rxmode;
 	uint16_t flags = 0;
 
-	if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
-	    (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+	    (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		flags |= NIX_RX_OFFLOAD_RSS_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		flags |= NIX_RX_MULTI_SEG_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 
 	if (!dev->ptype_disable)
 		flags |= NIX_RX_OFFLOAD_PTYPE_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		flags |= NIX_RX_OFFLOAD_SECURITY_F;
 
 	return flags;
@@ -72,39 +72,39 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
 			 offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
 
-	if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
-	    conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+	    conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
 
-	if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= NIX_TX_MULTI_SEG_F;
 
 	/* Enable Inner checksum for TSO */
-	if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+	if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
 	/* Enable Inner and Outer checksum for Tunnel TSO */
-	if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		    DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
 		flags |= NIX_TX_OFFLOAD_SECURITY_F;
 
 	return flags;
@@ -298,9 +298,9 @@ cn9k_nix_configure(struct rte_eth_dev *eth_dev)
 
 	/* Platform specific checks */
 	if ((roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) &&
-	    (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
-	    ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
-	     (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+	    ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+	     (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
 		plt_err("Outer IP and SCTP checksum unsupported");
 		return -EINVAL;
 	}
@@ -553,17 +553,17 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 	 * TSO not supported for earlier chip revisions
 	 */
 	if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0())
-		dev->tx_offload_capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
-					  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-					  DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-					  DEV_TX_OFFLOAD_GRE_TNL_TSO);
+		dev->tx_offload_capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+					  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+					  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+					  RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
 
 	/* 50G and 100G to be supported for board version C0
 	 * and above of CN9K.
 	 */
 	if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) {
-		dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_50G;
-		dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_100G;
+		dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_50G;
+		dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_100G;
 	}
 
 	dev->hwcap = 0;
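
The probe above prunes 50G/100G bits from speed_capa on early cn9k revisions. speed_capa is a plain bitmap of RTE_ETH_LINK_SPEED_* flags, so an application checks it the same way (illustrative helper, not part of this patch):

	#include <rte_ethdev.h>

	/* Illustrative only: test whether a port advertises 100G. */
	static int
	port_supports_100g(uint16_t port_id)
	{
		struct rte_eth_dev_info dev_info;

		if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
			return 0;
		return (dev_info.speed_capa & RTE_ETH_LINK_SPEED_100G) != 0;
	}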
diff --git a/drivers/net/cnxk/cn9k_rx.c b/drivers/net/cnxk/cn9k_rx.c
index 5c4387e74e0b..8d504c4a6d92 100644
--- a/drivers/net/cnxk/cn9k_rx.c
+++ b/drivers/net/cnxk/cn9k_rx.c
@@ -77,12 +77,12 @@ cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 			nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
 
 	if (dev->scalar_ena) {
-		if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 			return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
 		return pick_rx_func(eth_dev, nix_eth_rx_burst);
 	}
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
 	return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
 }
diff --git a/drivers/net/cnxk/cn9k_tx.c b/drivers/net/cnxk/cn9k_tx.c
index e5691a2a7e16..f3f19fed9780 100644
--- a/drivers/net/cnxk/cn9k_tx.c
+++ b/drivers/net/cnxk/cn9k_tx.c
@@ -77,11 +77,11 @@ cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 
 	if (dev->scalar_ena) {
 		pick_tx_func(eth_dev, nix_eth_tx_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
 	} else {
 		pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
 	}
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 2e05d8bf1552..db54468dbca1 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -10,7 +10,7 @@ nix_get_rx_offload_capa(struct cnxk_eth_dev *dev)
 
 	if (roc_nix_is_vf_or_sdp(&dev->nix) ||
 	    dev->npc.switch_header_type == ROC_PRIV_FLAGS_HIGIG)
-		capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+		capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	return capa;
 }
@@ -28,11 +28,11 @@ nix_get_speed_capa(struct cnxk_eth_dev *dev)
 	uint32_t speed_capa;
 
 	/* Auto negotiation disabled */
-	speed_capa = ETH_LINK_SPEED_FIXED;
+	speed_capa = RTE_ETH_LINK_SPEED_FIXED;
 	if (!roc_nix_is_vf_or_sdp(&dev->nix) && !roc_nix_is_lbk(&dev->nix)) {
-		speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			      ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
-			      ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			      RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+			      RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
 	}
 
 	return speed_capa;
@@ -65,7 +65,7 @@ nix_security_setup(struct cnxk_eth_dev *dev)
 	struct roc_nix *nix = &dev->nix;
 	int i, rc = 0;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		/* Setup Inline Inbound */
 		rc = roc_nix_inl_inb_init(nix);
 		if (rc) {
@@ -80,8 +80,8 @@ nix_security_setup(struct cnxk_eth_dev *dev)
 		cnxk_nix_inb_mode_set(dev, true);
 	}
 
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
-	    dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY ||
+	    dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		struct plt_bitmap *bmap;
 		size_t bmap_sz;
 		void *mem;
@@ -100,8 +100,8 @@ nix_security_setup(struct cnxk_eth_dev *dev)
 
 		dev->outb.lf_base = roc_nix_inl_outb_lf_base_get(nix);
 
-		/* Skip the rest if DEV_TX_OFFLOAD_SECURITY is not enabled */
-		if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY))
+		/* Skip the rest if RTE_ETH_TX_OFFLOAD_SECURITY is not enabled */
+		if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY))
 			goto done;
 
 		rc = -ENOMEM;
@@ -136,7 +136,7 @@ nix_security_setup(struct cnxk_eth_dev *dev)
 done:
 	return 0;
 cleanup:
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		rc |= roc_nix_inl_inb_fini(nix);
 	return rc;
 }
@@ -182,7 +182,7 @@ nix_security_release(struct cnxk_eth_dev *dev)
 	int rc, ret = 0;
 
 	/* Cleanup Inline inbound */
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		/* Destroy inbound sessions */
 		tvar = NULL;
 		RTE_TAILQ_FOREACH_SAFE(eth_sec, &dev->inb.list, entry, tvar)
@@ -199,8 +199,8 @@ nix_security_release(struct cnxk_eth_dev *dev)
 	}
 
 	/* Cleanup Inline outbound */
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
-	    dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY ||
+	    dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		/* Destroy outbound sessions */
 		tvar = NULL;
 		RTE_TAILQ_FOREACH_SAFE(eth_sec, &dev->outb.list, entry, tvar)
@@ -242,8 +242,8 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
 	buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
 
 	if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD > buffsz) {
-		dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
-		dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+		dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	}
 }
 
@@ -273,7 +273,7 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
 	struct rte_eth_fc_conf fc_conf = {0};
 	int rc;
 
-	/* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+	/* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
 	 * by AF driver, update those info in PMD structure.
 	 */
 	rc = cnxk_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -281,10 +281,10 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
 		goto exit;
 
 	fc->mode = fc_conf.mode;
-	fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_RX_PAUSE);
-	fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_TX_PAUSE);
+	fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+	fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
 
 exit:
 	return rc;
@@ -305,11 +305,11 @@ nix_update_flow_ctrl_config(struct rte_eth_dev *eth_dev)
 	/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
 	if (roc_model_is_cn96_ax() &&
 	    dev->npc.switch_header_type != ROC_PRIV_FLAGS_HIGIG &&
-	    (fc_cfg.mode == RTE_FC_FULL || fc_cfg.mode == RTE_FC_RX_PAUSE)) {
+	    (fc_cfg.mode == RTE_ETH_FC_FULL || fc_cfg.mode == RTE_ETH_FC_RX_PAUSE)) {
 		fc_cfg.mode =
-				(fc_cfg.mode == RTE_FC_FULL ||
-				fc_cfg.mode == RTE_FC_TX_PAUSE) ?
-				RTE_FC_TX_PAUSE : RTE_FC_NONE;
+				(fc_cfg.mode == RTE_ETH_FC_FULL ||
+				fc_cfg.mode == RTE_ETH_FC_TX_PAUSE) ?
+				RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
 	}
 
 	return cnxk_nix_flow_ctrl_set(eth_dev, &fc_cfg);
@@ -352,7 +352,7 @@ nix_sq_max_sqe_sz(struct cnxk_eth_dev *dev)
 	 * Maximum three segments can be supported with W8, Choose
 	 * NIX_MAXSQESZ_W16 for multi segment offload.
 	 */
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		return NIX_MAXSQESZ_W16;
 	else
 		return NIX_MAXSQESZ_W8;
@@ -380,7 +380,7 @@ cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	/* When Tx Security offload is enabled, increase tx desc count by
 	 * max possible outbound desc count.
 	 */
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
 		nb_desc += dev->outb.nb_desc;
 
 	/* Setup ROC SQ */
@@ -499,7 +499,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	 * to avoid meta packet drop as LBK does not currently support
 	 * backpressure.
 	 */
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY && roc_nix_is_lbk(nix)) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY && roc_nix_is_lbk(nix)) {
 		uint64_t pkt_pool_limit = roc_nix_inl_dev_rq_limit_get();
 
 		/* Use current RQ's aura limit if inl rq is not available */
@@ -561,7 +561,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rxq_sp->qconf.nb_desc = nb_desc;
 	rxq_sp->qconf.mp = mp;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		/* Setup rq reference for inline dev if present */
 		rc = roc_nix_inl_dev_rq_get(rq);
 		if (rc)
@@ -579,7 +579,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	 * These are needed in deriving raw clock value from tsc counter.
 	 * read_clock eth op returns raw clock value.
 	 */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
 		rc = cnxk_nix_tsc_convert(dev);
 		if (rc) {
 			plt_err("Failed to calculate delta and freq mult");
@@ -618,7 +618,7 @@ cnxk_nix_rx_queue_release(struct rte_eth_dev *eth_dev, uint16_t qid)
 	plt_nix_dbg("Releasing rxq %u", qid);
 
 	/* Release rq reference for inline dev if present */
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		roc_nix_inl_dev_rq_put(rq);
 
 	/* Cleanup ROC RQ */
@@ -657,24 +657,24 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
 
 	dev->ethdev_rss_hf = ethdev_rss;
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
 	    dev->npc.switch_header_type == ROC_PRIV_FLAGS_LEN_90B) {
 		flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
 	}
 
-	if (ethdev_rss & ETH_RSS_C_VLAN)
+	if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
 
-	if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
 
-	if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
 
-	if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
 
-	if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
 
 	if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -683,34 +683,34 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
 	if (ethdev_rss & RSS_IPV6_ENABLE)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
 
-	if (ethdev_rss & ETH_RSS_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_TCP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_UDP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_SCTP)
+	if (ethdev_rss & RTE_ETH_RSS_SCTP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
 
 	if (ethdev_rss & RSS_IPV6_EX_ENABLE)
 		flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
 
-	if (ethdev_rss & ETH_RSS_PORT)
+	if (ethdev_rss & RTE_ETH_RSS_PORT)
 		flowkey_cfg |= FLOW_KEY_TYPE_PORT;
 
-	if (ethdev_rss & ETH_RSS_NVGRE)
+	if (ethdev_rss & RTE_ETH_RSS_NVGRE)
 		flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
 
-	if (ethdev_rss & ETH_RSS_VXLAN)
+	if (ethdev_rss & RTE_ETH_RSS_VXLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
 
-	if (ethdev_rss & ETH_RSS_GENEVE)
+	if (ethdev_rss & RTE_ETH_RSS_GENEVE)
 		flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
 
-	if (ethdev_rss & ETH_RSS_GTPU)
+	if (ethdev_rss & RTE_ETH_RSS_GTPU)
 		flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
 
 	return flowkey_cfg;
@@ -746,7 +746,7 @@ nix_rss_default_setup(struct cnxk_eth_dev *dev)
 	uint64_t rss_hf;
 
 	rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
-	rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 
@@ -958,8 +958,8 @@ nix_lso_fmt_setup(struct cnxk_eth_dev *dev)
 
 	/* Nothing much to do if offload is not enabled */
 	if (!(dev->tx_offloads &
-	      (DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-	       DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO)))
+	      (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+	       RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))
 		return 0;
 
 	/* Setup LSO formats in AF. Its a no-op if other ethdev has
@@ -1007,13 +1007,13 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 		goto fail_configure;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-	    rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+	    rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		plt_err("Unsupported mq rx mode %d", rxmode->mq_mode);
 		goto fail_configure;
 	}
 
-	if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		plt_err("Unsupported mq tx mode %d", txmode->mq_mode);
 		goto fail_configure;
 	}
@@ -1054,7 +1054,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 	/* Prepare rx cfg */
 	rx_cfg = ROC_NIX_LF_RX_CFG_DIS_APAD;
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+	    (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
 		rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_OL4;
 		rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_IL4;
 	}
@@ -1062,7 +1062,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 		   ROC_NIX_LF_RX_CFG_LEN_IL4 | ROC_NIX_LF_RX_CFG_LEN_IL3 |
 		   ROC_NIX_LF_RX_CFG_LEN_OL4 | ROC_NIX_LF_RX_CFG_LEN_OL3);
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		rx_cfg |= ROC_NIX_LF_RX_CFG_IP6_UDP_OPT;
 		/* Disable drop re if rx offload security is enabled and
 		 * platform does not support it.
@@ -1454,12 +1454,12 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
 	 * enabled on PF owning this VF
 	 */
 	memset(&dev->tstamp, 0, sizeof(struct cnxk_timesync_info));
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
 		cnxk_eth_dev_ops.timesync_enable(eth_dev);
 	else
 		cnxk_eth_dev_ops.timesync_disable(eth_dev);
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 		rc = rte_mbuf_dyn_rx_timestamp_register
 			(&dev->tstamp.tstamp_dynfield_offset,
 			 &dev->tstamp.rx_tstamp_dynflag);
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 72f80ae948cf..29a3540ed3f8 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -58,41 +58,44 @@
 	 CNXK_NIX_TX_NB_SEG_MAX)
 
 #define CNXK_NIX_RSS_L3_L4_SRC_DST                                             \
-	(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | ETH_RSS_L4_SRC_ONLY |     \
-	 ETH_RSS_L4_DST_ONLY)
+	(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |                   \
+	 RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
 
 #define CNXK_NIX_RSS_OFFLOAD                                                   \
-	(ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP |               \
-	 ETH_RSS_SCTP | ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD |                  \
-	 CNXK_NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | ETH_RSS_C_VLAN)
+	(RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |                 \
+	 RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_TUNNEL |             \
+	 RTE_ETH_RSS_L2_PAYLOAD | CNXK_NIX_RSS_L3_L4_SRC_DST |                 \
+	 RTE_ETH_RSS_LEVEL_MASK | RTE_ETH_RSS_C_VLAN)
 
 #define CNXK_NIX_TX_OFFLOAD_CAPA                                               \
-	(DEV_TX_OFFLOAD_MBUF_FAST_FREE | DEV_TX_OFFLOAD_MT_LOCKFREE |          \
-	 DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT |             \
-	 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |    \
-	 DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM |                 \
-	 DEV_TX_OFFLOAD_SCTP_CKSUM | DEV_TX_OFFLOAD_TCP_TSO |                  \
-	 DEV_TX_OFFLOAD_VXLAN_TNL_TSO | DEV_TX_OFFLOAD_GENEVE_TNL_TSO |        \
-	 DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_MULTI_SEGS |              \
-	 DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_SECURITY)
+	(RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |          \
+	 RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT |             \
+	 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |    \
+	 RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM |                 \
+	 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO |                  \
+	 RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |        \
+	 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS |              \
+	 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_SECURITY)
 
 #define CNXK_NIX_RX_OFFLOAD_CAPA                                               \
-	(DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM |                 \
-	 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER |            \
-	 DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | DEV_RX_OFFLOAD_RSS_HASH |            \
-	 DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_VLAN_STRIP |                \
-	 DEV_RX_OFFLOAD_SECURITY)
+	(RTE_ETH_RX_OFFLOAD_CHECKSUM | RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |         \
+	 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_SCATTER |    \
+	 RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_RSS_HASH |    \
+	 RTE_ETH_RX_OFFLOAD_TIMESTAMP | RTE_ETH_RX_OFFLOAD_VLAN_STRIP |        \
+	 RTE_ETH_RX_OFFLOAD_SECURITY)
 
 #define RSS_IPV4_ENABLE                                                        \
-	(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP |         \
-	 ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_SCTP)
+	(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |                            \
+	 RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV4_TCP |         \
+	 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
 #define RSS_IPV6_ENABLE                                                        \
-	(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP |         \
-	 ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_SCTP)
+	(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |                            \
+	 RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |         \
+	 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 #define RSS_IPV6_EX_ENABLE                                                     \
-	(ETH_RSS_IPV6_EX | ETH_RSS_IPV6_TCP_EX | ETH_RSS_IPV6_UDP_EX)
+	(RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define RSS_MAX_LEVELS 3
 
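For readers tracking the rename in the capability macros above: the RTE_ETH_* names keep the old bit values, so application-side checks change only in spelling. A minimal sketch in C (not part of this patch; port setup assumed elsewhere):

    #include <rte_ethdev.h>

    /* Probe the renamed Rx offload capability bits on a probed port. */
    static int
    port_supports_scatter(uint16_t port_id)
    {
        struct rte_eth_dev_info dev_info;
        int ret = rte_eth_dev_info_get(port_id, &dev_info);

        if (ret != 0)
            return ret;
        return !!(dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_SCATTER);
    }
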
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index c0b949e21ab0..e068f553495c 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -104,11 +104,11 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
 
 	val = atoi(value);
 
-	if (val <= ETH_RSS_RETA_SIZE_64)
+	if (val <= RTE_ETH_RSS_RETA_SIZE_64)
 		val = ROC_NIX_RSS_RETA_SZ_64;
-	else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
+	else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
 		val = ROC_NIX_RSS_RETA_SZ_128;
-	else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
+	else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
 		val = ROC_NIX_RSS_RETA_SZ_256;
 	else
 		val = ROC_NIX_RSS_RETA_SZ_64;
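The reta_sz parsing above assumes the renamed size macros keep their numeric values; a compile-time check makes that assumption explicit (illustrative only, not part of the patch):

    #include <rte_ethdev.h>

    /* The rename is value-preserving, so the range checks above are
     * behaviour-preserving. */
    _Static_assert(RTE_ETH_RSS_RETA_SIZE_64 == 64, "RETA macro value changed");
    _Static_assert(RTE_ETH_RSS_RETA_SIZE_128 == 128, "RETA macro value changed");
    _Static_assert(RTE_ETH_RSS_RETA_SIZE_256 == 256, "RETA macro value changed");
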
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index d0924df76152..67464302653d 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -81,24 +81,24 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
 		uint64_t flags;
 		const char *output;
 	} rx_offload_map[] = {
-		{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
-		{DEV_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
-		{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
-		{DEV_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
-		{DEV_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
-		{DEV_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
-		{DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
-		{DEV_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
-		{DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
-		{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
-		{DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
-		{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
-		{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
-		{DEV_RX_OFFLOAD_SECURITY, " Security,"},
-		{DEV_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
-		{DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
-		{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
-		{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+		{RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
+		{RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
+		{RTE_ETH_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
+		{RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
+		{RTE_ETH_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
+		{RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
+		{RTE_ETH_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
+		{RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+		{RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+		{RTE_ETH_RX_OFFLOAD_SECURITY, " Security,"},
+		{RTE_ETH_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
+		{RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
+		{RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
 	};
 	static const char *const burst_mode[] = {"Vector Neon, Rx Offloads:",
 						 "Scalar, Rx Offloads:"
@@ -142,28 +142,28 @@ cnxk_nix_tx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
 		uint64_t flags;
 		const char *output;
 	} tx_offload_map[] = {
-		{DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
-		{DEV_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
-		{DEV_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
-		{DEV_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
-		{DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
-		{DEV_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
-		{DEV_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
-		{DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
-		{DEV_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
-		{DEV_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
-		{DEV_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
-		{DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
-		{DEV_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
-		{DEV_TX_OFFLOAD_SECURITY, " Security,"},
-		{DEV_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
-		{DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
+		{RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+		{RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
+		{RTE_ETH_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
+		{RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
+		{RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
+		{RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
+		{RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
+		{RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
+		{RTE_ETH_TX_OFFLOAD_SECURITY, " Security,"},
+		{RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
 	};
 	static const char *const burst_mode[] = {"Vector Neon, Tx Offloads:",
 						 "Scalar, Tx Offloads:"
@@ -203,8 +203,8 @@ cnxk_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 	enum rte_eth_fc_mode mode_map[] = {
-					   RTE_FC_NONE, RTE_FC_RX_PAUSE,
-					   RTE_FC_TX_PAUSE, RTE_FC_FULL
+					   RTE_ETH_FC_NONE, RTE_ETH_FC_RX_PAUSE,
+					   RTE_ETH_FC_TX_PAUSE, RTE_ETH_FC_FULL
 					  };
 	struct roc_nix *nix = &dev->nix;
 	int mode;
@@ -264,10 +264,10 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	if (fc_conf->mode == fc->mode)
 		return 0;
 
-	rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_RX_PAUSE);
-	tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_TX_PAUSE);
+	rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+	tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
 
 	/* Check if TX pause frame is already enabled or not */
 	if (fc->tx_pause ^ tx_pause) {
@@ -408,13 +408,13 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	 * when this feature has not been enabled before.
 	 */
 	if (data->dev_started && frame_size > buffsz &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		plt_err("Scatter offload is not enabled for mtu");
 		goto exit;
 	}
 
 	/* Check <seg size> * <max_seg>  >= max_frame */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)	&&
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	&&
 	    frame_size > (buffsz * CNXK_NIX_RX_NB_SEG_MAX)) {
 		plt_err("Greater than maximum supported packet length");
 		goto exit;
@@ -734,8 +734,8 @@ cnxk_nix_reta_update(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Copy RETA table */
-	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta[idx] = reta_conf[i].reta[j];
 			idx++;
@@ -770,8 +770,8 @@ cnxk_nix_reta_query(struct rte_eth_dev *eth_dev,
 		goto fail;
 
 	/* Copy RETA table */
-	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = reta[idx];
 			idx++;
@@ -804,7 +804,7 @@ cnxk_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
 	if (rss_conf->rss_key)
 		roc_nix_rss_key_set(nix, rss_conf->rss_key);
 
-	rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 	flowkey_cfg =
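For context on the idx/shift arithmetic in the RETA hunks above: RTE_ETH_RETA_GROUP_SIZE is 64, matching the 64-bit mask in struct rte_eth_rss_reta_entry64. A sketch of the same grouping from the application side (assumes reta_size is a multiple of 64):

    #include <string.h>
    #include <rte_ethdev.h>

    /* Spread a redirection table evenly across nb_queues queues. */
    static int
    spread_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_queues)
    {
        struct rte_eth_rss_reta_entry64 conf[reta_size /
                                             RTE_ETH_RETA_GROUP_SIZE];
        uint16_t grp, bit, i;

        memset(conf, 0, sizeof(conf));
        for (i = 0; i < reta_size; i++) {
            grp = i / RTE_ETH_RETA_GROUP_SIZE;
            bit = i % RTE_ETH_RETA_GROUP_SIZE;
            conf[grp].mask |= 1ULL << bit;
            conf[grp].reta[bit] = i % nb_queues;
        }
        return rte_eth_dev_rss_reta_update(port_id, conf, reta_size);
    }
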
diff --git a/drivers/net/cnxk/cnxk_link.c b/drivers/net/cnxk/cnxk_link.c
index 6a7080167598..f10a502826c6 100644
--- a/drivers/net/cnxk/cnxk_link.c
+++ b/drivers/net/cnxk/cnxk_link.c
@@ -38,7 +38,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
 		plt_info("Port %d: Link Up - speed %u Mbps - %s",
 			 (int)(eth_dev->data->port_id),
 			 (uint32_t)link->link_speed,
-			 link->link_duplex == ETH_LINK_FULL_DUPLEX
+			 link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX
 				 ? "full-duplex"
 				 : "half-duplex");
 	else
@@ -89,7 +89,7 @@ cnxk_eth_dev_link_status_cb(struct roc_nix *nix, struct roc_nix_link_info *link)
 
 	eth_link.link_status = link->status;
 	eth_link.link_speed = link->speed;
-	eth_link.link_autoneg = ETH_LINK_AUTONEG;
+	eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	eth_link.link_duplex = link->full_duplex;
 
 	/* Print link info */
@@ -117,17 +117,17 @@ cnxk_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
 		return 0;
 
 	if (roc_nix_is_lbk(&dev->nix)) {
-		link.link_status = ETH_LINK_UP;
-		link.link_speed = ETH_SPEED_NUM_100G;
-		link.link_autoneg = ETH_LINK_FIXED;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_status = RTE_ETH_LINK_UP;
+		link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	} else {
 		rc = roc_nix_mac_link_info_get(&dev->nix, &info);
 		if (rc)
 			return rc;
 		link.link_status = info.status;
 		link.link_speed = info.speed;
-		link.link_autoneg = ETH_LINK_AUTONEG;
+		link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 		if (info.full_duplex)
 			link.link_duplex = info.full_duplex;
 	}
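A reader-side sketch of the renamed link constants as an application consumes them (illustrative only):

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Print the current link state using the RTE_ETH_LINK_* names. */
    static void
    print_link(uint16_t port_id)
    {
        struct rte_eth_link link;

        if (rte_eth_link_get_nowait(port_id, &link) != 0)
            return;
        printf("port %u: %s, %u Mbps, %s-duplex\n", port_id,
               link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
               link.link_speed,
               link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
               "full" : "half");
    }
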
diff --git a/drivers/net/cnxk/cnxk_ptp.c b/drivers/net/cnxk/cnxk_ptp.c
index 449489f599c4..139fea256ccd 100644
--- a/drivers/net/cnxk/cnxk_ptp.c
+++ b/drivers/net/cnxk/cnxk_ptp.c
@@ -227,7 +227,7 @@ cnxk_nix_timesync_enable(struct rte_eth_dev *eth_dev)
 	dev->rx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
 	dev->tx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
 
-	dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+	dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	rc = roc_nix_ptp_rx_ena_dis(nix, true);
 	if (!rc) {
@@ -257,7 +257,7 @@ int
 cnxk_nix_timesync_disable(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
-	uint64_t rx_offloads = DEV_RX_OFFLOAD_TIMESTAMP;
+	uint64_t rx_offloads = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	struct roc_nix *nix = &dev->nix;
 	int rc = 0;
 
diff --git a/drivers/net/cnxk/cnxk_rte_flow.c b/drivers/net/cnxk/cnxk_rte_flow.c
index dfc33ba8654a..b08d7c34faa9 100644
--- a/drivers/net/cnxk/cnxk_rte_flow.c
+++ b/drivers/net/cnxk/cnxk_rte_flow.c
@@ -69,7 +69,7 @@ npc_rss_action_validate(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		plt_err("multi-queue mode is disabled");
 		return -ENOTSUP;
 	}
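The RSS action validation above requires RTE_ETH_MQ_RX_RSS; for reference, a port configuration satisfying it looks roughly like this (sketch; the hash fields are arbitrary examples):

    #include <rte_ethdev.h>

    /* RSS-enabled port configuration with the renamed macros. */
    static const struct rte_eth_conf port_conf = {
        .rxmode = {
            .mq_mode = RTE_ETH_MQ_RX_RSS,
        },
        .rx_adv_conf = {
            .rss_conf = {
                .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
                          RTE_ETH_RSS_TCP,
            },
        },
    };
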
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 37625c5bfb69..dbcbfaf68a30 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -28,31 +28,31 @@
 #define CXGBE_LINK_STATUS_POLL_CNT 100 /* Max number of times to poll */
 
 #define CXGBE_DEFAULT_RSS_KEY_LEN     40 /* 320-bits */
-#define CXGBE_RSS_HF_IPV4_MASK (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
-				ETH_RSS_NONFRAG_IPV4_OTHER)
-#define CXGBE_RSS_HF_IPV6_MASK (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
-				ETH_RSS_NONFRAG_IPV6_OTHER | \
-				ETH_RSS_IPV6_EX)
-#define CXGBE_RSS_HF_TCP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_TCP | \
-				    ETH_RSS_IPV6_TCP_EX)
-#define CXGBE_RSS_HF_UDP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_UDP | \
-				    ETH_RSS_IPV6_UDP_EX)
-#define CXGBE_RSS_HF_ALL (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP)
+#define CXGBE_RSS_HF_IPV4_MASK (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+				RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
+#define CXGBE_RSS_HF_IPV6_MASK (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
+				RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+				RTE_ETH_RSS_IPV6_EX)
+#define CXGBE_RSS_HF_TCP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+				    RTE_ETH_RSS_IPV6_TCP_EX)
+#define CXGBE_RSS_HF_UDP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+				    RTE_ETH_RSS_IPV6_UDP_EX)
+#define CXGBE_RSS_HF_ALL (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP)
 
 /* Tx/Rx Offloads supported */
-#define CXGBE_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT | \
-			   DEV_TX_OFFLOAD_IPV4_CKSUM | \
-			   DEV_TX_OFFLOAD_UDP_CKSUM | \
-			   DEV_TX_OFFLOAD_TCP_CKSUM | \
-			   DEV_TX_OFFLOAD_TCP_TSO | \
-			   DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define CXGBE_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP | \
-			   DEV_RX_OFFLOAD_IPV4_CKSUM | \
-			   DEV_RX_OFFLOAD_UDP_CKSUM | \
-			   DEV_RX_OFFLOAD_TCP_CKSUM | \
-			   DEV_RX_OFFLOAD_SCATTER | \
-			   DEV_RX_OFFLOAD_RSS_HASH)
+#define CXGBE_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+			   RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+			   RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+			   RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+			   RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define CXGBE_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+			   RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+			   RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+			   RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+			   RTE_ETH_RX_OFFLOAD_SCATTER | \
+			   RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 /* Devargs filtermode and filtermask representation */
 enum cxgbe_devargs_filter_mode_flags {
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index f77b2976002c..4758321778d1 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -231,9 +231,9 @@ int cxgbe_dev_link_update(struct rte_eth_dev *eth_dev,
 	}
 
 	new_link.link_status = cxgbe_force_linkup(adapter) ?
-			       ETH_LINK_UP : pi->link_cfg.link_ok;
+			       RTE_ETH_LINK_UP : pi->link_cfg.link_ok;
 	new_link.link_autoneg = (lc->link_caps & FW_PORT_CAP32_ANEG) ? 1 : 0;
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	new_link.link_speed = t4_fwcap_to_speed(lc->link_caps);
 
 	return rte_eth_linkstatus_set(eth_dev, &new_link);
@@ -374,7 +374,7 @@ int cxgbe_dev_start(struct rte_eth_dev *eth_dev)
 			goto out;
 	}
 
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		eth_dev->data->scattered_rx = 1;
 	else
 		eth_dev->data->scattered_rx = 0;
@@ -438,9 +438,9 @@ int cxgbe_dev_configure(struct rte_eth_dev *eth_dev)
 
 	CXGBE_FUNC_TRACE();
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (!(adapter->flags & FW_QUEUE_BOUND)) {
 		err = cxgbe_setup_sge_fwevtq(adapter);
@@ -1080,13 +1080,13 @@ static int cxgbe_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 		rx_pause = 1;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	return 0;
 }
 
@@ -1099,12 +1099,12 @@ static int cxgbe_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	u8 tx_pause = 0, rx_pause = 0;
 	int ret;
 
-	if (fc_conf->mode == RTE_FC_FULL) {
+	if (fc_conf->mode == RTE_ETH_FC_FULL) {
 		tx_pause = 1;
 		rx_pause = 1;
-	} else if (fc_conf->mode == RTE_FC_TX_PAUSE) {
+	} else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE) {
 		tx_pause = 1;
-	} else if (fc_conf->mode == RTE_FC_RX_PAUSE) {
+	} else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE) {
 		rx_pause = 1;
 	}
 
@@ -1200,9 +1200,9 @@ static int cxgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 		rss_hf |= CXGBE_RSS_HF_IPV6_MASK;
 
 	if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN) {
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		if (flags & F_FW_RSS_VI_CONFIG_CMD_UDPEN)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	}
 
 	if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN)
@@ -1246,8 +1246,8 @@ static int cxgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 
 	rte_memcpy(rss, pi->rss, pi->rss_size * sizeof(u16));
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (!(reta_conf[idx].mask & (1ULL << shift)))
 			continue;
 
@@ -1277,8 +1277,8 @@ static int cxgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (!(reta_conf[idx].mask & (1ULL << shift)))
 			continue;
 
@@ -1479,7 +1479,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
 
 	if (lc->pcaps & FW_PORT_CAP32_SPEED_100G) {
 		if (capa_arr) {
-			capa_arr[num].speed = ETH_SPEED_NUM_100G;
+			capa_arr[num].speed = RTE_ETH_SPEED_NUM_100G;
 			capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(RS);
 		}
@@ -1488,7 +1488,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
 
 	if (lc->pcaps & FW_PORT_CAP32_SPEED_50G) {
 		if (capa_arr) {
-			capa_arr[num].speed = ETH_SPEED_NUM_50G;
+			capa_arr[num].speed = RTE_ETH_SPEED_NUM_50G;
 			capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(BASER);
 		}
@@ -1497,7 +1497,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
 
 	if (lc->pcaps & FW_PORT_CAP32_SPEED_25G) {
 		if (capa_arr) {
-			capa_arr[num].speed = ETH_SPEED_NUM_25G;
+			capa_arr[num].speed = RTE_ETH_SPEED_NUM_25G;
 			capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(RS);
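The rx_pause/tx_pause decoding in cxgbe_flow_ctrl_get() above maps one-for-one onto the renamed enumerators; written as a standalone helper (sketch only):

    #include <stdbool.h>
    #include <rte_ethdev.h>

    /* Map pause-frame directions to an rte_eth_fc_mode value. */
    static enum rte_eth_fc_mode
    pause_to_fc_mode(bool rx_pause, bool tx_pause)
    {
        if (rx_pause && tx_pause)
            return RTE_ETH_FC_FULL;
        if (rx_pause)
            return RTE_ETH_FC_RX_PAUSE;
        if (tx_pause)
            return RTE_ETH_FC_TX_PAUSE;
        return RTE_ETH_FC_NONE;
    }
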
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 91d6bb9bbcb0..f1ac32270961 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -1670,7 +1670,7 @@ int cxgbe_link_start(struct port_info *pi)
 	 * that step explicitly.
 	 */
 	ret = t4_set_rxmode(adapter, adapter->mbox, pi->viid, mtu, -1, -1, -1,
-			    !!(conf_offloads & DEV_RX_OFFLOAD_VLAN_STRIP),
+			    !!(conf_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP),
 			    true);
 	if (ret == 0) {
 		ret = cxgbe_mpstcam_modify(pi, (int)pi->xact_addr_filt,
@@ -1694,7 +1694,7 @@ int cxgbe_link_start(struct port_info *pi)
 	}
 
 	if (ret == 0 && cxgbe_force_linkup(adapter))
-		pi->eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+		pi->eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return ret;
 }
 
@@ -1725,10 +1725,10 @@ int cxgbe_write_rss_conf(const struct port_info *pi, uint64_t rss_hf)
 	if (rss_hf & CXGBE_RSS_HF_IPV4_MASK)
 		flags |= F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN |
 			 F_FW_RSS_VI_CONFIG_CMD_UDPEN;
 
@@ -1865,7 +1865,7 @@ static void fw_caps_to_speed_caps(enum fw_port_type port_type,
 {
 #define SET_SPEED(__speed_name) \
 	do { \
-		*speed_caps |= ETH_LINK_ ## __speed_name; \
+		*speed_caps |= RTE_ETH_LINK_ ## __speed_name; \
 	} while (0)
 
 #define FW_CAPS_TO_SPEED(__fw_name) \
@@ -1952,7 +1952,7 @@ void cxgbe_get_speed_caps(struct port_info *pi, u32 *speed_caps)
 			      speed_caps);
 
 	if (!(pi->link_cfg.pcaps & FW_PORT_CAP32_ANEG))
-		*speed_caps |= ETH_LINK_SPEED_FIXED;
+		*speed_caps |= RTE_ETH_LINK_SPEED_FIXED;
 }
 
 /**
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index c79cdb8d8ad7..89ea7dd47c0b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -54,29 +54,29 @@
 
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
-		DEV_RX_OFFLOAD_SCATTER;
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 /* Rx offloads which cannot be disabled */
 static uint64_t dev_rx_offloads_nodis =
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 /* Supported Tx offloads */
 static uint64_t dev_tx_offloads_sup =
-		DEV_TX_OFFLOAD_MT_LOCKFREE |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 /* Tx offloads which cannot be disabled */
 static uint64_t dev_tx_offloads_nodis =
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 /* Keep track of whether QMAN and BMAN have been globally initialized */
 static int is_global_init;
@@ -238,7 +238,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 
 	fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		DPAA_PMD_DEBUG("enabling scatter mode");
 		fman_if_set_sg(dev->process_private, 1);
 		dev->data->scattered_rx = 1;
@@ -283,43 +283,43 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 
 	/* Configure link only if link is UP*/
 	if (link->link_status) {
-		if (eth_conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
+		if (eth_conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 			/* Start autoneg only if link is not in autoneg mode */
 			if (!link->link_autoneg)
 				dpaa_restart_link_autoneg(__fif->node_name);
-		} else if (eth_conf->link_speeds & ETH_LINK_SPEED_FIXED) {
-			switch (eth_conf->link_speeds & ~ETH_LINK_SPEED_FIXED) {
-			case ETH_LINK_SPEED_10M_HD:
-				speed = ETH_SPEED_NUM_10M;
-				duplex = ETH_LINK_HALF_DUPLEX;
+		} else if (eth_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+			switch (eth_conf->link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+			case RTE_ETH_LINK_SPEED_10M_HD:
+				speed = RTE_ETH_SPEED_NUM_10M;
+				duplex = RTE_ETH_LINK_HALF_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_10M:
-				speed = ETH_SPEED_NUM_10M;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_10M:
+				speed = RTE_ETH_SPEED_NUM_10M;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_100M_HD:
-				speed = ETH_SPEED_NUM_100M;
-				duplex = ETH_LINK_HALF_DUPLEX;
+			case RTE_ETH_LINK_SPEED_100M_HD:
+				speed = RTE_ETH_SPEED_NUM_100M;
+				duplex = RTE_ETH_LINK_HALF_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_100M:
-				speed = ETH_SPEED_NUM_100M;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_100M:
+				speed = RTE_ETH_SPEED_NUM_100M;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_1G:
-				speed = ETH_SPEED_NUM_1G;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_1G:
+				speed = RTE_ETH_SPEED_NUM_1G;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_2_5G:
-				speed = ETH_SPEED_NUM_2_5G;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_2_5G:
+				speed = RTE_ETH_SPEED_NUM_2_5G;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_10G:
-				speed = ETH_SPEED_NUM_10G;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_10G:
+				speed = RTE_ETH_SPEED_NUM_10G;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
 			default:
-				speed = ETH_SPEED_NUM_NONE;
-				duplex = ETH_LINK_FULL_DUPLEX;
+				speed = RTE_ETH_SPEED_NUM_NONE;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
 			}
 			/* Set link speed */
@@ -535,30 +535,30 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_mac_addrs = DPAA_MAX_MAC_FILTER;
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
-	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
 
 	if (fif->mac_type == fman_mac_1g) {
-		dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
-					| ETH_LINK_SPEED_10M
-					| ETH_LINK_SPEED_100M_HD
-					| ETH_LINK_SPEED_100M
-					| ETH_LINK_SPEED_1G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+					| RTE_ETH_LINK_SPEED_10M
+					| RTE_ETH_LINK_SPEED_100M_HD
+					| RTE_ETH_LINK_SPEED_100M
+					| RTE_ETH_LINK_SPEED_1G;
 	} else if (fif->mac_type == fman_mac_2_5g) {
-		dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
-					| ETH_LINK_SPEED_10M
-					| ETH_LINK_SPEED_100M_HD
-					| ETH_LINK_SPEED_100M
-					| ETH_LINK_SPEED_1G
-					| ETH_LINK_SPEED_2_5G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+					| RTE_ETH_LINK_SPEED_10M
+					| RTE_ETH_LINK_SPEED_100M_HD
+					| RTE_ETH_LINK_SPEED_100M
+					| RTE_ETH_LINK_SPEED_1G
+					| RTE_ETH_LINK_SPEED_2_5G;
 	} else if (fif->mac_type == fman_mac_10g) {
-		dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
-					| ETH_LINK_SPEED_10M
-					| ETH_LINK_SPEED_100M_HD
-					| ETH_LINK_SPEED_100M
-					| ETH_LINK_SPEED_1G
-					| ETH_LINK_SPEED_2_5G
-					| ETH_LINK_SPEED_10G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+					| RTE_ETH_LINK_SPEED_10M
+					| RTE_ETH_LINK_SPEED_100M_HD
+					| RTE_ETH_LINK_SPEED_100M
+					| RTE_ETH_LINK_SPEED_1G
+					| RTE_ETH_LINK_SPEED_2_5G
+					| RTE_ETH_LINK_SPEED_10G;
 	} else {
 		DPAA_PMD_ERR("invalid link_speed: %s, %d",
 			     dpaa_intf->name, fif->mac_type);
@@ -591,12 +591,12 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} rx_offload_map[] = {
-			{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
-			{DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
-			{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
-			{DEV_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
-			{DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+			{RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+			{RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+			{RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+			{RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+			{RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
 	};
 
 	/* Update Rx offload info */
@@ -623,14 +623,14 @@ dpaa_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} tx_offload_map[] = {
-			{DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
-			{DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
-			{DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
-			{DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
-			{DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
-			{DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
-			{DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+			{RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+			{RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+			{RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+			{RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+			{RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+			{RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
 	};
 
 	/* Update Tx offload info */
@@ -664,7 +664,7 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 			ret = dpaa_get_link_status(__fif->node_name, link);
 			if (ret)
 				return ret;
-			if (link->link_status == ETH_LINK_DOWN &&
+			if (link->link_status == RTE_ETH_LINK_DOWN &&
 			    wait_to_complete)
 				rte_delay_ms(CHECK_INTERVAL);
 			else
@@ -675,15 +675,15 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	}
 
 	if (ioctl_version < 2) {
-		link->link_duplex = ETH_LINK_FULL_DUPLEX;
-		link->link_autoneg = ETH_LINK_AUTONEG;
+		link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+		link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 		if (fif->mac_type == fman_mac_1g)
-			link->link_speed = ETH_SPEED_NUM_1G;
+			link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		else if (fif->mac_type == fman_mac_2_5g)
-			link->link_speed = ETH_SPEED_NUM_2_5G;
+			link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		else if (fif->mac_type == fman_mac_10g)
-			link->link_speed = ETH_SPEED_NUM_10G;
+			link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		else
 			DPAA_PMD_ERR("invalid link_speed: %s, %d",
 				     dpaa_intf->name, fif->mac_type);
@@ -962,7 +962,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (max_rx_pktlen <= buffsz) {
 		;
 	} else if (dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_SCATTER) {
+			RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
 			DPAA_PMD_ERR("Maximum Rx packet size %d too big to fit "
 				"MaxSGlist %d",
@@ -1268,7 +1268,7 @@ static int dpaa_link_down(struct rte_eth_dev *dev)
 	__fif = container_of(fif, struct __fman_if, __if);
 
 	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
-		dpaa_update_link_status(__fif->node_name, ETH_LINK_DOWN);
+		dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_DOWN);
 	else
 		return dpaa_eth_dev_stop(dev);
 	return 0;
@@ -1284,7 +1284,7 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
 	__fif = container_of(fif, struct __fman_if, __if);
 
 	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
-		dpaa_update_link_status(__fif->node_name, ETH_LINK_UP);
+		dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_UP);
 	else
 		dpaa_eth_dev_start(dev);
 	return 0;
@@ -1314,10 +1314,10 @@ dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	if (fc_conf->mode == RTE_FC_NONE) {
+	if (fc_conf->mode == RTE_ETH_FC_NONE) {
 		return 0;
-	} else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
-		 fc_conf->mode == RTE_FC_FULL) {
+	} else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE ||
+		 fc_conf->mode == RTE_ETH_FC_FULL) {
 		fman_if_set_fc_threshold(dev->process_private,
 					 fc_conf->high_water,
 					 fc_conf->low_water,
@@ -1361,11 +1361,11 @@ dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
 	}
 	ret = fman_if_get_fc_threshold(dev->process_private);
 	if (ret) {
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		fc_conf->pause_time =
 			fman_if_get_fc_quanta(dev->process_private);
 	} else {
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return 0;
@@ -1626,10 +1626,10 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf,
 	fc_conf = dpaa_intf->fc_conf;
 	ret = fman_if_get_fc_threshold(fman_intf);
 	if (ret) {
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		fc_conf->pause_time = fman_if_get_fc_quanta(fman_intf);
 	} else {
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return 0;
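The dpaa configure path above decodes a fixed-speed request, extracting the speed bits with the RTE_ETH_LINK_SPEED_FIXED flag masked off. From the application side the request is simply (sketch):

    #include <rte_ethdev.h>

    /* Request a fixed 1G link instead of autonegotiation. */
    static void
    request_fixed_1g(struct rte_eth_conf *conf)
    {
        conf->link_speeds = RTE_ETH_LINK_SPEED_FIXED |
                            RTE_ETH_LINK_SPEED_1G;
    }
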
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b5728e09c29f..c868e9d5bd9b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -74,11 +74,11 @@
 #define DPAA_DEBUG_FQ_TX_ERROR   1
 
 #define DPAA_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_L2_PAYLOAD | \
-	ETH_RSS_IP | \
-	ETH_RSS_UDP | \
-	ETH_RSS_TCP | \
-	ETH_RSS_SCTP)
+	RTE_ETH_RSS_L2_PAYLOAD | \
+	RTE_ETH_RSS_IP | \
+	RTE_ETH_RSS_UDP | \
+	RTE_ETH_RSS_TCP | \
+	RTE_ETH_RSS_SCTP)
 
 #define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
 		PKT_TX_IP_CKSUM |                \
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index c5b5ec869519..1ccd03602790 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -394,7 +394,7 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 		if (req_dist_set % 2 != 0) {
 			dist_field = 1U << loop;
 			switch (dist_field) {
-			case ETH_RSS_L2_PAYLOAD:
+			case RTE_ETH_RSS_L2_PAYLOAD:
 
 				if (l2_configured)
 					break;
@@ -404,9 +404,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_ETH;
 				break;
 
-			case ETH_RSS_IPV4:
-			case ETH_RSS_FRAG_IPV4:
-			case ETH_RSS_NONFRAG_IPV4_OTHER:
+			case RTE_ETH_RSS_IPV4:
+			case RTE_ETH_RSS_FRAG_IPV4:
+			case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
 
 				if (ipv4_configured)
 					break;
@@ -415,10 +415,10 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_IPV4;
 				break;
 
-			case ETH_RSS_IPV6:
-			case ETH_RSS_FRAG_IPV6:
-			case ETH_RSS_NONFRAG_IPV6_OTHER:
-			case ETH_RSS_IPV6_EX:
+			case RTE_ETH_RSS_IPV6:
+			case RTE_ETH_RSS_FRAG_IPV6:
+			case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+			case RTE_ETH_RSS_IPV6_EX:
 
 				if (ipv6_configured)
 					break;
@@ -427,9 +427,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_IPV6;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_TCP:
-			case ETH_RSS_NONFRAG_IPV6_TCP:
-			case ETH_RSS_IPV6_TCP_EX:
+			case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+			case RTE_ETH_RSS_IPV6_TCP_EX:
 
 				if (tcp_configured)
 					break;
@@ -438,9 +438,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_TCP;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_UDP:
-			case ETH_RSS_NONFRAG_IPV6_UDP:
-			case ETH_RSS_IPV6_UDP_EX:
+			case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+			case RTE_ETH_RSS_IPV6_UDP_EX:
 
 				if (udp_configured)
 					break;
@@ -449,8 +449,8 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_UDP;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_SCTP:
-			case ETH_RSS_NONFRAG_IPV6_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
 
 				if (sctp_configured)
 					break;
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 08f49af7685d..3170694841df 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -220,9 +220,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
 		if (req_dist_set % 2 != 0) {
 			dist_field = 1ULL << loop;
 			switch (dist_field) {
-			case ETH_RSS_L2_PAYLOAD:
-			case ETH_RSS_ETH:
-
+			case RTE_ETH_RSS_L2_PAYLOAD:
+			case RTE_ETH_RSS_ETH:
 				if (l2_configured)
 					break;
 				l2_configured = 1;
@@ -238,7 +237,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_PPPOE:
+			case RTE_ETH_RSS_PPPOE:
 				if (pppoe_configured)
 					break;
 				kg_cfg->extracts[i].extract.from_hdr.prot =
@@ -252,7 +251,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_ESP:
+			case RTE_ETH_RSS_ESP:
 				if (esp_configured)
 					break;
 				esp_configured = 1;
@@ -268,7 +267,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_AH:
+			case RTE_ETH_RSS_AH:
 				if (ah_configured)
 					break;
 				ah_configured = 1;
@@ -284,8 +283,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_C_VLAN:
-			case ETH_RSS_S_VLAN:
+			case RTE_ETH_RSS_C_VLAN:
+			case RTE_ETH_RSS_S_VLAN:
 				if (vlan_configured)
 					break;
 				vlan_configured = 1;
@@ -301,7 +300,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_MPLS:
+			case RTE_ETH_RSS_MPLS:
 
 				if (mpls_configured)
 					break;
@@ -338,13 +337,13 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_IPV4:
-			case ETH_RSS_FRAG_IPV4:
-			case ETH_RSS_NONFRAG_IPV4_OTHER:
-			case ETH_RSS_IPV6:
-			case ETH_RSS_FRAG_IPV6:
-			case ETH_RSS_NONFRAG_IPV6_OTHER:
-			case ETH_RSS_IPV6_EX:
+			case RTE_ETH_RSS_IPV4:
+			case RTE_ETH_RSS_FRAG_IPV4:
+			case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
+			case RTE_ETH_RSS_IPV6:
+			case RTE_ETH_RSS_FRAG_IPV6:
+			case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+			case RTE_ETH_RSS_IPV6_EX:
 
 				if (l3_configured)
 					break;
@@ -382,12 +381,12 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 			break;
 
-			case ETH_RSS_NONFRAG_IPV4_TCP:
-			case ETH_RSS_NONFRAG_IPV6_TCP:
-			case ETH_RSS_NONFRAG_IPV4_UDP:
-			case ETH_RSS_NONFRAG_IPV6_UDP:
-			case ETH_RSS_IPV6_TCP_EX:
-			case ETH_RSS_IPV6_UDP_EX:
+			case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+			case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+			case RTE_ETH_RSS_IPV6_TCP_EX:
+			case RTE_ETH_RSS_IPV6_UDP_EX:
 
 				if (l4_configured)
 					break;
@@ -414,8 +413,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_SCTP:
-			case ETH_RSS_NONFRAG_IPV6_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
 
 				if (sctp_configured)
 					break;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index a0270e78520e..59e728577f53 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -38,33 +38,33 @@
 
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
-		DEV_RX_OFFLOAD_CHECKSUM |
-		DEV_RX_OFFLOAD_SCTP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_CHECKSUM |
+		RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 /* Rx offloads which cannot be disabled */
 static uint64_t dev_rx_offloads_nodis =
-		DEV_RX_OFFLOAD_RSS_HASH |
-		DEV_RX_OFFLOAD_SCATTER;
+		RTE_ETH_RX_OFFLOAD_RSS_HASH |
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 /* Supported Tx offloads */
 static uint64_t dev_tx_offloads_sup =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_MT_LOCKFREE |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 /* Tx offloads which cannot be disabled */
 static uint64_t dev_tx_offloads_nodis =
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 /* enable timestamp in mbuf */
 bool dpaa2_enable_ts[RTE_MAX_ETHPORTS];
@@ -142,7 +142,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* VLAN Filter not avaialble */
 		if (!priv->max_vlan_filters) {
 			DPAA2_PMD_INFO("VLAN filter not available");
@@ -150,7 +150,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		}
 
 		if (dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_VLAN_FILTER)
+			RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ret = dpni_enable_vlan_filter(dpni, CMD_PRI_LOW,
 						      priv->token, true);
 		else
@@ -251,13 +251,13 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 					dev_rx_offloads_nodis;
 	dev_info->tx_offload_capa = dev_tx_offloads_sup |
 					dev_tx_offloads_nodis;
-	dev_info->speed_capa = ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_2_5G |
-			ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_2_5G |
+			RTE_ETH_LINK_SPEED_10G;
 
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
-	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	dev_info->flow_type_rss_offloads = DPAA2_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxportconf.burst_size = dpaa2_dqrr_size;
@@ -270,10 +270,10 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->default_rxportconf.ring_size = DPAA2_RX_DEFAULT_NBDESC;
 
 	if (dpaa2_svr_family == SVR_LX2160A) {
-		dev_info->speed_capa |= ETH_LINK_SPEED_25G |
-				ETH_LINK_SPEED_40G |
-				ETH_LINK_SPEED_50G |
-				ETH_LINK_SPEED_100G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G |
+				RTE_ETH_LINK_SPEED_40G |
+				RTE_ETH_LINK_SPEED_50G |
+				RTE_ETH_LINK_SPEED_100G;
 	}
 
 	return 0;
@@ -291,15 +291,15 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} rx_offload_map[] = {
-			{DEV_RX_OFFLOAD_CHECKSUM, " Checksum,"},
-			{DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
-			{DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
-			{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
-			{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
-			{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
-			{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
-			{DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
+			{RTE_ETH_RX_OFFLOAD_CHECKSUM, " Checksum,"},
+			{RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+			{RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
+			{RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
+			{RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
+			{RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+			{RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"},
+			{RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"}
 	};
 
 	/* Update Rx offload info */
@@ -326,15 +326,15 @@ dpaa2_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} tx_offload_map[] = {
-			{DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
-			{DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
-			{DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
-			{DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
-			{DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
-			{DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
-			{DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
-			{DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+			{RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+			{RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+			{RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+			{RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+			{RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+			{RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+			{RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
 	};
 
 	/* Update Tx offload info */
@@ -573,7 +573,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		return -1;
 	}
 
-	if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (eth_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) {
 			ret = dpaa2_setup_flow_dist(dev,
 					eth_conf->rx_adv_conf.rss_conf.rss_hf,
@@ -587,12 +587,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		rx_l3_csum_offload = true;
 
-	if ((rx_offloads & DEV_RX_OFFLOAD_UDP_CKSUM) ||
-		(rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) ||
-		(rx_offloads & DEV_RX_OFFLOAD_SCTP_CKSUM))
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) ||
+		(rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) ||
+		(rx_offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM))
 		rx_l4_csum_offload = true;
 
 	ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -610,7 +610,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 	}
 
 #if !defined(RTE_LIBRTE_IEEE1588)
-	if (rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 #endif
 	{
 		ret = rte_mbuf_dyn_rx_timestamp_register(
@@ -623,12 +623,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		dpaa2_enable_ts[dev->data->port_id] = true;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		tx_l3_csum_offload = true;
 
-	if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ||
-		(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
-		(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ||
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM))
 		tx_l4_csum_offload = true;
 
 	ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -660,8 +660,8 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
-		dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+		dpaa2_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
 
 	dpaa2_tm_init(dev);
 
@@ -1856,7 +1856,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 			DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
 			return -1;
 		}
-		if (state.up == ETH_LINK_DOWN &&
+		if (state.up == RTE_ETH_LINK_DOWN &&
 		    wait_to_complete)
 			rte_delay_ms(CHECK_INTERVAL);
 		else
@@ -1868,9 +1868,9 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 	link.link_speed = state.rate;
 
 	if (state.options & DPNI_LINK_OPT_HALF_DUPLEX)
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	else
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	ret = rte_eth_linkstatus_set(dev, &link);
 	if (ret == -1)
@@ -2031,9 +2031,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *	No TX side flow control (send Pause frame disabled)
 		 */
 		if (!(state.options & DPNI_LINK_OPT_ASYM_PAUSE))
-			fc_conf->mode = RTE_FC_FULL;
+			fc_conf->mode = RTE_ETH_FC_FULL;
 		else
-			fc_conf->mode = RTE_FC_RX_PAUSE;
+			fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	} else {
 		/* DPNI_LINK_OPT_PAUSE not set
 		 *  if ASYM_PAUSE set,
@@ -2043,9 +2043,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *	Flow control disabled
 		 */
 		if (state.options & DPNI_LINK_OPT_ASYM_PAUSE)
-			fc_conf->mode = RTE_FC_TX_PAUSE;
+			fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		else
-			fc_conf->mode = RTE_FC_NONE;
+			fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return ret;
@@ -2089,14 +2089,14 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	/* update cfg with fc_conf */
 	switch (fc_conf->mode) {
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		/* Full flow control;
 		 * OPT_PAUSE set, ASYM_PAUSE not set
 		 */
 		cfg.options |= DPNI_LINK_OPT_PAUSE;
 		cfg.options &= ~DPNI_LINK_OPT_ASYM_PAUSE;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		/* Enable RX flow control
 		 * OPT_PAUSE not set;
 		 * ASYM_PAUSE set;
@@ -2104,7 +2104,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
 		cfg.options &= ~DPNI_LINK_OPT_PAUSE;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		/* Enable TX Flow control
 		 * OPT_PAUSE set
 		 * ASYM_PAUSE set
@@ -2112,7 +2112,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		cfg.options |= DPNI_LINK_OPT_PAUSE;
 		cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
 		break;
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		/* Disable Flow control
 		 * OPT_PAUSE not set
 		 * ASYM_PAUSE not set
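
For reference, the RTE_FC_* to RTE_ETH_FC_* mapping in the hunks above is
one-to-one. A minimal application-side sketch with the renamed constants —
the helper name is invented, the rte_eth_dev_flow_ctrl_*() calls are the
standard ethdev API:

#include <rte_ethdev.h>

/* Hypothetical helper: request full (Rx + Tx pause) flow control. */
static int set_full_flow_control(uint16_t port_id)
{
	struct rte_eth_fc_conf fc_conf;
	int ret;

	ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
	if (ret != 0)
		return ret;
	fc_conf.mode = RTE_ETH_FC_FULL;	/* formerly RTE_FC_FULL */
	return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}
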
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index fdc62ec30d22..c5e9267bf04d 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -65,17 +65,17 @@
 #define DPAA2_TX_CONF_ENABLE	0x08
 
 #define DPAA2_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_L2_PAYLOAD | \
-	ETH_RSS_IP | \
-	ETH_RSS_UDP | \
-	ETH_RSS_TCP | \
-	ETH_RSS_SCTP | \
-	ETH_RSS_MPLS | \
-	ETH_RSS_C_VLAN | \
-	ETH_RSS_S_VLAN | \
-	ETH_RSS_ESP | \
-	ETH_RSS_AH | \
-	ETH_RSS_PPPOE)
+	RTE_ETH_RSS_L2_PAYLOAD | \
+	RTE_ETH_RSS_IP | \
+	RTE_ETH_RSS_UDP | \
+	RTE_ETH_RSS_TCP | \
+	RTE_ETH_RSS_SCTP | \
+	RTE_ETH_RSS_MPLS | \
+	RTE_ETH_RSS_C_VLAN | \
+	RTE_ETH_RSS_S_VLAN | \
+	RTE_ETH_RSS_ESP | \
+	RTE_ETH_RSS_AH | \
+	RTE_ETH_RSS_PPPOE)
 
 /* LX2 FRC Parsed values (Little Endian) */
 #define DPAA2_PKT_TYPE_ETHER		0x0060
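
The RSS hash-type flags renamed above keep their semantics. A hedged sketch
of an application requesting a subset of them at configure time (the helper
name and the single queue pair are made up; the structures and
rte_eth_dev_configure() are standard ethdev):

#include <rte_ethdev.h>

/* Hypothetical: enable RSS on IP and UDP flows with one Rx/Tx queue pair. */
static int enable_ip_udp_rss(uint16_t port_id)
{
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
		.rx_adv_conf.rss_conf = {
			/* Formerly ETH_RSS_IP | ETH_RSS_UDP; ethdev rejects
			 * hash types the PMD does not advertise in
			 * flow_type_rss_offloads. */
			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP,
		},
	};

	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}
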
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f40369e2c3f9..7c77243b5d1a 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -773,7 +773,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 #endif
 
 		if (eth_data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_VLAN_STRIP)
+				RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			rte_vlan_strip(bufs[num_rx]);
 
 		dq_storage++;
@@ -987,7 +987,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 							eth_data->port_id);
 
 		if (eth_data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_VLAN_STRIP) {
+				RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 			rte_vlan_strip(bufs[num_rx]);
 		}
 
@@ -1230,7 +1230,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 					if (unlikely(((*bufs)->ol_flags
 						& PKT_TX_VLAN_PKT) ||
 						(eth_data->dev_conf.txmode.offloads
-						& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+						& RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
 						ret = rte_vlan_insert(bufs);
 						if (ret)
 							goto send_n_return;
@@ -1273,7 +1273,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 			if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN_PKT) ||
 				(eth_data->dev_conf.txmode.offloads
-				& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+				& RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
 				int ret = rte_vlan_insert(bufs);
 				if (ret)
 					goto send_n_return;
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 7d5d6377859a..a548ae2ccb2c 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -82,15 +82,15 @@
 #define E1000_FTQF_QUEUE_ENABLE          0x00000100
 
 #define IGB_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 /*
  * The overhead from MTU to max frame size.
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 73152dec6ed1..9da477e59def 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -597,8 +597,8 @@ eth_em_start(struct rte_eth_dev *dev)
 
 	e1000_clear_hw_cntrs_base_generic(hw);
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
-			ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+			RTE_ETH_VLAN_EXTEND_MASK;
 	ret = eth_em_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Unable to update vlan offload");
@@ -611,39 +611,39 @@ eth_em_start(struct rte_eth_dev *dev)
 
 	/* Setup link speed and duplex */
 	speeds = &dev->data->dev_conf.link_speeds;
-	if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
 		hw->mac.autoneg = 1;
 	} else {
 		num_speeds = 0;
-		autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+		autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 		/* Reset */
 		hw->phy.autoneg_advertised = 0;
 
-		if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+		if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
 			num_speeds = -1;
 			goto error_invalid_config;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_1G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_1G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
 			num_speeds++;
 		}
@@ -1102,9 +1102,9 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.nb_mtu_seg_max = EM_TX_MAX_MTU_SEG,
 	};
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G;
 
 	/* Preferred queue parameters */
 	dev_info->default_rxportconf.nb_queues = 1;
@@ -1162,17 +1162,17 @@ eth_em_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		uint16_t duplex, speed;
 		hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
 		link.link_duplex = (duplex == FULL_DUPLEX) ?
-				ETH_LINK_FULL_DUPLEX :
-				ETH_LINK_HALF_DUPLEX;
+				RTE_ETH_LINK_FULL_DUPLEX :
+				RTE_ETH_LINK_HALF_DUPLEX;
 		link.link_speed = speed;
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 		link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 	} else {
-		link.link_speed = ETH_SPEED_NUM_NONE;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_status = ETH_LINK_DOWN;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -1424,15 +1424,15 @@ eth_em_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if(mask & ETH_VLAN_STRIP_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			em_vlan_hw_strip_enable(dev);
 		else
 			em_vlan_hw_strip_disable(dev);
 	}
 
-	if(mask & ETH_VLAN_FILTER_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			em_vlan_hw_filter_enable(dev);
 		else
 			em_vlan_hw_filter_disable(dev);
@@ -1601,7 +1601,7 @@ eth_em_interrupt_action(struct rte_eth_dev *dev,
 	if (link.link_status) {
 		PMD_INIT_LOG(INFO, " Port %d: Link Up - speed %u Mbps - %s",
 			     dev->data->port_id, link.link_speed,
-			     link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			     link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 			     "full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down", dev->data->port_id);
@@ -1683,13 +1683,13 @@ eth_em_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		rx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
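
Link-speed capabilities and requests use the same renamed bits. A sketch
that pins a port to 100 Mb/s only when the PMD advertises it (helper name
invented; rte_eth_dev_info_get() is standard ethdev):

#include <errno.h>
#include <rte_ethdev.h>

/* Hypothetical: fixed 100 Mb/s full link config, autonegotiation off. */
static int request_fixed_100m(uint16_t port_id, struct rte_eth_conf *conf)
{
	struct rte_eth_dev_info info;
	int ret = rte_eth_dev_info_get(port_id, &info);

	if (ret != 0)
		return ret;
	if (!(info.speed_capa & RTE_ETH_LINK_SPEED_100M))
		return -ENOTSUP;
	conf->link_speeds = RTE_ETH_LINK_SPEED_FIXED |
			    RTE_ETH_LINK_SPEED_100M;
	return 0;
}
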
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 344149c19147..648b04154c5b 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -93,7 +93,7 @@ struct em_rx_queue {
 	struct em_rx_entry *sw_ring;   /**< address of RX software ring. */
 	struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
 	struct rte_mbuf *pkt_last_seg;  /**< Last segment of current packet. */
-	uint64_t	    offloads;   /**< Offloads of DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads;   /**< Offloads of RTE_ETH_RX_OFFLOAD_* */
 	uint16_t            nb_rx_desc; /**< number of RX descriptors. */
 	uint16_t            rx_tail;    /**< current value of RDT register. */
 	uint16_t            nb_rx_hold; /**< number of held free RX desc. */
@@ -173,7 +173,7 @@ struct em_tx_queue {
 	uint8_t                wthresh;  /**< Write-back threshold register. */
 	struct em_ctx_info ctx_cache;
 	/**< Hardware context history.*/
-	uint64_t	       offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+	uint64_t	       offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
@@ -1171,11 +1171,11 @@ em_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 
 	RTE_SET_USED(dev);
 	tx_offload_capa =
-		DEV_TX_OFFLOAD_MULTI_SEGS  |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	return tx_offload_capa;
 }
@@ -1369,13 +1369,13 @@ em_get_rx_port_offloads_capa(void)
 	uint64_t rx_offload_capa;
 
 	rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP  |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_IPV4_CKSUM  |
-		DEV_RX_OFFLOAD_UDP_CKSUM   |
-		DEV_RX_OFFLOAD_TCP_CKSUM   |
-		DEV_RX_OFFLOAD_KEEP_CRC    |
-		DEV_RX_OFFLOAD_SCATTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	return rx_offload_capa;
 }
@@ -1469,7 +1469,7 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
 	rxq->queue_id = queue_idx;
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1788,7 +1788,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 *  call to configure
 		 */
-		if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -1831,7 +1831,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (!dev->data->scattered_rx)
 			PMD_INIT_LOG(DEBUG, "forcing scatter mode");
 		dev->rx_pkt_burst = eth_em_recv_scattered_pkts;
@@ -1844,7 +1844,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 	 */
 	rxcsum = E1000_READ_REG(hw, E1000_RXCSUM);
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= E1000_RXCSUM_IPOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_IPOFL;
@@ -1870,7 +1870,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 	}
 
 	/* Setup the Receive Control Register. */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
 	else
 		rctl |= E1000_RCTL_SECRC; /* Strip Ethernet CRC. */
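
Applications are expected to mask requested offloads against what the PMD
reports, which is what em_get_rx_port_offloads_capa() above now advertises
under the RTE_ETH_RX_OFFLOAD_* names. A hedged caller-side sketch (helper
invented):

#include <rte_ethdev.h>

/* Hypothetical: keep only the Rx checksum offloads this port supports. */
static uint64_t supported_rx_csum(const struct rte_eth_dev_info *info)
{
	uint64_t want = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
			RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
			RTE_ETH_RX_OFFLOAD_TCP_CKSUM;

	return want & info->rx_offload_capa;
}
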
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index dbe811a1ad2f..ae3bc4a9c201 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -1073,21 +1073,21 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
 	uint16_t nb_rx_q = dev->data->nb_rx_queues;
 	uint16_t nb_tx_q = dev->data->nb_tx_queues;
 
-	if ((rx_mq_mode & ETH_MQ_RX_DCB_FLAG) ||
-	    tx_mq_mode == ETH_MQ_TX_DCB ||
-	    tx_mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	if ((rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) ||
+	    tx_mq_mode == RTE_ETH_MQ_TX_DCB ||
+	    tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		PMD_INIT_LOG(ERR, "DCB mode is not supported.");
 		return -EINVAL;
 	}
 	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
 		/* Check multi-queue mode.
-		 * To no break software we accept ETH_MQ_RX_NONE as this might
+		 * To not break software we accept RTE_ETH_MQ_RX_NONE as this might
 		 * be used to turn off VLAN filter.
 		 */
 
-		if (rx_mq_mode == ETH_MQ_RX_NONE ||
-		    rx_mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+		if (rx_mq_mode == RTE_ETH_MQ_RX_NONE ||
+		    rx_mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
 			RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
 		} else {
 			/* Only support one queue on VFs.
@@ -1099,12 +1099,12 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
 			return -EINVAL;
 		}
 		/* TX mode is not used here, so mode might be ignored.*/
-		if (tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+		if (tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
 			/* SRIOV only works in VMDq enable mode */
 			PMD_INIT_LOG(WARNING, "SRIOV is active,"
 					" TX mode %d is not supported. "
 					" Driver will behave as %d mode.",
-					tx_mq_mode, ETH_MQ_TX_VMDQ_ONLY);
+					tx_mq_mode, RTE_ETH_MQ_TX_VMDQ_ONLY);
 		}
 
 		/* check valid queue number */
@@ -1117,17 +1117,17 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
 		/* To no break software that set invalid mode, only display
 		 * warning if invalid mode is used.
 		 */
-		if (rx_mq_mode != ETH_MQ_RX_NONE &&
-		    rx_mq_mode != ETH_MQ_RX_VMDQ_ONLY &&
-		    rx_mq_mode != ETH_MQ_RX_RSS) {
+		if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+		    rx_mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY &&
+		    rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
 			/* RSS together with VMDq not supported*/
 			PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
 				     rx_mq_mode);
 			return -EINVAL;
 		}
 
-		if (tx_mq_mode != ETH_MQ_TX_NONE &&
-		    tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+		if (tx_mq_mode != RTE_ETH_MQ_TX_NONE &&
+		    tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
 			PMD_INIT_LOG(WARNING, "TX mode %d is not supported."
 					" Due to txmode is meaningless in this"
 					" driver, just ignore.",
@@ -1146,8 +1146,8 @@ eth_igb_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multipe queue mode checking */
 	ret  = igb_check_mq_mode(dev);
@@ -1287,8 +1287,8 @@ eth_igb_start(struct rte_eth_dev *dev)
 	/*
 	 * VLAN Offload Settings
 	 */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
-			ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+			RTE_ETH_VLAN_EXTEND_MASK;
 	ret = eth_igb_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Unable to set vlan offload");
@@ -1296,7 +1296,7 @@ eth_igb_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
 		/* Enable VLAN filter since VMDq always use VLAN filter */
 		igb_vmdq_vlan_hw_filter_enable(dev);
 	}
@@ -1310,39 +1310,39 @@ eth_igb_start(struct rte_eth_dev *dev)
 
 	/* Setup link speed and duplex */
 	speeds = &dev->data->dev_conf.link_speeds;
-	if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
 		hw->mac.autoneg = 1;
 	} else {
 		num_speeds = 0;
-		autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+		autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 		/* Reset */
 		hw->phy.autoneg_advertised = 0;
 
-		if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+		if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
 			num_speeds = -1;
 			goto error_invalid_config;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_1G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_1G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
 			num_speeds++;
 		}
@@ -2185,21 +2185,21 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	case e1000_82576:
 		dev_info->max_rx_queues = 16;
 		dev_info->max_tx_queues = 16;
-		dev_info->max_vmdq_pools = ETH_8_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
 		dev_info->vmdq_queue_num = 16;
 		break;
 
 	case e1000_82580:
 		dev_info->max_rx_queues = 8;
 		dev_info->max_tx_queues = 8;
-		dev_info->max_vmdq_pools = ETH_8_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
 		dev_info->vmdq_queue_num = 8;
 		break;
 
 	case e1000_i350:
 		dev_info->max_rx_queues = 8;
 		dev_info->max_tx_queues = 8;
-		dev_info->max_vmdq_pools = ETH_8_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
 		dev_info->vmdq_queue_num = 8;
 		break;
 
@@ -2225,7 +2225,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		return -EINVAL;
 	}
 	dev_info->hash_key_size = IGB_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = IGB_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -2251,9 +2251,9 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->rx_desc_lim = rx_desc_lim;
 	dev_info->tx_desc_lim = tx_desc_lim;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G;
 
 	dev_info->max_mtu = dev_info->max_rx_pktlen - E1000_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -2296,12 +2296,12 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_rx_bufsize = 256; /* See BSIZE field of RCTL register. */
 	dev_info->max_rx_pktlen  = 0x3FFF; /* See RLPML register. */
 	dev_info->max_mac_addrs = hw->mac.rar_entry_count;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-				DEV_TX_OFFLOAD_IPV4_CKSUM  |
-				DEV_TX_OFFLOAD_UDP_CKSUM   |
-				DEV_TX_OFFLOAD_TCP_CKSUM   |
-				DEV_TX_OFFLOAD_SCTP_CKSUM  |
-				DEV_TX_OFFLOAD_TCP_TSO;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	switch (hw->mac.type) {
 	case e1000_vfadapt:
 		dev_info->max_rx_queues = 2;
@@ -2402,17 +2402,17 @@ eth_igb_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		uint16_t duplex, speed;
 		hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
 		link.link_duplex = (duplex == FULL_DUPLEX) ?
-				ETH_LINK_FULL_DUPLEX :
-				ETH_LINK_HALF_DUPLEX;
+				RTE_ETH_LINK_FULL_DUPLEX :
+				RTE_ETH_LINK_HALF_DUPLEX;
 		link.link_speed = speed;
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 		link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 	} else if (!link_check) {
 		link.link_speed = 0;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_status = ETH_LINK_DOWN;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -2588,7 +2588,7 @@ eth_igb_vlan_tpid_set(struct rte_eth_dev *dev,
 	qinq &= E1000_CTRL_EXT_EXT_VLAN;
 
 	/* only outer TPID of double VLAN can be configured*/
-	if (qinq && vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (qinq && vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		reg = E1000_READ_REG(hw, E1000_VET);
 		reg = (reg & (~E1000_VET_VET_EXT)) |
 			((uint32_t)tpid << E1000_VET_VET_EXT_SHIFT);
@@ -2703,22 +2703,22 @@ eth_igb_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if(mask & ETH_VLAN_STRIP_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			igb_vlan_hw_strip_enable(dev);
 		else
 			igb_vlan_hw_strip_disable(dev);
 	}
 
-	if(mask & ETH_VLAN_FILTER_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			igb_vlan_hw_filter_enable(dev);
 		else
 			igb_vlan_hw_filter_disable(dev);
 	}
 
-	if(mask & ETH_VLAN_EXTEND_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			igb_vlan_hw_extend_enable(dev);
 		else
 			igb_vlan_hw_extend_disable(dev);
@@ -2870,7 +2870,7 @@ eth_igb_interrupt_action(struct rte_eth_dev *dev,
 				     " Port %d: Link Up - speed %u Mbps - %s",
 				     dev->data->port_id,
 				     (unsigned)link.link_speed,
-				     link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+				     link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 				     "full-duplex" : "half-duplex");
 		} else {
 			PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3024,13 +3024,13 @@ eth_igb_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		rx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -3099,18 +3099,18 @@ eth_igb_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 * on configuration
 		 */
 		switch (fc_conf->mode) {
-		case RTE_FC_NONE:
+		case RTE_ETH_FC_NONE:
 			ctrl &= ~E1000_CTRL_RFCE & ~E1000_CTRL_TFCE;
 			break;
-		case RTE_FC_RX_PAUSE:
+		case RTE_ETH_FC_RX_PAUSE:
 			ctrl |= E1000_CTRL_RFCE;
 			ctrl &= ~E1000_CTRL_TFCE;
 			break;
-		case RTE_FC_TX_PAUSE:
+		case RTE_ETH_FC_TX_PAUSE:
 			ctrl |= E1000_CTRL_TFCE;
 			ctrl &= ~E1000_CTRL_RFCE;
 			break;
-		case RTE_FC_FULL:
+		case RTE_ETH_FC_FULL:
 			ctrl |= E1000_CTRL_RFCE | E1000_CTRL_TFCE;
 			break;
 		default:
@@ -3258,22 +3258,22 @@ igbvf_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
 		     dev->data->port_id);
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/*
 	 * VF has no ability to enable/disable HW CRC
 	 * Keep the persistent behavior the same as Host PF
 	 */
 #ifndef RTE_LIBRTE_E1000_PF_DISABLE_STRIP_CRC
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
-		conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #else
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
 		PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #endif
 
@@ -3571,16 +3571,16 @@ eth_igb_rss_reta_update(struct rte_eth_dev *dev,
 	uint16_t idx, shift;
 	struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += IGB_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IGB_4_BIT_MASK);
 		if (!mask)
@@ -3612,16 +3612,16 @@ eth_igb_rss_reta_query(struct rte_eth_dev *dev,
 	uint16_t idx, shift;
 	struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += IGB_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IGB_4_BIT_MASK);
 		if (!mask)
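
RTE_RETA_GROUP_SIZE becomes RTE_ETH_RETA_GROUP_SIZE with the same value (64
entries per rte_eth_rss_reta_entry64). A sketch of the caller-side indexing
that mirrors the idx/shift arithmetic in the hunks above (helper invented;
assumes a 128-entry table as igb uses):

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

/* Hypothetical: distribute a 128-entry RETA round-robin over nb_q queues. */
static int spread_reta(uint16_t port_id, uint16_t nb_q)
{
	struct rte_eth_rss_reta_entry64 reta[128 / RTE_ETH_RETA_GROUP_SIZE];
	unsigned int i, idx, shift;

	memset(reta, 0, sizeof(reta));
	for (i = 0; i < 128; i++) {
		idx = i / RTE_ETH_RETA_GROUP_SIZE;
		shift = i % RTE_ETH_RETA_GROUP_SIZE;
		reta[idx].mask |= UINT64_C(1) << shift;
		reta[idx].reta[shift] = i % nb_q;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta, 128);
}
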
diff --git a/drivers/net/e1000/igb_pf.c b/drivers/net/e1000/igb_pf.c
index 2ce74dd5a9a5..fe355ef6b3b5 100644
--- a/drivers/net/e1000/igb_pf.c
+++ b/drivers/net/e1000/igb_pf.c
@@ -88,7 +88,7 @@ void igb_pf_host_init(struct rte_eth_dev *eth_dev)
 	if (*vfinfo == NULL)
 		rte_panic("Cannot allocate memory for private VF data\n");
 
-	RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_8_POOLS;
+	RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_8_POOLS;
 	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
 	RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
 	RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index a1d5eecc14a1..bcce2fc726d8 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -111,7 +111,7 @@ struct igb_rx_queue {
 	uint8_t             crc_len;    /**< 0 if CRC stripped, 4 otherwise. */
 	uint8_t             drop_en;  /**< If not 0, set SRRCTL.Drop_En. */
 	uint32_t            flags;      /**< RX flags. */
-	uint64_t	    offloads;   /**< offloads of DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads;   /**< offloads of RTE_ETH_RX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
@@ -186,7 +186,7 @@ struct igb_tx_queue {
 	/**< Start context position for transmit queue. */
 	struct igb_advctx_info ctx_cache[IGB_CTX_NUM];
 	/**< Hardware context history.*/
-	uint64_t	       offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+	uint64_t	       offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
@@ -1459,13 +1459,13 @@ igb_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 	uint64_t tx_offload_capa;
 
 	RTE_SET_USED(dev);
-	tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-			  DEV_TX_OFFLOAD_IPV4_CKSUM  |
-			  DEV_TX_OFFLOAD_UDP_CKSUM   |
-			  DEV_TX_OFFLOAD_TCP_CKSUM   |
-			  DEV_TX_OFFLOAD_SCTP_CKSUM  |
-			  DEV_TX_OFFLOAD_TCP_TSO     |
-			  DEV_TX_OFFLOAD_MULTI_SEGS;
+	tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+			  RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+			  RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+			  RTE_ETH_TX_OFFLOAD_TCP_TSO     |
+			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return tx_offload_capa;
 }
@@ -1640,19 +1640,19 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
 
 	hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP  |
-			  DEV_RX_OFFLOAD_VLAN_FILTER |
-			  DEV_RX_OFFLOAD_IPV4_CKSUM  |
-			  DEV_RX_OFFLOAD_UDP_CKSUM   |
-			  DEV_RX_OFFLOAD_TCP_CKSUM   |
-			  DEV_RX_OFFLOAD_KEEP_CRC    |
-			  DEV_RX_OFFLOAD_SCATTER     |
-			  DEV_RX_OFFLOAD_RSS_HASH;
+	rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+			  RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+			  RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+			  RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+			  RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+			  RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+			  RTE_ETH_RX_OFFLOAD_SCATTER     |
+			  RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (hw->mac.type == e1000_i350 ||
 	    hw->mac.type == e1000_i210 ||
 	    hw->mac.type == e1000_i211)
-		rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+		rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
 	return rx_offload_capa;
 }
@@ -1733,7 +1733,7 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
 		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1950,23 +1950,23 @@ igb_hw_rss_hash_set(struct e1000_hw *hw, struct rte_eth_rss_conf *rss_conf)
 	/* Set configured hashing protocols in MRQC register */
 	rss_hf = rss_conf->rss_hf;
 	mrqc = E1000_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV4_TCP;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6;
-	if (rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_EX)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP;
-	if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV4_UDP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP;
-	if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP_EX;
 	E1000_WRITE_REG(hw, E1000_MRQC, mrqc);
 }
@@ -2032,23 +2032,23 @@ int eth_igb_rss_hash_conf_get(struct rte_eth_dev *dev,
 	}
 	rss_hf = 0;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_EX)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP_EX)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP_EX)
-		rss_hf |= ETH_RSS_IPV6_UDP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
 	rss_conf->rss_hf = rss_hf;
 	return 0;
 }
@@ -2170,15 +2170,15 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 			E1000_VMOLR_ROPE | E1000_VMOLR_BAM |
 			E1000_VMOLR_MPME);
 
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_UNTAG)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_UNTAG)
 			vmolr |= E1000_VMOLR_AUPE;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_MC)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
 			vmolr |= E1000_VMOLR_ROMPE;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_UC)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 			vmolr |= E1000_VMOLR_ROPE;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_BROADCAST)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 			vmolr |= E1000_VMOLR_BAM;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_MULTICAST)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 			vmolr |= E1000_VMOLR_MPME;
 
 		E1000_WRITE_REG(hw, E1000_VMOLR(i), vmolr);
@@ -2214,9 +2214,9 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 	/* VLVF: set up filters for vlan tags as configured */
 	for (i = 0; i < cfg->nb_pool_maps; i++) {
 		/* set vlan id in VF register and set the valid bit */
-		E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE | \
-                        (cfg->pool_map[i].vlan_id & ETH_VLAN_ID_MAX) | \
-			((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT ) & \
+		E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE |
+			(cfg->pool_map[i].vlan_id & RTE_ETH_VLAN_ID_MAX) |
+			((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT) &
 			E1000_VLVF_POOLSEL_MASK)));
 	}
 
@@ -2268,7 +2268,7 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	uint32_t mrqc;
 
-	if (RTE_ETH_DEV_SRIOV(dev).active == ETH_8_POOLS) {
+	if (RTE_ETH_DEV_SRIOV(dev).active == RTE_ETH_8_POOLS) {
 		/*
 		 * SRIOV active scheme
 		 * FIXME if support RSS together with VMDq & SRIOV
@@ -2282,14 +2282,14 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * SRIOV inactive scheme
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-			case ETH_MQ_RX_RSS:
+			case RTE_ETH_MQ_RX_RSS:
 				igb_rss_configure(dev);
 				break;
-			case ETH_MQ_RX_VMDQ_ONLY:
+			case RTE_ETH_MQ_RX_VMDQ_ONLY:
 				/*Configure general VMDQ only RX parameters*/
 				igb_vmdq_rx_hw_configure(dev);
 				break;
-			case ETH_MQ_RX_NONE:
+			case RTE_ETH_MQ_RX_NONE:
 				/* if mq_mode is none, disable rss mode.*/
 			default:
 				igb_rss_disable(dev);
@@ -2338,7 +2338,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		 * Set maximum packet length by default, and might be updated
 		 * together with enabling/disabling dual VLAN.
 		 */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			max_len += VLAN_TAG_SIZE;
 
 		E1000_WRITE_REG(hw, E1000_RLPML, max_len);
@@ -2374,7 +2374,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 *  call to configure
 		 */
-		if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -2444,7 +2444,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		E1000_WRITE_REG(hw, E1000_RXDCTL(rxq->reg_idx), rxdctl);
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (!dev->data->scattered_rx)
 			PMD_INIT_LOG(DEBUG, "forcing scatter mode");
 		dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
@@ -2488,16 +2488,16 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 	rxcsum |= E1000_RXCSUM_PCSD;
 
 	/* Enable both L3/L4 rx checksum offload */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		rxcsum |= E1000_RXCSUM_IPOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_IPOFL;
 	if (rxmode->offloads &
-		(DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+		(RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		rxcsum |= E1000_RXCSUM_TUOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_TUOFL;
-	if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= E1000_RXCSUM_CRCOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_CRCOFL;
@@ -2505,7 +2505,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 	E1000_WRITE_REG(hw, E1000_RXCSUM, rxcsum);
 
 	/* Setup the Receive Control Register. */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
 
 		/* clear STRCRC bit in all queues */
@@ -2545,7 +2545,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		(hw->mac.mc_filter_type << E1000_RCTL_MO_SHIFT);
 
 	/* Make sure VLAN Filters are off. */
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_VMDQ_ONLY)
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY)
 		rctl &= ~E1000_RCTL_VFE;
 	/* Don't store bad packets. */
 	rctl &= ~E1000_RCTL_SBP;
@@ -2743,7 +2743,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
 		E1000_WRITE_REG(hw, E1000_RXDCTL(i), rxdctl);
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (!dev->data->scattered_rx)
 			PMD_INIT_LOG(DEBUG, "forcing scatter mode");
 		dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
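
The RTE_ETH_VLAN_*_MASK bits above are what a PMD receives in its
vlan_offload_set callback; the matching application-side request would look
roughly like this (helper invented; the rte_eth_dev_*_vlan_offload() calls
and RTE_ETH_VLAN_STRIP_OFFLOAD are standard ethdev names from the same
rename):

#include <rte_ethdev.h>

/* Hypothetical: turn on VLAN stripping for a port at runtime. */
static int enable_vlan_strip(uint16_t port_id)
{
	int offloads = rte_eth_dev_get_vlan_offload(port_id);

	if (offloads < 0)
		return offloads;
	offloads |= RTE_ETH_VLAN_STRIP_OFFLOAD;
	return rte_eth_dev_set_vlan_offload(port_id, offloads);
}
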
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index f3b17d70c9a4..4d2601d15a57 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -117,10 +117,10 @@ static const struct ena_stats ena_stats_rx_strings[] = {
 #define ENA_STATS_ARRAY_TX	ARRAY_SIZE(ena_stats_tx_strings)
 #define ENA_STATS_ARRAY_RX	ARRAY_SIZE(ena_stats_rx_strings)
 
-#define QUEUE_OFFLOADS (DEV_TX_OFFLOAD_TCP_CKSUM |\
-			DEV_TX_OFFLOAD_UDP_CKSUM |\
-			DEV_TX_OFFLOAD_IPV4_CKSUM |\
-			DEV_TX_OFFLOAD_TCP_TSO)
+#define QUEUE_OFFLOADS (RTE_ETH_TX_OFFLOAD_TCP_CKSUM |\
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |\
+			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |\
+			RTE_ETH_TX_OFFLOAD_TCP_TSO)
 #define MBUF_OFFLOADS (PKT_TX_L4_MASK |\
 		       PKT_TX_IP_CKSUM |\
 		       PKT_TX_TCP_SEG)
@@ -332,7 +332,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 	    (queue_offloads & QUEUE_OFFLOADS)) {
 		/* check if TSO is required */
 		if ((mbuf->ol_flags & PKT_TX_TCP_SEG) &&
-		    (queue_offloads & DEV_TX_OFFLOAD_TCP_TSO)) {
+		    (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
 			ena_tx_ctx->tso_enable = true;
 
 			ena_meta->l4_hdr_len = GET_L4_HDR_LEN(mbuf);
@@ -340,7 +340,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 
 		/* check if L3 checksum is needed */
 		if ((mbuf->ol_flags & PKT_TX_IP_CKSUM) &&
-		    (queue_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM))
+		    (queue_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM))
 			ena_tx_ctx->l3_csum_enable = true;
 
 		if (mbuf->ol_flags & PKT_TX_IPV6) {
@@ -357,12 +357,12 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 
 		/* check if L4 checksum is needed */
 		if (((mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) &&
-		    (queue_offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) {
+		    (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_TCP;
 			ena_tx_ctx->l4_csum_enable = true;
 		} else if (((mbuf->ol_flags & PKT_TX_L4_MASK) ==
 				PKT_TX_UDP_CKSUM) &&
-				(queue_offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+				(queue_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UDP;
 			ena_tx_ctx->l4_csum_enable = true;
 		} else {
@@ -643,9 +643,9 @@ static int ena_link_update(struct rte_eth_dev *dev,
 	struct rte_eth_link *link = &dev->data->dev_link;
 	struct ena_adapter *adapter = dev->data->dev_private;
 
-	link->link_status = adapter->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
-	link->link_speed = ETH_SPEED_NUM_NONE;
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_status = adapter->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
+	link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	return 0;
 }
@@ -923,7 +923,7 @@ static int ena_start(struct rte_eth_dev *dev)
 	if (rc)
 		goto err_start_tx;
 
-	if (adapter->edev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (adapter->edev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		rc = ena_rss_configure(adapter);
 		if (rc)
 			goto err_rss_init;
@@ -2004,9 +2004,9 @@ static int ena_dev_configure(struct rte_eth_dev *dev)
 
 	adapter->state = ENA_ADAPTER_STATE_CONFIG;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
-	dev->data->dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+	dev->data->dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	/* Scattered Rx cannot be turned off in the HW, so this capability must
 	 * be forced.
@@ -2067,17 +2067,17 @@ static uint64_t ena_get_rx_port_offloads(struct ena_adapter *adapter)
 	uint64_t port_offloads = 0;
 
 	if (adapter->offloads.rx_offloads & ENA_L3_IPV4_CSUM)
-		port_offloads |= DEV_RX_OFFLOAD_IPV4_CKSUM;
+		port_offloads |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 
 	if (adapter->offloads.rx_offloads &
 	    (ENA_L4_IPV4_CSUM | ENA_L4_IPV6_CSUM))
 		port_offloads |=
-			DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM;
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 	if (adapter->offloads.rx_offloads & ENA_RX_RSS_HASH)
-		port_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+		port_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
-	port_offloads |= DEV_RX_OFFLOAD_SCATTER;
+	port_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	return port_offloads;
 }
@@ -2087,17 +2087,17 @@ static uint64_t ena_get_tx_port_offloads(struct ena_adapter *adapter)
 	uint64_t port_offloads = 0;
 
 	if (adapter->offloads.tx_offloads & ENA_IPV4_TSO)
-		port_offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+		port_offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (adapter->offloads.tx_offloads & ENA_L3_IPV4_CSUM)
-		port_offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+		port_offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 	if (adapter->offloads.tx_offloads &
 	    (ENA_L4_IPV4_CSUM_PARTIAL | ENA_L4_IPV4_CSUM |
 	     ENA_L4_IPV6_CSUM | ENA_L4_IPV6_CSUM_PARTIAL))
 		port_offloads |=
-			DEV_TX_OFFLOAD_UDP_CKSUM | DEV_TX_OFFLOAD_TCP_CKSUM;
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
-	port_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	port_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return port_offloads;
 }
@@ -2130,14 +2130,14 @@ static int ena_infos_get(struct rte_eth_dev *dev,
 	ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
 
 	dev_info->speed_capa =
-			ETH_LINK_SPEED_1G   |
-			ETH_LINK_SPEED_2_5G |
-			ETH_LINK_SPEED_5G   |
-			ETH_LINK_SPEED_10G  |
-			ETH_LINK_SPEED_25G  |
-			ETH_LINK_SPEED_40G  |
-			ETH_LINK_SPEED_50G  |
-			ETH_LINK_SPEED_100G;
+			RTE_ETH_LINK_SPEED_1G   |
+			RTE_ETH_LINK_SPEED_2_5G |
+			RTE_ETH_LINK_SPEED_5G   |
+			RTE_ETH_LINK_SPEED_10G  |
+			RTE_ETH_LINK_SPEED_25G  |
+			RTE_ETH_LINK_SPEED_40G  |
+			RTE_ETH_LINK_SPEED_50G  |
+			RTE_ETH_LINK_SPEED_100G;
 
 	/* Inform framework about available features */
 	dev_info->rx_offload_capa = ena_get_rx_port_offloads(adapter);
@@ -2303,7 +2303,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	}
 #endif
 
-	fill_hash = rx_ring->offloads & DEV_RX_OFFLOAD_RSS_HASH;
+	fill_hash = rx_ring->offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	descs_in_use = rx_ring->ring_size -
 		ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1;
@@ -2416,11 +2416,11 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 #ifdef RTE_LIBRTE_ETHDEV_DEBUG
 		/* Check if requested offload is also enabled for the queue */
 		if ((ol_flags & PKT_TX_IP_CKSUM &&
-		     !(tx_ring->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)) ||
+		     !(tx_ring->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)) ||
 		    (l4_csum_flag == PKT_TX_TCP_CKSUM &&
-		     !(tx_ring->offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) ||
+		     !(tx_ring->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) ||
 		    (l4_csum_flag == PKT_TX_UDP_CKSUM &&
-		     !(tx_ring->offloads & DEV_TX_OFFLOAD_UDP_CKSUM))) {
+		     !(tx_ring->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM))) {
 			PMD_TX_LOG(DEBUG,
 				"mbuf[%" PRIu32 "]: requested offloads: %" PRIu16 " are not enabled for the queue[%u]\n",
 				i, m->nb_segs, tx_ring->id);
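
The link fields ena_link_update() fills above are read back by applications
through the same renamed constants; an illustrative reader (function name
made up, rte_eth_link_get_nowait() is standard ethdev):

#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical: print the current link state of a port. */
static void log_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;
	printf("port %u: %s, %u Mb/s, %s\n", port_id,
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
	       link.link_speed,
	       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
			"full-duplex" : "half-duplex");
}
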
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 4f4142ed12d0..865e1241e0ce 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -58,8 +58,8 @@
 
 #define ENA_HASH_KEY_SIZE		40
 
-#define ENA_ALL_RSS_HF (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
-			ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_UDP)
+#define ENA_ALL_RSS_HF (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define ENA_IO_TXQ_IDX(q)		(2 * (q))
 #define ENA_IO_RXQ_IDX(q)		(2 * (q) + 1)
diff --git a/drivers/net/ena/ena_rss.c b/drivers/net/ena/ena_rss.c
index 152098410fa2..be4007e3f3fe 100644
--- a/drivers/net/ena/ena_rss.c
+++ b/drivers/net/ena/ena_rss.c
@@ -76,7 +76,7 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
 	if (reta_size == 0 || reta_conf == NULL)
 		return -EINVAL;
 
-	if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		PMD_DRV_LOG(ERR,
 			"RSS was not configured for the PMD\n");
 		return -ENOTSUP;
@@ -93,8 +93,8 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
 		/* Each reta_conf is for 64 entries.
 		 * To support 128 we use 2 conf of 64.
 		 */
-		conf_idx = i / RTE_RETA_GROUP_SIZE;
-		idx = i % RTE_RETA_GROUP_SIZE;
+		conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		idx = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (TEST_BIT(reta_conf[conf_idx].mask, idx)) {
 			entry_value =
 				ENA_IO_RXQ_IDX(reta_conf[conf_idx].reta[idx]);
@@ -139,7 +139,7 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
 	if (reta_size == 0 || reta_conf == NULL)
 		return -EINVAL;
 
-	if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		PMD_DRV_LOG(ERR,
 			"RSS was not configured for the PMD\n");
 		return -ENOTSUP;
@@ -154,8 +154,8 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0 ; i < reta_size ; i++) {
-		reta_conf_idx = i / RTE_RETA_GROUP_SIZE;
-		reta_idx = i % RTE_RETA_GROUP_SIZE;
+		reta_conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (TEST_BIT(reta_conf[reta_conf_idx].mask, reta_idx))
 			reta_conf[reta_conf_idx].reta[reta_idx] =
 				ENA_IO_RXQ_IDX_REV(indirect_table[i]);
@@ -199,34 +199,34 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
 	/* Convert proto to ETH flag */
 	switch (proto) {
 	case ENA_ADMIN_RSS_TCP4:
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		break;
 	case ENA_ADMIN_RSS_UDP4:
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 		break;
 	case ENA_ADMIN_RSS_TCP6:
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 		break;
 	case ENA_ADMIN_RSS_UDP6:
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 		break;
 	case ENA_ADMIN_RSS_IP4:
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 		break;
 	case ENA_ADMIN_RSS_IP6:
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 		break;
 	case ENA_ADMIN_RSS_IP4_FRAG:
-		rss_hf |= ETH_RSS_FRAG_IPV4;
+		rss_hf |= RTE_ETH_RSS_FRAG_IPV4;
 		break;
 	case ENA_ADMIN_RSS_NOT_IP:
-		rss_hf |= ETH_RSS_L2_PAYLOAD;
+		rss_hf |= RTE_ETH_RSS_L2_PAYLOAD;
 		break;
 	case ENA_ADMIN_RSS_TCP6_EX:
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 		break;
 	case ENA_ADMIN_RSS_IP6_EX:
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 		break;
 	default:
 		break;
@@ -235,10 +235,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
 	/* Check if only DA or SA is being used for L3. */
 	switch (fields & ENA_HF_RSS_ALL_L3) {
 	case ENA_ADMIN_RSS_L3_SA:
-		rss_hf |= ETH_RSS_L3_SRC_ONLY;
+		rss_hf |= RTE_ETH_RSS_L3_SRC_ONLY;
 		break;
 	case ENA_ADMIN_RSS_L3_DA:
-		rss_hf |= ETH_RSS_L3_DST_ONLY;
+		rss_hf |= RTE_ETH_RSS_L3_DST_ONLY;
 		break;
 	default:
 		break;
@@ -247,10 +247,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
 	/* Check if only DA or SA is being used for L4. */
 	switch (fields & ENA_HF_RSS_ALL_L4) {
 	case ENA_ADMIN_RSS_L4_SP:
-		rss_hf |= ETH_RSS_L4_SRC_ONLY;
+		rss_hf |= RTE_ETH_RSS_L4_SRC_ONLY;
 		break;
 	case ENA_ADMIN_RSS_L4_DP:
-		rss_hf |= ETH_RSS_L4_DST_ONLY;
+		rss_hf |= RTE_ETH_RSS_L4_DST_ONLY;
 		break;
 	default:
 		break;
@@ -268,11 +268,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
 	fields_mask = ENA_ADMIN_RSS_L2_DA | ENA_ADMIN_RSS_L2_SA;
 
 	/* Determine which fields of L3 should be used. */
-	switch (rss_hf & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) {
-	case ETH_RSS_L3_DST_ONLY:
+	switch (rss_hf & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) {
+	case RTE_ETH_RSS_L3_DST_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L3_DA;
 		break;
-	case ETH_RSS_L3_SRC_ONLY:
+	case RTE_ETH_RSS_L3_SRC_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L3_SA;
 		break;
 	default:
@@ -284,11 +284,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
 	}
 
 	/* Determine which fields of L4 should be used. */
-	switch (rss_hf & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) {
-	case ETH_RSS_L4_DST_ONLY:
+	switch (rss_hf & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) {
+	case RTE_ETH_RSS_L4_DST_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L4_DP;
 		break;
-	case ETH_RSS_L4_SRC_ONLY:
+	case RTE_ETH_RSS_L4_SRC_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L4_SP;
 		break;
 	default:
@@ -334,43 +334,43 @@ static int ena_set_hash_fields(struct ena_com_dev *ena_dev, uint64_t rss_hf)
 	int rc, i;
 
 	/* Turn on appropriate fields for each requested packet type */
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) != 0)
 		selected_fields[ENA_ADMIN_RSS_TCP4].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP4, rss_hf);
 
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) != 0)
 		selected_fields[ENA_ADMIN_RSS_UDP4].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP4, rss_hf);
 
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) != 0)
 		selected_fields[ENA_ADMIN_RSS_TCP6].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6, rss_hf);
 
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) != 0)
 		selected_fields[ENA_ADMIN_RSS_UDP6].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP6, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV4) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV4) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP4].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV6) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV6) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP6].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6, rss_hf);
 
-	if ((rss_hf & ETH_RSS_FRAG_IPV4) != 0)
+	if ((rss_hf & RTE_ETH_RSS_FRAG_IPV4) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP4_FRAG].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4_FRAG, rss_hf);
 
-	if ((rss_hf & ETH_RSS_L2_PAYLOAD) != 0)
+	if ((rss_hf & RTE_ETH_RSS_L2_PAYLOAD) != 0)
 		selected_fields[ENA_ADMIN_RSS_NOT_IP].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_NOT_IP, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV6_TCP_EX) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) != 0)
 		selected_fields[ENA_ADMIN_RSS_TCP6_EX].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6_EX, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV6_EX) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV6_EX) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP6_EX].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6_EX, rss_hf);
 
@@ -541,7 +541,7 @@ int ena_rss_hash_conf_get(struct rte_eth_dev *dev,
 	uint16_t admin_hf;
 	static bool warn_once;
 
-	if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		PMD_DRV_LOG(ERR, "RSS was not configured for the PMD\n");
 		return -ENOTSUP;
 	}
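
ena_rss_hash_conf_get() above reports active hash fields with the renamed
RTE_ETH_RSS_* flags; a hedged caller-side check (helper invented,
rte_eth_dev_rss_hash_conf_get() is standard ethdev):

#include <stdbool.h>
#include <rte_ethdev.h>

/* Hypothetical: is IPv4/TCP hashing currently active on this port? */
static bool port_hashes_ipv4_tcp(uint16_t port_id)
{
	/* NULL rss_key: fetch only the hash-field flags, not the key. */
	struct rte_eth_rss_conf rss_conf = { .rss_key = NULL };

	if (rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf) != 0)
		return false;
	return (rss_conf.rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) != 0;
}
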
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index 1b567f01eae0..7cdb8ce463ed 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -100,27 +100,27 @@ enetc_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 	status = enetc_port_rd(enetc_hw, ENETC_PM0_STATUS);
 
 	if (status & ENETC_LINK_MODE)
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	else
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 
 	if (status & ENETC_LINK_STATUS)
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 	else
-		link.link_status = ETH_LINK_DOWN;
+		link.link_status = RTE_ETH_LINK_DOWN;
 
 	switch (status & ENETC_LINK_SPEED_MASK) {
 	case ENETC_LINK_SPEED_1G:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case ENETC_LINK_SPEED_100M:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	default:
 	case ENETC_LINK_SPEED_10M:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -207,10 +207,10 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
 	dev_info->max_tx_queues = MAX_TX_RINGS;
 	dev_info->max_rx_pktlen = ENETC_MAC_MAXFRM_SIZE;
 	dev_info->rx_offload_capa =
-		(DEV_RX_OFFLOAD_IPV4_CKSUM |
-		 DEV_RX_OFFLOAD_UDP_CKSUM |
-		 DEV_RX_OFFLOAD_TCP_CKSUM |
-		 DEV_RX_OFFLOAD_KEEP_CRC);
+		(RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		 RTE_ETH_RX_OFFLOAD_KEEP_CRC);
 
 	return 0;
 }
@@ -463,7 +463,7 @@ enetc_rx_queue_setup(struct rte_eth_dev *dev,
 			       RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 
-	rx_ring->crc_len = (uint8_t)((rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+	rx_ring->crc_len = (uint8_t)((rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
 				     RTE_ETHER_CRC_LEN : 0);
 
 	return 0;
@@ -705,7 +705,7 @@ enetc_dev_configure(struct rte_eth_dev *dev)
 	enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
 	enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		int config;
 
 		config = enetc_port_rd(enetc_hw, ENETC_PM0_CMD_CFG);
@@ -713,10 +713,10 @@ enetc_dev_configure(struct rte_eth_dev *dev)
 		enetc_port_wr(enetc_hw, ENETC_PM0_CMD_CFG, config);
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		checksum &= ~L3_CKSUM;
 
-	if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM))
+	if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
 		checksum &= ~L4_CKSUM;
 
 	enetc_port_wr(enetc_hw, ENETC_PAR_PORT_CFG, checksum);
diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index 47bfdac2cfdd..d5493c98345d 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -178,7 +178,7 @@ struct enic {
 	 */
 	uint8_t rss_hash_type; /* NIC_CFG_RSS_HASH_TYPE flags */
 	uint8_t rss_enable;
-	uint64_t rss_hf; /* ETH_RSS flags */
+	uint64_t rss_hf; /* RTE_ETH_RSS flags */
 	union vnic_rss_key rss_key;
 	union vnic_rss_cpu rss_cpu;
 
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8df7332bc5e0..c8bdaf1a8e79 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -38,30 +38,30 @@ static const struct vic_speed_capa {
 	uint16_t sub_devid;
 	uint32_t capa;
 } vic_speed_capa_map[] = {
-	{ 0x0043, ETH_LINK_SPEED_10G }, /* VIC */
-	{ 0x0047, ETH_LINK_SPEED_10G }, /* P81E PCIe */
-	{ 0x0048, ETH_LINK_SPEED_10G }, /* M81KR Mezz */
-	{ 0x004f, ETH_LINK_SPEED_10G }, /* 1280 Mezz */
-	{ 0x0084, ETH_LINK_SPEED_10G }, /* 1240 MLOM */
-	{ 0x0085, ETH_LINK_SPEED_10G }, /* 1225 PCIe */
-	{ 0x00cd, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1285 PCIe */
-	{ 0x00ce, ETH_LINK_SPEED_10G }, /* 1225T PCIe */
-	{ 0x012a, ETH_LINK_SPEED_40G }, /* M4308 */
-	{ 0x012c, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1340 MLOM */
-	{ 0x012e, ETH_LINK_SPEED_10G }, /* 1227 PCIe */
-	{ 0x0137, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1380 Mezz */
-	{ 0x014d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1385 PCIe */
-	{ 0x015d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1387 MLOM */
-	{ 0x0215, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
-		  ETH_LINK_SPEED_40G }, /* 1440 Mezz */
-	{ 0x0216, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
-		  ETH_LINK_SPEED_40G }, /* 1480 MLOM */
-	{ 0x0217, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1455 PCIe */
-	{ 0x0218, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1457 MLOM */
-	{ 0x0219, ETH_LINK_SPEED_40G }, /* 1485 PCIe */
-	{ 0x021a, ETH_LINK_SPEED_40G }, /* 1487 MLOM */
-	{ 0x024a, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1495 PCIe */
-	{ 0x024b, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1497 MLOM */
+	{ 0x0043, RTE_ETH_LINK_SPEED_10G }, /* VIC */
+	{ 0x0047, RTE_ETH_LINK_SPEED_10G }, /* P81E PCIe */
+	{ 0x0048, RTE_ETH_LINK_SPEED_10G }, /* M81KR Mezz */
+	{ 0x004f, RTE_ETH_LINK_SPEED_10G }, /* 1280 Mezz */
+	{ 0x0084, RTE_ETH_LINK_SPEED_10G }, /* 1240 MLOM */
+	{ 0x0085, RTE_ETH_LINK_SPEED_10G }, /* 1225 PCIe */
+	{ 0x00cd, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1285 PCIe */
+	{ 0x00ce, RTE_ETH_LINK_SPEED_10G }, /* 1225T PCIe */
+	{ 0x012a, RTE_ETH_LINK_SPEED_40G }, /* M4308 */
+	{ 0x012c, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1340 MLOM */
+	{ 0x012e, RTE_ETH_LINK_SPEED_10G }, /* 1227 PCIe */
+	{ 0x0137, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1380 Mezz */
+	{ 0x014d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1385 PCIe */
+	{ 0x015d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1387 MLOM */
+	{ 0x0215, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+		  RTE_ETH_LINK_SPEED_40G }, /* 1440 Mezz */
+	{ 0x0216, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+		  RTE_ETH_LINK_SPEED_40G }, /* 1480 MLOM */
+	{ 0x0217, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1455 PCIe */
+	{ 0x0218, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1457 MLOM */
+	{ 0x0219, RTE_ETH_LINK_SPEED_40G }, /* 1485 PCIe */
+	{ 0x021a, RTE_ETH_LINK_SPEED_40G }, /* 1487 MLOM */
+	{ 0x024a, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1495 PCIe */
+	{ 0x024b, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1497 MLOM */
 	{ 0, 0 }, /* End marker */
 };
 
@@ -297,8 +297,8 @@ static int enicpmd_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 	ENICPMD_FUNC_TRACE();
 
 	offloads = eth_dev->data->dev_conf.rxmode.offloads;
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			enic->ig_vlan_strip_en = 1;
 		else
 			enic->ig_vlan_strip_en = 0;
@@ -323,17 +323,17 @@ static int enicpmd_dev_configure(struct rte_eth_dev *eth_dev)
 		return ret;
 	}
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	enic->mc_count = 0;
 	enic->hw_ip_checksum = !!(eth_dev->data->dev_conf.rxmode.offloads &
-				  DEV_RX_OFFLOAD_CHECKSUM);
+				  RTE_ETH_RX_OFFLOAD_CHECKSUM);
 	/* All vlan offload masks to apply the current settings */
-	mask = ETH_VLAN_STRIP_MASK |
-		ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK |
+		RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	ret = enicpmd_vlan_offload_set(eth_dev, mask);
 	if (ret) {
 		dev_err(enic, "Failed to configure VLAN offloads\n");
@@ -435,14 +435,14 @@ static uint32_t speed_capa_from_pci_id(struct rte_eth_dev *eth_dev)
 	}
 	/* 1300 and later models are at least 40G */
 	if (id >= 0x0100)
-		return ETH_LINK_SPEED_40G;
+		return RTE_ETH_LINK_SPEED_40G;
 	/* VFs have subsystem id 0, check device id */
 	if (id == 0) {
 		/* Newer VF implies at least 40G model */
 		if (pdev->id.device_id == PCI_DEVICE_ID_CISCO_VIC_ENET_SN)
-			return ETH_LINK_SPEED_40G;
+			return RTE_ETH_LINK_SPEED_40G;
 	}
-	return ETH_LINK_SPEED_10G;
+	return RTE_ETH_LINK_SPEED_10G;
 }
 
 static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
@@ -774,8 +774,8 @@ static int enicpmd_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = enic_sop_rq_idx_to_rte_idx(
 				enic->rss_cpu.cpu[i / 4].b[i % 4]);
@@ -806,8 +806,8 @@ static int enicpmd_dev_rss_reta_update(struct rte_eth_dev *dev,
 	 */
 	rss_cpu = enic->rss_cpu;
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			rss_cpu.cpu[i / 4].b[i % 4] =
 				enic_rte_rq_idx_to_sop_idx(
@@ -883,7 +883,7 @@ static void enicpmd_dev_rxq_info_get(struct rte_eth_dev *dev,
 	 */
 	conf->offloads = enic->rx_offload_capa;
 	if (!enic->ig_vlan_strip_en)
-		conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	/* rx_thresh and other fields are not applicable for enic */
 }
 
@@ -969,8 +969,8 @@ static int enicpmd_dev_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
 static int udp_tunnel_common_check(struct enic *enic,
 				   struct rte_eth_udp_tunnel *tnl)
 {
-	if (tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN &&
-	    tnl->prot_type != RTE_TUNNEL_TYPE_GENEVE)
+	if (tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN &&
+	    tnl->prot_type != RTE_ETH_TUNNEL_TYPE_GENEVE)
 		return -ENOTSUP;
 	if (!enic->overlay_offload) {
 		ENICPMD_LOG(DEBUG, " overlay offload is not supported\n");
@@ -1010,7 +1010,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
 	ret = udp_tunnel_common_check(enic, tnl);
 	if (ret)
 		return ret;
-	vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+	vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
 	if (vxlan)
 		port = enic->vxlan_port;
 	else
@@ -1039,7 +1039,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
 	ret = udp_tunnel_common_check(enic, tnl);
 	if (ret)
 		return ret;
-	vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+	vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
 	if (vxlan)
 		port = enic->vxlan_port;
 	else
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index dfc7f5d1f94f..21b1fffb14f0 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -430,7 +430,7 @@ int enic_link_update(struct rte_eth_dev *eth_dev)
 
 	memset(&link, 0, sizeof(link));
 	link.link_status = enic_get_link_status(enic);
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_speed = vnic_dev_port_speed(enic->vdev);
 
 	return rte_eth_linkstatus_set(eth_dev, &link);
@@ -597,7 +597,7 @@ int enic_enable(struct enic *enic)
 	}
 
 	eth_dev->data->dev_link.link_speed = vnic_dev_port_speed(enic->vdev);
-	eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	/* vnic notification of link status has already been turned on in
 	 * enic_dev_init() which is called during probe time.  Here we are
@@ -638,11 +638,11 @@ int enic_enable(struct enic *enic)
 	 * and vlan insertion are supported.
 	 */
 	simple_tx_offloads = enic->tx_offload_capa &
-		(DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		 DEV_TX_OFFLOAD_VLAN_INSERT |
-		 DEV_TX_OFFLOAD_IPV4_CKSUM |
-		 DEV_TX_OFFLOAD_UDP_CKSUM |
-		 DEV_TX_OFFLOAD_TCP_CKSUM);
+		(RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		 RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		 RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
 	if ((eth_dev->data->dev_conf.txmode.offloads &
 	     ~simple_tx_offloads) == 0) {
 		ENICPMD_LOG(DEBUG, " use the simple tx handler");
@@ -858,7 +858,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
 	max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
 
 	if (enic->rte_dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_SCATTER) {
+	    RTE_ETH_RX_OFFLOAD_SCATTER) {
 		dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
 		/* ceil((max pkt len)/mbuf_size) */
 		mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
@@ -1385,15 +1385,15 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
 	rss_hash_type = 0;
 	rss_hf = rss_conf->rss_hf & enic->flow_type_rss_offloads;
 	if (enic->rq_count > 1 &&
-	    (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) &&
+	    (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) &&
 	    rss_hf != 0) {
 		rss_enable = 1;
-		if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			      ETH_RSS_NONFRAG_IPV4_OTHER))
+		if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			      RTE_ETH_RSS_NONFRAG_IPV4_OTHER))
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV4;
 			if (enic->udp_rss_weak) {
 				/*
@@ -1404,12 +1404,12 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
 				rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
 			}
 		}
-		if (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_IPV6_EX |
-			      ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER))
+		if (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_IPV6_EX |
+			      RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER))
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV6;
-		if (rss_hf & (ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX))
+		if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX))
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
-		if (rss_hf & (ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX)) {
+		if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX)) {
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV6;
 			if (enic->udp_rss_weak)
 				rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
@@ -1745,9 +1745,9 @@ enic_enable_overlay_offload(struct enic *enic)
 		return -EINVAL;
 	}
 	enic->tx_offload_capa |=
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		(enic->geneve ? DEV_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
-		(enic->vxlan ? DEV_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		(enic->geneve ? RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
+		(enic->vxlan ? RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
 	enic->tx_offload_mask |=
 		PKT_TX_OUTER_IPV6 |
 		PKT_TX_OUTER_IPV4 |
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index c5777772a09e..918a9e170ff6 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -147,31 +147,31 @@ int enic_get_vnic_config(struct enic *enic)
 		 * IPV4 hash type handles both non-frag and frag packet types.
 		 * TCP/UDP is controlled via a separate flag below.
 		 */
-		enic->flow_type_rss_offloads |= ETH_RSS_IPV4 |
-			ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV4 |
+			RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER;
 	if (ENIC_SETTING(enic, RSSHASH_TCPIPV4))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_TCP;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (ENIC_SETTING(enic, RSSHASH_IPV6))
 		/*
 		 * The VIC adapter can perform RSS on IPv6 packets with and
 		 * without extension headers. An IPv6 "fragment" is an IPv6
 		 * packet with the fragment extension header.
 		 */
-		enic->flow_type_rss_offloads |= ETH_RSS_IPV6 |
-			ETH_RSS_IPV6_EX | ETH_RSS_FRAG_IPV6 |
-			ETH_RSS_NONFRAG_IPV6_OTHER;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV6 |
+			RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_FRAG_IPV6 |
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER;
 	if (ENIC_SETTING(enic, RSSHASH_TCPIPV6))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_TCP |
-			ETH_RSS_IPV6_TCP_EX;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			RTE_ETH_RSS_IPV6_TCP_EX;
 	if (enic->udp_rss_weak)
 		enic->flow_type_rss_offloads |=
-			ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
-			ETH_RSS_IPV6_UDP_EX;
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			RTE_ETH_RSS_IPV6_UDP_EX;
 	if (ENIC_SETTING(enic, RSSHASH_UDPIPV4))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_UDP;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (ENIC_SETTING(enic, RSSHASH_UDPIPV6))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_UDP |
-			ETH_RSS_IPV6_UDP_EX;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			RTE_ETH_RSS_IPV6_UDP_EX;
 
 	/* Zero offloads if RSS is not enabled */
 	if (!ENIC_SETTING(enic, RSS))
@@ -201,19 +201,19 @@ int enic_get_vnic_config(struct enic *enic)
 	enic->tx_queue_offload_capa = 0;
 	enic->tx_offload_capa =
 		enic->tx_queue_offload_capa |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	enic->rx_offload_capa =
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	enic->tx_offload_mask =
 		PKT_TX_IPV6 |
 		PKT_TX_IPV4 |
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index b87c036e6014..82d595b1d1a0 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -17,10 +17,10 @@
 
 const char pmd_failsafe_driver_name[] = FAILSAFE_DRIVER_NAME;
 static const struct rte_eth_link eth_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_UP,
-	.link_autoneg = ETH_LINK_AUTONEG,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_UP,
+	.link_autoneg = RTE_ETH_LINK_AUTONEG,
 };
 
 static int
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 602c04033c18..5f4810051dac 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -326,7 +326,7 @@ int failsafe_rx_intr_install_subdevice(struct sub_device *sdev)
 	int qid;
 	struct rte_eth_dev *fsdev;
 	struct rxq **rxq;
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 				&ETH(sdev)->data->dev_conf.intr_conf;
 
 	fsdev = fs_dev(sdev);
@@ -519,7 +519,7 @@ int
 failsafe_rx_intr_install(struct rte_eth_dev *dev)
 {
 	struct fs_priv *priv = PRIV(dev);
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 			&priv->data->dev_conf.intr_conf;
 
 	if (intr_conf->rxq == 0 || dev->intr_handle != NULL)
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 29de39910c6e..a3a8a1c82e3a 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1172,51 +1172,51 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
 	 * configuring a sub-device.
 	 */
 	infos->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_LRO |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_MACSEC_STRIP |
-		DEV_RX_OFFLOAD_HEADER_SPLIT |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_TIMESTAMP |
-		DEV_RX_OFFLOAD_SECURITY |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_LRO |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+		RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+		RTE_ETH_RX_OFFLOAD_SECURITY |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	infos->rx_queue_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_LRO |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_MACSEC_STRIP |
-		DEV_RX_OFFLOAD_HEADER_SPLIT |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_TIMESTAMP |
-		DEV_RX_OFFLOAD_SECURITY |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_LRO |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+		RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+		RTE_ETH_RX_OFFLOAD_SECURITY |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	infos->tx_offload_capa =
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	infos->flow_type_rss_offloads =
-		ETH_RSS_IP |
-		ETH_RSS_UDP |
-		ETH_RSS_TCP;
+		RTE_ETH_RSS_IP |
+		RTE_ETH_RSS_UDP |
+		RTE_ETH_RSS_TCP;
 	infos->dev_capa =
 		RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 		RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 17c73c4dc5ae..b7522a47a80b 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -177,7 +177,7 @@ struct fm10k_rx_queue {
 	uint8_t drop_en;
 	uint8_t rx_deferred_start; /* don't start this queue in dev start. */
 	uint16_t rx_ftag_en; /* indicates FTAG RX supported */
-	uint64_t offloads; /* offloads of DEV_RX_OFFLOAD_* */
+	uint64_t offloads; /* offloads of RTE_ETH_RX_OFFLOAD_* */
 };
 
 /*
@@ -209,7 +209,7 @@ struct fm10k_tx_queue {
 	uint16_t next_rs; /* Next pos to set RS flag */
 	uint16_t next_dd; /* Next pos to check DD flag */
 	volatile uint32_t *tail_ptr;
-	uint64_t offloads; /* Offloads of DEV_TX_OFFLOAD_* */
+	uint64_t offloads; /* Offloads of RTE_ETH_TX_OFFLOAD_* */
 	uint16_t nb_desc;
 	uint16_t port_id;
 	uint8_t tx_deferred_start; /** don't start this queue in dev start. */
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 66f4a5c6df2c..d256334bfde9 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -413,12 +413,12 @@ fm10k_check_mq_mode(struct rte_eth_dev *dev)
 
 	vmdq_conf = &dev->data->dev_conf.rx_adv_conf.vmdq_rx_conf;
 
-	if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		PMD_INIT_LOG(ERR, "DCB mode is not supported.");
 		return -EINVAL;
 	}
 
-	if (!(rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+	if (!(rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
 		return 0;
 
 	if (hw->mac.type == fm10k_mac_vf) {
@@ -449,8 +449,8 @@ fm10k_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multiple queue mode checking */
 	ret  = fm10k_check_mq_mode(dev);
@@ -510,7 +510,7 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
 		0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
 	};
 
-	if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_RSS ||
+	if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_RSS ||
 		dev_conf->rx_adv_conf.rss_conf.rss_hf == 0) {
 		FM10K_WRITE_REG(hw, FM10K_MRQC(0), 0);
 		return;
@@ -547,15 +547,15 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
 	 */
 	hf = dev_conf->rx_adv_conf.rss_conf.rss_hf;
 	mrqc = 0;
-	mrqc |= (hf & ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
 
 	if (mrqc == 0) {
 		PMD_INIT_LOG(ERR, "Specified RSS mode 0x%"PRIx64"is not"
@@ -602,7 +602,7 @@ fm10k_dev_mq_rx_configure(struct rte_eth_dev *dev)
 	if (hw->mac.type != fm10k_mac_pf)
 		return;
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
 		nb_queue_pools = vmdq_conf->nb_queue_pools;
 
 	/* no pool number change, no need to update logic port and VLAN/MAC */
@@ -759,7 +759,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
 		/* It adds dual VLAN length for supporting dual VLAN */
 		if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
 				2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
-			rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
+			rxq->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 			uint32_t reg;
 			dev->data->scattered_rx = 1;
 			reg = FM10K_READ_REG(hw, FM10K_SRRCTL(i));
@@ -1145,7 +1145,7 @@ fm10k_dev_start(struct rte_eth_dev *dev)
 	}
 
 	/* Update default vlan when not in VMDQ mode */
-	if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+	if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
 		fm10k_vlan_filter_set(dev, hw->mac.default_vid, true);
 
 	fm10k_link_update(dev, 0);
@@ -1222,11 +1222,11 @@ fm10k_link_update(struct rte_eth_dev *dev,
 		FM10K_DEV_PRIVATE_TO_INFO(dev->data->dev_private);
 	PMD_INIT_FUNC_TRACE();
 
-	dev->data->dev_link.link_speed  = ETH_SPEED_NUM_50G;
-	dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	dev->data->dev_link.link_speed  = RTE_ETH_SPEED_NUM_50G;
+	dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	dev->data->dev_link.link_status =
-		dev_info->sm_down ? ETH_LINK_DOWN : ETH_LINK_UP;
-	dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
+		dev_info->sm_down ? RTE_ETH_LINK_DOWN : RTE_ETH_LINK_UP;
+	dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
 
 	return 0;
 }
@@ -1378,7 +1378,7 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 	dev_info->max_vfs            = pdev->max_vfs;
 	dev_info->vmdq_pool_base     = 0;
 	dev_info->vmdq_queue_base    = 0;
-	dev_info->max_vmdq_pools     = ETH_32_POOLS;
+	dev_info->max_vmdq_pools     = RTE_ETH_32_POOLS;
 	dev_info->vmdq_queue_num     = FM10K_MAX_QUEUES_PF;
 	dev_info->rx_queue_offload_capa = fm10k_get_rx_queue_offloads_capa(dev);
 	dev_info->rx_offload_capa = fm10k_get_rx_port_offloads_capa(dev) |
@@ -1389,15 +1389,15 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 
 	dev_info->hash_key_size = FM10K_RSSRK_SIZE * sizeof(uint32_t);
 	dev_info->reta_size = FM10K_MAX_RSS_INDICES;
-	dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
-					ETH_RSS_IPV6 |
-					ETH_RSS_IPV6_EX |
-					ETH_RSS_NONFRAG_IPV4_TCP |
-					ETH_RSS_NONFRAG_IPV6_TCP |
-					ETH_RSS_IPV6_TCP_EX |
-					ETH_RSS_NONFRAG_IPV4_UDP |
-					ETH_RSS_NONFRAG_IPV6_UDP |
-					ETH_RSS_IPV6_UDP_EX;
+	dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+					RTE_ETH_RSS_IPV6 |
+					RTE_ETH_RSS_IPV6_EX |
+					RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+					RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+					RTE_ETH_RSS_IPV6_TCP_EX |
+					RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+					RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+					RTE_ETH_RSS_IPV6_UDP_EX;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -1435,9 +1435,9 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 		.nb_mtu_seg_max = FM10K_TX_MAX_MTU_SEG,
 	};
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G |
-			ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
-			ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G |
+			RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+			RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -1509,7 +1509,7 @@ fm10k_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 		return -EINVAL;
 	}
 
-	if (vlan_id > ETH_VLAN_ID_MAX) {
+	if (vlan_id > RTE_ETH_VLAN_ID_MAX) {
 		PMD_INIT_LOG(ERR, "Invalid vlan_id: must be < 4096");
 		return -EINVAL;
 	}
@@ -1767,20 +1767,20 @@ static uint64_t fm10k_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
 
-	return (uint64_t)(DEV_RX_OFFLOAD_SCATTER);
+	return (uint64_t)(RTE_ETH_RX_OFFLOAD_SCATTER);
 }
 
 static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
 
-	return  (uint64_t)(DEV_RX_OFFLOAD_VLAN_STRIP  |
-			   DEV_RX_OFFLOAD_VLAN_FILTER |
-			   DEV_RX_OFFLOAD_IPV4_CKSUM  |
-			   DEV_RX_OFFLOAD_UDP_CKSUM   |
-			   DEV_RX_OFFLOAD_TCP_CKSUM   |
-			   DEV_RX_OFFLOAD_HEADER_SPLIT |
-			   DEV_RX_OFFLOAD_RSS_HASH);
+	return  (uint64_t)(RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+			   RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+			   RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+			   RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+			   RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+			   RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+			   RTE_ETH_RX_OFFLOAD_RSS_HASH);
 }
 
 static int
@@ -1965,12 +1965,12 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
 
-	return (uint64_t)(DEV_TX_OFFLOAD_VLAN_INSERT |
-			  DEV_TX_OFFLOAD_MULTI_SEGS  |
-			  DEV_TX_OFFLOAD_IPV4_CKSUM  |
-			  DEV_TX_OFFLOAD_UDP_CKSUM   |
-			  DEV_TX_OFFLOAD_TCP_CKSUM   |
-			  DEV_TX_OFFLOAD_TCP_TSO);
+	return (uint64_t)(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+			  RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+			  RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_TCP_TSO);
 }
 
 static int
@@ -2111,8 +2111,8 @@ fm10k_reta_update(struct rte_eth_dev *dev,
 	 * 128-entries in 32 registers
 	 */
 	for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				BIT_MASK_PER_UINT32);
 		if (mask == 0)
@@ -2160,8 +2160,8 @@ fm10k_reta_query(struct rte_eth_dev *dev,
 	 * 128-entries in 32 registers
 	 */
 	for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				BIT_MASK_PER_UINT32);
 		if (mask == 0)
@@ -2198,15 +2198,15 @@ fm10k_rss_hash_update(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	mrqc = 0;
-	mrqc |= (hf & ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
 
 	/* If the mapping doesn't fit any supported, return */
 	if (mrqc == 0)
@@ -2243,15 +2243,15 @@ fm10k_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	mrqc = FM10K_READ_REG(hw, FM10K_MRQC(0));
 	hf = 0;
-	hf |= (mrqc & FM10K_MRQC_IPV4)     ? ETH_RSS_IPV4              : 0;
-	hf |= (mrqc & FM10K_MRQC_IPV6)     ? ETH_RSS_IPV6              : 0;
-	hf |= (mrqc & FM10K_MRQC_IPV6)     ? ETH_RSS_IPV6_EX           : 0;
-	hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? ETH_RSS_NONFRAG_IPV4_TCP  : 0;
-	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_NONFRAG_IPV6_TCP  : 0;
-	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP_EX       : 0;
-	hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? ETH_RSS_NONFRAG_IPV4_UDP  : 0;
-	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_NONFRAG_IPV6_UDP  : 0;
-	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP_EX       : 0;
+	hf |= (mrqc & FM10K_MRQC_IPV4)     ? RTE_ETH_RSS_IPV4              : 0;
+	hf |= (mrqc & FM10K_MRQC_IPV6)     ? RTE_ETH_RSS_IPV6              : 0;
+	hf |= (mrqc & FM10K_MRQC_IPV6)     ? RTE_ETH_RSS_IPV6_EX           : 0;
+	hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_TCP  : 0;
+	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_TCP  : 0;
+	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_IPV6_TCP_EX       : 0;
+	hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_UDP  : 0;
+	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_UDP  : 0;
+	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_IPV6_UDP_EX       : 0;
 
 	rss_conf->rss_hf = hf;
 
@@ -2606,7 +2606,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
 
 			/* first clear the internal SW recording structure */
 			if (!(dev->data->dev_conf.rxmode.mq_mode &
-						ETH_MQ_RX_VMDQ_FLAG))
+						RTE_ETH_MQ_RX_VMDQ_FLAG))
 				fm10k_vlan_filter_set(dev, hw->mac.default_vid,
 					false);
 
@@ -2622,7 +2622,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
 					MAIN_VSI_POOL_NUMBER);
 
 			if (!(dev->data->dev_conf.rxmode.mq_mode &
-						ETH_MQ_RX_VMDQ_FLAG))
+						RTE_ETH_MQ_RX_VMDQ_FLAG))
 				fm10k_vlan_filter_set(dev, hw->mac.default_vid,
 					true);
 
diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c
index 83af01dc2da6..50973a662c67 100644
--- a/drivers/net/fm10k/fm10k_rxtx_vec.c
+++ b/drivers/net/fm10k/fm10k_rxtx_vec.c
@@ -208,11 +208,11 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
 {
 #ifndef RTE_LIBRTE_IEEE1588
 	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 
 #ifndef RTE_FM10K_RX_OLFLAGS_ENABLE
 	/* without rx ol_flags, no VP flag report */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 		return -1;
 #endif
 
@@ -221,7 +221,7 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
 		return -1;
 
 	/* no header split support */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
 		return -1;
 
 	return 0;
diff --git a/drivers/net/hinic/base/hinic_pmd_hwdev.c b/drivers/net/hinic/base/hinic_pmd_hwdev.c
index cb9cf6efa287..80f9eb5c3031 100644
--- a/drivers/net/hinic/base/hinic_pmd_hwdev.c
+++ b/drivers/net/hinic/base/hinic_pmd_hwdev.c
@@ -1320,28 +1320,28 @@ hinic_cable_status_event(u8 cmd, void *buf_in, __rte_unused u16 in_size,
 static int hinic_link_event_process(struct hinic_hwdev *hwdev,
 				    struct rte_eth_dev *eth_dev, u8 status)
 {
-	uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
-					ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
-					ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
-					ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+	uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+					RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+					RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+					RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
 	struct nic_port_info port_info;
 	struct rte_eth_link link;
 	int rc = HINIC_OK;
 
 	if (!status) {
-		link.link_status = ETH_LINK_DOWN;
+		link.link_status = RTE_ETH_LINK_DOWN;
 		link.link_speed = 0;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	} else {
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 
 		memset(&port_info, 0, sizeof(port_info));
 		rc = hinic_get_port_info(hwdev, &port_info);
 		if (rc) {
-			link.link_speed = ETH_SPEED_NUM_NONE;
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
-			link.link_autoneg = ETH_LINK_FIXED;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+			link.link_autoneg = RTE_ETH_LINK_FIXED;
 		} else {
 			link.link_speed = port_speed[port_info.speed %
 						LINK_SPEED_MAX];
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c2374ebb6759..4cd5a85d5f8d 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -311,8 +311,8 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* mtu size is 256~9600 */
 	if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
@@ -338,7 +338,7 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
 
 	/* init vlan offload */
 	err = hinic_vlan_offload_set(dev,
-				ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+				RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Initialize vlan filter and strip failed");
 		(void)hinic_config_mq_mode(dev, FALSE);
@@ -696,15 +696,15 @@ static void hinic_get_speed_capa(struct rte_eth_dev *dev, uint32_t *speed_capa)
 	} else {
 		*speed_capa = 0;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_1G))
-			*speed_capa |= ETH_LINK_SPEED_1G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_1G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_10G))
-			*speed_capa |= ETH_LINK_SPEED_10G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_10G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_25G))
-			*speed_capa |= ETH_LINK_SPEED_25G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_25G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_40G))
-			*speed_capa |= ETH_LINK_SPEED_40G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_40G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_100G))
-			*speed_capa |= ETH_LINK_SPEED_100G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	}
 }
 
@@ -732,24 +732,24 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 
 	hinic_get_speed_capa(dev, &info->speed_capa);
 	info->rx_queue_offload_capa = 0;
-	info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
-				DEV_RX_OFFLOAD_IPV4_CKSUM |
-				DEV_RX_OFFLOAD_UDP_CKSUM |
-				DEV_RX_OFFLOAD_TCP_CKSUM |
-				DEV_RX_OFFLOAD_VLAN_FILTER |
-				DEV_RX_OFFLOAD_SCATTER |
-				DEV_RX_OFFLOAD_TCP_LRO |
-				DEV_RX_OFFLOAD_RSS_HASH;
+	info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+				RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				RTE_ETH_RX_OFFLOAD_SCATTER |
+				RTE_ETH_RX_OFFLOAD_TCP_LRO |
+				RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	info->tx_queue_offload_capa = 0;
-	info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-				DEV_TX_OFFLOAD_IPV4_CKSUM |
-				DEV_TX_OFFLOAD_UDP_CKSUM |
-				DEV_TX_OFFLOAD_TCP_CKSUM |
-				DEV_TX_OFFLOAD_SCTP_CKSUM |
-				DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				DEV_TX_OFFLOAD_TCP_TSO |
-				DEV_TX_OFFLOAD_MULTI_SEGS;
+	info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	info->hash_key_size = HINIC_RSS_KEY_SIZE;
 	info->reta_size = HINIC_RSS_INDIR_SIZE;
@@ -846,20 +846,20 @@ static int hinic_priv_get_dev_link_status(struct hinic_nic_dev *nic_dev,
 	u8 port_link_status = 0;
 	struct nic_port_info port_link_info;
 	struct hinic_hwdev *nic_hwdev = nic_dev->hwdev;
-	uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
-					ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
-					ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
-					ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+	uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+					RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+					RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+					RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
 
 	rc = hinic_get_link_status(nic_hwdev, &port_link_status);
 	if (rc)
 		return rc;
 
 	if (!port_link_status) {
-		link->link_status = ETH_LINK_DOWN;
+		link->link_status = RTE_ETH_LINK_DOWN;
 		link->link_speed = 0;
-		link->link_duplex = ETH_LINK_HALF_DUPLEX;
-		link->link_autoneg = ETH_LINK_FIXED;
+		link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link->link_autoneg = RTE_ETH_LINK_FIXED;
 		return HINIC_OK;
 	}
 
@@ -901,8 +901,8 @@ static int hinic_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		/* Get link status information from hardware */
 		rc = hinic_priv_get_dev_link_status(nic_dev, &link);
 		if (rc != HINIC_OK) {
-			link.link_speed = ETH_SPEED_NUM_NONE;
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR, "Get link status failed");
 			goto out;
 		}
@@ -1650,8 +1650,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	int err;
 
 	/* Enable or disable VLAN filter */
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) ?
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) ?
 			TRUE : FALSE;
 		err = hinic_config_vlan_filter(nic_dev->hwdev, on);
 		if (err == HINIC_MGMT_CMD_UNSUPPORTED) {
@@ -1672,8 +1672,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	}
 
 	/* Enable or disable VLAN stripping */
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) ?
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) ?
 			TRUE : FALSE;
 		err = hinic_set_rx_vlan_offload(nic_dev->hwdev, on);
 		if (err) {
@@ -1859,13 +1859,13 @@ static int hinic_flow_ctrl_get(struct rte_eth_dev *dev,
 	fc_conf->autoneg = nic_pause.auto_neg;
 
 	if (nic_pause.tx_pause && nic_pause.rx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (nic_pause.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else if (nic_pause.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -1879,14 +1879,14 @@ static int hinic_flow_ctrl_set(struct rte_eth_dev *dev,
 
 	nic_pause.auto_neg = fc_conf->autoneg;
 
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-		(fc_conf->mode & RTE_FC_TX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+		(fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
 		nic_pause.tx_pause = true;
 	else
 		nic_pause.tx_pause = false;
 
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-		(fc_conf->mode & RTE_FC_RX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+		(fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
 		nic_pause.rx_pause = true;
 	else
 		nic_pause.rx_pause = false;
@@ -1930,7 +1930,7 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
 	struct nic_rss_type rss_type = {0};
 	int err = 0;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		PMD_DRV_LOG(WARNING, "RSS is not enabled");
 		return HINIC_OK;
 	}
@@ -1951,14 +1951,14 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
 		}
 	}
 
-	rss_type.ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
-	rss_type.tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
-	rss_type.ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
-	rss_type.ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
-	rss_type.tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
-	rss_type.tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
-	rss_type.udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
-	rss_type.udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+	rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+	rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+	rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+	rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+	rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+	rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+	rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+	rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
 
 	err = hinic_set_rss_type(nic_dev->hwdev, tmpl_idx, rss_type);
 	if (err) {
@@ -1994,7 +1994,7 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
 	struct nic_rss_type rss_type = {0};
 	int err;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		PMD_DRV_LOG(WARNING, "RSS is not enabled");
 		return HINIC_ERROR;
 	}
@@ -2015,15 +2015,15 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
 
 	rss_conf->rss_hf = 0;
 	rss_conf->rss_hf |=  rss_type.ipv4 ?
-		(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4) : 0;
-	rss_conf->rss_hf |=  rss_type.tcp_ipv4 ? ETH_RSS_NONFRAG_IPV4_TCP : 0;
+		(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4) : 0;
+	rss_conf->rss_hf |=  rss_type.tcp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_TCP : 0;
 	rss_conf->rss_hf |=  rss_type.ipv6 ?
-		(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6) : 0;
-	rss_conf->rss_hf |=  rss_type.ipv6_ext ? ETH_RSS_IPV6_EX : 0;
-	rss_conf->rss_hf |=  rss_type.tcp_ipv6 ? ETH_RSS_NONFRAG_IPV6_TCP : 0;
-	rss_conf->rss_hf |=  rss_type.tcp_ipv6_ext ? ETH_RSS_IPV6_TCP_EX : 0;
-	rss_conf->rss_hf |=  rss_type.udp_ipv4 ? ETH_RSS_NONFRAG_IPV4_UDP : 0;
-	rss_conf->rss_hf |=  rss_type.udp_ipv6 ? ETH_RSS_NONFRAG_IPV6_UDP : 0;
+		(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6) : 0;
+	rss_conf->rss_hf |=  rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0;
+	rss_conf->rss_hf |=  rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0;
+	rss_conf->rss_hf |=  rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0;
+	rss_conf->rss_hf |=  rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0;
+	rss_conf->rss_hf |=  rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0;
 
 	return HINIC_OK;
 }
@@ -2053,7 +2053,7 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
 	u16 i = 0;
 	u16 idx, shift;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG))
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG))
 		return HINIC_OK;
 
 	if (reta_size != NIC_RSS_INDIR_SIZE) {
@@ -2067,8 +2067,8 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
 
 	/* update rss indir_tbl */
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (reta_conf[idx].reta[shift] >= nic_dev->num_rq) {
 			PMD_DRV_LOG(ERR, "Invalid reta entry, indirtbl[%d]: %d "
@@ -2133,8 +2133,8 @@ static int hinic_rss_indirtbl_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = (uint16_t)indirtbl[i];
 	}
diff --git a/drivers/net/hinic/hinic_pmd_rx.c b/drivers/net/hinic/hinic_pmd_rx.c
index 842399cc4cd8..d347afe9a6a9 100644
--- a/drivers/net/hinic/hinic_pmd_rx.c
+++ b/drivers/net/hinic/hinic_pmd_rx.c
@@ -504,14 +504,14 @@ static void hinic_fill_rss_type(struct nic_rss_type *rss_type,
 {
 	u64 rss_hf = rss_conf->rss_hf;
 
-	rss_type->ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
-	rss_type->tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
-	rss_type->ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
-	rss_type->ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
-	rss_type->tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
-	rss_type->tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
-	rss_type->udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
-	rss_type->udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+	rss_type->ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+	rss_type->tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+	rss_type->ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+	rss_type->ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+	rss_type->tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+	rss_type->tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+	rss_type->udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+	rss_type->udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
 }
 
 static void hinic_fillout_indir_tbl(struct hinic_nic_dev *nic_dev, u32 *indir)
@@ -588,8 +588,8 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
 {
 	int err, i;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
-		nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
+		nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
 		nic_dev->num_rss = 0;
 		if (nic_dev->num_rq > 1) {
 			/* get rss template id */
@@ -599,7 +599,7 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
 				PMD_DRV_LOG(WARNING, "Alloc rss template failed");
 				return err;
 			}
-			nic_dev->flags |= ETH_MQ_RX_RSS_FLAG;
+			nic_dev->flags |= RTE_ETH_MQ_RX_RSS_FLAG;
 			for (i = 0; i < nic_dev->num_rq; i++)
 				hinic_add_rq_to_rx_queue_list(nic_dev, i);
 		}
@@ -610,12 +610,12 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
 
 static void hinic_destroy_num_qps(struct hinic_nic_dev *nic_dev)
 {
-	if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+	if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (hinic_rss_template_free(nic_dev->hwdev,
 					    nic_dev->rss_tmpl_idx))
 			PMD_DRV_LOG(WARNING, "Free rss template failed");
 
-		nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+		nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
 	}
 }
 
@@ -641,7 +641,7 @@ int hinic_config_mq_mode(struct rte_eth_dev *dev, bool on)
 	int ret = 0;
 
 	switch (dev_conf->rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		ret = hinic_config_mq_rx_rss(nic_dev, on);
 		break;
 	default:
@@ -662,7 +662,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
 	int lro_wqe_num;
 	int buf_size;
 
-	if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+	if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (rss_conf.rss_hf == 0) {
 			rss_conf.rss_hf = HINIC_RSS_OFFLOAD_ALL;
 		} else if ((rss_conf.rss_hf & HINIC_RSS_OFFLOAD_ALL) == 0) {
@@ -678,7 +678,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
 	}
 
 	/* Enable both L3/L4 rx checksum offload */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		nic_dev->rx_csum_en = HINIC_RX_CSUM_OFFLOAD_EN;
 
 	err = hinic_set_rx_csum_offload(nic_dev->hwdev,
@@ -687,7 +687,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
 		goto rx_csum_ofl_err;
 
 	/* config lro */
-	lro_en = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ?
+	lro_en = dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ?
 			true : false;
 	max_lro_size = dev->data->dev_conf.rxmode.max_lro_pkt_size;
 	buf_size = nic_dev->hwdev->nic_io->rq_buf_size;
@@ -726,7 +726,7 @@ void hinic_rx_remove_configure(struct rte_eth_dev *dev)
 {
 	struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
 
-	if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+	if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
 		hinic_rss_deinit(nic_dev);
 		hinic_destroy_num_qps(nic_dev);
 	}
diff --git a/drivers/net/hinic/hinic_pmd_rx.h b/drivers/net/hinic/hinic_pmd_rx.h
index 8a45f2d9fc50..5c303398b635 100644
--- a/drivers/net/hinic/hinic_pmd_rx.h
+++ b/drivers/net/hinic/hinic_pmd_rx.h
@@ -8,17 +8,17 @@
 #define HINIC_DEFAULT_RX_FREE_THRESH	32
 
 #define HINIC_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 |\
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 |\
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 enum rq_completion_fmt {
 	RQ_COMPLETE_SGE = 1
diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
index 8753c340e790..3d0159d78778 100644
--- a/drivers/net/hns3/hns3_dcb.c
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -1536,7 +1536,7 @@ hns3_dcb_hw_configure(struct hns3_adapter *hns)
 		return ret;
 	}
 
-	if (hw->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (hw->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		dcb_rx_conf = &hw->data->dev_conf.rx_adv_conf.dcb_rx_conf;
 		if (dcb_rx_conf->nb_tcs == 0)
 			hw->dcb_info.pfc_en = 1; /* tc0 only */
@@ -1693,7 +1693,7 @@ hns3_update_queue_map_configure(struct hns3_adapter *hns)
 	uint16_t nb_tx_q = hw->data->nb_tx_queues;
 	int ret;
 
-	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		return 0;
 
 	ret = hns3_dcb_update_tc_queue_mapping(hw, nb_rx_q, nb_tx_q);
@@ -1713,22 +1713,22 @@ static void
 hns3_get_fc_mode(struct hns3_hw *hw, enum rte_eth_fc_mode mode)
 {
 	switch (mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		hw->requested_fc_mode = HNS3_FC_NONE;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		hw->requested_fc_mode = HNS3_FC_RX_PAUSE;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		hw->requested_fc_mode = HNS3_FC_TX_PAUSE;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		hw->requested_fc_mode = HNS3_FC_FULL;
 		break;
 	default:
 		hw->requested_fc_mode = HNS3_FC_NONE;
 		hns3_warn(hw, "fc_mode(%u) exceeds member scope and is "
-			  "configured to RTE_FC_NONE", mode);
+			  "configured to RTE_ETH_FC_NONE", mode);
 		break;
 	}
 }
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 693048f58704..8e0ccecb57a6 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -60,29 +60,29 @@ enum hns3_evt_cause {
 };
 
 static const struct rte_eth_fec_capa speed_fec_capa_tbl[] = {
-	{ ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
 
-	{ ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
 
-	{ ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
 
-	{ ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
 
-	{ ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
 
-	{ ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(RS) }
 };
@@ -500,8 +500,8 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
 	struct hns3_cmd_desc desc;
 	int ret;
 
-	if ((vlan_type != ETH_VLAN_TYPE_INNER &&
-	     vlan_type != ETH_VLAN_TYPE_OUTER)) {
+	if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+	     vlan_type != RTE_ETH_VLAN_TYPE_OUTER)) {
 		hns3_err(hw, "Unsupported vlan type, vlan_type =%d", vlan_type);
 		return -EINVAL;
 	}
@@ -514,10 +514,10 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
 	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MAC_VLAN_TYPE_ID, false);
 	rx_req = (struct hns3_rx_vlan_type_cfg_cmd *)desc.data;
 
-	if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
 		rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
-	} else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+	} else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
 		rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
 		rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
 		rx_req->in_fst_vlan_type = rte_cpu_to_le_16(tpid);
@@ -725,11 +725,11 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	rte_spinlock_lock(&hw->lock);
 	rxmode = &dev->data->dev_conf.rxmode;
 	tmp_mask = (unsigned int)mask;
-	if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* ignore vlan filter configuration during promiscuous mode */
 		if (!dev->data->promiscuous) {
 			/* Enable or disable VLAN filter */
-			enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER ?
+			enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ?
 				 true : false;
 
 			ret = hns3_enable_vlan_filter(hns, enable);
@@ -742,9 +742,9 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		}
 	}
 
-	if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP ?
+		enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ?
 		    true : false;
 
 		ret = hns3_en_hw_strip_rxvtag(hns, enable);
@@ -1118,7 +1118,7 @@ hns3_init_vlan_config(struct hns3_adapter *hns)
 		return ret;
 	}
 
-	ret = hns3_vlan_tpid_configure(hns, ETH_VLAN_TYPE_INNER,
+	ret = hns3_vlan_tpid_configure(hns, RTE_ETH_VLAN_TYPE_INNER,
 				       RTE_ETHER_TYPE_VLAN);
 	if (ret) {
 		hns3_err(hw, "tpid set fail in pf, ret =%d", ret);
@@ -1161,7 +1161,7 @@ hns3_restore_vlan_conf(struct hns3_adapter *hns)
 	if (!hw->data->promiscuous) {
 		/* restore vlan filter states */
 		offloads = hw->data->dev_conf.rxmode.offloads;
-		enable = offloads & DEV_RX_OFFLOAD_VLAN_FILTER ? true : false;
+		enable = offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ? true : false;
 		ret = hns3_enable_vlan_filter(hns, enable);
 		if (ret) {
 			hns3_err(hw, "failed to restore vlan rx filter conf, "
@@ -1204,7 +1204,7 @@ hns3_dev_configure_vlan(struct rte_eth_dev *dev)
 			  txmode->hw_vlan_reject_untagged);
 
 	/* Apply vlan offload setting */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
 	ret = hns3_vlan_offload_set(dev, mask);
 	if (ret) {
 		hns3_err(hw, "dev config rx vlan offload failed, ret = %d",
@@ -2213,9 +2213,9 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
 	int max_tc = 0;
 	int i;
 
-	if ((rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG) ||
-	    (tx_mq_mode == ETH_MQ_TX_VMDQ_DCB ||
-	     tx_mq_mode == ETH_MQ_TX_VMDQ_ONLY)) {
+	if ((rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) ||
+	    (tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB ||
+	     tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)) {
 		hns3_err(hw, "VMDQ is not supported, rx_mq_mode = %d, tx_mq_mode = %d.",
 			 rx_mq_mode, tx_mq_mode);
 		return -EOPNOTSUPP;
@@ -2223,7 +2223,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
 
 	dcb_rx_conf = &dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
 	dcb_tx_conf = &dev->data->dev_conf.tx_adv_conf.dcb_tx_conf;
-	if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		if (dcb_rx_conf->nb_tcs > pf->tc_max) {
 			hns3_err(hw, "nb_tcs(%u) > max_tc(%u) driver supported.",
 				 dcb_rx_conf->nb_tcs, pf->tc_max);
@@ -2232,7 +2232,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
 
 		if (!(dcb_rx_conf->nb_tcs == HNS3_4_TCS ||
 		      dcb_rx_conf->nb_tcs == HNS3_8_TCS)) {
-			hns3_err(hw, "on ETH_MQ_RX_DCB_RSS mode, "
+			hns3_err(hw, "on RTE_ETH_MQ_RX_DCB_RSS mode, "
 				 "nb_tcs(%d) != %d or %d in rx direction.",
 				 dcb_rx_conf->nb_tcs, HNS3_4_TCS, HNS3_8_TCS);
 			return -EINVAL;
@@ -2400,11 +2400,11 @@ hns3_check_link_speed(struct hns3_hw *hw, uint32_t link_speeds)
 	 * configure link_speeds (default 0), which means auto-negotiation.
 	 * In this case, it should return success.
 	 */
-	if (link_speeds == ETH_LINK_SPEED_AUTONEG &&
+	if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG &&
 	    hw->mac.support_autoneg == 0)
 		return 0;
 
-	if (link_speeds != ETH_LINK_SPEED_AUTONEG) {
+	if (link_speeds != RTE_ETH_LINK_SPEED_AUTONEG) {
 		ret = hns3_check_port_speed(hw, link_speeds);
 		if (ret)
 			return ret;
@@ -2464,15 +2464,15 @@ hns3_dev_configure(struct rte_eth_dev *dev)
 	if (ret)
 		goto cfg_err;
 
-	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		ret = hns3_setup_dcb(dev);
 		if (ret)
 			goto cfg_err;
 	}
 
 	/* When RSS is not configured, redirect the packet queue 0 */
-	if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 		rss_conf = conf->rx_adv_conf.rss_conf;
 		hw->rss_dis_flag = false;
 		ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -2493,7 +2493,7 @@ hns3_dev_configure(struct rte_eth_dev *dev)
 		goto cfg_err;
 
 	/* config hardware GRO */
-	gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+	gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
 	ret = hns3_config_gro(hw, gro_en);
 	if (ret)
 		goto cfg_err;
@@ -2600,15 +2600,15 @@ hns3_get_copper_port_speed_capa(uint32_t supported_speed)
 	uint32_t speed_capa = 0;
 
 	if (supported_speed & HNS3_PHY_LINK_SPEED_10M_HD_BIT)
-		speed_capa |= ETH_LINK_SPEED_10M_HD;
+		speed_capa |= RTE_ETH_LINK_SPEED_10M_HD;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_10M_BIT)
-		speed_capa |= ETH_LINK_SPEED_10M;
+		speed_capa |= RTE_ETH_LINK_SPEED_10M;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_100M_HD_BIT)
-		speed_capa |= ETH_LINK_SPEED_100M_HD;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_100M_BIT)
-		speed_capa |= ETH_LINK_SPEED_100M;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_1000M_BIT)
-		speed_capa |= ETH_LINK_SPEED_1G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G;
 
 	return speed_capa;
 }
@@ -2619,19 +2619,19 @@ hns3_get_firber_port_speed_capa(uint32_t supported_speed)
 	uint32_t speed_capa = 0;
 
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_1G_BIT)
-		speed_capa |= ETH_LINK_SPEED_1G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_10G_BIT)
-		speed_capa |= ETH_LINK_SPEED_10G;
+		speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_25G_BIT)
-		speed_capa |= ETH_LINK_SPEED_25G;
+		speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_40G_BIT)
-		speed_capa |= ETH_LINK_SPEED_40G;
+		speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_50G_BIT)
-		speed_capa |= ETH_LINK_SPEED_50G;
+		speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_100G_BIT)
-		speed_capa |= ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_200G_BIT)
-		speed_capa |= ETH_LINK_SPEED_200G;
+		speed_capa |= RTE_ETH_LINK_SPEED_200G;
 
 	return speed_capa;
 }
@@ -2650,7 +2650,7 @@ hns3_get_speed_capa(struct hns3_hw *hw)
 			hns3_get_firber_port_speed_capa(mac->supported_speed);
 
 	if (mac->support_autoneg == 0)
-		speed_capa |= ETH_LINK_SPEED_FIXED;
+		speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
 
 	return speed_capa;
 }
@@ -2676,40 +2676,40 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 	info->max_mac_addrs = HNS3_UC_MACADDR_NUM;
 	info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
 	info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
-	info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_TCP_CKSUM |
-				 DEV_RX_OFFLOAD_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_SCTP_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_KEEP_CRC |
-				 DEV_RX_OFFLOAD_SCATTER |
-				 DEV_RX_OFFLOAD_VLAN_STRIP |
-				 DEV_RX_OFFLOAD_VLAN_FILTER |
-				 DEV_RX_OFFLOAD_RSS_HASH |
-				 DEV_RX_OFFLOAD_TCP_LRO);
-	info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_TCP_CKSUM |
-				 DEV_TX_OFFLOAD_UDP_CKSUM |
-				 DEV_TX_OFFLOAD_SCTP_CKSUM |
-				 DEV_TX_OFFLOAD_MULTI_SEGS |
-				 DEV_TX_OFFLOAD_TCP_TSO |
-				 DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				 DEV_TX_OFFLOAD_GRE_TNL_TSO |
-				 DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-				 DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+	info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+				 RTE_ETH_RX_OFFLOAD_SCATTER |
+				 RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				 RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				 RTE_ETH_RX_OFFLOAD_RSS_HASH |
+				 RTE_ETH_RX_OFFLOAD_TCP_LRO);
+	info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				 RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				 RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
 				 hns3_txvlan_cap_get(hw));
 
 	if (hns3_dev_get_support(hw, OUTER_UDP_CKSUM))
-		info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+		info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 
 	if (hns3_dev_get_support(hw, INDEP_TXRX))
 		info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 				 RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
 
 	if (hns3_dev_get_support(hw, PTP))
-		info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+		info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	info->rx_desc_lim = (struct rte_eth_desc_lim) {
 		.nb_max = HNS3_MAX_RING_DESC,
@@ -2793,7 +2793,7 @@ hns3_update_port_link_info(struct rte_eth_dev *eth_dev)
 
 	ret = hns3_update_link_info(eth_dev);
 	if (ret)
-		hw->mac.link_status = ETH_LINK_DOWN;
+		hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	return ret;
 }
@@ -2806,29 +2806,29 @@ hns3_setup_linkstatus(struct rte_eth_dev *eth_dev,
 	struct hns3_mac *mac = &hw->mac;
 
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_10M:
-	case ETH_SPEED_NUM_100M:
-	case ETH_SPEED_NUM_1G:
-	case ETH_SPEED_NUM_10G:
-	case ETH_SPEED_NUM_25G:
-	case ETH_SPEED_NUM_40G:
-	case ETH_SPEED_NUM_50G:
-	case ETH_SPEED_NUM_100G:
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_200G:
 		if (mac->link_status)
 			new_link->link_speed = mac->link_speed;
 		break;
 	default:
 		if (mac->link_status)
-			new_link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+			new_link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 	}
 
 	if (!mac->link_status)
-		new_link->link_speed = ETH_SPEED_NUM_NONE;
+		new_link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	new_link->link_duplex = mac->link_duplex;
-	new_link->link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+	new_link->link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 	new_link->link_autoneg = mac->link_autoneg;
 }
 
@@ -2848,8 +2848,8 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
 	if (eth_dev->data->dev_started == 0) {
 		new_link.link_autoneg = mac->link_autoneg;
 		new_link.link_duplex = mac->link_duplex;
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
-		new_link.link_status = ETH_LINK_DOWN;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		new_link.link_status = RTE_ETH_LINK_DOWN;
 		goto out;
 	}
 
@@ -2861,7 +2861,7 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
 			break;
 		}
 
-		if (!wait_to_complete || mac->link_status == ETH_LINK_UP)
+		if (!wait_to_complete || mac->link_status == RTE_ETH_LINK_UP)
 			break;
 
 		rte_delay_ms(HNS3_LINK_CHECK_INTERVAL);
@@ -3207,31 +3207,31 @@ hns3_parse_speed(int speed_cmd, uint32_t *speed)
 {
 	switch (speed_cmd) {
 	case HNS3_CFG_SPEED_10M:
-		*speed = ETH_SPEED_NUM_10M;
+		*speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case HNS3_CFG_SPEED_100M:
-		*speed = ETH_SPEED_NUM_100M;
+		*speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case HNS3_CFG_SPEED_1G:
-		*speed = ETH_SPEED_NUM_1G;
+		*speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case HNS3_CFG_SPEED_10G:
-		*speed = ETH_SPEED_NUM_10G;
+		*speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case HNS3_CFG_SPEED_25G:
-		*speed = ETH_SPEED_NUM_25G;
+		*speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case HNS3_CFG_SPEED_40G:
-		*speed = ETH_SPEED_NUM_40G;
+		*speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case HNS3_CFG_SPEED_50G:
-		*speed = ETH_SPEED_NUM_50G;
+		*speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case HNS3_CFG_SPEED_100G:
-		*speed = ETH_SPEED_NUM_100G;
+		*speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	case HNS3_CFG_SPEED_200G:
-		*speed = ETH_SPEED_NUM_200G;
+		*speed = RTE_ETH_SPEED_NUM_200G;
 		break;
 	default:
 		return -EINVAL;
@@ -3559,39 +3559,39 @@ hns3_cfg_mac_speed_dup_hw(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
 	hns3_set_bit(req->speed_dup, HNS3_CFG_DUPLEX_B, !!duplex ? 1 : 0);
 
 	switch (speed) {
-	case ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_10M:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10M);
 		break;
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100M);
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_1G);
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10G);
 		break;
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_25G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_25G);
 		break;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_40G);
 		break;
-	case ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_50G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_50G);
 		break;
-	case ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_100G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100G);
 		break;
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_200G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_200G);
 		break;
@@ -4254,14 +4254,14 @@ hns3_mac_init(struct hns3_hw *hw)
 	int ret;
 
 	pf->support_sfp_query = true;
-	mac->link_duplex = ETH_LINK_FULL_DUPLEX;
+	mac->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	ret = hns3_cfg_mac_speed_dup_hw(hw, mac->link_speed, mac->link_duplex);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Config mac speed dup fail ret = %d", ret);
 		return ret;
 	}
 
-	mac->link_status = ETH_LINK_DOWN;
+	mac->link_status = RTE_ETH_LINK_DOWN;
 
 	return hns3_config_mtu(hw, pf->mps);
 }
@@ -4511,7 +4511,7 @@ hns3_dev_promiscuous_enable(struct rte_eth_dev *dev)
 	 * all packets coming in in the receiving direction.
 	 */
 	offloads = dev->data->dev_conf.rxmode.offloads;
-	if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		ret = hns3_enable_vlan_filter(hns, false);
 		if (ret) {
 			hns3_err(hw, "failed to enable promiscuous mode due to "
@@ -4552,7 +4552,7 @@ hns3_dev_promiscuous_disable(struct rte_eth_dev *dev)
 	}
 	/* when promiscuous mode was disabled, restore the vlan filter status */
 	offloads = dev->data->dev_conf.rxmode.offloads;
-	if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		ret = hns3_enable_vlan_filter(hns, true);
 		if (ret) {
 			hns3_err(hw, "failed to disable promiscuous mode due to"
@@ -4672,8 +4672,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
 		mac_info->supported_speed =
 					rte_le_to_cpu_32(resp->supported_speed);
 		mac_info->support_autoneg = resp->autoneg_ability;
-		mac_info->link_autoneg = (resp->autoneg == 0) ? ETH_LINK_FIXED
-					: ETH_LINK_AUTONEG;
+		mac_info->link_autoneg = (resp->autoneg == 0) ? RTE_ETH_LINK_FIXED
+					: RTE_ETH_LINK_AUTONEG;
 	} else {
 		mac_info->query_type = HNS3_DEFAULT_QUERY;
 	}
@@ -4684,8 +4684,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
 static uint8_t
 hns3_check_speed_dup(uint8_t duplex, uint32_t speed)
 {
-	if (!(speed == ETH_SPEED_NUM_10M || speed == ETH_SPEED_NUM_100M))
-		duplex = ETH_LINK_FULL_DUPLEX;
+	if (!(speed == RTE_ETH_SPEED_NUM_10M || speed == RTE_ETH_SPEED_NUM_100M))
+		duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	return duplex;
 }
@@ -4735,7 +4735,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
 		return ret;
 
 	/* Do nothing if no SFP */
-	if (mac_info.link_speed == ETH_SPEED_NUM_NONE)
+	if (mac_info.link_speed == RTE_ETH_SPEED_NUM_NONE)
 		return 0;
 
 	/*
@@ -4762,7 +4762,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
 
 	/* Config full duplex for SFP */
 	return hns3_cfg_mac_speed_dup(hw, mac_info.link_speed,
-				      ETH_LINK_FULL_DUPLEX);
+				      RTE_ETH_LINK_FULL_DUPLEX);
 }
 
 static void
@@ -4881,10 +4881,10 @@ hns3_cfg_mac_mode(struct hns3_hw *hw, bool enable)
 	hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_B, val);
 
 	/*
-	 * If DEV_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
+	 * If RTE_ETH_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
 	 * when receiving frames. Otherwise, CRC will be stripped.
 	 */
-	if (hw->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (hw->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, 0);
 	else
 		hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, val);
@@ -4912,7 +4912,7 @@ hns3_get_mac_link_status(struct hns3_hw *hw)
 	ret = hns3_cmd_send(hw, &desc, 1);
 	if (ret) {
 		hns3_err(hw, "get link status cmd failed %d", ret);
-		return ETH_LINK_DOWN;
+		return RTE_ETH_LINK_DOWN;
 	}
 
 	req = (struct hns3_link_status_cmd *)desc.data;
@@ -5094,19 +5094,19 @@ hns3_set_firber_default_support_speed(struct hns3_hw *hw)
 	struct hns3_mac *mac = &hw->mac;
 
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		return HNS3_FIBER_LINK_SPEED_1G_BIT;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		return HNS3_FIBER_LINK_SPEED_10G_BIT;
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_25G:
 		return HNS3_FIBER_LINK_SPEED_25G_BIT;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		return HNS3_FIBER_LINK_SPEED_40G_BIT;
-	case ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_50G:
 		return HNS3_FIBER_LINK_SPEED_50G_BIT;
-	case ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_100G:
 		return HNS3_FIBER_LINK_SPEED_100G_BIT;
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_200G:
 		return HNS3_FIBER_LINK_SPEED_200G_BIT;
 	default:
 		hns3_warn(hw, "invalid speed %u Mbps.", mac->link_speed);
@@ -5344,20 +5344,20 @@ hns3_convert_link_speeds2bitmap_copper(uint32_t link_speeds)
 {
 	uint32_t speed_bit;
 
-	switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
-	case ETH_LINK_SPEED_10M:
+	switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+	case RTE_ETH_LINK_SPEED_10M:
 		speed_bit = HNS3_PHY_LINK_SPEED_10M_BIT;
 		break;
-	case ETH_LINK_SPEED_10M_HD:
+	case RTE_ETH_LINK_SPEED_10M_HD:
 		speed_bit = HNS3_PHY_LINK_SPEED_10M_HD_BIT;
 		break;
-	case ETH_LINK_SPEED_100M:
+	case RTE_ETH_LINK_SPEED_100M:
 		speed_bit = HNS3_PHY_LINK_SPEED_100M_BIT;
 		break;
-	case ETH_LINK_SPEED_100M_HD:
+	case RTE_ETH_LINK_SPEED_100M_HD:
 		speed_bit = HNS3_PHY_LINK_SPEED_100M_HD_BIT;
 		break;
-	case ETH_LINK_SPEED_1G:
+	case RTE_ETH_LINK_SPEED_1G:
 		speed_bit = HNS3_PHY_LINK_SPEED_1000M_BIT;
 		break;
 	default:
@@ -5373,26 +5373,26 @@ hns3_convert_link_speeds2bitmap_fiber(uint32_t link_speeds)
 {
 	uint32_t speed_bit;
 
-	switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
-	case ETH_LINK_SPEED_1G:
+	switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+	case RTE_ETH_LINK_SPEED_1G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_1G_BIT;
 		break;
-	case ETH_LINK_SPEED_10G:
+	case RTE_ETH_LINK_SPEED_10G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_10G_BIT;
 		break;
-	case ETH_LINK_SPEED_25G:
+	case RTE_ETH_LINK_SPEED_25G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_25G_BIT;
 		break;
-	case ETH_LINK_SPEED_40G:
+	case RTE_ETH_LINK_SPEED_40G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_40G_BIT;
 		break;
-	case ETH_LINK_SPEED_50G:
+	case RTE_ETH_LINK_SPEED_50G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_50G_BIT;
 		break;
-	case ETH_LINK_SPEED_100G:
+	case RTE_ETH_LINK_SPEED_100G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_100G_BIT;
 		break;
-	case ETH_LINK_SPEED_200G:
+	case RTE_ETH_LINK_SPEED_200G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_200G_BIT;
 		break;
 	default:
@@ -5427,28 +5427,28 @@ hns3_check_port_speed(struct hns3_hw *hw, uint32_t link_speeds)
 static inline uint32_t
 hns3_get_link_speed(uint32_t link_speeds)
 {
-	uint32_t speed = ETH_SPEED_NUM_NONE;
-
-	if (link_speeds & ETH_LINK_SPEED_10M ||
-	    link_speeds & ETH_LINK_SPEED_10M_HD)
-		speed = ETH_SPEED_NUM_10M;
-	if (link_speeds & ETH_LINK_SPEED_100M ||
-	    link_speeds & ETH_LINK_SPEED_100M_HD)
-		speed = ETH_SPEED_NUM_100M;
-	if (link_speeds & ETH_LINK_SPEED_1G)
-		speed = ETH_SPEED_NUM_1G;
-	if (link_speeds & ETH_LINK_SPEED_10G)
-		speed = ETH_SPEED_NUM_10G;
-	if (link_speeds & ETH_LINK_SPEED_25G)
-		speed = ETH_SPEED_NUM_25G;
-	if (link_speeds & ETH_LINK_SPEED_40G)
-		speed = ETH_SPEED_NUM_40G;
-	if (link_speeds & ETH_LINK_SPEED_50G)
-		speed = ETH_SPEED_NUM_50G;
-	if (link_speeds & ETH_LINK_SPEED_100G)
-		speed = ETH_SPEED_NUM_100G;
-	if (link_speeds & ETH_LINK_SPEED_200G)
-		speed = ETH_SPEED_NUM_200G;
+	uint32_t speed = RTE_ETH_SPEED_NUM_NONE;
+
+	if (link_speeds & RTE_ETH_LINK_SPEED_10M ||
+	    link_speeds & RTE_ETH_LINK_SPEED_10M_HD)
+		speed = RTE_ETH_SPEED_NUM_10M;
+	if (link_speeds & RTE_ETH_LINK_SPEED_100M ||
+	    link_speeds & RTE_ETH_LINK_SPEED_100M_HD)
+		speed = RTE_ETH_SPEED_NUM_100M;
+	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
+		speed = RTE_ETH_SPEED_NUM_1G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
+		speed = RTE_ETH_SPEED_NUM_10G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
+		speed = RTE_ETH_SPEED_NUM_25G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
+		speed = RTE_ETH_SPEED_NUM_40G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_50G)
+		speed = RTE_ETH_SPEED_NUM_50G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_100G)
+		speed = RTE_ETH_SPEED_NUM_100G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_200G)
+		speed = RTE_ETH_SPEED_NUM_200G;
 
 	return speed;
 }
@@ -5456,11 +5456,11 @@ hns3_get_link_speed(uint32_t link_speeds)
 static uint8_t
 hns3_get_link_duplex(uint32_t link_speeds)
 {
-	if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
-	    (link_speeds & ETH_LINK_SPEED_100M_HD))
-		return ETH_LINK_HALF_DUPLEX;
+	if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+	    (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+		return RTE_ETH_LINK_HALF_DUPLEX;
 	else
-		return ETH_LINK_FULL_DUPLEX;
+		return RTE_ETH_LINK_FULL_DUPLEX;
 }
 
 static int
@@ -5594,9 +5594,9 @@ hns3_apply_link_speed(struct hns3_hw *hw)
 	struct hns3_set_link_speed_cfg cfg;
 
 	memset(&cfg, 0, sizeof(struct hns3_set_link_speed_cfg));
-	cfg.autoneg = (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) ?
-			ETH_LINK_AUTONEG : ETH_LINK_FIXED;
-	if (cfg.autoneg != ETH_LINK_AUTONEG) {
+	cfg.autoneg = (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) ?
+			RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
+	if (cfg.autoneg != RTE_ETH_LINK_AUTONEG) {
 		cfg.speed = hns3_get_link_speed(conf->link_speeds);
 		cfg.duplex = hns3_get_link_duplex(conf->link_speeds);
 	}
@@ -5869,7 +5869,7 @@ hns3_do_stop(struct hns3_adapter *hns)
 	ret = hns3_cfg_mac_mode(hw, false);
 	if (ret)
 		return ret;
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED) == 0) {
 		hns3_configure_all_mac_addr(hns, true);
@@ -6080,17 +6080,17 @@ hns3_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	current_mode = hns3_get_current_fc_mode(dev);
 	switch (current_mode) {
 	case HNS3_FC_FULL:
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	case HNS3_FC_TX_PAUSE:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case HNS3_FC_RX_PAUSE:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case HNS3_FC_NONE:
 	default:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		break;
 	}
 
@@ -6236,7 +6236,7 @@ hns3_get_dcb_info(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info)
 	int i;
 
 	rte_spinlock_lock(&hw->lock);
-	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = pf->local_max_tc;
 	else
 		dcb_info->nb_tcs = 1;
@@ -6536,7 +6536,7 @@ hns3_stop_service(struct hns3_adapter *hns)
 	struct rte_eth_dev *eth_dev;
 
 	eth_dev = &rte_eth_devices[hw->data->port_id];
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 	if (hw->adapter_state == HNS3_NIC_STARTED) {
 		rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
 		hns3_update_linkstatus_and_event(hw, false);
@@ -6826,7 +6826,7 @@ get_current_fec_auto_state(struct hns3_hw *hw, uint8_t *state)
 	 * in device of link speed
 	 * below 10 Gbps.
 	 */
-	if (hw->mac.link_speed < ETH_SPEED_NUM_10G) {
+	if (hw->mac.link_speed < RTE_ETH_SPEED_NUM_10G) {
 		*state = 0;
 		return 0;
 	}
@@ -6858,7 +6858,7 @@ hns3_fec_get_internal(struct hns3_hw *hw, uint32_t *fec_capa)
 	 * configured FEC mode is returned.
 	 * If link is up, current FEC mode is returned.
 	 */
-	if (hw->mac.link_status == ETH_LINK_DOWN) {
+	if (hw->mac.link_status == RTE_ETH_LINK_DOWN) {
 		ret = get_current_fec_auto_state(hw, &auto_state);
 		if (ret)
 			return ret;
@@ -6957,12 +6957,12 @@ get_current_speed_fec_cap(struct hns3_hw *hw, struct rte_eth_fec_capa *fec_capa)
 	uint32_t cur_capa;
 
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		cur_capa = fec_capa[1].capa;
 		break;
-	case ETH_SPEED_NUM_25G:
-	case ETH_SPEED_NUM_100G:
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_200G:
 		cur_capa = fec_capa[0].capa;
 		break;
 	default:
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index e28056b1bd60..0f55fd4c83ad 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -190,10 +190,10 @@ struct hns3_mac {
 	uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
 	uint8_t media_type;
 	uint8_t phy_addr;
-	uint8_t link_duplex  : 1; /* ETH_LINK_[HALF/FULL]_DUPLEX */
-	uint8_t link_autoneg : 1; /* ETH_LINK_[AUTONEG/FIXED] */
-	uint8_t link_status  : 1; /* ETH_LINK_[DOWN/UP] */
-	uint32_t link_speed;      /* ETH_SPEED_NUM_ */
+	uint8_t link_duplex  : 1; /* RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+	uint8_t link_autoneg : 1; /* RTE_ETH_LINK_[AUTONEG/FIXED] */
+	uint8_t link_status  : 1; /* RTE_ETH_LINK_[DOWN/UP] */
+	uint32_t link_speed;      /* RTE_ETH_SPEED_NUM_ */
 	/*
 	 * Some firmware versions support only the SFP speed query. In addition
 	 * to the SFP speed query, some firmware supports the query of the speed
@@ -1076,9 +1076,9 @@ static inline uint64_t
 hns3_txvlan_cap_get(struct hns3_hw *hw)
 {
 	if (hw->port_base_vlan_cfg.state)
-		return DEV_TX_OFFLOAD_VLAN_INSERT;
+		return RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	else
-		return DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT;
+		return RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
 }
 
 #endif /* _HNS3_ETHDEV_H_ */
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 54dbd4b798f2..7b784048b518 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -807,15 +807,15 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
 	}
 
 	hw->adapter_state = HNS3_NIC_CONFIGURING;
-	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		hns3_err(hw, "setting link speed/duplex not supported");
 		ret = -EINVAL;
 		goto cfg_err;
 	}
 
 	/* When RSS is not configured, redirect the packet queue 0 */
-	if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 		hw->rss_dis_flag = false;
 		rss_conf = conf->rx_adv_conf.rss_conf;
 		ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -832,7 +832,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
 		goto cfg_err;
 
 	/* config hardware GRO */
-	gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+	gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
 	ret = hns3_config_gro(hw, gro_en);
 	if (ret)
 		goto cfg_err;
@@ -935,32 +935,32 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 	info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
 	info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
 
-	info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_TCP_CKSUM |
-				 DEV_RX_OFFLOAD_SCTP_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_SCATTER |
-				 DEV_RX_OFFLOAD_VLAN_STRIP |
-				 DEV_RX_OFFLOAD_VLAN_FILTER |
-				 DEV_RX_OFFLOAD_RSS_HASH |
-				 DEV_RX_OFFLOAD_TCP_LRO);
-	info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_TCP_CKSUM |
-				 DEV_TX_OFFLOAD_UDP_CKSUM |
-				 DEV_TX_OFFLOAD_SCTP_CKSUM |
-				 DEV_TX_OFFLOAD_MULTI_SEGS |
-				 DEV_TX_OFFLOAD_TCP_TSO |
-				 DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				 DEV_TX_OFFLOAD_GRE_TNL_TSO |
-				 DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-				 DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+	info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_SCATTER |
+				 RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				 RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				 RTE_ETH_RX_OFFLOAD_RSS_HASH |
+				 RTE_ETH_RX_OFFLOAD_TCP_LRO);
+	info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				 RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				 RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
 				 hns3_txvlan_cap_get(hw));
 
 	if (hns3_dev_get_support(hw, OUTER_UDP_CKSUM))
-		info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+		info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 
 	if (hns3_dev_get_support(hw, INDEP_TXRX))
 		info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -1640,10 +1640,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	tmp_mask = (unsigned int)mask;
 
-	if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
 		rte_spinlock_lock(&hw->lock);
 		/* Enable or disable VLAN filter */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ret = hns3vf_en_vlan_filter(hw, true);
 		else
 			ret = hns3vf_en_vlan_filter(hw, false);
@@ -1653,10 +1653,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	}
 
 	/* Vlan stripping setting */
-	if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
 		rte_spinlock_lock(&hw->lock);
 		/* Enable or disable VLAN stripping */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			ret = hns3vf_en_hw_strip_rxvtag(hw, true);
 		else
 			ret = hns3vf_en_hw_strip_rxvtag(hw, false);
@@ -1724,7 +1724,7 @@ hns3vf_restore_vlan_conf(struct hns3_adapter *hns)
 	int ret;
 
 	dev_conf = &hw->data->dev_conf;
-	en = dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP ? true
+	en = dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ? true
 								   : false;
 	ret = hns3vf_en_hw_strip_rxvtag(hw, en);
 	if (ret)
@@ -1749,8 +1749,8 @@ hns3vf_dev_configure_vlan(struct rte_eth_dev *dev)
 	}
 
 	/* Apply vlan offload setting */
-	ret = hns3vf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK |
-					ETH_VLAN_FILTER_MASK);
+	ret = hns3vf_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK |
+					RTE_ETH_VLAN_FILTER_MASK);
 	if (ret)
 		hns3_err(hw, "dev config vlan offload failed, ret = %d.", ret);
 
@@ -2059,7 +2059,7 @@ hns3vf_do_stop(struct hns3_adapter *hns)
 	struct hns3_hw *hw = &hns->hw;
 	int ret;
 
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	/*
 	 * The "hns3vf_do_stop" function will also be called by .stop_service to
@@ -2218,31 +2218,31 @@ hns3vf_dev_link_update(struct rte_eth_dev *eth_dev,
 
 	memset(&new_link, 0, sizeof(new_link));
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_10M:
-	case ETH_SPEED_NUM_100M:
-	case ETH_SPEED_NUM_1G:
-	case ETH_SPEED_NUM_10G:
-	case ETH_SPEED_NUM_25G:
-	case ETH_SPEED_NUM_40G:
-	case ETH_SPEED_NUM_50G:
-	case ETH_SPEED_NUM_100G:
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_200G:
 		if (mac->link_status)
 			new_link.link_speed = mac->link_speed;
 		break;
 	default:
 		if (mac->link_status)
-			new_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+			new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 	}
 
 	if (!mac->link_status)
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	new_link.link_duplex = mac->link_duplex;
-	new_link.link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+	new_link.link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg =
-	    !(eth_dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED);
+	    !(eth_dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(eth_dev, &new_link);
 }
@@ -2570,11 +2570,11 @@ hns3vf_stop_service(struct hns3_adapter *hns)
 		 * Make sure call update link status before hns3vf_stop_poll_job
 		 * because update link status depend on polling job exist.
 		 */
-		hns3vf_update_link_status(hw, ETH_LINK_DOWN, hw->mac.link_speed,
+		hns3vf_update_link_status(hw, RTE_ETH_LINK_DOWN, hw->mac.link_speed,
 					  hw->mac.link_duplex);
 		hns3vf_stop_poll_job(eth_dev);
 	}
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	hns3_set_rxtx_function(eth_dev);
 	rte_wmb();
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 38a2ee58a651..da6918fddda3 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1298,10 +1298,10 @@ hns3_rss_input_tuple_supported(struct hns3_hw *hw,
 	 * Kunpeng930 and future kunpeng series support to use src/dst port
 	 * fields to RSS hash for IPv6 SCTP packet type.
 	 */
-	if (rss->types & (ETH_RSS_L4_DST_ONLY | ETH_RSS_L4_SRC_ONLY) &&
-	    (rss->types & ETH_RSS_IP ||
+	if (rss->types & (RTE_ETH_RSS_L4_DST_ONLY | RTE_ETH_RSS_L4_SRC_ONLY) &&
+	    (rss->types & RTE_ETH_RSS_IP ||
 	    (!hw->rss_info.ipv6_sctp_offload_supported &&
-	    rss->types & ETH_RSS_NONFRAG_IPV6_SCTP)))
+	    rss->types & RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
 		return false;
 
 	return true;
diff --git a/drivers/net/hns3/hns3_ptp.c b/drivers/net/hns3/hns3_ptp.c
index 5dfe68cc4dbd..9a829d7011ad 100644
--- a/drivers/net/hns3/hns3_ptp.c
+++ b/drivers/net/hns3/hns3_ptp.c
@@ -21,7 +21,7 @@ hns3_mbuf_dyn_rx_timestamp_register(struct rte_eth_dev *dev,
 	struct hns3_hw *hw = &hns->hw;
 	int ret;
 
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		return 0;
 
 	ret = rte_mbuf_dyn_rx_timestamp_register
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index 3a81e90e0911..85495bbe89d9 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -76,69 +76,69 @@ static const struct {
 	uint64_t rss_types;
 	uint64_t rss_field;
 } hns3_set_tuple_table[] = {
-	{ ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
-	{ ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
-	{ ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) },
-	{ ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) },
 };
 
@@ -146,44 +146,44 @@ static const struct {
 	uint64_t rss_types;
 	uint64_t rss_field;
 } hns3_set_rss_types[] = {
-	{ ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
+	{ RTE_ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_VER) },
-	{ ETH_RSS_NONFRAG_IPV4_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
-	{ ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
+	{ RTE_ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) |
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_VER) },
-	{ ETH_RSS_NONFRAG_IPV6_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }
 };
@@ -365,10 +365,10 @@ hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw,
 	 * When user does not specify the following types or a combination of
 	 * the following types, it enables all fields for the supported RSS
 	 * types. the following types as:
-	 * - ETH_RSS_L3_SRC_ONLY
-	 * - ETH_RSS_L3_DST_ONLY
-	 * - ETH_RSS_L4_SRC_ONLY
-	 * - ETH_RSS_L4_DST_ONLY
+	 * - RTE_ETH_RSS_L3_SRC_ONLY
+	 * - RTE_ETH_RSS_L3_DST_ONLY
+	 * - RTE_ETH_RSS_L4_SRC_ONLY
+	 * - RTE_ETH_RSS_L4_DST_ONLY
 	 */
 	if (fields_count == 0) {
 		for (i = 0; i < RTE_DIM(hns3_set_rss_types); i++) {
@@ -520,8 +520,8 @@ hns3_dev_rss_reta_update(struct rte_eth_dev *dev,
 	memcpy(indirection_tbl, rss_cfg->rss_indirection_tbl,
 	       sizeof(rss_cfg->rss_indirection_tbl));
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].reta[shift] >= hw->alloc_rss_size) {
 			rte_spinlock_unlock(&hw->lock);
 			hns3_err(hw, "queue id(%u) set to redirection table "
@@ -572,8 +572,8 @@ hns3_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 	rte_spinlock_lock(&hw->lock);
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] =
 						rss_cfg->rss_indirection_tbl[i];
@@ -692,7 +692,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 	}
 
 	/* When RSS is off, redirect the packet queue 0 */
-	if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) == 0)
+	if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0)
 		hns3_rss_uninit(hns);
 
 	/* Configure RSS hash algorithm and hash key offset */
@@ -709,7 +709,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 	 * When RSS is off, it doesn't need to configure rss redirection table
 	 * to hardware.
 	 */
-	if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+	if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		ret = hns3_set_rss_indir_table(hw, rss_cfg->rss_indirection_tbl,
 					       hw->rss_ind_tbl_size);
 		if (ret)
@@ -723,7 +723,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 	return ret;
 
 rss_indir_table_uninit:
-	if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+	if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		ret1 = hns3_rss_reset_indir_table(hw);
 		if (ret1 != 0)
 			return ret;
diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h
index 996083b88b25..6f153a1b7bfb 100644
--- a/drivers/net/hns3/hns3_rss.h
+++ b/drivers/net/hns3/hns3_rss.h
@@ -8,20 +8,20 @@
 #include <rte_flow.h>
 
 #define HNS3_ETH_RSS_SUPPORT ( \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L3_SRC_ONLY | \
-	ETH_RSS_L3_DST_ONLY | \
-	ETH_RSS_L4_SRC_ONLY | \
-	ETH_RSS_L4_DST_ONLY)
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L3_SRC_ONLY | \
+	RTE_ETH_RSS_L3_DST_ONLY | \
+	RTE_ETH_RSS_L4_SRC_ONLY | \
+	RTE_ETH_RSS_L4_DST_ONLY)
 
 #define HNS3_RSS_IND_TBL_SIZE	512 /* The size of hash lookup table */
 #define HNS3_RSS_IND_TBL_SIZE_MAX 2048
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 602548a4f25b..920ee8ceeab9 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1924,7 +1924,7 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	memset(&rxq->dfx_stats, 0, sizeof(struct hns3_rx_dfx_stats));
 
 	/* CRC len set here is used for amending packet length */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1969,7 +1969,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
 						 rxq->rx_buf_len);
 	}
 
-	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
+	if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
 	    dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
 		dev->data->scattered_rx = true;
 }
@@ -2845,7 +2845,7 @@ hns3_get_rx_function(struct rte_eth_dev *dev)
 	vec_allowed = vec_support && hns3_get_default_vec_support();
 	sve_allowed = vec_support && hns3_get_sve_support();
 	simple_allowed = !dev->data->scattered_rx &&
-			 (offloads & DEV_RX_OFFLOAD_TCP_LRO) == 0;
+			 (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) == 0;
 
 	if (hns->rx_func_hint == HNS3_IO_FUNC_HINT_VEC && vec_allowed)
 		return hns3_recv_pkts_vec;
@@ -3139,7 +3139,7 @@ hns3_restore_gro_conf(struct hns3_hw *hw)
 	int ret;
 
 	offloads = hw->data->dev_conf.rxmode.offloads;
-	gro_en = offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+	gro_en = offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
 	ret = hns3_config_gro(hw, gro_en);
 	if (ret)
 		hns3_err(hw, "restore hardware GRO to %s failed, ret = %d",
@@ -4291,7 +4291,7 @@ hns3_tx_check_simple_support(struct rte_eth_dev *dev)
 	if (hns3_dev_get_support(hw, PTP))
 		return false;
 
-	return (offloads == (offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE));
+	return (offloads == (offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE));
 }
 
 static bool
@@ -4303,16 +4303,16 @@ hns3_get_tx_prep_needed(struct rte_eth_dev *dev)
 	return true;
 #else
 #define HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK (\
-		DEV_TX_OFFLOAD_IPV4_CKSUM | \
-		DEV_TX_OFFLOAD_TCP_CKSUM | \
-		DEV_TX_OFFLOAD_UDP_CKSUM | \
-		DEV_TX_OFFLOAD_SCTP_CKSUM | \
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-		DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
-		DEV_TX_OFFLOAD_TCP_TSO | \
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-		DEV_TX_OFFLOAD_GRE_TNL_TSO | \
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO)
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)
 
 	uint64_t tx_offload = dev->data->dev_conf.txmode.offloads;
 	if (tx_offload & HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index c8229e9076b5..dfea5d5b4c2f 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -307,7 +307,7 @@ struct hns3_rx_queue {
 	uint16_t rx_rearm_start; /* index of BD that driver re-arming from */
 	uint16_t rx_rearm_nb;    /* number of remaining BDs to be re-armed */
 
-	/* 4 if DEV_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
+	/* 4 if RTE_ETH_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
 	uint8_t crc_len;
 
 	/*
diff --git a/drivers/net/hns3/hns3_rxtx_vec.c b/drivers/net/hns3/hns3_rxtx_vec.c
index ff434d2d33ed..455110361aac 100644
--- a/drivers/net/hns3/hns3_rxtx_vec.c
+++ b/drivers/net/hns3/hns3_rxtx_vec.c
@@ -22,8 +22,8 @@ hns3_tx_check_vec_support(struct rte_eth_dev *dev)
 	if (hns3_dev_get_support(hw, PTP))
 		return -ENOTSUP;
 
-	/* Only support DEV_TX_OFFLOAD_MBUF_FAST_FREE */
-	if (txmode->offloads != DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	/* Only support RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE */
+	if (txmode->offloads != RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		return -ENOTSUP;
 
 	return 0;
@@ -228,10 +228,10 @@ hns3_rxq_vec_check(struct hns3_rx_queue *rxq, void *arg)
 int
 hns3_rx_check_vec_support(struct rte_eth_dev *dev)
 {
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-	uint64_t offloads_mask = DEV_RX_OFFLOAD_TCP_LRO |
-				 DEV_RX_OFFLOAD_VLAN;
+	uint64_t offloads_mask = RTE_ETH_RX_OFFLOAD_TCP_LRO |
+				 RTE_ETH_RX_OFFLOAD_VLAN;
 
 	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if (hns3_dev_get_support(hw, PTP))
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 0a4db0891d4a..293df887bf7c 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1629,7 +1629,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
 
 	/* Set the global registers with default ether type value */
 	if (!pf->support_multi_driver) {
-		ret = i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+		ret = i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					 RTE_ETHER_TYPE_VLAN);
 		if (ret != I40E_SUCCESS) {
 			PMD_INIT_LOG(ERR,
@@ -1896,8 +1896,8 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 	ad->tx_simple_allowed = true;
 	ad->tx_vec_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Only legacy filter API needs the following fdir config. So when the
 	 * legacy filter API is deprecated, the following codes should also be
@@ -1931,13 +1931,13 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 	 *  number, which will be available after rx_queue_setup(). dev_start()
 	 *  function is good to place RSS setup.
 	 */
-	if (mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+	if (mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) {
 		ret = i40e_vmdq_setup(dev);
 		if (ret)
 			goto err;
 	}
 
-	if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if (mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		ret = i40e_dcb_setup(dev);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "failed to configure DCB.");
@@ -2214,17 +2214,17 @@ i40e_parse_link_speeds(uint16_t link_speeds)
 {
 	uint8_t link_speed = I40E_LINK_SPEED_UNKNOWN;
 
-	if (link_speeds & ETH_LINK_SPEED_40G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
 		link_speed |= I40E_LINK_SPEED_40GB;
-	if (link_speeds & ETH_LINK_SPEED_25G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
 		link_speed |= I40E_LINK_SPEED_25GB;
-	if (link_speeds & ETH_LINK_SPEED_20G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_20G)
 		link_speed |= I40E_LINK_SPEED_20GB;
-	if (link_speeds & ETH_LINK_SPEED_10G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 		link_speed |= I40E_LINK_SPEED_10GB;
-	if (link_speeds & ETH_LINK_SPEED_1G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
 		link_speed |= I40E_LINK_SPEED_1GB;
-	if (link_speeds & ETH_LINK_SPEED_100M)
+	if (link_speeds & RTE_ETH_LINK_SPEED_100M)
 		link_speed |= I40E_LINK_SPEED_100MB;
 
 	return link_speed;
@@ -2332,13 +2332,13 @@ i40e_apply_link_speed(struct rte_eth_dev *dev)
 	abilities |= I40E_AQ_PHY_ENABLE_ATOMIC_LINK |
 		     I40E_AQ_PHY_LINK_ENABLED;
 
-	if (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
-		conf->link_speeds = ETH_LINK_SPEED_40G |
-				    ETH_LINK_SPEED_25G |
-				    ETH_LINK_SPEED_20G |
-				    ETH_LINK_SPEED_10G |
-				    ETH_LINK_SPEED_1G |
-				    ETH_LINK_SPEED_100M;
+	if (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
+		conf->link_speeds = RTE_ETH_LINK_SPEED_40G |
+				    RTE_ETH_LINK_SPEED_25G |
+				    RTE_ETH_LINK_SPEED_20G |
+				    RTE_ETH_LINK_SPEED_10G |
+				    RTE_ETH_LINK_SPEED_1G |
+				    RTE_ETH_LINK_SPEED_100M;
 
 		abilities |= I40E_AQ_PHY_AN_ENABLED;
 	} else {
@@ -2876,34 +2876,34 @@ update_link_reg(struct i40e_hw *hw, struct rte_eth_link *link)
 	/* Parse the link status */
 	switch (link_speed) {
 	case I40E_REG_SPEED_0:
-		link->link_speed = ETH_SPEED_NUM_100M;
+		link->link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case I40E_REG_SPEED_1:
-		link->link_speed = ETH_SPEED_NUM_1G;
+		link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case I40E_REG_SPEED_2:
 		if (hw->mac.type == I40E_MAC_X722)
-			link->link_speed = ETH_SPEED_NUM_2_5G;
+			link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		else
-			link->link_speed = ETH_SPEED_NUM_10G;
+			link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case I40E_REG_SPEED_3:
 		if (hw->mac.type == I40E_MAC_X722) {
-			link->link_speed = ETH_SPEED_NUM_5G;
+			link->link_speed = RTE_ETH_SPEED_NUM_5G;
 		} else {
 			reg_val = I40E_READ_REG(hw, I40E_PRTMAC_MACC);
 
 			if (reg_val & I40E_REG_MACC_25GB)
-				link->link_speed = ETH_SPEED_NUM_25G;
+				link->link_speed = RTE_ETH_SPEED_NUM_25G;
 			else
-				link->link_speed = ETH_SPEED_NUM_40G;
+				link->link_speed = RTE_ETH_SPEED_NUM_40G;
 		}
 		break;
 	case I40E_REG_SPEED_4:
 		if (hw->mac.type == I40E_MAC_X722)
-			link->link_speed = ETH_SPEED_NUM_10G;
+			link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		else
-			link->link_speed = ETH_SPEED_NUM_20G;
+			link->link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "Unknown link speed info %u", link_speed);
@@ -2930,8 +2930,8 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
 		status = i40e_aq_get_link_info(hw, enable_lse,
 						&link_status, NULL);
 		if (unlikely(status != I40E_SUCCESS)) {
-			link->link_speed = ETH_SPEED_NUM_NONE;
-			link->link_duplex = ETH_LINK_FULL_DUPLEX;
+			link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+			link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR, "Failed to get link info");
 			return;
 		}
@@ -2946,28 +2946,28 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
 	/* Parse the link status */
 	switch (link_status.link_speed) {
 	case I40E_LINK_SPEED_100MB:
-		link->link_speed = ETH_SPEED_NUM_100M;
+		link->link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case I40E_LINK_SPEED_1GB:
-		link->link_speed = ETH_SPEED_NUM_1G;
+		link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case I40E_LINK_SPEED_10GB:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case I40E_LINK_SPEED_20GB:
-		link->link_speed = ETH_SPEED_NUM_20G;
+		link->link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case I40E_LINK_SPEED_25GB:
-		link->link_speed = ETH_SPEED_NUM_25G;
+		link->link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case I40E_LINK_SPEED_40GB:
-		link->link_speed = ETH_SPEED_NUM_40G;
+		link->link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	default:
 		if (link->link_status)
-			link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+			link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		else
-			link->link_speed = ETH_SPEED_NUM_NONE;
+			link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 }
@@ -2984,9 +2984,9 @@ i40e_dev_link_update(struct rte_eth_dev *dev,
 	memset(&link, 0, sizeof(link));
 
 	/* i40e uses full duplex only */
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 
 	if (!wait_to_complete && !enable_lse)
 		update_link_reg(hw, &link);
@@ -3720,33 +3720,33 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_KEEP_CRC |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_RSS_HASH;
-
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
 		dev_info->tx_queue_offload_capa;
 	dev_info->dev_capa =
 		RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -3805,7 +3805,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	if (I40E_PHY_TYPE_SUPPORT_40G(hw->phy.phy_types)) {
 		/* For XL710 */
-		dev_info->speed_capa = ETH_LINK_SPEED_40G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_40G;
 		dev_info->default_rxportconf.nb_queues = 2;
 		dev_info->default_txportconf.nb_queues = 2;
 		if (dev->data->nb_rx_queues == 1)
@@ -3819,17 +3819,17 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	} else if (I40E_PHY_TYPE_SUPPORT_25G(hw->phy.phy_types)) {
 		/* For XXV710 */
-		dev_info->speed_capa = ETH_LINK_SPEED_25G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_25G;
 		dev_info->default_rxportconf.nb_queues = 1;
 		dev_info->default_txportconf.nb_queues = 1;
 		dev_info->default_rxportconf.ring_size = 256;
 		dev_info->default_txportconf.ring_size = 256;
 	} else {
 		/* For X710 */
-		dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
 		dev_info->default_rxportconf.nb_queues = 1;
 		dev_info->default_txportconf.nb_queues = 1;
-		if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_10G) {
+		if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_10G) {
 			dev_info->default_rxportconf.ring_size = 512;
 			dev_info->default_txportconf.ring_size = 256;
 		} else {
@@ -3868,7 +3868,7 @@ i40e_vlan_tpid_set_by_registers(struct rte_eth_dev *dev,
 	int ret;
 
 	if (qinq) {
-		if (vlan_type == ETH_VLAN_TYPE_OUTER)
+		if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
 			reg_id = 2;
 	}
 
@@ -3915,12 +3915,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
 	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	int qinq = dev->data->dev_conf.rxmode.offloads &
-		   DEV_RX_OFFLOAD_VLAN_EXTEND;
+		   RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	int ret = 0;
 
-	if ((vlan_type != ETH_VLAN_TYPE_INNER &&
-	     vlan_type != ETH_VLAN_TYPE_OUTER) ||
-	    (!qinq && vlan_type == ETH_VLAN_TYPE_INNER)) {
+	if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+	     vlan_type != RTE_ETH_VLAN_TYPE_OUTER) ||
+	    (!qinq && vlan_type == RTE_ETH_VLAN_TYPE_INNER)) {
 		PMD_DRV_LOG(ERR,
 			    "Unsupported vlan type.");
 		return -EINVAL;
@@ -3934,12 +3934,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
 	/* 802.1ad frames ability is added in NVM API 1.7*/
 	if (hw->flags & I40E_HW_FLAG_802_1AD_CAPABLE) {
 		if (qinq) {
-			if (vlan_type == ETH_VLAN_TYPE_OUTER)
+			if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
 				hw->first_tag = rte_cpu_to_le_16(tpid);
-			else if (vlan_type == ETH_VLAN_TYPE_INNER)
+			else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER)
 				hw->second_tag = rte_cpu_to_le_16(tpid);
 		} else {
-			if (vlan_type == ETH_VLAN_TYPE_OUTER)
+			if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
 				hw->second_tag = rte_cpu_to_le_16(tpid);
 		}
 		ret = i40e_aq_set_switch_config(hw, 0, 0, 0, NULL);
@@ -3998,37 +3998,37 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			i40e_vsi_config_vlan_filter(vsi, TRUE);
 		else
 			i40e_vsi_config_vlan_filter(vsi, FALSE);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			i40e_vsi_config_vlan_stripping(vsi, TRUE);
 		else
 			i40e_vsi_config_vlan_stripping(vsi, FALSE);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
 			i40e_vsi_config_double_vlan(vsi, TRUE);
 			/* Set global registers with default ethertype. */
-			i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+			i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					   RTE_ETHER_TYPE_VLAN);
-			i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+			i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
 					   RTE_ETHER_TYPE_VLAN);
 		}
 		else
 			i40e_vsi_config_double_vlan(vsi, FALSE);
 	}
 
-	if (mask & ETH_QINQ_STRIP_MASK) {
+	if (mask & RTE_ETH_QINQ_STRIP_MASK) {
 		/* Enable or disable outer VLAN stripping */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
 			i40e_vsi_config_outer_vlan_stripping(vsi, TRUE);
 		else
 			i40e_vsi_config_outer_vlan_stripping(vsi, FALSE);
@@ -4111,17 +4111,17 @@ i40e_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	 /* Return current mode according to actual setting*/
 	switch (hw->fc.current_mode) {
 	case I40E_FC_FULL:
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	case I40E_FC_TX_PAUSE:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case I40E_FC_RX_PAUSE:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case I40E_FC_NONE:
 	default:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	};
 
 	return 0;
@@ -4137,10 +4137,10 @@ i40e_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	struct i40e_hw *hw;
 	struct i40e_pf *pf;
 	enum i40e_fc_mode rte_fcmode_2_i40e_fcmode[] = {
-		[RTE_FC_NONE] = I40E_FC_NONE,
-		[RTE_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
-		[RTE_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
-		[RTE_FC_FULL] = I40E_FC_FULL
+		[RTE_ETH_FC_NONE] = I40E_FC_NONE,
+		[RTE_ETH_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
+		[RTE_ETH_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
+		[RTE_ETH_FC_FULL] = I40E_FC_FULL
 	};
 
 	/* high_water field in the rte_eth_fc_conf using the kilobytes unit */
@@ -4287,7 +4287,7 @@ i40e_macaddr_add(struct rte_eth_dev *dev,
 	}
 
 	rte_memcpy(&mac_filter.mac_addr, mac_addr, RTE_ETHER_ADDR_LEN);
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 		mac_filter.filter_type = I40E_MACVLAN_PERFECT_MATCH;
 	else
 		mac_filter.filter_type = I40E_MAC_PERFECT_MATCH;
@@ -4440,7 +4440,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
 	int ret;
 
 	if (reta_size != lut_size ||
-		reta_size > ETH_RSS_RETA_SIZE_512) {
+		reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
 		PMD_DRV_LOG(ERR,
 			"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
 			reta_size, lut_size);
@@ -4456,8 +4456,8 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
 	if (ret)
 		goto out;
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			lut[i] = reta_conf[idx].reta[shift];
 	}
@@ -4483,7 +4483,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
 	int ret;
 
 	if (reta_size != lut_size ||
-		reta_size > ETH_RSS_RETA_SIZE_512) {
+		reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
 		PMD_DRV_LOG(ERR,
 			"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
 			reta_size, lut_size);
@@ -4500,8 +4500,8 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
 	if (ret)
 		goto out;
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = lut[i];
 	}
@@ -4818,7 +4818,7 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
 			pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
 				hw->func_caps.num_vsis - vsi_count);
 			pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
-				ETH_64_POOLS);
+				RTE_ETH_64_POOLS);
 			if (pf->max_nb_vmdq_vsi) {
 				pf->flags |= I40E_FLAG_VMDQ;
 				pf->vmdq_nb_qps = pf->vmdq_nb_qp_max;
@@ -6104,10 +6104,10 @@ i40e_dev_init_vlan(struct rte_eth_dev *dev)
 	int mask = 0;
 
 	/* Apply vlan offload setting */
-	mask = ETH_VLAN_STRIP_MASK |
-	       ETH_QINQ_STRIP_MASK |
-	       ETH_VLAN_FILTER_MASK |
-	       ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK |
+	       RTE_ETH_QINQ_STRIP_MASK |
+	       RTE_ETH_VLAN_FILTER_MASK |
+	       RTE_ETH_VLAN_EXTEND_MASK;
 	ret = i40e_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_DRV_LOG(INFO, "Failed to update vlan offload");
@@ -6236,9 +6236,9 @@ i40e_pf_setup(struct i40e_pf *pf)
 
 	/* Configure filter control */
 	memset(&settings, 0, sizeof(settings));
-	if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_128)
+	if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_128)
 		settings.hash_lut_size = I40E_HASH_LUT_SIZE_128;
-	else if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_512)
+	else if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_512)
 		settings.hash_lut_size = I40E_HASH_LUT_SIZE_512;
 	else {
 		PMD_DRV_LOG(ERR, "Hash lookup table size (%u) not supported",
@@ -7098,7 +7098,7 @@ i40e_find_vlan_filter(struct i40e_vsi *vsi,
 {
 	uint32_t vid_idx, vid_bit;
 
-	if (vlan_id > ETH_VLAN_ID_MAX)
+	if (vlan_id > RTE_ETH_VLAN_ID_MAX)
 		return 0;
 
 	vid_idx = I40E_VFTA_IDX(vlan_id);
@@ -7133,7 +7133,7 @@ i40e_set_vlan_filter(struct i40e_vsi *vsi,
 	struct i40e_aqc_add_remove_vlan_element_data vlan_data = {0};
 	int ret;
 
-	if (vlan_id > ETH_VLAN_ID_MAX)
+	if (vlan_id > RTE_ETH_VLAN_ID_MAX)
 		return;
 
 	i40e_store_vlan_filter(vsi, vlan_id, on);
@@ -7727,25 +7727,25 @@ static int
 i40e_dev_get_filter_type(uint16_t filter_type, uint16_t *flag)
 {
 	switch (filter_type) {
-	case RTE_TUNNEL_FILTER_IMAC_IVLAN:
+	case RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN;
 		break;
-	case RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID:
+	case RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID;
 		break;
-	case RTE_TUNNEL_FILTER_IMAC_TENID:
+	case RTE_ETH_TUNNEL_FILTER_IMAC_TENID:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID;
 		break;
-	case RTE_TUNNEL_FILTER_OMAC_TENID_IMAC:
+	case RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC;
 		break;
-	case ETH_TUNNEL_FILTER_IMAC:
+	case RTE_ETH_TUNNEL_FILTER_IMAC:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC;
 		break;
-	case ETH_TUNNEL_FILTER_OIP:
+	case RTE_ETH_TUNNEL_FILTER_OIP:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_OIP;
 		break;
-	case ETH_TUNNEL_FILTER_IIP:
+	case RTE_ETH_TUNNEL_FILTER_IIP:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IIP;
 		break;
 	default:
@@ -8711,16 +8711,16 @@ i40e_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
 					  I40E_AQC_TUNNEL_TYPE_VXLAN);
 		break;
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
 					  I40E_AQC_TUNNEL_TYPE_VXLAN_GPE);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -1;
 		break;
@@ -8746,12 +8746,12 @@ i40e_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		ret = i40e_del_vxlan_port(pf, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -1;
 		break;
@@ -8843,7 +8843,7 @@ int
 i40e_pf_reset_rss_reta(struct i40e_pf *pf)
 {
 	struct i40e_hw *hw = &pf->adapter->hw;
-	uint8_t lut[ETH_RSS_RETA_SIZE_512];
+	uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
 	uint32_t i;
 	int num;
 
@@ -8851,7 +8851,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
 	 * configured. It's necessary to calculate the actual PF
 	 * queues that are configured.
 	 */
-	if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+	if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
 		num = i40e_pf_calc_configured_queues_num(pf);
 	else
 		num = pf->dev_data->nb_rx_queues;
@@ -8930,7 +8930,7 @@ i40e_pf_config_rss(struct i40e_pf *pf)
 	rss_hf = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
 	mq_mode = pf->dev_data->dev_conf.rxmode.mq_mode;
 	if (!(rss_hf & pf->adapter->flow_types_mask) ||
-	    !(mq_mode & ETH_MQ_RX_RSS_FLAG))
+	    !(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 		return 0;
 
 	hw = I40E_PF_TO_HW(pf);
@@ -10267,16 +10267,16 @@ i40e_start_timecounters(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 
 	switch (link.link_speed) {
-	case ETH_SPEED_NUM_40G:
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_25G:
 		tsync_inc_l = I40E_PTP_40GB_INCVAL & 0xFFFFFFFF;
 		tsync_inc_h = I40E_PTP_40GB_INCVAL >> 32;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		tsync_inc_l = I40E_PTP_10GB_INCVAL & 0xFFFFFFFF;
 		tsync_inc_h = I40E_PTP_10GB_INCVAL >> 32;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		tsync_inc_l = I40E_PTP_1GB_INCVAL & 0xFFFFFFFF;
 		tsync_inc_h = I40E_PTP_1GB_INCVAL >> 32;
 		break;
@@ -10504,7 +10504,7 @@ i40e_parse_dcb_configure(struct rte_eth_dev *dev,
 	else
 		*tc_map = RTE_LEN2MASK(dcb_rx_conf->nb_tcs, uint8_t);
 
-	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		dcb_cfg->pfc.willing = 0;
 		dcb_cfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS;
 		dcb_cfg->pfc.pfcenable = *tc_map;
@@ -11012,7 +11012,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
 	uint16_t bsf, tc_mapping;
 	int i, j = 0;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = rte_bsf32(vsi->enabled_tc + 1);
 	else
 		dcb_info->nb_tcs = 1;
@@ -11060,7 +11060,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
 				dcb_info->tc_queue.tc_rxq[j][i].nb_queue;
 		}
 		j++;
-	} while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, ETH_MAX_VMDQ_POOL));
+	} while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, RTE_ETH_MAX_VMDQ_POOL));
 	return 0;
 }
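
The idx/shift arithmetic in the RETA hunks above applies on the
application side as well. A minimal sketch, assuming a port whose RETA
size is RTE_ETH_RSS_RETA_SIZE_512 (the helper below is illustrative,
not part of this patch):

	#include <string.h>
	#include <rte_ethdev.h>

	/* Spread a 512-entry redirection table over nb_q queues using
	 * the renamed group-size macro, mirroring the idx/shift
	 * arithmetic in the hunks above.
	 */
	static int
	reta_spread(uint16_t port_id, uint16_t nb_q)
	{
		struct rte_eth_rss_reta_entry64 conf[RTE_ETH_RSS_RETA_SIZE_512 /
						     RTE_ETH_RETA_GROUP_SIZE];
		uint16_t i;

		memset(conf, 0, sizeof(conf));
		for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_512; i++) {
			uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
			uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

			conf[idx].mask |= 1ULL << shift;
			conf[idx].reta[shift] = i % nb_q;
		}
		return rte_eth_dev_rss_reta_update(port_id, conf,
						   RTE_ETH_RSS_RETA_SIZE_512);
	}
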
 
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 1d57b9617e66..d8042abbd9be 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -147,17 +147,17 @@ enum i40e_flxpld_layer_idx {
 		       I40E_FLAG_RSS_AQ_CAPABLE)
 
 #define I40E_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD)
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L2_PAYLOAD)
 
 /* All bits of RSS hash enable for X722*/
 #define I40E_RSS_HENA_ALL_X722 ( \
@@ -1063,7 +1063,7 @@ struct i40e_rte_flow_rss_conf {
 	uint8_t key[(I40E_VFQF_HKEY_MAX_INDEX > I40E_PFQF_HKEY_MAX_INDEX ?
 		     I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
 		    sizeof(uint32_t)];		/**< Hash key. */
-	uint16_t queue[ETH_RSS_RETA_SIZE_512];	/**< Queues indices to use. */
+	uint16_t queue[RTE_ETH_RSS_RETA_SIZE_512];	/**< Queue indices to use. */
 
 	bool symmetric_enable;		/**< true, if enable symmetric */
 	uint64_t config_pctypes;	/**< All PCTYPES with the flow  */
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index e41a84f1d737..9acaa1875105 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2015,7 +2015,7 @@ i40e_get_outer_vlan(struct rte_eth_dev *dev)
 {
 	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	int qinq = dev->data->dev_conf.rxmode.offloads &
-		DEV_RX_OFFLOAD_VLAN_EXTEND;
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	uint64_t reg_r = 0;
 	uint16_t reg_id;
 	uint16_t tpid;
@@ -3601,13 +3601,13 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
 }
 
 static uint16_t i40e_supported_tunnel_filter_types[] = {
-	ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_TENID |
-	ETH_TUNNEL_FILTER_IVLAN,
-	ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_IVLAN,
-	ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_TENID,
-	ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_TENID |
-	ETH_TUNNEL_FILTER_IMAC,
-	ETH_TUNNEL_FILTER_IMAC,
+	RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID |
+	RTE_ETH_TUNNEL_FILTER_IVLAN,
+	RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
+	RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID,
+	RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_TENID |
+	RTE_ETH_TUNNEL_FILTER_IMAC,
+	RTE_ETH_TUNNEL_FILTER_IMAC,
 };
 
 static int
@@ -3697,12 +3697,12 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 					rte_memcpy(&filter->outer_mac,
 						   &eth_spec->dst,
 						   RTE_ETHER_ADDR_LEN);
-					filter_type |= ETH_TUNNEL_FILTER_OMAC;
+					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
 						   &eth_spec->dst,
 						   RTE_ETHER_ADDR_LEN);
-					filter_type |= ETH_TUNNEL_FILTER_IMAC;
+					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
 			}
 			break;
@@ -3724,7 +3724,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 					filter->inner_vlan =
 					      rte_be_to_cpu_16(vlan_spec->tci) &
 					      I40E_VLAN_TCI_MASK;
-				filter_type |= ETH_TUNNEL_FILTER_IVLAN;
+				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
@@ -3798,7 +3798,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 					   vxlan_spec->vni, 3);
 				filter->tenant_id =
 					rte_be_to_cpu_32(tenant_id_be);
-				filter_type |= ETH_TUNNEL_FILTER_TENID;
+				filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
 			}
 
 			vxlan_flag = 1;
@@ -3927,12 +3927,12 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 					rte_memcpy(&filter->outer_mac,
 						   &eth_spec->dst,
 						   RTE_ETHER_ADDR_LEN);
-					filter_type |= ETH_TUNNEL_FILTER_OMAC;
+					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
 						   &eth_spec->dst,
 						   RTE_ETHER_ADDR_LEN);
-					filter_type |= ETH_TUNNEL_FILTER_IMAC;
+					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
 			}
 
@@ -3955,7 +3955,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 					filter->inner_vlan =
 					      rte_be_to_cpu_16(vlan_spec->tci) &
 					      I40E_VLAN_TCI_MASK;
-				filter_type |= ETH_TUNNEL_FILTER_IVLAN;
+				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
@@ -4050,7 +4050,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 					   nvgre_spec->tni, 3);
 				filter->tenant_id =
 					rte_be_to_cpu_32(tenant_id_be);
-				filter_type |= ETH_TUNNEL_FILTER_TENID;
+				filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
 			}
 
 			nvgre_flag = 1;
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 5da3d187076e..8962e9d97aa7 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -105,47 +105,47 @@ struct i40e_hash_map_rss_inset {
 
 const struct i40e_hash_map_rss_inset i40e_hash_rss_inset[] = {
 	/* IPv4 */
-	{ ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
-	{ ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+	{ RTE_ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+	{ RTE_ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
 
-	{ ETH_RSS_NONFRAG_IPV4_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	  I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
 
-	{ ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
 
 	/* IPv6 */
-	{ ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
-	{ ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+	{ RTE_ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+	{ RTE_ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
 
-	{ ETH_RSS_NONFRAG_IPV6_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	  I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
 
-	{ ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
 
 	/* Port */
-	{ ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
+	{ RTE_ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
 	/* Ether */
-	{ ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
-	{ ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
+	{ RTE_ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
+	{ RTE_ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
 
 	/* VLAN */
-	{ ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
-	{ ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
+	{ RTE_ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
+	{ RTE_ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
 };
 
 #define I40E_HASH_VOID_NEXT_ALLOW	BIT_ULL(RTE_FLOW_ITEM_TYPE_ETH)
@@ -208,30 +208,30 @@ struct i40e_hash_match_pattern {
 #define I40E_HASH_MAP_CUS_PATTERN(pattern, rss_mask, cus_pctype) { \
 	pattern, rss_mask, true, cus_pctype }
 
-#define I40E_HASH_L2_RSS_MASK		(ETH_RSS_VLAN | ETH_RSS_ETH | \
-					ETH_RSS_L2_SRC_ONLY | \
-					ETH_RSS_L2_DST_ONLY)
+#define I40E_HASH_L2_RSS_MASK		(RTE_ETH_RSS_VLAN | RTE_ETH_RSS_ETH | \
+					RTE_ETH_RSS_L2_SRC_ONLY | \
+					RTE_ETH_RSS_L2_DST_ONLY)
 
 #define I40E_HASH_L23_RSS_MASK		(I40E_HASH_L2_RSS_MASK | \
-					ETH_RSS_L3_SRC_ONLY | \
-					ETH_RSS_L3_DST_ONLY)
+					RTE_ETH_RSS_L3_SRC_ONLY | \
+					RTE_ETH_RSS_L3_DST_ONLY)
 
-#define I40E_HASH_IPV4_L23_RSS_MASK	(ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
-#define I40E_HASH_IPV6_L23_RSS_MASK	(ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV4_L23_RSS_MASK	(RTE_ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV6_L23_RSS_MASK	(RTE_ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
 
 #define I40E_HASH_L234_RSS_MASK		(I40E_HASH_L23_RSS_MASK | \
-					ETH_RSS_PORT | ETH_RSS_L4_SRC_ONLY | \
-					ETH_RSS_L4_DST_ONLY)
+					RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY | \
+					RTE_ETH_RSS_L4_DST_ONLY)
 
-#define I40E_HASH_IPV4_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV4)
-#define I40E_HASH_IPV6_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV6)
+#define I40E_HASH_IPV4_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV4)
+#define I40E_HASH_IPV6_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV6)
 
-#define I40E_HASH_L4_TYPES		(ETH_RSS_NONFRAG_IPV4_TCP | \
-					ETH_RSS_NONFRAG_IPV4_UDP | \
-					ETH_RSS_NONFRAG_IPV4_SCTP | \
-					ETH_RSS_NONFRAG_IPV6_TCP | \
-					ETH_RSS_NONFRAG_IPV6_UDP | \
-					ETH_RSS_NONFRAG_IPV6_SCTP)
+#define I40E_HASH_L4_TYPES		(RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+					RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+					RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+					RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+					RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+					RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 /* Current supported patterns and RSS types.
  * All items that have the same pattern types are together.
@@ -239,72 +239,72 @@ struct i40e_hash_match_pattern {
 static const struct i40e_hash_match_pattern match_patterns[] = {
 	/* Ether */
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_ETH,
-			      ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
+			      RTE_ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
 			      I40E_FILTER_PCTYPE_L2_PAYLOAD),
 
 	/* IPv4 */
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
-			      ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
+			      RTE_ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_FRAG_IPV4),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
-			      ETH_RSS_NONFRAG_IPV4_OTHER |
+			      RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
 			      I40E_HASH_IPV4_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_OTHER),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_TCP,
-			      ETH_RSS_NONFRAG_IPV4_TCP |
+			      RTE_ETH_RSS_NONFRAG_IPV4_TCP |
 			      I40E_HASH_IPV4_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_TCP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_UDP,
-			      ETH_RSS_NONFRAG_IPV4_UDP |
+			      RTE_ETH_RSS_NONFRAG_IPV4_UDP |
 			      I40E_HASH_IPV4_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_UDP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_SCTP,
-			      ETH_RSS_NONFRAG_IPV4_SCTP |
+			      RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
 			      I40E_HASH_IPV4_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_SCTP),
 
 	/* IPv6 */
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
-			      ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
+			      RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_FRAG_IPV6),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
-			      ETH_RSS_NONFRAG_IPV6_OTHER |
+			      RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			      I40E_HASH_IPV6_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_OTHER),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_FRAG,
-			      ETH_RSS_FRAG_IPV6 | I40E_HASH_L23_RSS_MASK,
+			      RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_FRAG_IPV6),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_TCP,
-			      ETH_RSS_NONFRAG_IPV6_TCP |
+			      RTE_ETH_RSS_NONFRAG_IPV6_TCP |
 			      I40E_HASH_IPV6_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_TCP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_UDP,
-			      ETH_RSS_NONFRAG_IPV6_UDP |
+			      RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			      I40E_HASH_IPV6_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_UDP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_SCTP,
-			      ETH_RSS_NONFRAG_IPV6_SCTP |
+			      RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
 			      I40E_HASH_IPV6_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_SCTP),
 
 	/* ESP */
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_UDP_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_UDP_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
 
 	/* GTPC */
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPC,
@@ -319,27 +319,27 @@ static const struct i40e_hash_match_pattern match_patterns[] = {
 				  I40E_HASH_IPV4_L234_RSS_MASK,
 				  I40E_CUSTOMIZED_GTPU),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV4,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV6,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU,
 				  I40E_HASH_IPV6_L234_RSS_MASK,
 				  I40E_CUSTOMIZED_GTPU),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV4,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV6,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
 
 	/* L2TPV3 */
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_L2TPV3,
-				  ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
+				  RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_L2TPV3,
-				  ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
+				  RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
 
 	/* AH */
-	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, ETH_RSS_AH,
+	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, RTE_ETH_RSS_AH,
 				  I40E_CUSTOMIZED_AH_IPV4),
-	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, ETH_RSS_AH,
+	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, RTE_ETH_RSS_AH,
 				  I40E_CUSTOMIZED_AH_IPV6),
 };
 
@@ -575,29 +575,29 @@ i40e_hash_get_inset(uint64_t rss_types)
 	/* If SRC_ONLY and DST_ONLY of the same level are used simultaneously,
 	 * it is the same case as none of them are added.
 	 */
-	mask = rss_types & (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY);
-	if (mask == ETH_RSS_L2_SRC_ONLY)
+	mask = rss_types & (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY);
+	if (mask == RTE_ETH_RSS_L2_SRC_ONLY)
 		inset &= ~I40E_INSET_DMAC;
-	else if (mask == ETH_RSS_L2_DST_ONLY)
+	else if (mask == RTE_ETH_RSS_L2_DST_ONLY)
 		inset &= ~I40E_INSET_SMAC;
 
-	mask = rss_types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
-	if (mask == ETH_RSS_L3_SRC_ONLY)
+	mask = rss_types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
+	if (mask == RTE_ETH_RSS_L3_SRC_ONLY)
 		inset &= ~(I40E_INSET_IPV4_DST | I40E_INSET_IPV6_DST);
-	else if (mask == ETH_RSS_L3_DST_ONLY)
+	else if (mask == RTE_ETH_RSS_L3_DST_ONLY)
 		inset &= ~(I40E_INSET_IPV4_SRC | I40E_INSET_IPV6_SRC);
 
-	mask = rss_types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
-	if (mask == ETH_RSS_L4_SRC_ONLY)
+	mask = rss_types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
+	if (mask == RTE_ETH_RSS_L4_SRC_ONLY)
 		inset &= ~I40E_INSET_DST_PORT;
-	else if (mask == ETH_RSS_L4_DST_ONLY)
+	else if (mask == RTE_ETH_RSS_L4_DST_ONLY)
 		inset &= ~I40E_INSET_SRC_PORT;
 
 	if (rss_types & I40E_HASH_L4_TYPES) {
 		uint64_t l3_mask = rss_types &
-				   (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+				   (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
 		uint64_t l4_mask = rss_types &
-				   (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+				   (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
 
 		if (l3_mask && !l4_mask)
 			inset &= ~(I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT);
@@ -836,7 +836,7 @@ i40e_hash_config(struct i40e_pf *pf,
 
 	/* Update lookup table */
 	if (rss_info->queue_num > 0) {
-		uint8_t lut[ETH_RSS_RETA_SIZE_512];
+		uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
 		uint32_t i, j = 0;
 
 		for (i = 0; i < hw->func_caps.rss_table_size; i++) {
@@ -943,7 +943,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
 			    "RSS key is ignored when queues specified");
 
 	pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+	if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
 		max_queue = i40e_pf_calc_configured_queues_num(pf);
 	else
 		max_queue = pf->dev_data->nb_rx_queues;
@@ -1081,22 +1081,22 @@ i40e_hash_validate_rss_types(uint64_t rss_types)
 	uint64_t type, mask;
 
 	/* Validate L2 */
-	type = ETH_RSS_ETH & rss_types;
-	mask = (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY) & rss_types;
+	type = RTE_ETH_RSS_ETH & rss_types;
+	mask = (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY) & rss_types;
 	if (!type && mask)
 		return false;
 
 	/* Validate L3 */
-	type = (I40E_HASH_L4_TYPES | ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-	       ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_IPV6 |
-	       ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
-	mask = (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY) & rss_types;
+	type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+	       RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_IPV6 |
+	       RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
+	mask = (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY) & rss_types;
 	if (!type && mask)
 		return false;
 
 	/* Validate L4 */
-	type = (I40E_HASH_L4_TYPES | ETH_RSS_PORT) & rss_types;
-	mask = (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY) & rss_types;
+	type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_PORT) & rss_types;
+	mask = (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY) & rss_types;
 	if (!type && mask)
 		return false;
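
The same-level SRC_ONLY/DST_ONLY rule used by i40e_hash_get_inset()
above can be isolated as in this sketch; the EX_INSET_* bits are
invented placeholders for illustration, only the RTE_ETH_RSS_* flags
are real:

	#include <stdint.h>
	#include <rte_ethdev.h>

	#define EX_INSET_L3_SRC (1ULL << 0) /* placeholder bit */
	#define EX_INSET_L3_DST (1ULL << 1) /* placeholder bit */

	/* If both SRC_ONLY and DST_ONLY of one level are set, the
	 * result is the same as if neither were set.
	 */
	static uint64_t
	l3_inset(uint64_t rss_types)
	{
		uint64_t inset = EX_INSET_L3_SRC | EX_INSET_L3_DST;
		uint64_t mask = rss_types & (RTE_ETH_RSS_L3_SRC_ONLY |
					     RTE_ETH_RSS_L3_DST_ONLY);

		if (mask == RTE_ETH_RSS_L3_SRC_ONLY)
			inset &= ~EX_INSET_L3_DST;
		else if (mask == RTE_ETH_RSS_L3_DST_ONLY)
			inset &= ~EX_INSET_L3_SRC;
		/* both or neither: keep src + dst */
		return inset;
	}
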
 
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index e2d8b2b5f7f1..ccb3924a5f68 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -1207,24 +1207,24 @@ i40e_notify_vf_link_status(struct rte_eth_dev *dev, struct i40e_pf_vf *vf)
 	event.event_data.link_event.link_status =
 		dev->data->dev_link.link_status;
 
-	/* need to convert the ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
+	/* need to convert the RTE_ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
 	switch (dev->data->dev_link.link_speed) {
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_100MB;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_1GB;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_10GB;
 		break;
-	case ETH_SPEED_NUM_20G:
+	case RTE_ETH_SPEED_NUM_20G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_20GB;
 		break;
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_25G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_25GB;
 		break;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_40GB;
 		break;
 	default:
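
For the speed constants mapped above, a hedged application-side sketch
of reading them back (the helper name is illustrative):

	#include <stdio.h>
	#include <rte_ethdev.h>

	/* Print the negotiated speed using the renamed constants. */
	static void
	show_link(uint16_t port_id)
	{
		struct rte_eth_link link;

		if (rte_eth_link_get_nowait(port_id, &link) != 0)
			return;
		if (link.link_speed == RTE_ETH_SPEED_NUM_40G ||
		    link.link_speed == RTE_ETH_SPEED_NUM_25G)
			printf("fast port\n");
		else
			printf("%u Mbps\n", link.link_speed);
	}
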
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 554b1142c136..a13bb81115f4 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1329,7 +1329,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 	for (i = 0; i < tx_rs_thresh; i++)
 		rte_prefetch0((txep + i)->mbuf);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
 		if (k) {
 			for (j = 0; j != k; j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
 				for (i = 0; i < RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
@@ -1995,7 +1995,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->queue_id = queue_idx;
 	rxq->reg_idx = reg_idx;
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -2243,7 +2243,7 @@ i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
 	}
 	/* check simple tx conflict */
 	if (ad->tx_simple_allowed) {
-		if ((txq->offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
+		if ((txq->offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
 				txq->tx_rs_thresh < RTE_PMD_I40E_TX_MAX_BURST) {
 			PMD_DRV_LOG(ERR, "No-simple tx is required.");
 			return -EINVAL;
@@ -3417,7 +3417,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
 	/* Use a simple Tx queue if possible (only fast free is allowed) */
 	ad->tx_simple_allowed =
 		(txq->offloads ==
-		 (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+		 (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
 		 txq->tx_rs_thresh >= RTE_PMD_I40E_TX_MAX_BURST);
 	ad->tx_vec_allowed = (ad->tx_simple_allowed &&
 			txq->tx_rs_thresh <= RTE_I40E_TX_MAX_FREE_BUF_SZ);
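
The tx_simple/tx_vec gating above hinges on
RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE being the only offload requested. A
sketch of enabling it per queue when advertised (helper illustrative;
fast free additionally assumes single-pool, refcnt-1 mbufs):

	#include <rte_ethdev.h>

	/* Enable fast free on one Tx queue if the port supports it. */
	static int
	setup_txq_fast_free(uint16_t port_id, uint16_t queue_id,
			    uint16_t nb_desc,
			    const struct rte_eth_dev_info *di)
	{
		struct rte_eth_txconf txconf = di->default_txconf;

		if (di->tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
			txconf.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
		return rte_eth_tx_queue_setup(port_id, queue_id, nb_desc,
					      rte_eth_dev_socket_id(port_id),
					      &txconf);
	}
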
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 2301e6301d7d..5e6eecc50116 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -120,7 +120,7 @@ struct i40e_rx_queue {
 	bool rx_deferred_start; /**< don't start this queue in dev start */
 	uint16_t rx_using_sse; /**<flag indicate the usage of vPMD for rx */
 	uint8_t dcb_tc;         /**< Traffic class of rx queue */
-	uint64_t offloads; /**< Rx offload flags of DEV_RX_OFFLOAD_* */
+	uint64_t offloads; /**< Rx offload flags of RTE_ETH_RX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
@@ -166,7 +166,7 @@ struct i40e_tx_queue {
 	bool q_set; /**< indicate if tx queue has been configured */
 	bool tx_deferred_start; /**< don't start this queue in dev start */
 	uint8_t dcb_tc;         /**< Traffic class of tx queue */
-	uint64_t offloads; /**< Tx offload flags of DEV_RX_OFFLOAD_* */
+	uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 4ffe030fcb64..7abc0821d119 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -900,7 +900,7 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->tx_next_dd - (n - 1);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		void **cache_objs;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index f52e3c567558..f9a7f4655050 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -100,7 +100,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 	  */
 	txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
 		for (i = 0; i < n; i++) {
 			free[i] = txep[i].mbuf;
 			txep[i].mbuf = NULL;
@@ -211,7 +211,7 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
 	struct i40e_adapter *ad =
 		I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 	struct i40e_rx_queue *rxq;
 	uint16_t desc, i;
 	bool first_queue;
@@ -221,11 +221,11 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
 		return -1;
 
 	 /* no header split support */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
 		return -1;
 
 	/* no QinQ support */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 		return -1;
 
 	/**
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 12d5a2e48a9b..663c46b91dc5 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -42,30 +42,30 @@ i40e_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX;
 	dev_info->hash_key_size = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
 		sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_64;
 	dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
 	dev_info->max_mac_addrs = I40E_NUM_MACADDR_MAX;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_FILTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_MULTI_SEGS  |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -385,19 +385,19 @@ i40e_vf_representor_vlan_offload_set(struct rte_eth_dev *ethdev, int mask)
 		return -EINVAL;
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* Enable or disable VLAN filtering offload */
 		if (ethdev->data->dev_conf.rxmode.offloads &
-		    DEV_RX_OFFLOAD_VLAN_FILTER)
+		    RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			return i40e_vsi_config_vlan_filter(vsi, TRUE);
 		else
 			return i40e_vsi_config_vlan_filter(vsi, FALSE);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping offload */
 		if (ethdev->data->dev_conf.rxmode.offloads &
-		    DEV_RX_OFFLOAD_VLAN_STRIP)
+		    RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			return i40e_vsi_config_vlan_stripping(vsi, TRUE);
 		else
 			return i40e_vsi_config_vlan_stripping(vsi, FALSE);
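
The mask bits consumed by the representor callback above are produced
by the ethdev layer; on the application side the renamed constants are
used as in this sketch (helper illustrative):

	#include <rte_ethdev.h>

	/* Turn VLAN stripping on at runtime with the renamed bits. */
	static int
	enable_vlan_strip(uint16_t port_id)
	{
		int mask = rte_eth_dev_get_vlan_offload(port_id);

		if (mask < 0)
			return mask;
		mask |= RTE_ETH_VLAN_STRIP_OFFLOAD;
		return rte_eth_dev_set_vlan_offload(port_id, mask);
	}
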
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 34bfa9af4734..12f541f53926 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -50,18 +50,18 @@
 	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
 
 #define IAVF_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 |         \
-	ETH_RSS_NONFRAG_IPV4_TCP |  \
-	ETH_RSS_NONFRAG_IPV4_UDP |  \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 |         \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP |  \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP |  \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
 
 #define IAVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
 #define IAVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
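
IAVF_RSS_OFFLOAD_ALL above is built from the renamed RTE_ETH_RSS_*
flags; an application requests a subset of them at configure time, as
in this sketch (helper illustrative; flags outside the port's
flow_type_rss_offloads are rejected):

	#include <rte_ethdev.h>

	/* Configure RSS over IPv4 flows with the renamed flags. */
	static int
	configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
	{
		struct rte_eth_conf conf = {
			.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
			.rx_adv_conf.rss_conf = {
				.rss_hf = RTE_ETH_RSS_IPV4 |
					  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
					  RTE_ETH_RSS_NONFRAG_IPV4_UDP,
			},
		};

		return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
	}
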
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 611f1f7722b0..df44df772e4e 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -266,53 +266,53 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
 	static const uint64_t map_hena_rss[] = {
 		/* IPv4 */
 		[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP] =
-				ETH_RSS_NONFRAG_IPV4_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP] =
-				ETH_RSS_NONFRAG_IPV4_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_UDP] =
-				ETH_RSS_NONFRAG_IPV4_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK] =
-				ETH_RSS_NONFRAG_IPV4_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP] =
-				ETH_RSS_NONFRAG_IPV4_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_SCTP] =
-				ETH_RSS_NONFRAG_IPV4_SCTP,
+				RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_OTHER] =
-				ETH_RSS_NONFRAG_IPV4_OTHER,
-		[IAVF_FILTER_PCTYPE_FRAG_IPV4] = ETH_RSS_FRAG_IPV4,
+				RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+		[IAVF_FILTER_PCTYPE_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
 
 		/* IPv6 */
 		[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP] =
-				ETH_RSS_NONFRAG_IPV6_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP] =
-				ETH_RSS_NONFRAG_IPV6_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_UDP] =
-				ETH_RSS_NONFRAG_IPV6_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK] =
-				ETH_RSS_NONFRAG_IPV6_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP] =
-				ETH_RSS_NONFRAG_IPV6_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_SCTP] =
-				ETH_RSS_NONFRAG_IPV6_SCTP,
+				RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_OTHER] =
-				ETH_RSS_NONFRAG_IPV6_OTHER,
-		[IAVF_FILTER_PCTYPE_FRAG_IPV6] = ETH_RSS_FRAG_IPV6,
+				RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+		[IAVF_FILTER_PCTYPE_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
 
 		/* L2 Payload */
-		[IAVF_FILTER_PCTYPE_L2_PAYLOAD] = ETH_RSS_L2_PAYLOAD
+		[IAVF_FILTER_PCTYPE_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
 	};
 
-	const uint64_t ipv4_rss = ETH_RSS_NONFRAG_IPV4_UDP |
-				  ETH_RSS_NONFRAG_IPV4_TCP |
-				  ETH_RSS_NONFRAG_IPV4_SCTP |
-				  ETH_RSS_NONFRAG_IPV4_OTHER |
-				  ETH_RSS_FRAG_IPV4;
+	const uint64_t ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+				  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+				  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+				  RTE_ETH_RSS_FRAG_IPV4;
 
-	const uint64_t ipv6_rss = ETH_RSS_NONFRAG_IPV6_UDP |
-				  ETH_RSS_NONFRAG_IPV6_TCP |
-				  ETH_RSS_NONFRAG_IPV6_SCTP |
-				  ETH_RSS_NONFRAG_IPV6_OTHER |
-				  ETH_RSS_FRAG_IPV6;
+	const uint64_t ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+				  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+				  RTE_ETH_RSS_FRAG_IPV6;
 
 	struct iavf_info *vf =  IAVF_DEV_PRIVATE_TO_VF(adapter);
 	uint64_t caps = 0, hena = 0, valid_rss_hf = 0;
@@ -331,13 +331,13 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
 	}
 
 	/**
-	 * ETH_RSS_IPV4 and ETH_RSS_IPV6 can be considered as 2
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
 	 * generalizations of all other IPv4 and IPv6 RSS types.
 	 */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		rss_hf |= ipv4_rss;
 
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		rss_hf |= ipv6_rss;
 
 	RTE_BUILD_BUG_ON(RTE_DIM(map_hena_rss) > sizeof(uint64_t) * CHAR_BIT);
@@ -363,10 +363,10 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
 	}
 
 	if (valid_rss_hf & ipv4_rss)
-		valid_rss_hf |= rss_hf & ETH_RSS_IPV4;
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
 
 	if (valid_rss_hf & ipv6_rss)
-		valid_rss_hf |= rss_hf & ETH_RSS_IPV6;
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
 
 	if (rss_hf & ~valid_rss_hf)
 		PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
@@ -467,7 +467,7 @@ iavf_dev_vlan_insert_set(struct rte_eth_dev *dev)
 		return 0;
 
 	enable = !!(dev->data->dev_conf.txmode.offloads &
-		    DEV_TX_OFFLOAD_VLAN_INSERT);
+		    RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
 	iavf_config_vlan_insert_v2(adapter, enable);
 
 	return 0;
@@ -479,10 +479,10 @@ iavf_dev_init_vlan(struct rte_eth_dev *dev)
 	int err;
 
 	err = iavf_dev_vlan_offload_set(dev,
-					ETH_VLAN_STRIP_MASK |
-					ETH_QINQ_STRIP_MASK |
-					ETH_VLAN_FILTER_MASK |
-					ETH_VLAN_EXTEND_MASK);
+					RTE_ETH_VLAN_STRIP_MASK |
+					RTE_ETH_QINQ_STRIP_MASK |
+					RTE_ETH_VLAN_FILTER_MASK |
+					RTE_ETH_VLAN_EXTEND_MASK);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Failed to update vlan offload");
 		return err;
@@ -512,8 +512,8 @@ iavf_dev_configure(struct rte_eth_dev *dev)
 	ad->rx_vec_allowed = true;
 	ad->tx_vec_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Large VF setting */
 	if (num_queue_pairs > IAVF_MAX_NUM_QUEUES_DFLT) {
@@ -611,7 +611,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
 	}
 
 	rxq->max_pkt_len = max_pkt_len;
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 	    rxq->max_pkt_len > buf_size) {
 		dev_data->scattered_rx = 1;
 	}
@@ -961,34 +961,34 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->flow_type_rss_offloads = IAVF_RSS_OFFLOAD_ALL;
 	dev_info->max_mac_addrs = IAVF_NUM_MACADDR_MAX;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_CRC)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_KEEP_CRC;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_free_thresh = IAVF_DEFAULT_RX_FREE_THRESH,
@@ -1048,42 +1048,42 @@ iavf_dev_link_update(struct rte_eth_dev *dev,
 	 */
 	switch (vf->link_speed) {
 	case 10:
-		new_link.link_speed = ETH_SPEED_NUM_10M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case 100:
-		new_link.link_speed = ETH_SPEED_NUM_100M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case 1000:
-		new_link.link_speed = ETH_SPEED_NUM_1G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case 10000:
-		new_link.link_speed = ETH_SPEED_NUM_10G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case 20000:
-		new_link.link_speed = ETH_SPEED_NUM_20G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case 25000:
-		new_link.link_speed = ETH_SPEED_NUM_25G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case 40000:
-		new_link.link_speed = ETH_SPEED_NUM_40G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case 50000:
-		new_link.link_speed = ETH_SPEED_NUM_50G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case 100000:
-		new_link.link_speed = ETH_SPEED_NUM_100G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	default:
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	new_link.link_status = vf->link_up ? ETH_LINK_UP :
-					     ETH_LINK_DOWN;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vf->link_up ? RTE_ETH_LINK_UP :
+					     RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(dev, &new_link);
 }
@@ -1231,14 +1231,14 @@ iavf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask)
 	bool enable;
 	int err;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
 
 		iavf_iterate_vlan_filters_v2(dev, enable);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 		err = iavf_config_vlan_strip_v2(adapter, enable);
 		/* If not support, the stripping is already disabled by PF */
@@ -1267,9 +1267,9 @@ iavf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		return -ENOTSUP;
 
 	/* Vlan stripping setting */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			err = iavf_enable_vlan_strip(adapter);
 		else
 			err = iavf_disable_vlan_strip(adapter);
@@ -1311,8 +1311,8 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
 	rte_memcpy(lut, vf->rss_lut, reta_size);
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			lut[i] = reta_conf[idx].reta[shift];
 	}
@@ -1348,8 +1348,8 @@ iavf_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = vf->rss_lut[i];
 	}
@@ -1556,7 +1556,7 @@ iavf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	ret = iavf_query_stats(adapter, &pstats);
 	if (ret == 0) {
 		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
-					 DEV_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
 					 RTE_ETHER_CRC_LEN;
 		iavf_update_stats(vsi, pstats);
 		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index 01724cd569dd..55d8a11da388 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -395,90 +395,90 @@ struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tcp_tmplt = {
 /* rss type super set */
 
 /* IPv4 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV4	(ETH_RSS_ETH | ETH_RSS_IPV4 | \
-					 ETH_RSS_FRAG_IPV4 | \
-					 ETH_RSS_IPV4_CHKSUM)
+#define IAVF_RSS_TYPE_OUTER_IPV4	(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_FRAG_IPV4 | \
+					 RTE_ETH_RSS_IPV4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV4_UDP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV4_TCP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV4_SCTP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 /* IPv6 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV6	(ETH_RSS_ETH | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_OUTER_IPV6	(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
 #define IAVF_RSS_TYPE_OUTER_IPV6_FRAG	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_FRAG_IPV6)
+					 RTE_ETH_RSS_FRAG_IPV6)
 #define IAVF_RSS_TYPE_OUTER_IPV6_UDP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV6_TCP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV6_SCTP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 /* VLAN IPV4 */
 #define IAVF_RSS_TYPE_VLAN_IPV4		(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV4_UDP	(IAVF_RSS_TYPE_OUTER_IPV4_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV4_TCP	(IAVF_RSS_TYPE_OUTER_IPV4_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV4_SCTP	(IAVF_RSS_TYPE_OUTER_IPV4_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 /* VLAN IPv6 */
 #define IAVF_RSS_TYPE_VLAN_IPV6		(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_FRAG	(IAVF_RSS_TYPE_OUTER_IPV6_FRAG | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_UDP	(IAVF_RSS_TYPE_OUTER_IPV6_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_TCP	(IAVF_RSS_TYPE_OUTER_IPV6_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_SCTP	(IAVF_RSS_TYPE_OUTER_IPV6_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 /* IPv4 inner */
-#define IAVF_RSS_TYPE_INNER_IPV4	ETH_RSS_IPV4
-#define IAVF_RSS_TYPE_INNER_IPV4_UDP	(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV4_TCP	(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV4_SCTP	(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV4	RTE_ETH_RSS_IPV4
+#define IAVF_RSS_TYPE_INNER_IPV4_UDP	(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV4_TCP	(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV4_SCTP	(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 /* IPv6 inner */
-#define IAVF_RSS_TYPE_INNER_IPV6	ETH_RSS_IPV6
-#define IAVF_RSS_TYPE_INNER_IPV6_UDP	(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV6_TCP	(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV6_SCTP	(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV6	RTE_ETH_RSS_IPV6
+#define IAVF_RSS_TYPE_INNER_IPV6_UDP	(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV6_TCP	(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV6_SCTP	(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 /* GTPU IPv4 */
 #define IAVF_RSS_TYPE_GTPU_IPV4		(IAVF_RSS_TYPE_INNER_IPV4 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV4_UDP	(IAVF_RSS_TYPE_INNER_IPV4_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV4_TCP	(IAVF_RSS_TYPE_INNER_IPV4_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 /* GTPU IPv6 */
 #define IAVF_RSS_TYPE_GTPU_IPV6		(IAVF_RSS_TYPE_INNER_IPV6 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV6_UDP	(IAVF_RSS_TYPE_INNER_IPV6_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV6_TCP	(IAVF_RSS_TYPE_INNER_IPV6_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 /* ESP, AH, L2TPV3 and PFCP */
-#define IAVF_RSS_TYPE_IPV4_ESP		(ETH_RSS_ESP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV4_AH		(ETH_RSS_AH | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_ESP		(ETH_RSS_ESP | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV6_AH		(ETH_RSS_AH | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV4_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV6_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
 
 /**
  * Supported pattern for hash.
@@ -496,7 +496,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_vlan_ipv4_udp,		IAVF_RSS_TYPE_VLAN_IPV4_UDP,	&outer_ipv4_udp_tmplt},
 	{iavf_pattern_eth_vlan_ipv4_tcp,		IAVF_RSS_TYPE_VLAN_IPV4_TCP,	&outer_ipv4_tcp_tmplt},
 	{iavf_pattern_eth_vlan_ipv4_sctp,		IAVF_RSS_TYPE_VLAN_IPV4_SCTP,	&outer_ipv4_sctp_tmplt},
-	{iavf_pattern_eth_ipv4_gtpu,			ETH_RSS_IPV4,			&outer_ipv4_udp_tmplt},
+	{iavf_pattern_eth_ipv4_gtpu,			RTE_ETH_RSS_IPV4,			&outer_ipv4_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv4,		IAVF_RSS_TYPE_GTPU_IPV4,	&inner_ipv4_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv4_udp,		IAVF_RSS_TYPE_GTPU_IPV4_UDP,	&inner_ipv4_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv4_tcp,		IAVF_RSS_TYPE_GTPU_IPV4_TCP,	&inner_ipv4_tcp_tmplt},
@@ -538,9 +538,9 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_ipv4_ah,			IAVF_RSS_TYPE_IPV4_AH,		&ipv4_ah_tmplt},
 	{iavf_pattern_eth_ipv4_l2tpv3,			IAVF_RSS_TYPE_IPV4_L2TPV3,	&ipv4_l2tpv3_tmplt},
 	{iavf_pattern_eth_ipv4_pfcp,			IAVF_RSS_TYPE_IPV4_PFCP,	&ipv4_pfcp_tmplt},
-	{iavf_pattern_eth_ipv4_gtpc,			ETH_RSS_IPV4,			&ipv4_udp_gtpc_tmplt},
-	{iavf_pattern_eth_ecpri,			ETH_RSS_ECPRI,			&eth_ecpri_tmplt},
-	{iavf_pattern_eth_ipv4_ecpri,			ETH_RSS_ECPRI,			&ipv4_ecpri_tmplt},
+	{iavf_pattern_eth_ipv4_gtpc,			RTE_ETH_RSS_IPV4,			&ipv4_udp_gtpc_tmplt},
+	{iavf_pattern_eth_ecpri,			RTE_ETH_RSS_ECPRI,			&eth_ecpri_tmplt},
+	{iavf_pattern_eth_ipv4_ecpri,			RTE_ETH_RSS_ECPRI,			&ipv4_ecpri_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv4,		IAVF_RSS_TYPE_INNER_IPV4,	&inner_ipv4_tmplt},
 	{iavf_pattern_eth_ipv6_gre_ipv4,		IAVF_RSS_TYPE_INNER_IPV4, &inner_ipv4_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv4_tcp,	IAVF_RSS_TYPE_INNER_IPV4_TCP, &inner_ipv4_tcp_tmplt},
@@ -565,7 +565,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_vlan_ipv6_udp,		IAVF_RSS_TYPE_VLAN_IPV6_UDP,	&outer_ipv6_udp_tmplt},
 	{iavf_pattern_eth_vlan_ipv6_tcp,		IAVF_RSS_TYPE_VLAN_IPV6_TCP,	&outer_ipv6_tcp_tmplt},
 	{iavf_pattern_eth_vlan_ipv6_sctp,		IAVF_RSS_TYPE_VLAN_IPV6_SCTP,	&outer_ipv6_sctp_tmplt},
-	{iavf_pattern_eth_ipv6_gtpu,			ETH_RSS_IPV6,			&outer_ipv6_udp_tmplt},
+	{iavf_pattern_eth_ipv6_gtpu,			RTE_ETH_RSS_IPV6,			&outer_ipv6_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv6,		IAVF_RSS_TYPE_GTPU_IPV6,	&inner_ipv6_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv6_udp,		IAVF_RSS_TYPE_GTPU_IPV6_UDP,	&inner_ipv6_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv6_tcp,		IAVF_RSS_TYPE_GTPU_IPV6_TCP,	&inner_ipv6_tcp_tmplt},
@@ -607,7 +607,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_ipv6_ah,			IAVF_RSS_TYPE_IPV6_AH,		&ipv6_ah_tmplt},
 	{iavf_pattern_eth_ipv6_l2tpv3,			IAVF_RSS_TYPE_IPV6_L2TPV3,	&ipv6_l2tpv3_tmplt},
 	{iavf_pattern_eth_ipv6_pfcp,			IAVF_RSS_TYPE_IPV6_PFCP,	&ipv6_pfcp_tmplt},
-	{iavf_pattern_eth_ipv6_gtpc,			ETH_RSS_IPV6,			&ipv6_udp_gtpc_tmplt},
+	{iavf_pattern_eth_ipv6_gtpc,			RTE_ETH_RSS_IPV6,			&ipv6_udp_gtpc_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv6,		IAVF_RSS_TYPE_INNER_IPV6,	&inner_ipv6_tmplt},
 	{iavf_pattern_eth_ipv6_gre_ipv6,		IAVF_RSS_TYPE_INNER_IPV6, &inner_ipv6_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv6_tcp,	IAVF_RSS_TYPE_INNER_IPV6_TCP, &inner_ipv6_tcp_tmplt},
@@ -648,52 +648,52 @@ iavf_rss_hash_set(struct iavf_adapter *ad, uint64_t rss_hf, bool add)
 	struct virtchnl_rss_cfg rss_cfg;
 
 #define IAVF_RSS_HF_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 	rss_cfg.rss_algorithm = VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC;
-	if (rss_hf & ETH_RSS_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_IPV4) {
 		rss_cfg.proto_hdrs = inner_ipv4_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		rss_cfg.proto_hdrs = inner_ipv4_udp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		rss_cfg.proto_hdrs = inner_ipv4_tcp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
 		rss_cfg.proto_hdrs = inner_ipv4_sctp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_IPV6) {
 		rss_cfg.proto_hdrs = inner_ipv6_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
 		rss_cfg.proto_hdrs = inner_ipv6_udp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 		rss_cfg.proto_hdrs = inner_ipv6_tcp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
 		rss_cfg.proto_hdrs = inner_ipv6_sctp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
@@ -855,28 +855,28 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 		hdr = &proto_hdrs->proto_hdr[i];
 		switch (hdr->type) {
 		case VIRTCHNL_PROTO_HDR_ETH:
-			if (!(rss_type & ETH_RSS_ETH))
+			if (!(rss_type & RTE_ETH_RSS_ETH))
 				hdr->field_selector = 0;
-			else if (rss_type & ETH_RSS_L2_SRC_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
 				REFINE_PROTO_FLD(DEL, ETH_DST);
-			else if (rss_type & ETH_RSS_L2_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
 				REFINE_PROTO_FLD(DEL, ETH_SRC);
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV4:
 			if (rss_type &
-			    (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			     ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV4_SCTP)) {
-				if (rss_type & ETH_RSS_FRAG_IPV4) {
+			    (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			     RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
 					iavf_hash_add_fragment_hdr(proto_hdrs, i + 1);
-				} else if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+				} else if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV4_DST);
-				} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+				} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV4_SRC);
 				} else if (rss_type &
-					   (ETH_RSS_L4_SRC_ONLY |
-					    ETH_RSS_L4_DST_ONLY)) {
+					   (RTE_ETH_RSS_L4_SRC_ONLY |
+					    RTE_ETH_RSS_L4_DST_ONLY)) {
 					REFINE_PROTO_FLD(DEL, IPV4_DST);
 					REFINE_PROTO_FLD(DEL, IPV4_SRC);
 				}
@@ -884,39 +884,39 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_IPV4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, IPV4_CHKSUM);
 
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV4_FRAG:
 			if (rss_type &
-			    (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			     ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV4_SCTP)) {
-				if (rss_type & ETH_RSS_FRAG_IPV4)
+			    (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			     RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_FRAG_IPV4)
 					REFINE_PROTO_FLD(ADD, IPV4_FRAG_PKID);
 			} else {
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_IPV4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, IPV4_CHKSUM);
 
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV6:
 			if (rss_type &
-			    (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-			     ETH_RSS_NONFRAG_IPV6_UDP |
-			     ETH_RSS_NONFRAG_IPV6_TCP |
-			     ETH_RSS_NONFRAG_IPV6_SCTP)) {
-				if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			    (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+			     RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV6_DST);
-				} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+				} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV6_SRC);
 				} else if (rss_type &
-					   (ETH_RSS_L4_SRC_ONLY |
-					    ETH_RSS_L4_DST_ONLY)) {
+					   (RTE_ETH_RSS_L4_SRC_ONLY |
+					    RTE_ETH_RSS_L4_DST_ONLY)) {
 					REFINE_PROTO_FLD(DEL, IPV6_DST);
 					REFINE_PROTO_FLD(DEL, IPV6_SRC);
 				}
@@ -933,7 +933,7 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			}
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG:
-			if (rss_type & ETH_RSS_FRAG_IPV6)
+			if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
 				REFINE_PROTO_FLD(ADD, IPV6_EH_FRAG_PKID);
 			else
 				hdr->field_selector = 0;
@@ -941,87 +941,87 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			break;
 		case VIRTCHNL_PROTO_HDR_UDP:
 			if (rss_type &
-			    (ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV6_UDP)) {
-				if (rss_type & ETH_RSS_L4_SRC_ONLY)
+			    (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+				if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 					REFINE_PROTO_FLD(DEL, UDP_DST_PORT);
-				else if (rss_type & ETH_RSS_L4_DST_ONLY)
+				else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 					REFINE_PROTO_FLD(DEL, UDP_SRC_PORT);
 				else if (rss_type &
-					 (ETH_RSS_L3_SRC_ONLY |
-					  ETH_RSS_L3_DST_ONLY))
+					 (RTE_ETH_RSS_L3_SRC_ONLY |
+					  RTE_ETH_RSS_L3_DST_ONLY))
 					hdr->field_selector = 0;
 			} else {
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_L4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, UDP_CHKSUM);
 			break;
 		case VIRTCHNL_PROTO_HDR_TCP:
 			if (rss_type &
-			    (ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV6_TCP)) {
-				if (rss_type & ETH_RSS_L4_SRC_ONLY)
+			    (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+				if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 					REFINE_PROTO_FLD(DEL, TCP_DST_PORT);
-				else if (rss_type & ETH_RSS_L4_DST_ONLY)
+				else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 					REFINE_PROTO_FLD(DEL, TCP_SRC_PORT);
 				else if (rss_type &
-					 (ETH_RSS_L3_SRC_ONLY |
-					  ETH_RSS_L3_DST_ONLY))
+					 (RTE_ETH_RSS_L3_SRC_ONLY |
+					  RTE_ETH_RSS_L3_DST_ONLY))
 					hdr->field_selector = 0;
 			} else {
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_L4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, TCP_CHKSUM);
 			break;
 		case VIRTCHNL_PROTO_HDR_SCTP:
 			if (rss_type &
-			    (ETH_RSS_NONFRAG_IPV4_SCTP |
-			     ETH_RSS_NONFRAG_IPV6_SCTP)) {
-				if (rss_type & ETH_RSS_L4_SRC_ONLY)
+			    (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 					REFINE_PROTO_FLD(DEL, SCTP_DST_PORT);
-				else if (rss_type & ETH_RSS_L4_DST_ONLY)
+				else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 					REFINE_PROTO_FLD(DEL, SCTP_SRC_PORT);
 				else if (rss_type &
-					 (ETH_RSS_L3_SRC_ONLY |
-					  ETH_RSS_L3_DST_ONLY))
+					 (RTE_ETH_RSS_L3_SRC_ONLY |
+					  RTE_ETH_RSS_L3_DST_ONLY))
 					hdr->field_selector = 0;
 			} else {
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_L4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, SCTP_CHKSUM);
 			break;
 		case VIRTCHNL_PROTO_HDR_S_VLAN:
-			if (!(rss_type & ETH_RSS_S_VLAN))
+			if (!(rss_type & RTE_ETH_RSS_S_VLAN))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_C_VLAN:
-			if (!(rss_type & ETH_RSS_C_VLAN))
+			if (!(rss_type & RTE_ETH_RSS_C_VLAN))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_L2TPV3:
-			if (!(rss_type & ETH_RSS_L2TPV3))
+			if (!(rss_type & RTE_ETH_RSS_L2TPV3))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_ESP:
-			if (!(rss_type & ETH_RSS_ESP))
+			if (!(rss_type & RTE_ETH_RSS_ESP))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_AH:
-			if (!(rss_type & ETH_RSS_AH))
+			if (!(rss_type & RTE_ETH_RSS_AH))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_PFCP:
-			if (!(rss_type & ETH_RSS_PFCP))
+			if (!(rss_type & RTE_ETH_RSS_PFCP))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_ECPRI:
-			if (!(rss_type & ETH_RSS_ECPRI))
+			if (!(rss_type & RTE_ETH_RSS_ECPRI))
 				hdr->field_selector = 0;
 			break;
 		default:
@@ -1038,7 +1038,7 @@ iavf_refine_proto_hdrs_gtpu(struct virtchnl_proto_hdrs *proto_hdrs,
 	struct virtchnl_proto_hdr *hdr;
 	int i;
 
-	if (!(rss_type & ETH_RSS_GTPU))
+	if (!(rss_type & RTE_ETH_RSS_GTPU))
 		return;
 
 	for (i = 0; i < proto_hdrs->count; i++) {
@@ -1163,10 +1163,10 @@ static void iavf_refine_proto_hdrs(struct virtchnl_proto_hdrs *proto_hdrs,
 }
 
 static uint64_t invalid_rss_comb[] = {
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	RTE_ETH_RSS_L3_PRE32 | RTE_ETH_RSS_L3_PRE40 |
 	RTE_ETH_RSS_L3_PRE48 | RTE_ETH_RSS_L3_PRE56 |
 	RTE_ETH_RSS_L3_PRE96
@@ -1177,27 +1177,27 @@ struct rss_attr_type {
 	uint64_t type;
 };
 
-#define VALID_RSS_IPV4_L4	(ETH_RSS_NONFRAG_IPV4_UDP	| \
-				 ETH_RSS_NONFRAG_IPV4_TCP	| \
-				 ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4	(RTE_ETH_RSS_NONFRAG_IPV4_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
-#define VALID_RSS_IPV6_L4	(ETH_RSS_NONFRAG_IPV6_UDP	| \
-				 ETH_RSS_NONFRAG_IPV6_TCP	| \
-				 ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4	(RTE_ETH_RSS_NONFRAG_IPV6_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
-#define VALID_RSS_IPV4		(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4		(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
 				 VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6		(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6		(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
 				 VALID_RSS_IPV6_L4)
 #define VALID_RSS_L3		(VALID_RSS_IPV4 | VALID_RSS_IPV6)
 #define VALID_RSS_L4		(VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
 
-#define VALID_RSS_ATTR		(ETH_RSS_L3_SRC_ONLY	| \
-				 ETH_RSS_L3_DST_ONLY	| \
-				 ETH_RSS_L4_SRC_ONLY	| \
-				 ETH_RSS_L4_DST_ONLY	| \
-				 ETH_RSS_L2_SRC_ONLY	| \
-				 ETH_RSS_L2_DST_ONLY	| \
+#define VALID_RSS_ATTR		(RTE_ETH_RSS_L3_SRC_ONLY	| \
+				 RTE_ETH_RSS_L3_DST_ONLY	| \
+				 RTE_ETH_RSS_L4_SRC_ONLY	| \
+				 RTE_ETH_RSS_L4_DST_ONLY	| \
+				 RTE_ETH_RSS_L2_SRC_ONLY	| \
+				 RTE_ETH_RSS_L2_DST_ONLY	| \
 				 RTE_ETH_RSS_L3_PRE64)
 
 #define INVALID_RSS_ATTR	(RTE_ETH_RSS_L3_PRE32	| \
@@ -1207,9 +1207,9 @@ struct rss_attr_type {
 				 RTE_ETH_RSS_L3_PRE96)
 
 static struct rss_attr_type rss_attr_to_valid_type[] = {
-	{ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY,	ETH_RSS_ETH},
-	{ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
-	{ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
+	{RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY,	RTE_ETH_RSS_ETH},
+	{RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
+	{RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
 	/* current ipv6 prefix only supports prefix 64 bits*/
 	{RTE_ETH_RSS_L3_PRE64,				VALID_RSS_IPV6},
 	{INVALID_RSS_ATTR,				0}
@@ -1226,15 +1226,15 @@ iavf_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
 	 * hash function.
 	 */
 	if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
-		if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
-		    ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+		if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+		    RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
 			return true;
 
 		if (!(rss_type &
-		   (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
-		    ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
-		    ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
-		    ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+		   (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+		    RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
 			return true;
 	}
 
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 88bbd40c1027..ac4db117f5cd 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -617,7 +617,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rxq->vsi = vsi;
 	rxq->offloads = offloads;
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index f4ae2fd6e123..2d7f6b1b2dca 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -24,22 +24,22 @@
 #define IAVF_VPMD_TX_MAX_FREE_BUF 64
 
 #define IAVF_TX_NO_VECTOR_FLAGS (				 \
-		DEV_TX_OFFLOAD_MULTI_SEGS |		 \
-		DEV_TX_OFFLOAD_TCP_TSO)
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |		 \
+		RTE_ETH_TX_OFFLOAD_TCP_TSO)
 
 #define IAVF_TX_VECTOR_OFFLOAD (				 \
-		DEV_TX_OFFLOAD_VLAN_INSERT |		 \
-		DEV_TX_OFFLOAD_QINQ_INSERT |		 \
-		DEV_TX_OFFLOAD_IPV4_CKSUM |		 \
-		DEV_TX_OFFLOAD_SCTP_CKSUM |		 \
-		DEV_TX_OFFLOAD_UDP_CKSUM |		 \
-		DEV_TX_OFFLOAD_TCP_CKSUM)
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |		 \
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |		 \
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |		 \
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |		 \
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |		 \
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 
 #define IAVF_RX_VECTOR_OFFLOAD (				 \
-		DEV_RX_OFFLOAD_CHECKSUM |		 \
-		DEV_RX_OFFLOAD_SCTP_CKSUM |		 \
-		DEV_RX_OFFLOAD_VLAN |		 \
-		DEV_RX_OFFLOAD_RSS_HASH)
+		RTE_ETH_RX_OFFLOAD_CHECKSUM |		 \
+		RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |		 \
+		RTE_ETH_RX_OFFLOAD_VLAN |		 \
+		RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define IAVF_VECTOR_PATH 0
 #define IAVF_VECTOR_OFFLOAD_PATH 1
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 72a4fcab04a5..b47c51b8ebe4 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -906,7 +906,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 		 * needs to load 2nd 16B of each desc for RSS hash parsing,
 		 * will cause performance drop to get into this context.
 		 */
-		if (offloads & DEV_RX_OFFLOAD_RSS_HASH ||
+		if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH ||
 		    rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh7 =
@@ -958,7 +958,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 					(_mm256_castsi128_si256(raw_desc_bh0),
 					raw_desc_bh1, 1);
 
-			if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+			if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 				/**
 				 * to shift the 32b RSS hash value to the
 				 * highest 32b of each 128b before mask
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 12375d3d80bd..b8f2f69f12fc 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1141,7 +1141,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 			 * needs to load 2nd 16B of each desc for RSS hash parsing,
 			 * will cause performance drop to get into this context.
 			 */
-			if (offloads & DEV_RX_OFFLOAD_RSS_HASH ||
+			if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH ||
 			    rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
@@ -1193,7 +1193,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 						(_mm256_castsi128_si256(raw_desc_bh0),
 						 raw_desc_bh1, 1);
 
-				if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+				if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 					/**
 					 * to shift the 32b RSS hash value to the
 					 * highest 32b of each 128b before mask
@@ -1721,7 +1721,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->next_dd - (n - 1);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
 								rte_lcore_id());
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index edb54991e298..1de43b9b8ee2 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -819,7 +819,7 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
 		 * needs to load 2nd 16B of each desc for RSS hash parsing,
 		 * will cause performance drop to get into this context.
 		 */
-		if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh3 =
 				_mm_load_si128
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index c9c01a14e349..7b7df5eebb6d 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -835,7 +835,7 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
 		PMD_DRV_LOG(DEBUG, "RSS is not supported");
 		return -ENOTSUP;
 	}
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
 		/* set all lut items to default queue */
 		memset(hw->rss_lut, 0, hw->vf_res->rss_lut_size);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index ebd8ca57ef5f..1cda2db00e56 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -95,7 +95,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
 	}
 
 	rxq->max_pkt_len = max_pkt_len;
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 	    (rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size) {
 		dev_data->scattered_rx = 1;
 	}
@@ -582,7 +582,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -644,7 +644,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
 	}
 
 	ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	ad->pf.adapter_stopped = 1;
 	hw->tm_conf.committed = false;
 
@@ -660,8 +660,8 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
 	ad->rx_bulk_alloc_allowed = true;
 	ad->tx_simple_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	return 0;
 }
@@ -683,27 +683,27 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -933,42 +933,42 @@ ice_dcf_link_update(struct rte_eth_dev *dev,
 	 */
 	switch (hw->link_speed) {
 	case 10:
-		new_link.link_speed = ETH_SPEED_NUM_10M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case 100:
-		new_link.link_speed = ETH_SPEED_NUM_100M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case 1000:
-		new_link.link_speed = ETH_SPEED_NUM_1G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case 10000:
-		new_link.link_speed = ETH_SPEED_NUM_10G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case 20000:
-		new_link.link_speed = ETH_SPEED_NUM_20G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case 25000:
-		new_link.link_speed = ETH_SPEED_NUM_25G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case 40000:
-		new_link.link_speed = ETH_SPEED_NUM_40G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case 50000:
-		new_link.link_speed = ETH_SPEED_NUM_50G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case 100000:
-		new_link.link_speed = ETH_SPEED_NUM_100G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	default:
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	new_link.link_status = hw->link_up ? ETH_LINK_UP :
-					     ETH_LINK_DOWN;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = hw->link_up ? RTE_ETH_LINK_UP :
+					     RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(dev, &new_link);
 }
@@ -987,11 +987,11 @@ ice_dcf_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ice_create_tunnel(parent_hw, TNL_VXLAN,
 					udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_ECPRI:
+	case RTE_ETH_TUNNEL_TYPE_ECPRI:
 		ret = ice_create_tunnel(parent_hw, TNL_ECPRI,
 					udp_tunnel->udp_port);
 		break;
@@ -1018,8 +1018,8 @@ ice_dcf_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
-	case RTE_TUNNEL_TYPE_ECPRI:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_ECPRI:
 		ret = ice_destroy_tunnel(parent_hw, udp_tunnel->udp_port, 0);
 		break;
 	default:
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 44fb38dbe7b1..b9fcfc80ad9b 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -37,7 +37,7 @@ ice_dcf_vf_repr_dev_configure(struct rte_eth_dev *dev)
 static int
 ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -45,7 +45,7 @@ ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
 static int
 ice_dcf_vf_repr_dev_stop(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -143,28 +143,28 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -246,9 +246,9 @@ ice_dcf_vf_repr_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		return -ENOTSUP;
 
 	/* Vlan stripping setting */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		bool enable = !!(dev_conf->rxmode.offloads &
-				 DEV_RX_OFFLOAD_VLAN_STRIP);
+				 RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 		if (enable && repr->outer_vlan_info.port_vlan_ena) {
 			PMD_DRV_LOG(ERR,
@@ -345,7 +345,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
 	if (!ice_dcf_vlan_offload_ena(repr))
 		return -ENOTSUP;
 
-	if (vlan_type != ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
 		PMD_DRV_LOG(ERR,
 			    "Can accelerate only outer VLAN in QinQ\n");
 		return -EINVAL;
@@ -375,7 +375,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
 
 	if (repr->outer_vlan_info.stripping_ena) {
 		err = ice_dcf_vf_repr_vlan_offload_set(dev,
-						       ETH_VLAN_STRIP_MASK);
+						       RTE_ETH_VLAN_STRIP_MASK);
 		if (err) {
 			PMD_DRV_LOG(ERR,
 				    "Failed to reset VLAN stripping : %d\n",
@@ -449,7 +449,7 @@ ice_dcf_vf_repr_init_vlan(struct rte_eth_dev *vf_rep_eth_dev)
 	int err;
 
 	err = ice_dcf_vf_repr_vlan_offload_set(vf_rep_eth_dev,
-					       ETH_VLAN_STRIP_MASK);
+					       RTE_ETH_VLAN_STRIP_MASK);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Failed to set VLAN offload");
 		return err;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index edbc74632711..6a6637a15af7 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1487,9 +1487,9 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
 	TAILQ_INIT(&vsi->mac_list);
 	TAILQ_INIT(&vsi->vlan_list);
 
-	/* Be sync with ETH_RSS_RETA_SIZE_x maximum value definition */
+	/* Be sync with RTE_ETH_RSS_RETA_SIZE_x maximum value definition */
 	pf->hash_lut_size = hw->func_caps.common_cap.rss_table_size >
-			ETH_RSS_RETA_SIZE_512 ? ETH_RSS_RETA_SIZE_512 :
+			RTE_ETH_RSS_RETA_SIZE_512 ? RTE_ETH_RSS_RETA_SIZE_512 :
 			hw->func_caps.common_cap.rss_table_size;
 	pf->flags |= ICE_FLAG_RSS_AQ_CAPABLE;
 
@@ -2993,14 +2993,14 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	int ret;
 
 #define ICE_RSS_HF_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 	ret = ice_rem_vsi_rss_cfg(hw, vsi->idx);
 	if (ret)
@@ -3010,7 +3010,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	cfg.symm = 0;
 	cfg.hdr_type = ICE_RSS_OUTER_HEADERS;
 	/* Configure RSS for IPv4 with src/dst addr as input set */
-	if (rss_hf & ETH_RSS_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_IPV4) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV4;
 		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -3020,7 +3020,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for IPv6 with src/dst addr as input set */
-	if (rss_hf & ETH_RSS_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_IPV6) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV6;
 		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -3030,7 +3030,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for udp4 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -3041,7 +3041,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for udp6 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -3052,7 +3052,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for tcp4 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -3063,7 +3063,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for tcp6 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -3074,7 +3074,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for sctp4 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_SCTP_IPV4;
@@ -3085,7 +3085,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for sctp6 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_SCTP_IPV6;
@@ -3095,7 +3095,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_IPV4) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV4;
@@ -3105,7 +3105,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_IPV6) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV6;
@@ -3115,7 +3115,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
 				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -3125,7 +3125,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
 				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -3135,7 +3135,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
 				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -3145,7 +3145,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
 				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -3288,8 +3288,8 @@ ice_dev_configure(struct rte_eth_dev *dev)
 	ad->rx_bulk_alloc_allowed = true;
 	ad->tx_simple_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (dev->data->nb_rx_queues) {
 		ret = ice_init_rss(pf);
@@ -3569,8 +3569,8 @@ ice_dev_start(struct rte_eth_dev *dev)
 	ice_set_rx_function(dev);
 	ice_set_tx_function(dev);
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-			ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+			RTE_ETH_VLAN_EXTEND_MASK;
 	ret = ice_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
@@ -3682,40 +3682,40 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_KEEP_CRC |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_FILTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->flow_type_rss_offloads = 0;
 
 	if (!is_safe_mode) {
 		dev_info->rx_offload_capa |=
-			DEV_RX_OFFLOAD_IPV4_CKSUM |
-			DEV_RX_OFFLOAD_UDP_CKSUM |
-			DEV_RX_OFFLOAD_TCP_CKSUM |
-			DEV_RX_OFFLOAD_QINQ_STRIP |
-			DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-			DEV_RX_OFFLOAD_VLAN_EXTEND |
-			DEV_RX_OFFLOAD_RSS_HASH |
-			DEV_RX_OFFLOAD_TIMESTAMP;
+			RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+			RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+			RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+			RTE_ETH_RX_OFFLOAD_RSS_HASH |
+			RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 		dev_info->tx_offload_capa |=
-			DEV_TX_OFFLOAD_QINQ_INSERT |
-			DEV_TX_OFFLOAD_IPV4_CKSUM |
-			DEV_TX_OFFLOAD_UDP_CKSUM |
-			DEV_TX_OFFLOAD_TCP_CKSUM |
-			DEV_TX_OFFLOAD_SCTP_CKSUM |
-			DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-			DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+			RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+			RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 		dev_info->flow_type_rss_offloads |= ICE_RSS_OFFLOAD_ALL;
 	}
 
 	dev_info->rx_queue_offload_capa = 0;
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->reta_size = pf->hash_lut_size;
 	dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
@@ -3754,24 +3754,24 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.nb_align = ICE_ALIGN_RING_DESC,
 	};
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M |
-			       ETH_LINK_SPEED_100M |
-			       ETH_LINK_SPEED_1G |
-			       ETH_LINK_SPEED_2_5G |
-			       ETH_LINK_SPEED_5G |
-			       ETH_LINK_SPEED_10G |
-			       ETH_LINK_SPEED_20G |
-			       ETH_LINK_SPEED_25G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			       RTE_ETH_LINK_SPEED_100M |
+			       RTE_ETH_LINK_SPEED_1G |
+			       RTE_ETH_LINK_SPEED_2_5G |
+			       RTE_ETH_LINK_SPEED_5G |
+			       RTE_ETH_LINK_SPEED_10G |
+			       RTE_ETH_LINK_SPEED_20G |
+			       RTE_ETH_LINK_SPEED_25G;
 
 	phy_type_low = hw->port_info->phy.phy_type_low;
 	phy_type_high = hw->port_info->phy.phy_type_high;
 
 	if (ICE_PHY_TYPE_SUPPORT_50G(phy_type_low))
-		dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
 
 	if (ICE_PHY_TYPE_SUPPORT_100G_LOW(phy_type_low) ||
 			ICE_PHY_TYPE_SUPPORT_100G_HIGH(phy_type_high))
-		dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
 
 	dev_info->nb_rx_queues = dev->data->nb_rx_queues;
 	dev_info->nb_tx_queues = dev->data->nb_tx_queues;
@@ -3836,8 +3836,8 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		status = ice_aq_get_link_info(hw->port_info, enable_lse,
 					      &link_status, NULL);
 		if (status != ICE_SUCCESS) {
-			link.link_speed = ETH_SPEED_NUM_100M;
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			link.link_speed = RTE_ETH_SPEED_NUM_100M;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR, "Failed to get link info");
 			goto out;
 		}
@@ -3853,55 +3853,55 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		goto out;
 
 	/* Full-duplex operation at all supported speeds */
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	/* Parse the link status */
 	switch (link_status.link_speed) {
 	case ICE_AQ_LINK_SPEED_10MB:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case ICE_AQ_LINK_SPEED_100MB:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case ICE_AQ_LINK_SPEED_1000MB:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case ICE_AQ_LINK_SPEED_2500MB:
-		link.link_speed = ETH_SPEED_NUM_2_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	case ICE_AQ_LINK_SPEED_5GB:
-		link.link_speed = ETH_SPEED_NUM_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 	case ICE_AQ_LINK_SPEED_10GB:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case ICE_AQ_LINK_SPEED_20GB:
-		link.link_speed = ETH_SPEED_NUM_20G;
+		link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case ICE_AQ_LINK_SPEED_25GB:
-		link.link_speed = ETH_SPEED_NUM_25G;
+		link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case ICE_AQ_LINK_SPEED_40GB:
-		link.link_speed = ETH_SPEED_NUM_40G;
+		link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case ICE_AQ_LINK_SPEED_50GB:
-		link.link_speed = ETH_SPEED_NUM_50G;
+		link.link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case ICE_AQ_LINK_SPEED_100GB:
-		link.link_speed = ETH_SPEED_NUM_100G;
+		link.link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	case ICE_AQ_LINK_SPEED_UNKNOWN:
 		PMD_DRV_LOG(ERR, "Unknown link speed");
-		link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "None link speed");
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			      ETH_LINK_SPEED_FIXED);
+			      RTE_ETH_LINK_SPEED_FIXED);
 
 out:
 	ice_atomic_write_link_status(dev, &link);
@@ -4377,15 +4377,15 @@ ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ice_vsi_config_vlan_filter(vsi, true);
 		else
 			ice_vsi_config_vlan_filter(vsi, false);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			ice_vsi_config_vlan_stripping(vsi, true);
 		else
 			ice_vsi_config_vlan_stripping(vsi, false);
@@ -4500,8 +4500,8 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
 		goto out;
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			lut[i] = reta_conf[idx].reta[shift];
 	}
@@ -4550,8 +4550,8 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
 		goto out;
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = lut[i];
 	}
@@ -5460,7 +5460,7 @@ ice_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ice_create_tunnel(hw, TNL_VXLAN, udp_tunnel->udp_port);
 		break;
 	default:
@@ -5484,7 +5484,7 @@ ice_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ice_destroy_tunnel(hw, udp_tunnel->udp_port, 0);
 		break;
 	default:
@@ -5505,7 +5505,7 @@ ice_timesync_enable(struct rte_eth_dev *dev)
 	int ret;
 
 	if (dev->data->dev_started && !(dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_TIMESTAMP)) {
+	    RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
 		PMD_DRV_LOG(ERR, "Rx timestamp offload not configured");
 		return -1;
 	}
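
A consequence of the check above: IEEE 1588 timesync can only be enabled on
a started port if the Rx timestamp offload was requested at configure time.
Roughly, on the application side (a sketch; queue setup and error handling
elided, port_id assumed valid):

    struct rte_eth_conf conf = {0};

    conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
    /* ... rte_eth_dev_configure() / queue setup / rte_eth_dev_start() ... */
    rte_eth_timesync_enable(port_id);
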
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 1cd3753ccc5f..599e0028f7e8 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -117,19 +117,19 @@
 		       ICE_FLAG_VF_MAC_BY_PF)
 
 #define ICE_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L2_PAYLOAD)
 
 /**
  * The overhead from MTU to max frame size.
diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c
index 20a3204fab7e..35eff8b17d28 100644
--- a/drivers/net/ice/ice_hash.c
+++ b/drivers/net/ice/ice_hash.c
@@ -39,27 +39,27 @@
 #define ICE_IPV4_PROT		BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_PROT)
 #define ICE_IPV6_PROT		BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PROT)
 
-#define VALID_RSS_IPV4_L4	(ETH_RSS_NONFRAG_IPV4_UDP	| \
-				 ETH_RSS_NONFRAG_IPV4_TCP	| \
-				 ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4	(RTE_ETH_RSS_NONFRAG_IPV4_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
-#define VALID_RSS_IPV6_L4	(ETH_RSS_NONFRAG_IPV6_UDP	| \
-				 ETH_RSS_NONFRAG_IPV6_TCP	| \
-				 ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4	(RTE_ETH_RSS_NONFRAG_IPV6_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
-#define VALID_RSS_IPV4		(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4		(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
 				 VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6		(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6		(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
 				 VALID_RSS_IPV6_L4)
 #define VALID_RSS_L3		(VALID_RSS_IPV4 | VALID_RSS_IPV6)
 #define VALID_RSS_L4		(VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
 
-#define VALID_RSS_ATTR		(ETH_RSS_L3_SRC_ONLY	| \
-				 ETH_RSS_L3_DST_ONLY	| \
-				 ETH_RSS_L4_SRC_ONLY	| \
-				 ETH_RSS_L4_DST_ONLY	| \
-				 ETH_RSS_L2_SRC_ONLY	| \
-				 ETH_RSS_L2_DST_ONLY	| \
+#define VALID_RSS_ATTR		(RTE_ETH_RSS_L3_SRC_ONLY	| \
+				 RTE_ETH_RSS_L3_DST_ONLY	| \
+				 RTE_ETH_RSS_L4_SRC_ONLY	| \
+				 RTE_ETH_RSS_L4_DST_ONLY	| \
+				 RTE_ETH_RSS_L2_SRC_ONLY	| \
+				 RTE_ETH_RSS_L2_DST_ONLY	| \
 				 RTE_ETH_RSS_L3_PRE32	| \
 				 RTE_ETH_RSS_L3_PRE48	| \
 				 RTE_ETH_RSS_L3_PRE64)
@@ -373,87 +373,87 @@ struct ice_rss_hash_cfg eth_tmplt = {
 };
 
 /* IPv4 */
-#define ICE_RSS_TYPE_ETH_IPV4		(ETH_RSS_ETH | ETH_RSS_IPV4 | \
-					 ETH_RSS_FRAG_IPV4 | \
-					 ETH_RSS_IPV4_CHKSUM)
+#define ICE_RSS_TYPE_ETH_IPV4		(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_FRAG_IPV4 | \
+					 RTE_ETH_RSS_IPV4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV4_UDP	(ICE_RSS_TYPE_ETH_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV4_TCP	(ICE_RSS_TYPE_ETH_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV4_SCTP	(ICE_RSS_TYPE_ETH_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP | \
-					 ETH_RSS_L4_CHKSUM)
-#define ICE_RSS_TYPE_IPV4		ETH_RSS_IPV4
-#define ICE_RSS_TYPE_IPV4_UDP		(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP)
-#define ICE_RSS_TYPE_IPV4_TCP		(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP)
-#define ICE_RSS_TYPE_IPV4_SCTP		(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP)
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
+#define ICE_RSS_TYPE_IPV4		RTE_ETH_RSS_IPV4
+#define ICE_RSS_TYPE_IPV4_UDP		(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define ICE_RSS_TYPE_IPV4_TCP		(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define ICE_RSS_TYPE_IPV4_SCTP		(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
 /* IPv6 */
-#define ICE_RSS_TYPE_ETH_IPV6		(ETH_RSS_ETH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_ETH_IPV6_FRAG	(ETH_RSS_ETH | ETH_RSS_IPV6 | \
-					 ETH_RSS_FRAG_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6		(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6_FRAG	(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_FRAG_IPV6)
 #define ICE_RSS_TYPE_ETH_IPV6_UDP	(ICE_RSS_TYPE_ETH_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV6_TCP	(ICE_RSS_TYPE_ETH_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV6_SCTP	(ICE_RSS_TYPE_ETH_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP | \
-					 ETH_RSS_L4_CHKSUM)
-#define ICE_RSS_TYPE_IPV6		ETH_RSS_IPV6
-#define ICE_RSS_TYPE_IPV6_UDP		(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP)
-#define ICE_RSS_TYPE_IPV6_TCP		(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP)
-#define ICE_RSS_TYPE_IPV6_SCTP		(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP)
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
+#define ICE_RSS_TYPE_IPV6		RTE_ETH_RSS_IPV6
+#define ICE_RSS_TYPE_IPV6_UDP		(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define ICE_RSS_TYPE_IPV6_TCP		(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define ICE_RSS_TYPE_IPV6_SCTP		(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 /* VLAN IPV4 */
 #define ICE_RSS_TYPE_VLAN_IPV4		(ICE_RSS_TYPE_IPV4 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
-					 ETH_RSS_FRAG_IPV4)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+					 RTE_ETH_RSS_FRAG_IPV4)
 #define ICE_RSS_TYPE_VLAN_IPV4_UDP	(ICE_RSS_TYPE_IPV4_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV4_TCP	(ICE_RSS_TYPE_IPV4_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV4_SCTP	(ICE_RSS_TYPE_IPV4_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 /* VLAN IPv6 */
 #define ICE_RSS_TYPE_VLAN_IPV6		(ICE_RSS_TYPE_IPV6 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV6_FRAG	(ICE_RSS_TYPE_IPV6 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
-					 ETH_RSS_FRAG_IPV6)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+					 RTE_ETH_RSS_FRAG_IPV6)
 #define ICE_RSS_TYPE_VLAN_IPV6_UDP	(ICE_RSS_TYPE_IPV6_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV6_TCP	(ICE_RSS_TYPE_IPV6_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV6_SCTP	(ICE_RSS_TYPE_IPV6_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 
 /* GTPU IPv4 */
 #define ICE_RSS_TYPE_GTPU_IPV4		(ICE_RSS_TYPE_IPV4 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV4_UDP	(ICE_RSS_TYPE_IPV4_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV4_TCP	(ICE_RSS_TYPE_IPV4_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 /* GTPU IPv6 */
 #define ICE_RSS_TYPE_GTPU_IPV6		(ICE_RSS_TYPE_IPV6 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV6_UDP	(ICE_RSS_TYPE_IPV6_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV6_TCP	(ICE_RSS_TYPE_IPV6_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 
 /* PPPOE */
-#define ICE_RSS_TYPE_PPPOE		(ETH_RSS_ETH | ETH_RSS_PPPOE)
+#define ICE_RSS_TYPE_PPPOE		(RTE_ETH_RSS_ETH | RTE_ETH_RSS_PPPOE)
 
 /* PPPOE IPv4 */
 #define ICE_RSS_TYPE_PPPOE_IPV4		(ICE_RSS_TYPE_IPV4 | \
@@ -472,17 +472,17 @@ struct ice_rss_hash_cfg eth_tmplt = {
 					 ICE_RSS_TYPE_PPPOE)
 
 /* ESP, AH, L2TPV3 and PFCP */
-#define ICE_RSS_TYPE_IPV4_ESP		(ETH_RSS_ESP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_ESP		(ETH_RSS_ESP | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_AH		(ETH_RSS_AH | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_AH		(ETH_RSS_AH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
 
 /* MAC */
-#define ICE_RSS_TYPE_ETH		ETH_RSS_ETH
+#define ICE_RSS_TYPE_ETH		RTE_ETH_RSS_ETH
 
 /**
  * Supported pattern for hash.
@@ -647,86 +647,86 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 	uint64_t *hash_flds = &hash_cfg->hash_flds;
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH) {
-		if (!(rss_type & ETH_RSS_ETH))
+		if (!(rss_type & RTE_ETH_RSS_ETH))
 			*hash_flds &= ~ICE_FLOW_HASH_ETH;
-		if (rss_type & ETH_RSS_L2_SRC_ONLY)
+		if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
 			*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_DA));
-		else if (rss_type & ETH_RSS_L2_DST_ONLY)
+		else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
 			*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_SA));
 		*addl_hdrs &= ~ICE_FLOW_SEG_HDR_ETH;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH_NON_IP) {
-		if (rss_type & ETH_RSS_ETH)
+		if (rss_type & RTE_ETH_RSS_ETH)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_TYPE);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_VLAN) {
-		if (rss_type & ETH_RSS_C_VLAN)
+		if (rss_type & RTE_ETH_RSS_C_VLAN)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_C_VLAN);
-		else if (rss_type & ETH_RSS_S_VLAN)
+		else if (rss_type & RTE_ETH_RSS_S_VLAN)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_S_VLAN);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_PPPOE) {
-		if (!(rss_type & ETH_RSS_PPPOE))
+		if (!(rss_type & RTE_ETH_RSS_PPPOE))
 			*hash_flds &= ~ICE_FLOW_HASH_PPPOE_SESS_ID;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV4) {
 		if (rss_type &
-		   (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-		    ETH_RSS_NONFRAG_IPV4_UDP |
-		    ETH_RSS_NONFRAG_IPV4_TCP |
-		    ETH_RSS_NONFRAG_IPV4_SCTP)) {
-			if (rss_type & ETH_RSS_FRAG_IPV4) {
+		   (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+		    RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+			if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
 				*addl_hdrs |= ICE_FLOW_SEG_HDR_IPV_FRAG;
 				*addl_hdrs &= ~(ICE_FLOW_SEG_HDR_IPV_OTHER);
 				*hash_flds |=
 					BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_ID);
 			}
-			if (rss_type & ETH_RSS_L3_SRC_ONLY)
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_DA));
-			else if (rss_type & ETH_RSS_L3_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_SA));
 			else if (rss_type &
-				(ETH_RSS_L4_SRC_ONLY |
-				ETH_RSS_L4_DST_ONLY))
+				(RTE_ETH_RSS_L4_SRC_ONLY |
+				RTE_ETH_RSS_L4_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_IPV4;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_IPV4;
 		}
 
-		if (rss_type & ETH_RSS_IPV4_CHKSUM)
+		if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_CHKSUM);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV6) {
 		if (rss_type &
-		   (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-		    ETH_RSS_NONFRAG_IPV6_UDP |
-		    ETH_RSS_NONFRAG_IPV6_TCP |
-		    ETH_RSS_NONFRAG_IPV6_SCTP)) {
-			if (rss_type & ETH_RSS_FRAG_IPV6)
+		   (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+		    RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+			if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
 				*hash_flds |=
 					BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_ID);
-			if (rss_type & ETH_RSS_L3_SRC_ONLY)
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
-			else if (rss_type & ETH_RSS_L3_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 			else if (rss_type &
-				(ETH_RSS_L4_SRC_ONLY |
-				ETH_RSS_L4_DST_ONLY))
+				(RTE_ETH_RSS_L4_SRC_ONLY |
+				RTE_ETH_RSS_L4_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_IPV6;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_IPV6;
 		}
 
 		if (rss_type & RTE_ETH_RSS_L3_PRE32) {
-			if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_SA));
-			} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+			} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_DA));
 			} else {
@@ -735,10 +735,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 			}
 		}
 		if (rss_type & RTE_ETH_RSS_L3_PRE48) {
-			if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_SA));
-			} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+			} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_DA));
 			} else {
@@ -747,10 +747,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 			}
 		}
 		if (rss_type & RTE_ETH_RSS_L3_PRE64) {
-			if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_SA));
-			} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+			} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_DA));
 			} else {
@@ -762,81 +762,81 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_UDP) {
 		if (rss_type &
-		   (ETH_RSS_NONFRAG_IPV4_UDP |
-		    ETH_RSS_NONFRAG_IPV6_UDP)) {
-			if (rss_type & ETH_RSS_L4_SRC_ONLY)
+		   (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+			if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_DST_PORT));
-			else if (rss_type & ETH_RSS_L4_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_SRC_PORT));
 			else if (rss_type &
-				(ETH_RSS_L3_SRC_ONLY |
-				  ETH_RSS_L3_DST_ONLY))
+				(RTE_ETH_RSS_L3_SRC_ONLY |
+				  RTE_ETH_RSS_L3_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
 		}
 
-		if (rss_type & ETH_RSS_L4_CHKSUM)
+		if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_CHKSUM);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_TCP) {
 		if (rss_type &
-		   (ETH_RSS_NONFRAG_IPV4_TCP |
-		    ETH_RSS_NONFRAG_IPV6_TCP)) {
-			if (rss_type & ETH_RSS_L4_SRC_ONLY)
+		   (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+			if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_DST_PORT));
-			else if (rss_type & ETH_RSS_L4_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_SRC_PORT));
 			else if (rss_type &
-				(ETH_RSS_L3_SRC_ONLY |
-				  ETH_RSS_L3_DST_ONLY))
+				(RTE_ETH_RSS_L3_SRC_ONLY |
+				  RTE_ETH_RSS_L3_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
 		}
 
-		if (rss_type & ETH_RSS_L4_CHKSUM)
+		if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_CHKSUM);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_SCTP) {
 		if (rss_type &
-		   (ETH_RSS_NONFRAG_IPV4_SCTP |
-		    ETH_RSS_NONFRAG_IPV6_SCTP)) {
-			if (rss_type & ETH_RSS_L4_SRC_ONLY)
+		   (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+			if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_DST_PORT));
-			else if (rss_type & ETH_RSS_L4_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_SRC_PORT));
 			else if (rss_type &
-				(ETH_RSS_L3_SRC_ONLY |
-				  ETH_RSS_L3_DST_ONLY))
+				(RTE_ETH_RSS_L3_SRC_ONLY |
+				  RTE_ETH_RSS_L3_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
 		}
 
-		if (rss_type & ETH_RSS_L4_CHKSUM)
+		if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_CHKSUM);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_L2TPV3) {
-		if (!(rss_type & ETH_RSS_L2TPV3))
+		if (!(rss_type & RTE_ETH_RSS_L2TPV3))
 			*hash_flds &= ~ICE_FLOW_HASH_L2TPV3_SESS_ID;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_ESP) {
-		if (!(rss_type & ETH_RSS_ESP))
+		if (!(rss_type & RTE_ETH_RSS_ESP))
 			*hash_flds &= ~ICE_FLOW_HASH_ESP_SPI;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_AH) {
-		if (!(rss_type & ETH_RSS_AH))
+		if (!(rss_type & RTE_ETH_RSS_AH))
 			*hash_flds &= ~ICE_FLOW_HASH_AH_SPI;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_PFCP_SESSION) {
-		if (!(rss_type & ETH_RSS_PFCP))
+		if (!(rss_type & RTE_ETH_RSS_PFCP))
 			*hash_flds &= ~ICE_FLOW_HASH_PFCP_SEID;
 	}
 }
@@ -870,7 +870,7 @@ ice_refine_hash_cfg_gtpu(struct ice_rss_hash_cfg *hash_cfg,
 	uint64_t *hash_flds = &hash_cfg->hash_flds;
 
 	/* update hash field for gtpu eh/gtpu dwn/gtpu up. */
-	if (!(rss_type & ETH_RSS_GTPU))
+	if (!(rss_type & RTE_ETH_RSS_GTPU))
 		return;
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_GTPU_DWN)
@@ -892,10 +892,10 @@ static void ice_refine_hash_cfg(struct ice_rss_hash_cfg *hash_cfg,
 }
 
 static uint64_t invalid_rss_comb[] = {
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	RTE_ETH_RSS_L3_PRE40 |
 	RTE_ETH_RSS_L3_PRE56 |
 	RTE_ETH_RSS_L3_PRE96
@@ -907,9 +907,9 @@ struct rss_attr_type {
 };
 
 static struct rss_attr_type rss_attr_to_valid_type[] = {
-	{ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY,	ETH_RSS_ETH},
-	{ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
-	{ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
+	{RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY,	RTE_ETH_RSS_ETH},
+	{RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
+	{RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
 	/* current ipv6 prefix only supports prefix 64 bits*/
 	{RTE_ETH_RSS_L3_PRE32,				VALID_RSS_IPV6},
 	{RTE_ETH_RSS_L3_PRE48,				VALID_RSS_IPV6},
@@ -928,16 +928,16 @@ ice_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
 	 * hash function.
 	 */
 	if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
-		if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
-		    ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+		if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+		    RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
 			return true;
 
 		if (!(rss_type &
-		   (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
-		    ETH_RSS_FRAG_IPV4 | ETH_RSS_FRAG_IPV6 |
-		    ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
-		    ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
-		    ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+		   (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+		    RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_FRAG_IPV6 |
+		    RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
 			return true;
 	}
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index ff362c21d9f5..8406240d7209 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -303,7 +303,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
 		}
 	}
 
-	if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 		/* Register mbuf field and flag for Rx timestamp */
 		err = rte_mbuf_dyn_rx_timestamp_register(
 				&ice_timestamp_dynfield_offset,
@@ -367,7 +367,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
 	regval |= (0x03 << QRXFLXP_CNTXT_RXDID_PRIO_S) &
 		QRXFLXP_CNTXT_RXDID_PRIO_M;
 
-	if (ad->ptp_ena || rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (ad->ptp_ena || rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 		regval |= QRXFLXP_CNTXT_TS_M;
 
 	ICE_WRITE_REG(hw, QRXFLXP_CNTXT(rxq->reg_idx), regval);
@@ -1117,7 +1117,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
 
 	rxq->reg_idx = vsi->base_queue + queue_idx;
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1624,7 +1624,7 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq)
 			ice_rxd_to_vlan_tci(mb, &rxdp[j]);
 			rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
-			if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+			if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 				ts_ns = ice_tstamp_convert_32b_64b(hw,
 					rte_le_to_cpu_32(rxdp[j].wb.flex_ts.ts_high));
 				if (ice_timestamp_dynflag > 0) {
@@ -1942,7 +1942,7 @@ ice_recv_scattered_pkts(void *rx_queue,
 		rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
 		pkt_flags = ice_rxd_error_to_pkt_flags(rx_stat_err0);
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
-		if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 			ts_ns = ice_tstamp_convert_32b_64b(hw,
 				rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high));
 			if (ice_timestamp_dynflag > 0) {
@@ -2373,7 +2373,7 @@ ice_recv_pkts(void *rx_queue,
 		rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
 		pkt_flags = ice_rxd_error_to_pkt_flags(rx_stat_err0);
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
-		if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 			ts_ns = ice_tstamp_convert_32b_64b(hw,
 				rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high));
 			if (ice_timestamp_dynflag > 0) {
@@ -2889,7 +2889,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
 	for (i = 0; i < txq->tx_rs_thresh; i++)
 		rte_prefetch0((txep + i)->mbuf);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
 		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
 			rte_mempool_put(txep->mbuf->pool, txep->mbuf);
 			txep->mbuf = NULL;
@@ -3365,7 +3365,7 @@ ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq)
 	/* Use a simple Tx queue if possible (only fast free is allowed) */
 	ad->tx_simple_allowed =
 		(txq->offloads ==
-		(txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+		(txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
 		txq->tx_rs_thresh >= ICE_TX_MAX_BURST);
 
 	if (ad->tx_simple_allowed)
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 490693bff218..86955539bea8 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -474,7 +474,7 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			 * will cause performance drop to get into this context.
 			 */
 			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-					DEV_RX_OFFLOAD_RSS_HASH) {
+					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
 					_mm_load_si128
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 7efe7b50a206..af23f6a34e58 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -585,7 +585,7 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
 			 * will cause performance drop to get into this context.
 			 */
 			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-					DEV_RX_OFFLOAD_RSS_HASH) {
+					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
 					_mm_load_si128
@@ -995,7 +995,7 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->tx_next_dd - (n - 1);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		void **cache_objs;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index f0f99265857e..b1d975b31a5a 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -248,23 +248,23 @@ ice_rxq_vec_setup_default(struct ice_rx_queue *rxq)
 }
 
 #define ICE_TX_NO_VECTOR_FLAGS (			\
-		DEV_TX_OFFLOAD_MULTI_SEGS |		\
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |	\
-		DEV_TX_OFFLOAD_TCP_TSO)
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |		\
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |	\
+		RTE_ETH_TX_OFFLOAD_TCP_TSO)
 
 #define ICE_TX_VECTOR_OFFLOAD (				\
-		DEV_TX_OFFLOAD_VLAN_INSERT |		\
-		DEV_TX_OFFLOAD_QINQ_INSERT |		\
-		DEV_TX_OFFLOAD_IPV4_CKSUM |		\
-		DEV_TX_OFFLOAD_SCTP_CKSUM |		\
-		DEV_TX_OFFLOAD_UDP_CKSUM |		\
-		DEV_TX_OFFLOAD_TCP_CKSUM)
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |		\
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |		\
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 
 #define ICE_RX_VECTOR_OFFLOAD (				\
-		DEV_RX_OFFLOAD_CHECKSUM |		\
-		DEV_RX_OFFLOAD_SCTP_CKSUM |		\
-		DEV_RX_OFFLOAD_VLAN |			\
-		DEV_RX_OFFLOAD_RSS_HASH)
+		RTE_ETH_RX_OFFLOAD_CHECKSUM |		\
+		RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |		\
+		RTE_ETH_RX_OFFLOAD_VLAN |			\
+		RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define ICE_VECTOR_PATH		0
 #define ICE_VECTOR_OFFLOAD_PATH	1
@@ -287,7 +287,7 @@ ice_rx_vec_queue_default(struct ice_rx_queue *rxq)
 	if (rxq->proto_xtr != PROTO_XTR_NONE)
 		return -1;
 
-	if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 		return -1;
 
 	if (rxq->offloads & ICE_RX_VECTOR_OFFLOAD)
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 653bd28b417c..117494131f32 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -479,7 +479,7 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 		 * will cause performance drop to get into this context.
 		 */
 		if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_RSS_HASH) {
+				RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh3 =
 				_mm_load_si128
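
The ice changes above, like the rest of this patch, are purely mechanical:
every DEV_RX_OFFLOAD_*, DEV_TX_OFFLOAD_* and ETH_* flag keeps its value and
only gains an RTE_ETH_ prefix. As a minimal sketch (assuming an application
that must still build against pre-rename headers), the two spellings can be
bridged with a small compat shim:

    #include <rte_ethdev.h>

    /* Map renamed flags back to the old names when building against a
     * DPDK release that predates the RTE_ETH_ prefix; extend as needed. */
    #ifndef RTE_ETH_RX_OFFLOAD_VLAN_STRIP
    #define RTE_ETH_RX_OFFLOAD_VLAN_STRIP     DEV_RX_OFFLOAD_VLAN_STRIP
    #endif
    #ifndef RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
    #define RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE DEV_TX_OFFLOAD_MBUF_FAST_FREE
    #endif
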
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 2a1ed90b641b..7ce80a442b35 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -307,8 +307,8 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (rx_mq_mode != ETH_MQ_RX_NONE &&
-		rx_mq_mode != ETH_MQ_RX_RSS) {
+	if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+		rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
 		/* RSS together with VMDq not supported */
 		PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
 				rx_mq_mode);
@@ -318,7 +318,7 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
 	/* To not break software that sets an invalid mode, only display
 	 * a warning if an invalid mode is used.
 	 */
-	if (tx_mq_mode != ETH_MQ_TX_NONE)
+	if (tx_mq_mode != RTE_ETH_MQ_TX_NONE)
 		PMD_INIT_LOG(WARNING,
 			"TX mode %d is not supported. Due to meaningless in this driver, just ignore",
 			tx_mq_mode);
@@ -334,8 +334,8 @@ eth_igc_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	ret  = igc_check_mq_mode(dev);
 	if (ret != 0)
@@ -473,12 +473,12 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		uint16_t duplex, speed;
 		hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
 		link.link_duplex = (duplex == FULL_DUPLEX) ?
-				ETH_LINK_FULL_DUPLEX :
-				ETH_LINK_HALF_DUPLEX;
+				RTE_ETH_LINK_FULL_DUPLEX :
+				RTE_ETH_LINK_HALF_DUPLEX;
 		link.link_speed = speed;
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 		link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 		if (speed == SPEED_2500) {
 			uint32_t tipg = IGC_READ_REG(hw, IGC_TIPG);
@@ -490,9 +490,9 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		}
 	} else {
 		link.link_speed = 0;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_status = ETH_LINK_DOWN;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -525,7 +525,7 @@ eth_igc_interrupt_action(struct rte_eth_dev *dev)
 				" Port %d: Link Up - speed %u Mbps - %s",
 				dev->data->port_id,
 				(unsigned int)link.link_speed,
-				link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+				link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 				"full-duplex" : "half-duplex");
 		else
 			PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -972,18 +972,18 @@ eth_igc_start(struct rte_eth_dev *dev)
 
 	/* VLAN Offload Settings */
 	eth_igc_vlan_offload_set(dev,
-		ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK);
+		RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK);
 
 	/* Setup link speed and duplex */
 	speeds = &dev->data->dev_conf.link_speeds;
-	if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		hw->phy.autoneg_advertised = IGC_ALL_SPEED_DUPLEX_2500;
 		hw->mac.autoneg = 1;
 	} else {
 		int num_speeds = 0;
 
-		if (*speeds & ETH_LINK_SPEED_FIXED) {
+		if (*speeds & RTE_ETH_LINK_SPEED_FIXED) {
 			PMD_DRV_LOG(ERR,
 				    "Force speed mode currently not supported");
 			igc_dev_clear_queues(dev);
@@ -993,33 +993,33 @@ eth_igc_start(struct rte_eth_dev *dev)
 		hw->phy.autoneg_advertised = 0;
 		hw->mac.autoneg = 1;
 
-		if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G)) {
+		if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G)) {
 			num_speeds = -1;
 			goto error_invalid_config;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_1G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_1G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_2_5G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_2_5G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_2500_FULL;
 			num_speeds++;
 		}
@@ -1482,14 +1482,14 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = hw->mac.rar_entry_count;
 	dev_info->rx_offload_capa = IGC_RX_OFFLOAD_ALL;
 	dev_info->tx_offload_capa = IGC_TX_OFFLOAD_ALL;
-	dev_info->rx_queue_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->rx_queue_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	dev_info->max_rx_queues = IGC_QUEUE_PAIRS_NUM;
 	dev_info->max_tx_queues = IGC_QUEUE_PAIRS_NUM;
 	dev_info->max_vmdq_pools = 0;
 
 	dev_info->hash_key_size = IGC_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = IGC_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1515,9 +1515,9 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->rx_desc_lim = rx_desc_lim;
 	dev_info->tx_desc_lim = tx_desc_lim;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G;
 
 	dev_info->max_mtu = dev_info->max_rx_pktlen - IGC_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -2141,13 +2141,13 @@ eth_igc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		rx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -2179,16 +2179,16 @@ eth_igc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		hw->fc.requested_mode = igc_fc_none;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		hw->fc.requested_mode = igc_fc_rx_pause;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		hw->fc.requested_mode = igc_fc_tx_pause;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		hw->fc.requested_mode = igc_fc_full;
 		break;
 	default:
@@ -2234,29 +2234,29 @@ eth_igc_rss_reta_update(struct rte_eth_dev *dev,
 	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
 	uint16_t i;
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR,
 			"The size of RSS redirection table configured(%d) doesn't match the number hardware can supported(%d)",
-			reta_size, ETH_RSS_RETA_SIZE_128);
+			reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
-	RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+	RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
 
 	/* set redirection table */
-	for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
 		union igc_rss_reta_reg reta, reg;
 		uint16_t idx, shift;
 		uint8_t j, mask;
 
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				IGC_RSS_RDT_REG_SIZE_MASK);
 
 		/* if no need to update the register */
 		if (!mask ||
-		    shift > (RTE_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
+		    shift > (RTE_ETH_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
 			continue;
 
 		/* check mask whether need to read the register value first */
@@ -2290,29 +2290,29 @@ eth_igc_rss_reta_query(struct rte_eth_dev *dev,
 	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
 	uint16_t i;
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR,
 			"The size of RSS redirection table configured(%d) doesn't match the number hardware can supported(%d)",
-			reta_size, ETH_RSS_RETA_SIZE_128);
+			reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
-	RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+	RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
 
 	/* read redirection table */
-	for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
 		union igc_rss_reta_reg reta;
 		uint16_t idx, shift;
 		uint8_t j, mask;
 
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				IGC_RSS_RDT_REG_SIZE_MASK);
 
 		/* if no need to read register */
 		if (!mask ||
-		    shift > (RTE_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
+		    shift > (RTE_ETH_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
 			continue;
 
 		/* read register and get the queue index */
@@ -2369,23 +2369,23 @@ eth_igc_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	rss_hf = 0;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_EX)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP_EX)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP_EX)
-		rss_hf |= ETH_RSS_IPV6_UDP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
 
 	rss_conf->rss_hf |= rss_hf;
 	return 0;
@@ -2514,22 +2514,22 @@ eth_igc_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			igc_vlan_hw_strip_enable(dev);
 		else
 			igc_vlan_hw_strip_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			igc_vlan_hw_filter_enable(dev);
 		else
 			igc_vlan_hw_filter_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			return igc_vlan_hw_extend_enable(dev);
 		else
 			return igc_vlan_hw_extend_disable(dev);
@@ -2547,7 +2547,7 @@ eth_igc_vlan_tpid_set(struct rte_eth_dev *dev,
 	uint32_t reg_val;
 
 	/* only the outer TPID of a double VLAN can be configured */
-	if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		reg_val = IGC_READ_REG(hw, IGC_VET);
 		reg_val = (reg_val & (~IGC_VET_EXT)) |
 			((uint32_t)tpid << IGC_VET_EXT_SHIFT);
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index 5e6c2ff30157..f56cad79e939 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -66,37 +66,37 @@ extern "C" {
 #define IGC_TX_MAX_MTU_SEG	UINT8_MAX
 
 #define IGC_RX_OFFLOAD_ALL	(    \
-	DEV_RX_OFFLOAD_VLAN_STRIP  | \
-	DEV_RX_OFFLOAD_VLAN_FILTER | \
-	DEV_RX_OFFLOAD_VLAN_EXTEND | \
-	DEV_RX_OFFLOAD_IPV4_CKSUM  | \
-	DEV_RX_OFFLOAD_UDP_CKSUM   | \
-	DEV_RX_OFFLOAD_TCP_CKSUM   | \
-	DEV_RX_OFFLOAD_SCTP_CKSUM  | \
-	DEV_RX_OFFLOAD_KEEP_CRC    | \
-	DEV_RX_OFFLOAD_SCATTER     | \
-	DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RX_OFFLOAD_VLAN_STRIP  | \
+	RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+	RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+	RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  | \
+	RTE_ETH_RX_OFFLOAD_UDP_CKSUM   | \
+	RTE_ETH_RX_OFFLOAD_TCP_CKSUM   | \
+	RTE_ETH_RX_OFFLOAD_SCTP_CKSUM  | \
+	RTE_ETH_RX_OFFLOAD_KEEP_CRC    | \
+	RTE_ETH_RX_OFFLOAD_SCATTER     | \
+	RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define IGC_TX_OFFLOAD_ALL	(    \
-	DEV_TX_OFFLOAD_VLAN_INSERT | \
-	DEV_TX_OFFLOAD_IPV4_CKSUM  | \
-	DEV_TX_OFFLOAD_UDP_CKSUM   | \
-	DEV_TX_OFFLOAD_TCP_CKSUM   | \
-	DEV_TX_OFFLOAD_SCTP_CKSUM  | \
-	DEV_TX_OFFLOAD_TCP_TSO     | \
-	DEV_TX_OFFLOAD_UDP_TSO	   | \
-	DEV_TX_OFFLOAD_MULTI_SEGS)
+	RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  | \
+	RTE_ETH_TX_OFFLOAD_UDP_CKSUM   | \
+	RTE_ETH_TX_OFFLOAD_TCP_CKSUM   | \
+	RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  | \
+	RTE_ETH_TX_OFFLOAD_TCP_TSO     | \
+	RTE_ETH_TX_OFFLOAD_UDP_TSO	   | \
+	RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define IGC_RSS_OFFLOAD_ALL	(    \
-	ETH_RSS_IPV4               | \
-	ETH_RSS_NONFRAG_IPV4_TCP   | \
-	ETH_RSS_NONFRAG_IPV4_UDP   | \
-	ETH_RSS_IPV6               | \
-	ETH_RSS_NONFRAG_IPV6_TCP   | \
-	ETH_RSS_NONFRAG_IPV6_UDP   | \
-	ETH_RSS_IPV6_EX            | \
-	ETH_RSS_IPV6_TCP_EX        | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4               | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP   | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP   | \
+	RTE_ETH_RSS_IPV6               | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP   | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP   | \
+	RTE_ETH_RSS_IPV6_EX            | \
+	RTE_ETH_RSS_IPV6_TCP_EX        | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define IGC_MAX_ETQF_FILTERS		3	/* etqf(3) is used for 1588 */
 #define IGC_ETQF_FILTER_1588		3
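
Applications probe the renamed RSS capabilities the same way as before, via
rte_eth_dev_info_get(); a short sketch (port_id assumed valid) of trimming a
requested hash mask to what IGC_RSS_OFFLOAD_ALL — or any other device —
reports:

    struct rte_eth_dev_info dev_info;
    uint64_t want = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP;

    rte_eth_dev_info_get(port_id, &dev_info);
    /* Drop hash types the device cannot compute before configuring. */
    want &= dev_info.flow_type_rss_offloads;
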
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 56132e8c6cd6..1d34ae2e1b15 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -127,7 +127,7 @@ struct igc_rx_queue {
 	uint8_t             crc_len;    /**< 0 if CRC stripped, 4 otherwise. */
 	uint8_t             drop_en;	/**< If not 0, set SRRCTL.Drop_En. */
 	uint32_t            flags;      /**< RX flags. */
-	uint64_t	    offloads;   /**< offloads of DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads;   /**< offloads of RTE_ETH_RX_OFFLOAD_* */
 };
 
 /** Offload features */
@@ -209,7 +209,7 @@ struct igc_tx_queue {
 	/**< Start context position for transmit queue. */
 	struct igc_advctx_info ctx_cache[IGC_CTX_NUM];
 	/**< Hardware context history.*/
-	uint64_t	       offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+	uint64_t	       offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
 };
 
 static inline uint64_t
@@ -847,23 +847,23 @@ igc_hw_rss_hash_set(struct igc_hw *hw, struct rte_eth_rss_conf *rss_conf)
 	/* Set configured hashing protocols in MRQC register */
 	rss_hf = rss_conf->rss_hf;
 	mrqc = IGC_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV4_TCP;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6;
-	if (rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_EX)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP;
-	if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV4_UDP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP;
-	if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP_EX;
 	IGC_WRITE_REG(hw, IGC_MRQC, mrqc);
 }
@@ -1037,10 +1037,10 @@ igc_dev_mq_rx_configure(struct rte_eth_dev *dev)
 	}
 
 	switch (dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		igc_rss_configure(dev);
 		break;
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 		/*
 		 * configure the RSS registers first,
 		 * then disable the RSS logic
@@ -1111,7 +1111,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 * call to configure
 		 */
-		rxq->crc_len = (offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+		rxq->crc_len = (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
 				RTE_ETHER_CRC_LEN : 0;
 
 		bus_addr = rxq->rx_ring_phys_addr;
@@ -1177,7 +1177,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 		IGC_WRITE_REG(hw, IGC_RXDCTL(rxq->reg_idx), rxdctl);
 	}
 
-	if (offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		dev->data->scattered_rx = 1;
 
 	if (dev->data->scattered_rx) {
@@ -1221,20 +1221,20 @@ igc_rx_init(struct rte_eth_dev *dev)
 	rxcsum |= IGC_RXCSUM_PCSD;
 
 	/* Enable both L3/L4 rx checksum offload */
-	if (offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		rxcsum |= IGC_RXCSUM_IPOFL;
 	else
 		rxcsum &= ~IGC_RXCSUM_IPOFL;
 
 	if (offloads &
-		(DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+		(RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
 		rxcsum |= IGC_RXCSUM_TUOFL;
-		offloads |= DEV_RX_OFFLOAD_SCTP_CKSUM;
+		offloads |= RTE_ETH_RX_OFFLOAD_SCTP_CKSUM;
 	} else {
 		rxcsum &= ~IGC_RXCSUM_TUOFL;
 	}
 
-	if (offloads & DEV_RX_OFFLOAD_SCTP_CKSUM)
+	if (offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM)
 		rxcsum |= IGC_RXCSUM_CRCOFL;
 	else
 		rxcsum &= ~IGC_RXCSUM_CRCOFL;
@@ -1242,7 +1242,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 	IGC_WRITE_REG(hw, IGC_RXCSUM, rxcsum);
 
 	/* Setup the Receive Control Register. */
-	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rctl &= ~IGC_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
 	else
 		rctl |= IGC_RCTL_SECRC; /* Strip Ethernet CRC. */
@@ -1279,12 +1279,12 @@ igc_rx_init(struct rte_eth_dev *dev)
 		IGC_WRITE_REG(hw, IGC_RDT(rxq->reg_idx), rxq->nb_rx_desc - 1);
 
 		dvmolr = IGC_READ_REG(hw, IGC_DVMOLR(rxq->reg_idx));
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			dvmolr |= IGC_DVMOLR_STRVLAN;
 		else
 			dvmolr &= ~IGC_DVMOLR_STRVLAN;
 
-		if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			dvmolr &= ~IGC_DVMOLR_STRCRC;
 		else
 			dvmolr |= IGC_DVMOLR_STRCRC;
@@ -2253,10 +2253,10 @@ eth_igc_vlan_strip_queue_set(struct rte_eth_dev *dev,
 	reg_val = IGC_READ_REG(hw, IGC_DVMOLR(rx_queue_id));
 	if (on) {
 		reg_val |= IGC_DVMOLR_STRVLAN;
-		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
 		reg_val &= ~(IGC_DVMOLR_STRVLAN | IGC_DVMOLR_HIDVLAN);
-		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	IGC_WRITE_REG(hw, IGC_DVMOLR(rx_queue_id), reg_val);
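
All of the RETA hunks in this patch follow the same indexing scheme: the
redirection table is exposed as an array of 64-entry groups, with
RTE_ETH_RETA_GROUP_SIZE replacing the old RTE_RETA_GROUP_SIZE spelling. A
caller-side sketch (round-robin spread, error handling elided, reta_size
assumed to be a non-zero multiple of the group size):

    #include <string.h>
    #include <rte_ethdev.h>

    /* Spread reta_size table entries across nb_rxq queues round-robin. */
    static int
    fill_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_rxq)
    {
        struct rte_eth_rss_reta_entry64
                reta_conf[reta_size / RTE_ETH_RETA_GROUP_SIZE];
        uint16_t i;

        memset(reta_conf, 0, sizeof(reta_conf));
        for (i = 0; i < reta_size; i++) {
            uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
            uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

            reta_conf[idx].mask |= 1ULL << shift;
            reta_conf[idx].reta[shift] = i % nb_rxq;
        }
        return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
    }
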
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index f94a1fed0a38..c688c3735c06 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -280,37 +280,37 @@ ionic_dev_link_update(struct rte_eth_dev *eth_dev,
 	memset(&link, 0, sizeof(link));
 
 	if (adapter->idev.port_info->config.an_enable) {
-		link.link_autoneg = ETH_LINK_AUTONEG;
+		link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	}
 
 	if (!adapter->link_up ||
 	    !(lif->state & IONIC_LIF_F_UP)) {
 		/* Interface is down */
-		link.link_status = ETH_LINK_DOWN;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	} else {
 		/* Interface is up */
-		link.link_status = ETH_LINK_UP;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_status = RTE_ETH_LINK_UP;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		switch (adapter->link_speed) {
 		case  10000:
-			link.link_speed = ETH_SPEED_NUM_10G;
+			link.link_speed = RTE_ETH_SPEED_NUM_10G;
 			break;
 		case  25000:
-			link.link_speed = ETH_SPEED_NUM_25G;
+			link.link_speed = RTE_ETH_SPEED_NUM_25G;
 			break;
 		case  40000:
-			link.link_speed = ETH_SPEED_NUM_40G;
+			link.link_speed = RTE_ETH_SPEED_NUM_40G;
 			break;
 		case  50000:
-			link.link_speed = ETH_SPEED_NUM_50G;
+			link.link_speed = RTE_ETH_SPEED_NUM_50G;
 			break;
 		case 100000:
-			link.link_speed = ETH_SPEED_NUM_100G;
+			link.link_speed = RTE_ETH_SPEED_NUM_100G;
 			break;
 		default:
-			link.link_speed = ETH_SPEED_NUM_NONE;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 			break;
 		}
 	}
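
The speed mapping above lands in the rte_eth_link that applications read
back; a minimal consumer sketch (port_id assumed to be a valid, started
port):

    #include <stdio.h>
    #include <rte_ethdev.h>

    struct rte_eth_link link;

    if (rte_eth_link_get_nowait(port_id, &link) == 0 &&
        link.link_status == RTE_ETH_LINK_UP)
        printf("port %u: %u Mbps, %s-duplex\n",
               (unsigned int)port_id, (unsigned int)link.link_speed,
               link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
               "full" : "half");
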
@@ -387,17 +387,17 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->flow_type_rss_offloads = IONIC_ETH_RSS_OFFLOAD_ALL;
 
 	dev_info->speed_capa =
-		ETH_LINK_SPEED_10G |
-		ETH_LINK_SPEED_25G |
-		ETH_LINK_SPEED_40G |
-		ETH_LINK_SPEED_50G |
-		ETH_LINK_SPEED_100G;
+		RTE_ETH_LINK_SPEED_10G |
+		RTE_ETH_LINK_SPEED_25G |
+		RTE_ETH_LINK_SPEED_40G |
+		RTE_ETH_LINK_SPEED_50G |
+		RTE_ETH_LINK_SPEED_100G;
 
 	/*
 	 * Per-queue capabilities
 	 * RTE does not support disabling a feature on a queue if it is
 	 * enabled globally on the device. Thus the driver does not advertise
-	 * capabilities like DEV_TX_OFFLOAD_IPV4_CKSUM as per-queue even
+	 * capabilities like RTE_ETH_TX_OFFLOAD_IPV4_CKSUM as per-queue even
 	 * though the driver would be otherwise capable of disabling it on
 	 * a per-queue basis.
 	 */
@@ -411,24 +411,24 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
 	 */
 
 	dev_info->rx_offload_capa = dev_info->rx_queue_offload_capa |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_RSS_HASH |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH |
 		0;
 
 	dev_info->tx_offload_capa = dev_info->tx_queue_offload_capa |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
 		0;
 
 	dev_info->rx_desc_lim = rx_desc_lim;
@@ -463,9 +463,9 @@ ionic_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 		fc_conf->autoneg = 0;
 
 		if (idev->port_info->config.pause_type)
-			fc_conf->mode = RTE_FC_FULL;
+			fc_conf->mode = RTE_ETH_FC_FULL;
 		else
-			fc_conf->mode = RTE_FC_NONE;
+			fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return 0;
@@ -487,14 +487,14 @@ ionic_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		pause_type = IONIC_PORT_PAUSE_TYPE_NONE;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		pause_type = IONIC_PORT_PAUSE_TYPE_LINK;
 		break;
-	case RTE_FC_RX_PAUSE:
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		return -ENOTSUP;
 	}
 
@@ -545,12 +545,12 @@ ionic_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	num = tbl_sz / RTE_RETA_GROUP_SIZE;
+	num = tbl_sz / RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < num; i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if (reta_conf[i].mask & ((uint64_t)1 << j)) {
-				index = (i * RTE_RETA_GROUP_SIZE) + j;
+				index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
 				lif->rss_ind_tbl[index] = reta_conf[i].reta[j];
 			}
 		}
@@ -585,12 +585,12 @@ ionic_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	num = reta_size / RTE_RETA_GROUP_SIZE;
+	num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < num; i++) {
 		memcpy(reta_conf->reta,
-			&lif->rss_ind_tbl[i * RTE_RETA_GROUP_SIZE],
-			RTE_RETA_GROUP_SIZE);
+			&lif->rss_ind_tbl[i * RTE_ETH_RETA_GROUP_SIZE],
+			RTE_ETH_RETA_GROUP_SIZE);
 		reta_conf++;
 	}
 
@@ -618,17 +618,17 @@ ionic_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
 			IONIC_RSS_HASH_KEY_SIZE);
 
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 	rss_conf->rss_hf = rss_hf;
 
@@ -660,17 +660,17 @@ ionic_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
 		if (!lif->rss_ind_tbl)
 			return -EINVAL;
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV4)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
 			rss_types |= IONIC_RSS_TYPE_IPV4;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			rss_types |= IONIC_RSS_TYPE_IPV4_TCP;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 			rss_types |= IONIC_RSS_TYPE_IPV4_UDP;
-		if (rss_conf->rss_hf & ETH_RSS_IPV6)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
 			rss_types |= IONIC_RSS_TYPE_IPV6;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 			rss_types |= IONIC_RSS_TYPE_IPV6_TCP;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 			rss_types |= IONIC_RSS_TYPE_IPV6_UDP;
 
 		ionic_lif_rss_config(lif, rss_types, key, NULL);
@@ -842,15 +842,15 @@ ionic_dev_configure(struct rte_eth_dev *eth_dev)
 static inline uint32_t
 ionic_parse_link_speeds(uint16_t link_speeds)
 {
-	if (link_speeds & ETH_LINK_SPEED_100G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_100G)
 		return 100000;
-	else if (link_speeds & ETH_LINK_SPEED_50G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_50G)
 		return 50000;
-	else if (link_speeds & ETH_LINK_SPEED_40G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_40G)
 		return 40000;
-	else if (link_speeds & ETH_LINK_SPEED_25G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_25G)
 		return 25000;
-	else if (link_speeds & ETH_LINK_SPEED_10G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 		return 10000;
 	else
 		return 0;
@@ -874,12 +874,12 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
 	IONIC_PRINT_CALL();
 
 	allowed_speeds =
-		ETH_LINK_SPEED_FIXED |
-		ETH_LINK_SPEED_10G |
-		ETH_LINK_SPEED_25G |
-		ETH_LINK_SPEED_40G |
-		ETH_LINK_SPEED_50G |
-		ETH_LINK_SPEED_100G;
+		RTE_ETH_LINK_SPEED_FIXED |
+		RTE_ETH_LINK_SPEED_10G |
+		RTE_ETH_LINK_SPEED_25G |
+		RTE_ETH_LINK_SPEED_40G |
+		RTE_ETH_LINK_SPEED_50G |
+		RTE_ETH_LINK_SPEED_100G;
 
 	if (dev_conf->link_speeds & ~allowed_speeds) {
 		IONIC_PRINT(ERR, "Invalid link setting");
@@ -896,7 +896,7 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
 	}
 
 	/* Configure link */
-	an_enable = (dev_conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+	an_enable = (dev_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 	ionic_dev_cmd_port_autoneg(idev, an_enable);
 	err = ionic_dev_cmd_wait_check(idev, IONIC_DEVCMD_TIMEOUT);
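
For applications tracking this rename, the link-reporting constants in the
hunks above map one-to-one onto the old names. A minimal sketch of reading
link state with the new spellings (the helper name and port id are
illustrative, not part of this patch):

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Assumes the port is already configured and started. */
    static void print_link(uint16_t port_id)
    {
            struct rte_eth_link link;

            if (rte_eth_link_get_nowait(port_id, &link) != 0)
                    return;

            printf("port %u: %s, %u Mbps, %s-duplex\n", port_id,
                   link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
                   link.link_speed,
                   link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
                   "full" : "half");
    }
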
diff --git a/drivers/net/ionic/ionic_ethdev.h b/drivers/net/ionic/ionic_ethdev.h
index 6cbcd0f825a3..652f28c97d57 100644
--- a/drivers/net/ionic/ionic_ethdev.h
+++ b/drivers/net/ionic/ionic_ethdev.h
@@ -8,12 +8,12 @@
 #include <rte_ethdev.h>
 
 #define IONIC_ETH_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define IONIC_ETH_DEV_TO_LIF(eth_dev) ((struct ionic_lif *) \
 	(eth_dev)->data->dev_private)
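
Per-driver masks like IONIC_ETH_RSS_OFFLOAD_ALL above are plain OR-combinations
of the renamed RTE_ETH_RSS_* bits, and they surface to applications through
dev_info.flow_type_rss_offloads. A hedged sketch of clamping a requested hash
mask to what a port reports (the helper name is illustrative):

    #include <rte_ethdev.h>

    /* Keep only the RSS hash types the port actually supports. */
    static uint64_t clamp_rss_hf(uint16_t port_id, uint64_t wanted)
    {
            struct rte_eth_dev_info info;

            if (rte_eth_dev_info_get(port_id, &info) != 0)
                    return 0;

            return wanted & info.flow_type_rss_offloads;
    }

For example, clamp_rss_hf(0, RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP)
before filling rss_conf.rss_hf.
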
diff --git a/drivers/net/ionic/ionic_lif.c b/drivers/net/ionic/ionic_lif.c
index a1f9ce2d81cb..5e8fdf3893ad 100644
--- a/drivers/net/ionic/ionic_lif.c
+++ b/drivers/net/ionic/ionic_lif.c
@@ -1688,12 +1688,12 @@ ionic_lif_configure_vlan_offload(struct ionic_lif *lif, int mask)
 
 	/*
 	 * IONIC_ETH_HW_VLAN_RX_FILTER cannot be turned off, so
-	 * set DEV_RX_OFFLOAD_VLAN_FILTER and ignore ETH_VLAN_FILTER_MASK
+	 * set RTE_ETH_RX_OFFLOAD_VLAN_FILTER and ignore RTE_ETH_VLAN_FILTER_MASK
 	 */
-	rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			lif->features |= IONIC_ETH_HW_VLAN_RX_STRIP;
 		else
 			lif->features &= ~IONIC_ETH_HW_VLAN_RX_STRIP;
@@ -1733,19 +1733,19 @@ ionic_lif_configure(struct ionic_lif *lif)
 	/*
 	 * NB: While it is true that RSS_HASH is always enabled on ionic,
 	 *     setting this flag unconditionally causes problems in DTS.
-	 * rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	 * rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	 */
 
 	/* RX per-port */
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM ||
-	    rxmode->offloads & DEV_RX_OFFLOAD_UDP_CKSUM ||
-	    rxmode->offloads & DEV_RX_OFFLOAD_TCP_CKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM ||
+	    rxmode->offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM ||
+	    rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
 		lif->features |= IONIC_ETH_HW_RX_CSUM;
 	else
 		lif->features &= ~IONIC_ETH_HW_RX_CSUM;
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		lif->features |= IONIC_ETH_HW_RX_SG;
 		lif->eth_dev->data->scattered_rx = 1;
 	} else {
@@ -1754,30 +1754,30 @@ ionic_lif_configure(struct ionic_lif *lif)
 	}
 
 	/* Covers VLAN_STRIP */
-	ionic_lif_configure_vlan_offload(lif, ETH_VLAN_STRIP_MASK);
+	ionic_lif_configure_vlan_offload(lif, RTE_ETH_VLAN_STRIP_MASK);
 
 	/* TX per-port */
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		lif->features |= IONIC_ETH_HW_TX_CSUM;
 	else
 		lif->features &= ~IONIC_ETH_HW_TX_CSUM;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		lif->features |= IONIC_ETH_HW_VLAN_TX_TAG;
 	else
 		lif->features &= ~IONIC_ETH_HW_VLAN_TX_TAG;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		lif->features |= IONIC_ETH_HW_TX_SG;
 	else
 		lif->features &= ~IONIC_ETH_HW_TX_SG;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		lif->features |= IONIC_ETH_HW_TSO;
 		lif->features |= IONIC_ETH_HW_TSO_IPV6;
 		lif->features |= IONIC_ETH_HW_TSO_ECN;
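
The mask/offload pairing seen in ionic_lif_configure_vlan_offload() is the
same one applications drive through the ethdev VLAN API. A sketch of toggling
stripping at runtime with the renamed names (assuming a configured port; the
helper is illustrative):

    #include <rte_ethdev.h>

    static int enable_vlan_strip(uint16_t port_id)
    {
            int flags = rte_eth_dev_get_vlan_offload(port_id);

            if (flags < 0)
                    return flags;

            /* RTE_ETH_VLAN_STRIP_OFFLOAD is the application-side
             * counterpart of RTE_ETH_VLAN_STRIP_MASK used above.
             */
            return rte_eth_dev_set_vlan_offload(port_id,
                            flags | RTE_ETH_VLAN_STRIP_OFFLOAD);
    }
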
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index 4d16a39c6b6d..e3df7c56debe 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -203,11 +203,11 @@ ionic_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id,
 		txq->flags |= IONIC_QCQ_F_DEFERRED;
 
 	/* Convert the offload flags into queue flags */
-	if (offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+	if (offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		txq->flags |= IONIC_QCQ_F_CSUM_L3;
-	if (offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+	if (offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 		txq->flags |= IONIC_QCQ_F_CSUM_TCP;
-	if (offloads & DEV_TX_OFFLOAD_UDP_CKSUM)
+	if (offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)
 		txq->flags |= IONIC_QCQ_F_CSUM_UDP;
 
 	eth_dev->data->tx_queues[tx_queue_id] = txq;
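
The offload-to-queue-flag conversion above is driven by whatever the
application put in txmode.offloads. A sketch of requesting the renamed Tx
checksum offloads at configure time (queue counts and helper name are
hypothetical):

    #include <rte_ethdev.h>

    static int configure_tx_csum(uint16_t port_id)
    {
            struct rte_eth_conf conf = {0};

            conf.txmode.offloads = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
                                   RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
                                   RTE_ETH_TX_OFFLOAD_UDP_CKSUM;

            /* One Rx and one Tx queue, arbitrarily. */
            return rte_eth_dev_configure(port_id, 1, 1, &conf);
    }
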
@@ -743,11 +743,11 @@ ionic_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 
 	/*
 	 * Note: the interface does not currently support
-	 * DEV_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
+	 * RTE_ETH_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
 	 * when the adapter will be able to keep the CRC and subtract
 	 * it to the length for all received packets:
 	 * if (eth_dev->data->dev_conf.rxmode.offloads &
-	 *     DEV_RX_OFFLOAD_KEEP_CRC)
+	 *     RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 	 *   rxq->crc_len = ETHER_CRC_LEN;
 	 */
 
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 063a9c6a6f7f..17088585757f 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -50,11 +50,11 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->speed_capa =
 		(hw->retimer.mac_type ==
 			IFPGA_RAWDEV_RETIMER_MAC_TYPE_10GE_XFI) ?
-		ETH_LINK_SPEED_10G :
+		RTE_ETH_LINK_SPEED_10G :
 		((hw->retimer.mac_type ==
 			IFPGA_RAWDEV_RETIMER_MAC_TYPE_25GE_25GAUI) ?
-		ETH_LINK_SPEED_25G :
-		ETH_LINK_SPEED_AUTONEG);
+		RTE_ETH_LINK_SPEED_25G :
+		RTE_ETH_LINK_SPEED_AUTONEG);
 
 	dev_info->max_rx_queues  = 1;
 	dev_info->max_tx_queues  = 1;
@@ -67,30 +67,30 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
 	};
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_VLAN_FILTER;
-
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
 		dev_info->tx_queue_offload_capa;
 
 	dev_info->dev_capa =
@@ -2399,10 +2399,10 @@ ipn3ke_update_link(struct rte_rawdev *rawdev,
 				(uint64_t *)&link_speed);
 	switch (link_speed) {
 	case IFPGA_RAWDEV_LINK_SPEED_10GB:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case IFPGA_RAWDEV_LINK_SPEED_25GB:
-		link->link_speed = ETH_SPEED_NUM_25G;
+		link->link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	default:
 		IPN3KE_AFU_PMD_ERR("Unknown link speed info %u", link_speed);
@@ -2460,9 +2460,9 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,
 
 	memset(&link, 0, sizeof(link));
 
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_autoneg = !(ethdev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	rawdev = hw->rawdev;
 	ipn3ke_update_link(rawdev, rpst->port_id, &link);
@@ -2518,9 +2518,9 @@ ipn3ke_rpst_link_check(struct ipn3ke_rpst *rpst)
 
 	memset(&link, 0, sizeof(link));
 
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_autoneg = !(rpst->ethdev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	rawdev = hw->rawdev;
 	ipn3ke_update_link(rawdev, rpst->port_id, &link);
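
speed_capa advertising and link_speeds parsing are two sides of the same
contract. On the application side, pinning a fixed speed now reads as below
(a sketch; the helper and the 25G choice are hypothetical):

    #include <rte_ethdev.h>

    static void request_fixed_25g(struct rte_eth_conf *conf)
    {
            /* Disable autoneg and force 25G; the port must advertise
             * RTE_ETH_LINK_SPEED_25G in dev_info.speed_capa.
             */
            conf->link_speeds = RTE_ETH_LINK_SPEED_FIXED |
                                RTE_ETH_LINK_SPEED_25G;
    }
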
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 46c95425adfb..7fd2c539e002 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1857,7 +1857,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 	qinq &= IXGBE_DMATXCTL_GDV;
 
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
+	case RTE_ETH_VLAN_TYPE_INNER:
 		if (qinq) {
 			reg = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
 			reg = (reg & (~IXGBE_VLNCTRL_VET)) | (uint32_t)tpid;
@@ -1872,7 +1872,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 				    " by single VLAN");
 		}
 		break;
-	case ETH_VLAN_TYPE_OUTER:
+	case RTE_ETH_VLAN_TYPE_OUTER:
 		if (qinq) {
 			/* Only the high 16-bits is valid */
 			IXGBE_WRITE_REG(hw, IXGBE_EXVET, (uint32_t)tpid <<
@@ -1959,10 +1959,10 @@ ixgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 
 	if (on) {
 		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
-		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
 		rxq->vlan_flags = PKT_RX_VLAN;
-		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
 
@@ -2083,7 +2083,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	if (hw->mac.type == ixgbe_mac_82598EB) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 			ctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
 			ctrl |= IXGBE_VLNCTRL_VME;
 			IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, ctrl);
@@ -2100,7 +2100,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
 			ctrl = IXGBE_READ_REG(hw, IXGBE_RXDCTL(rxq->reg_idx));
-			if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+			if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 				ctrl |= IXGBE_RXDCTL_VME;
 				on = TRUE;
 			} else {
@@ -2122,17 +2122,17 @@ ixgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	struct ixgbe_rx_queue *rxq;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		rxmode = &dev->data->dev_conf.rxmode;
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 		else
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 	}
 }
@@ -2143,19 +2143,18 @@ ixgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	rxmode = &dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK)
 		ixgbe_vlan_hw_strip_config(dev);
-	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ixgbe_vlan_hw_filter_enable(dev);
 		else
 			ixgbe_vlan_hw_filter_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			ixgbe_vlan_hw_extend_enable(dev);
 		else
 			ixgbe_vlan_hw_extend_disable(dev);
@@ -2194,10 +2193,10 @@ ixgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
 	switch (nb_rx_q) {
 	case 1:
 	case 2:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
 		break;
 	case 4:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
 		break;
 	default:
 		return -EINVAL;
@@ -2221,18 +2220,18 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
 		/* check multi-queue mode */
 		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
 			break;
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
 			/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
 			PMD_INIT_LOG(ERR, "SRIOV active,"
 					" unsupported mq_mode rx %d.",
 					dev_conf->rxmode.mq_mode);
 			return -EINVAL;
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
 			if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
 				if (ixgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
 					PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -2242,12 +2241,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 					return -EINVAL;
 				}
 			break;
-		case ETH_MQ_RX_VMDQ_ONLY:
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_NONE:
 			/* if nothing mq mode configure, use default scheme */
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
 			break;
-		default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+		default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB*/
 			/* SRIOV only works in VMDq enable mode */
 			PMD_INIT_LOG(ERR, "SRIOV is active,"
 					" wrong mq_mode rx %d.",
@@ -2256,12 +2255,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 		}
 
 		switch (dev_conf->txmode.mq_mode) {
-		case ETH_MQ_TX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+		case RTE_ETH_MQ_TX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+			dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
 			break;
-		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
+		default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
+			dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_ONLY;
 			break;
 		}
 
@@ -2276,13 +2275,13 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 			return -EINVAL;
 		}
 	} else {
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
 			PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
 					  " not supported.");
 			return -EINVAL;
 		}
 		/* check configuration for vmdq+dcb mode */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_conf *conf;
 
 			if (nb_rx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2291,15 +2290,15 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools must be %d or %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_tx_conf *conf;
 
 			if (nb_tx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2308,39 +2307,39 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools != %d and"
 						" nb_queue_pools != %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
 
 		/* For DCB mode check our configuration before we go further */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
 			const struct rte_eth_dcb_rx_conf *conf;
 
 			conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
 
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 			const struct rte_eth_dcb_tx_conf *conf;
 
 			conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
@@ -2349,7 +2348,7 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 		 * When DCB/VT is off, maximum number of queues changes,
 		 * except for 82598EB, which remains constant.
 		 */
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
 				hw->mac.type != ixgbe_mac_82598EB) {
 			if (nb_tx_q > IXGBE_NONE_MODE_TX_NB_QUEUES) {
 				PMD_INIT_LOG(ERR,
@@ -2373,8 +2372,8 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multiple queue mode checking */
 	ret  = ixgbe_check_mq_mode(dev);
@@ -2619,15 +2618,15 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 		goto error;
 	}
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = ixgbe_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
 		goto error;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
 		/* Enable vlan filtering for VMDq */
 		ixgbe_vmdq_vlan_hw_filter_enable(dev);
 	}
@@ -2704,17 +2703,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 	case ixgbe_mac_X550:
 	case ixgbe_mac_X550EM_x:
 	case ixgbe_mac_X550EM_a:
-		allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_2_5G |  ETH_LINK_SPEED_5G |
-			ETH_LINK_SPEED_10G;
+		allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_2_5G |  RTE_ETH_LINK_SPEED_5G |
+			RTE_ETH_LINK_SPEED_10G;
 		if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
 				hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
-			allowed_speeds = ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+			allowed_speeds = RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
 		break;
 	default:
-		allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_10G;
+		allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G;
 	}
 
 	link_speeds = &dev->data->dev_conf.link_speeds;
@@ -2728,7 +2727,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 	}
 
 	speed = 0x0;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		switch (hw->mac.type) {
 		case ixgbe_mac_82598EB:
 			speed = IXGBE_LINK_SPEED_82598_AUTONEG;
@@ -2746,17 +2745,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 			speed = IXGBE_LINK_SPEED_82599_AUTONEG;
 		}
 	} else {
-		if (*link_speeds & ETH_LINK_SPEED_10G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
 			speed |= IXGBE_LINK_SPEED_10GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
 			speed |= IXGBE_LINK_SPEED_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_2_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
 			speed |= IXGBE_LINK_SPEED_2_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_1G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed |= IXGBE_LINK_SPEED_1GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_100M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed |= IXGBE_LINK_SPEED_100_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_10M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
 			speed |= IXGBE_LINK_SPEED_10_FULL;
 	}
 
@@ -3832,7 +3831,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		 * When DCB/VT is off, maximum number of queues changes,
 		 * except for 82598EB, which remains constant.
 		 */
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
 				hw->mac.type != ixgbe_mac_82598EB)
 			dev_info->max_tx_queues = IXGBE_NONE_MODE_TX_NB_QUEUES;
 	}
@@ -3842,9 +3841,9 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
 	if (hw->mac.type == ixgbe_mac_82598EB)
-		dev_info->max_vmdq_pools = ETH_16_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	else
-		dev_info->max_vmdq_pools = ETH_64_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->max_mtu =  dev_info->max_rx_pktlen - IXGBE_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 	dev_info->vmdq_queue_num = dev_info->max_rx_queues;
@@ -3883,21 +3882,21 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->reta_size = ixgbe_reta_size_get(hw->mac.type);
 	dev_info->flow_type_rss_offloads = IXGBE_RSS_OFFLOAD_ALL;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
 	if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
 			hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
-		dev_info->speed_capa = ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
 
 	if (hw->mac.type == ixgbe_mac_X540 ||
 	    hw->mac.type == ixgbe_mac_X540_vf ||
 	    hw->mac.type == ixgbe_mac_X550 ||
 	    hw->mac.type == ixgbe_mac_X550_vf) {
-		dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
 	}
 	if (hw->mac.type == ixgbe_mac_X550) {
-		dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
-		dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
 	}
 
 	/* Driver-preferred Rx/Tx parameters */
@@ -3966,9 +3965,9 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
 	if (hw->mac.type == ixgbe_mac_82598EB)
-		dev_info->max_vmdq_pools = ETH_16_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	else
-		dev_info->max_vmdq_pools = ETH_64_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->rx_queue_offload_capa = ixgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (ixgbe_get_rx_port_offloads(dev) |
 				     dev_info->rx_queue_offload_capa);
@@ -4211,11 +4210,11 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	u32 esdp_reg;
 
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 
 	hw->mac.get_link_status = true;
 
@@ -4237,8 +4236,8 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
 		diag = ixgbe_check_link(hw, &link_speed, &link_up, wait);
 
 	if (diag != 0) {
-		link.link_speed = ETH_SPEED_NUM_100M;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
@@ -4274,37 +4273,37 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (link_speed) {
 	default:
 	case IXGBE_LINK_SPEED_UNKNOWN:
-		link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 
 	case IXGBE_LINK_SPEED_10_FULL:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 
 	case IXGBE_LINK_SPEED_100_FULL:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	case IXGBE_LINK_SPEED_1GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case IXGBE_LINK_SPEED_2_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_2_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 
 	case IXGBE_LINK_SPEED_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 
 	case IXGBE_LINK_SPEED_10GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	}
 
@@ -4521,7 +4520,7 @@ ixgbe_dev_link_status_print(struct rte_eth_dev *dev)
 		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -4740,13 +4739,13 @@ ixgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		tx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -5044,8 +5043,8 @@ ixgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i += IXGBE_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IXGBE_4_BIT_MASK);
 		if (!mask)
@@ -5092,8 +5091,8 @@ ixgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i += IXGBE_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IXGBE_4_BIT_MASK);
 		if (!mask)
@@ -5255,22 +5254,22 @@ ixgbevf_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
 		     dev->data->port_id);
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/*
 	 * VF has no ability to enable/disable HW CRC
 	 * Keep the persistent behavior the same as Host PF
 	 */
 #ifndef RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
-		conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #else
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
 		PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #endif
 
@@ -5330,8 +5329,8 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
 	ixgbevf_set_vfta_all(dev, 1);
 
 	/* Set HW strip */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = ixgbevf_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -5568,10 +5567,10 @@ ixgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	int on = 0;
 
 	/* VF function only support hw strip feature, others are not support */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
-			on = !!(rxq->offloads &	DEV_RX_OFFLOAD_VLAN_STRIP);
+			on = !!(rxq->offloads &	RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 			ixgbevf_vlan_strip_queue_set(dev, i, on);
 		}
 	}
@@ -5702,12 +5701,12 @@ ixgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
 		return -ENOTSUP;
 
 	if (on) {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = ~0;
 			IXGBE_WRITE_REG(hw, IXGBE_UTA(i), ~0);
 		}
 	} else {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = 0;
 			IXGBE_WRITE_REG(hw, IXGBE_UTA(i), 0);
 		}
@@ -5721,15 +5720,15 @@ ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
 {
 	uint32_t new_val = orig_val;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
 		new_val |= IXGBE_VMOLR_AUPE;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
 		new_val |= IXGBE_VMOLR_ROMPE;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 		new_val |= IXGBE_VMOLR_ROPE;
-	if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 		new_val |= IXGBE_VMOLR_BAM;
-	if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 		new_val |= IXGBE_VMOLR_MPE;
 
 	return new_val;
@@ -6724,15 +6723,15 @@ ixgbe_start_timecounters(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 
 	switch (link.link_speed) {
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		incval = IXGBE_INCVAL_100;
 		shift = IXGBE_INCVAL_SHIFT_100;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		incval = IXGBE_INCVAL_1GB;
 		shift = IXGBE_INCVAL_SHIFT_1GB;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 	default:
 		incval = IXGBE_INCVAL_10GB;
 		shift = IXGBE_INCVAL_SHIFT_10GB;
@@ -7143,16 +7142,16 @@ ixgbe_reta_size_get(enum ixgbe_mac_type mac_type) {
 	case ixgbe_mac_X550:
 	case ixgbe_mac_X550EM_x:
 	case ixgbe_mac_X550EM_a:
-		return ETH_RSS_RETA_SIZE_512;
+		return RTE_ETH_RSS_RETA_SIZE_512;
 	case ixgbe_mac_X550_vf:
 	case ixgbe_mac_X550EM_x_vf:
 	case ixgbe_mac_X550EM_a_vf:
-		return ETH_RSS_RETA_SIZE_64;
+		return RTE_ETH_RSS_RETA_SIZE_64;
 	case ixgbe_mac_X540_vf:
 	case ixgbe_mac_82599_vf:
 		return 0;
 	default:
-		return ETH_RSS_RETA_SIZE_128;
+		return RTE_ETH_RSS_RETA_SIZE_128;
 	}
 }
 
@@ -7162,10 +7161,10 @@ ixgbe_reta_reg_get(enum ixgbe_mac_type mac_type, uint16_t reta_idx) {
 	case ixgbe_mac_X550:
 	case ixgbe_mac_X550EM_x:
 	case ixgbe_mac_X550EM_a:
-		if (reta_idx < ETH_RSS_RETA_SIZE_128)
+		if (reta_idx < RTE_ETH_RSS_RETA_SIZE_128)
 			return IXGBE_RETA(reta_idx >> 2);
 		else
-			return IXGBE_ERETA((reta_idx - ETH_RSS_RETA_SIZE_128) >> 2);
+			return IXGBE_ERETA((reta_idx - RTE_ETH_RSS_RETA_SIZE_128) >> 2);
 	case ixgbe_mac_X550_vf:
 	case ixgbe_mac_X550EM_x_vf:
 	case ixgbe_mac_X550EM_a_vf:
@@ -7221,7 +7220,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	uint8_t nb_tcs;
 	uint8_t i, j;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
 	else
 		dcb_info->nb_tcs = 1;
@@ -7232,7 +7231,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	if (dcb_config->vt_mode) { /* vt is enabled*/
 		struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
 		if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
 			for (j = 0; j < nb_tcs; j++) {
@@ -7256,9 +7255,9 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	} else { /* vt is disabled*/
 		struct rte_eth_dcb_rx_conf *rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
-		if (dcb_info->nb_tcs == ETH_4_TCS) {
+		if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7271,7 +7270,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 			dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
 			dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
 			dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
-		} else if (dcb_info->nb_tcs == ETH_8_TCS) {
+		} else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7524,7 +7523,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = ixgbe_e_tag_filter_add(dev, l2_tunnel);
 		break;
 	default:
@@ -7556,7 +7555,7 @@ ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 		return ret;
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = ixgbe_e_tag_filter_del(dev, l2_tunnel);
 		break;
 	default:
@@ -7653,12 +7652,12 @@ ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ixgbe_add_vxlan_port(hw, udp_tunnel->udp_port);
 		break;
 
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -EINVAL;
 		break;
@@ -7690,11 +7689,11 @@ ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ixgbe_del_vxlan_port(hw, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -EINVAL;
 		break;
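
The reta hunks above only change the group-size spelling; the indexing math
(idx = i / RTE_ETH_RETA_GROUP_SIZE, shift = i % RTE_ETH_RETA_GROUP_SIZE) is
exactly what applications mirror when programming the table. A sketch that
spreads flows evenly across queues (helper name hypothetical; assumes
reta_size is at most 512 and nb_queues is non-zero):

    #include <string.h>
    #include <rte_ethdev.h>

    static int spread_reta(uint16_t port_id, uint16_t nb_queues)
    {
            struct rte_eth_rss_reta_entry64 reta[RTE_ETH_RSS_RETA_SIZE_512 /
                                                 RTE_ETH_RETA_GROUP_SIZE];
            struct rte_eth_dev_info info;
            uint16_t i;

            if (rte_eth_dev_info_get(port_id, &info) != 0 ||
                info.reta_size > RTE_ETH_RSS_RETA_SIZE_512)
                    return -1;

            memset(reta, 0, sizeof(reta));
            for (i = 0; i < info.reta_size; i++) {
                    uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
                    uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

                    reta[idx].mask |= 1ULL << shift;
                    reta[idx].reta[shift] = i % nb_queues;
            }

            return rte_eth_dev_rss_reta_update(port_id, reta,
                                               info.reta_size);
    }
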
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 950fb2d2450c..876b670f2682 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -114,15 +114,15 @@
 #define IXGBE_FDIR_NVGRE_TUNNEL_TYPE    0x0
 
 #define IXGBE_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define IXGBE_VF_IRQ_ENABLE_MASK        3          /* vf irq enable mask */
 #define IXGBE_VF_MAXMSIVECTOR           1
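
On the API side, the RTE_FC_* to RTE_ETH_FC_* rename seen in
ixgbe_flow_ctrl_get() above is mirrored verbatim by rte_eth_fc_conf users.
A sketch (illustrative helper, assuming a started port):

    #include <rte_ethdev.h>

    static int set_fc_full(uint16_t port_id)
    {
            struct rte_eth_fc_conf fc;

            if (rte_eth_dev_flow_ctrl_get(port_id, &fc) != 0)
                    return -1;

            fc.mode = RTE_ETH_FC_FULL;
            return rte_eth_dev_flow_ctrl_set(port_id, &fc);
    }
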
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 27a49bbce5e7..7894047829a8 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -90,9 +90,9 @@ static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
 static uint32_t ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
 				 uint32_t key);
 static uint32_t atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc);
+		enum rte_eth_fdir_pballoc_type pballoc);
 static uint32_t atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc);
+		enum rte_eth_fdir_pballoc_type pballoc);
 static int fdir_write_perfect_filter_82599(struct ixgbe_hw *hw,
 			union ixgbe_atr_input *input, uint8_t queue,
 			uint32_t fdircmd, uint32_t fdirhash,
@@ -163,20 +163,20 @@ fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl)
  * flexbytes matching field, and drop queue (only for perfect matching mode).
  */
 static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf, uint32_t *fdirctrl)
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf, uint32_t *fdirctrl)
 {
 	*fdirctrl = 0;
 
 	switch (conf->pballoc) {
-	case RTE_FDIR_PBALLOC_64K:
+	case RTE_ETH_FDIR_PBALLOC_64K:
 		/* 8k - 1 signature filters */
 		*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_64K;
 		break;
-	case RTE_FDIR_PBALLOC_128K:
+	case RTE_ETH_FDIR_PBALLOC_128K:
 		/* 16k - 1 signature filters */
 		*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_128K;
 		break;
-	case RTE_FDIR_PBALLOC_256K:
+	case RTE_ETH_FDIR_PBALLOC_256K:
 		/* 32k - 1 signature filters */
 		*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_256K;
 		break;
@@ -807,13 +807,13 @@ ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
 
 static uint32_t
 atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		return ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				PERFECT_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		return ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				PERFECT_BUCKET_128KB_HASH_MASK;
@@ -850,15 +850,15 @@ ixgbe_fdir_check_cmd_complete(struct ixgbe_hw *hw, uint32_t *fdircmd)
  */
 static uint32_t
 atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
 	uint32_t bucket_hash, sig_hash;
 
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		bucket_hash = ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				SIG_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		bucket_hash = ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				SIG_BUCKET_128KB_HASH_MASK;
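
The pballoc enum rename above reaches applications through the legacy
fdir_conf member of rte_eth_conf. A hedged fragment, assuming the deprecated
flow-director configuration path is still in use (helper name hypothetical):

    #include <rte_ethdev.h>

    static void set_fdir_pballoc(struct rte_eth_conf *conf)
    {
            /* 8k - 1 signature filters, per the switch in
             * configure_fdir_flags() above.
             */
            conf->fdir_conf.pballoc = RTE_ETH_FDIR_PBALLOC_64K;
    }
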
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 27322ab9038a..bdc9d4796c02 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -1259,7 +1259,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
 		return -rte_errno;
 	}
 
-	filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+	filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
 	/**
 	 * grp and e_cid_base are bit fields and only use 14 bits.
 	 * e-tag id is taken as little endian by HW.
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
index e45c5501e6bf..944c9f23809e 100644
--- a/drivers/net/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -392,7 +392,7 @@ ixgbe_crypto_create_session(void *device,
 	aead_xform = &conf->crypto_xform->aead;
 
 	if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 			ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -400,7 +400,7 @@ ixgbe_crypto_create_session(void *device,
 			return -ENOTSUP;
 		}
 	} else {
-		if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+		if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 			ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -633,11 +633,11 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	tx_offloads = dev->data->dev_conf.txmode.offloads;
 
 	/* sanity checks */
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
 		return -1;
 	}
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
 		return -1;
 	}
@@ -657,7 +657,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
 	IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
 		reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
 		if (reg != 0) {
@@ -665,7 +665,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 			return -1;
 		}
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 		IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL,
 				IXGBE_SECTXCTRL_STORE_FORWARD);
 		reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
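
The SECURITY offload flags checked in ixgbe_crypto_enable_ipsec() are the
same capability bits an application probes before setting up inline crypto.
A sketch (illustrative helper; returns non-zero when both directions
advertise inline security):

    #include <rte_ethdev.h>

    static int port_supports_inline_security(uint16_t port_id)
    {
            struct rte_eth_dev_info info;

            if (rte_eth_dev_info_get(port_id, &info) != 0)
                    return 0;

            return (info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_SECURITY) &&
                   (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SECURITY);
    }
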
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 295e5a39b245..9f1bd0a62ba4 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -104,15 +104,15 @@ int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
 	memset(uta_info, 0, sizeof(struct ixgbe_uta_info));
 	hw->mac.mc_filter_type = 0;
 
-	if (vf_num >= ETH_32_POOLS) {
+	if (vf_num >= RTE_ETH_32_POOLS) {
 		nb_queue = 2;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
-	} else if (vf_num >= ETH_16_POOLS) {
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+	} else if (vf_num >= RTE_ETH_16_POOLS) {
 		nb_queue = 4;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
 	} else {
 		nb_queue = 8;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
 	}
 
 	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -263,15 +263,15 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
 	gpie |= IXGBE_GPIE_MSIX_MODE | IXGBE_GPIE_PBA_SUPPORT;
 
 	switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		gcr_ext |= IXGBE_GCR_EXT_VT_MODE_64;
 		gpie |= IXGBE_GPIE_VTMODE_64;
 		break;
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		gcr_ext |= IXGBE_GCR_EXT_VT_MODE_32;
 		gpie |= IXGBE_GPIE_VTMODE_32;
 		break;
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		gcr_ext |= IXGBE_GCR_EXT_VT_MODE_16;
 		gpie |= IXGBE_GPIE_VTMODE_16;
 		break;
@@ -674,29 +674,29 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
 	/* Notify VF of number of DCB traffic classes */
 	eth_conf = &dev->data->dev_conf;
 	switch (eth_conf->txmode.mq_mode) {
-	case ETH_MQ_TX_NONE:
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_NONE:
+	case RTE_ETH_MQ_TX_DCB:
 		PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
 			", but its tx mode = %d\n", vf,
 			eth_conf->txmode.mq_mode);
 		return -1;
 
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
 		switch (vmdq_dcb_tx_conf->nb_queue_pools) {
-		case ETH_16_POOLS:
-			num_tcs = ETH_8_TCS;
+		case RTE_ETH_16_POOLS:
+			num_tcs = RTE_ETH_8_TCS;
 			break;
-		case ETH_32_POOLS:
-			num_tcs = ETH_4_TCS;
+		case RTE_ETH_32_POOLS:
+			num_tcs = RTE_ETH_4_TCS;
 			break;
 		default:
 			return -1;
 		}
 		break;
 
-	/* ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
-	case ETH_MQ_TX_VMDQ_ONLY:
+	/* RTE_ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
+	case RTE_ETH_MQ_TX_VMDQ_ONLY:
 		hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 		vmvir = IXGBE_READ_REG(hw, IXGBE_VMVIR(vf));
 		vlana = vmvir & IXGBE_VMVIR_VLANA_MASK;
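
The pool-count constants mapped to num_tcs in ixgbe_get_vf_queues()
correspond to what a PF application selects in tx_adv_conf. A fragment with
hypothetical values (16 pools, hence 8 TCs per the switch above):

    #include <rte_ethdev.h>

    static void request_vmdq_dcb_tx(struct rte_eth_conf *conf)
    {
            conf->txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
            conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools =
                    RTE_ETH_16_POOLS;
    }
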
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index a51450fe5b82..aa3a406c204d 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2592,26 +2592,26 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM   |
-		DEV_TX_OFFLOAD_SCTP_CKSUM  |
-		DEV_TX_OFFLOAD_TCP_TSO     |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO     |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	if (hw->mac.type == ixgbe_mac_82599EB ||
 	    hw->mac.type == ixgbe_mac_X540)
-		tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 
 	if (hw->mac.type == ixgbe_mac_X550 ||
 	    hw->mac.type == ixgbe_mac_X550EM_x ||
 	    hw->mac.type == ixgbe_mac_X550EM_a)
-		tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
 #endif
 	return tx_offload_capa;
 }
@@ -2780,7 +2780,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_deferred_start = tx_conf->tx_deferred_start;
 #ifdef RTE_LIB_SECURITY
 	txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SECURITY);
+			RTE_ETH_TX_OFFLOAD_SECURITY);
 #endif
 
 	/*
@@ -3021,7 +3021,7 @@ ixgbe_get_rx_queue_offloads(struct rte_eth_dev *dev)
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (hw->mac.type != ixgbe_mac_82598EB)
-		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	return offloads;
 }
@@ -3032,19 +3032,19 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 	uint64_t offloads;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	offloads = DEV_RX_OFFLOAD_IPV4_CKSUM  |
-		   DEV_RX_OFFLOAD_UDP_CKSUM   |
-		   DEV_RX_OFFLOAD_TCP_CKSUM   |
-		   DEV_RX_OFFLOAD_KEEP_CRC    |
-		   DEV_RX_OFFLOAD_VLAN_FILTER |
-		   DEV_RX_OFFLOAD_SCATTER |
-		   DEV_RX_OFFLOAD_RSS_HASH;
+	offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+		   RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+		   RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		   RTE_ETH_RX_OFFLOAD_SCATTER |
+		   RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (hw->mac.type == ixgbe_mac_82598EB)
-		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	if (ixgbe_is_vf(dev) == 0)
-		offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
 	/*
 	 * RSC is only supported by 82599 and x540 PF devices in a non-SR-IOV
@@ -3054,20 +3054,20 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 	     hw->mac.type == ixgbe_mac_X540 ||
 	     hw->mac.type == ixgbe_mac_X550) &&
 	    !RTE_ETH_DEV_SRIOV(dev).active)
-		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+		offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 
 	if (hw->mac.type == ixgbe_mac_82599EB ||
 	    hw->mac.type == ixgbe_mac_X540)
-		offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
 
 	if (hw->mac.type == ixgbe_mac_X550 ||
 	    hw->mac.type == ixgbe_mac_X550EM_x ||
 	    hw->mac.type == ixgbe_mac_X550EM_a)
-		offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		offloads |= DEV_RX_OFFLOAD_SECURITY;
+		offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
 #endif
 
 	return offloads;
@@ -3122,7 +3122,7 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
 		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -3507,23 +3507,23 @@ ixgbe_hw_rss_hash_set(struct ixgbe_hw *hw, struct rte_eth_rss_conf *rss_conf)
 	/* Set configured hashing protocols in MRQC register */
 	rss_hf = rss_conf->rss_hf;
 	mrqc = IXGBE_MRQC_RSSEN; /* Enable RSS */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_TCP;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6;
-	if (rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_EX)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_TCP;
-	if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_UDP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_UDP;
-	if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP;
 	IXGBE_WRITE_REG(hw, mrqc_reg, mrqc);
 }
@@ -3605,23 +3605,23 @@ ixgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	}
 	rss_hf = 0;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP)
-		rss_hf |= ETH_RSS_IPV6_UDP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
 	rss_conf->rss_hf = rss_hf;
 	return 0;
 }
@@ -3697,12 +3697,12 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
 	num_pools = cfg->nb_queue_pools;
 	/* Check we have a valid number of pools */
-	if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+	if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
 		ixgbe_rss_disable(dev);
 		return;
 	}
 	/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
-	nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+	nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
 
 	/*
 	 * RXPBSIZE
@@ -3727,7 +3727,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
 	}
 	/* zero alloc all unused TCs */
-	for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		uint32_t rxpbsize = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(i));
 
 		rxpbsize &= (~(0x3FF << IXGBE_RXPBSIZE_SHIFT));
@@ -3736,7 +3736,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	}
 
 	/* MRQC: enable vmdq and dcb */
-	mrqc = (num_pools == ETH_16_POOLS) ?
+	mrqc = (num_pools == RTE_ETH_16_POOLS) ?
 		IXGBE_MRQC_VMDQRT8TCEN : IXGBE_MRQC_VMDQRT4TCEN;
 	IXGBE_WRITE_REG(hw, IXGBE_MRQC, mrqc);
 
@@ -3752,7 +3752,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 
 	/* RTRUP2TC: mapping user priorities to traffic classes (TCs) */
 	queue_mapping = 0;
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 		/*
 		 * mapping is done with 3 bits per priority,
 		 * so shift by i*3 each time
@@ -3776,7 +3776,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 
 	/* VFRE: pool enabling for receive - 16 or 32 */
 	IXGBE_WRITE_REG(hw, IXGBE_VFRE(0),
-			num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+			num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	/*
 	 * MPSAR - allow pools to read specific mac addresses
@@ -3858,7 +3858,7 @@ ixgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
 	if (hw->mac.type != ixgbe_mac_82598EB)
 		/*PF VF Transmit Enable*/
 		IXGBE_WRITE_REG(hw, IXGBE_VFTE(0),
-			vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+			vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	/*Configure general DCB TX parameters*/
 	ixgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3874,12 +3874,12 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
-	if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3889,7 +3889,7 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3907,12 +3907,12 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
-	if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3922,7 +3922,7 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3949,7 +3949,7 @@ ixgbe_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3976,7 +3976,7 @@ ixgbe_dcb_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -4145,7 +4145,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
 
 	switch (dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_VMDQ_DCB:
+	case RTE_ETH_MQ_RX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		if (hw->mac.type != ixgbe_mac_82598EB) {
 			config_dcb_rx = DCB_RX_CONFIG;
@@ -4158,8 +4158,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			ixgbe_vmdq_dcb_configure(dev);
 		}
 		break;
-	case ETH_MQ_RX_DCB:
-	case ETH_MQ_RX_DCB_RSS:
+	case RTE_ETH_MQ_RX_DCB:
+	case RTE_ETH_MQ_RX_DCB_RSS:
 		dcb_config->vt_mode = false;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -4172,7 +4172,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		break;
 	}
 	switch (dev->data->dev_conf.txmode.mq_mode) {
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/* get DCB and VT TX configuration parameters
@@ -4183,7 +4183,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		ixgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
 		break;
 
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_DCB:
 		dcb_config->vt_mode = false;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/*get DCB TX configuration parameters from rte_eth_conf*/
@@ -4199,15 +4199,15 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	nb_tcs = dcb_config->num_tcs.pfc_tcs;
 	/* Unpack map */
 	ixgbe_dcb_unpack_map_cee(dcb_config, IXGBE_DCB_RX_CONFIG, map);
-	if (nb_tcs == ETH_4_TCS) {
+	if (nb_tcs == RTE_ETH_4_TCS) {
 		/* Avoid un-configured priority mapping to TC0 */
 		uint8_t j = 4;
 		uint8_t mask = 0xFF;
 
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
 			mask = (uint8_t)(mask & (~(1 << map[i])));
 		for (i = 0; mask && (i < IXGBE_DCB_MAX_TRAFFIC_CLASS); i++) {
-			if ((mask & 0x1) && (j < ETH_DCB_NUM_USER_PRIORITIES))
+			if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
 				map[j++] = i;
 			mask >>= 1;
 		}
@@ -4257,9 +4257,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
 		}
 		/* zero alloc all unused TCs */
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
-		}
 	}
 	if (config_dcb_tx) {
 		/* Only support an equally distributed
@@ -4273,7 +4272,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), txpbthresh);
 		}
 		/* Clear unused TCs, if any, to zero buffer size*/
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), 0);
 			IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), 0);
 		}
@@ -4309,7 +4308,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	ixgbe_dcb_config_tc_stats_82599(hw, dcb_config);
 
 	/* Check if the PFC is supported */
-	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
 		for (i = 0; i < nb_tcs; i++) {
 			/*
@@ -4323,7 +4322,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			tc->pfc = ixgbe_dcb_pfc_enabled;
 		}
 		ixgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
-		if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+		if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
 			pfc_en &= 0x0F;
 		ret = ixgbe_dcb_config_pfc(hw, pfc_en, map);
 	}
@@ -4344,12 +4343,12 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	/* check support mq_mode for DCB */
-	if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
-	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
-	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS))
+	if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
 		return;
 
-	if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+	if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
 		return;
 
 	/** Configure DCB hardware **/
@@ -4405,7 +4404,7 @@ ixgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 
 	/* VFRE: pool enabling for receive - 64 */
 	IXGBE_WRITE_REG(hw, IXGBE_VFRE(0), UINT32_MAX);
-	if (num_pools == ETH_64_POOLS)
+	if (num_pools == RTE_ETH_64_POOLS)
 		IXGBE_WRITE_REG(hw, IXGBE_VFRE(1), UINT32_MAX);
 
 	/*
@@ -4526,11 +4525,11 @@ ixgbe_config_vf_rss(struct rte_eth_dev *dev)
 	mrqc = IXGBE_READ_REG(hw, IXGBE_MRQC);
 	mrqc &= ~IXGBE_MRQC_MRQE_MASK;
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		mrqc |= IXGBE_MRQC_VMDQRSS64EN;
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		mrqc |= IXGBE_MRQC_VMDQRSS32EN;
 		break;
 
@@ -4551,17 +4550,17 @@ ixgbe_config_vf_default(struct rte_eth_dev *dev)
 		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		IXGBE_WRITE_REG(hw, IXGBE_MRQC,
 			IXGBE_MRQC_VMDQEN);
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		IXGBE_WRITE_REG(hw, IXGBE_MRQC,
 			IXGBE_MRQC_VMDQRT4TCEN);
 		break;
 
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		IXGBE_WRITE_REG(hw, IXGBE_MRQC,
 			IXGBE_MRQC_VMDQRT8TCEN);
 		break;
@@ -4588,21 +4587,21 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * any DCB/RSS w/o VMDq multi-queue setting
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_DCB_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			ixgbe_rss_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
 			ixgbe_vmdq_dcb_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
 			ixgbe_vmdq_rx_hw_configure(dev);
 			break;
 
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_NONE:
 		default:
 			/* if mq_mode is none, disable rss mode. */
 			ixgbe_rss_disable(dev);
@@ -4613,18 +4612,18 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * Support RSS together with SRIOV.
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			ixgbe_config_vf_rss(dev);
 			break;
-		case ETH_MQ_RX_VMDQ_DCB:
-		case ETH_MQ_RX_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_DCB:
 		/* In SRIOV, the configuration is the same as VMDq case */
 			ixgbe_vmdq_dcb_configure(dev);
 			break;
 		/* DCB/RSS together with SRIOV is not supported */
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
-		case ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
 			PMD_INIT_LOG(ERR,
 				"Could not support DCB/RSS with VMDq & SRIOV");
 			return -1;
@@ -4658,7 +4657,7 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV inactive scheme
 		 * any DCB w/o VMDq multi-queue setting
 		 */
-		if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+		if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
 			ixgbe_vmdq_tx_hw_configure(hw);
 		else {
 			mtqc = IXGBE_MTQC_64Q_1PB;
@@ -4671,13 +4670,13 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV active scheme
 		 * FIXME if support DCB together with VMDq & SRIOV
 		 */
-		case ETH_64_POOLS:
+		case RTE_ETH_64_POOLS:
 			mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_64VF;
 			break;
-		case ETH_32_POOLS:
+		case RTE_ETH_32_POOLS:
 			mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_32VF;
 			break;
-		case ETH_16_POOLS:
+		case RTE_ETH_16_POOLS:
 			mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_RT_ENA |
 				IXGBE_MTQC_8TC_8TQ;
 			break;
@@ -4885,7 +4884,7 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
 		rxq->rx_using_sse = rx_using_sse;
 #ifdef RTE_LIB_SECURITY
 		rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_SECURITY);
+				RTE_ETH_RX_OFFLOAD_SECURITY);
 #endif
 	}
 }
@@ -4913,10 +4912,10 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* Sanity check */
 	dev->dev_ops->dev_infos_get(dev, &dev_info);
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		rsc_capable = true;
 
-	if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
 				   "support it");
 		return -EINVAL;
@@ -4924,8 +4923,8 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* RSC global configuration (chapter 4.6.7.2.1 of 82599 Spec) */
 
-	if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
-	     (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+	     (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		/*
 		 * According to chapter 4.6.7.2.1 of the Spec Rev.
 		 * 3.0 RSC configuration requires HW CRC stripping being
@@ -4939,7 +4938,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* RFCTL configuration  */
 	rfctl = IXGBE_READ_REG(hw, IXGBE_RFCTL);
-	if ((rsc_capable) && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if ((rsc_capable) && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		rfctl &= ~IXGBE_RFCTL_RSC_DIS;
 	else
 		rfctl |= IXGBE_RFCTL_RSC_DIS;
@@ -4948,7 +4947,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 	IXGBE_WRITE_REG(hw, IXGBE_RFCTL, rfctl);
 
 	/* If LRO hasn't been requested - we are done here. */
-	if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		return 0;
 
 	/* Set RDRXCTL.RSCACKC bit */
@@ -5070,7 +5069,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Configure CRC stripping, if any.
 	 */
 	hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		hlreg0 &= ~IXGBE_HLREG0_RXCRCSTRP;
 	else
 		hlreg0 |= IXGBE_HLREG0_RXCRCSTRP;
@@ -5107,7 +5106,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first.
 	 */
-	rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
@@ -5116,7 +5115,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 * call to configure.
 		 */
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -5158,11 +5157,11 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 		/* It adds dual VLAN length for supporting dual VLAN */
 		if (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
 			dev->data->scattered_rx = 1;
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		dev->data->scattered_rx = 1;
 
 	/*
@@ -5177,7 +5176,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 */
 	rxcsum = IXGBE_READ_REG(hw, IXGBE_RXCSUM);
 	rxcsum |= IXGBE_RXCSUM_PCSD;
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= IXGBE_RXCSUM_IPPCSE;
 	else
 		rxcsum &= ~IXGBE_RXCSUM_IPPCSE;
@@ -5187,7 +5186,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	if (hw->mac.type == ixgbe_mac_82599EB ||
 	    hw->mac.type == ixgbe_mac_X540) {
 		rdrxctl = IXGBE_READ_REG(hw, IXGBE_RDRXCTL);
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rdrxctl &= ~IXGBE_RDRXCTL_CRCSTRIP;
 		else
 			rdrxctl |= IXGBE_RDRXCTL_CRCSTRIP;
@@ -5393,9 +5392,9 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
 
 #ifdef RTE_LIB_SECURITY
 	if ((dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_SECURITY) ||
+			RTE_ETH_RX_OFFLOAD_SECURITY) ||
 		(dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SECURITY)) {
+			RTE_ETH_TX_OFFLOAD_SECURITY)) {
 		ret = ixgbe_crypto_enable_ipsec(dev);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR,
@@ -5681,7 +5680,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first.
 	 */
-	rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
@@ -5730,7 +5729,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 		buf_size = (uint16_t) ((srrctl & IXGBE_SRRCTL_BSIZEPKT_MASK) <<
 				       IXGBE_SRRCTL_BSIZEPKT_SHIFT);
 
-		if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
 		    /* It adds dual VLAN length for supporting dual VLAN */
 		    (frame_size + 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
 			if (!dev->data->scattered_rx)
@@ -5738,8 +5737,8 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 			dev->data->scattered_rx = 1;
 		}
 
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	/* Set RQPL for VF RSS according to max Rx queue */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index a1764f2b08af..668a5b9814f6 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -133,7 +133,7 @@ struct ixgbe_rx_queue {
 	uint8_t             rx_udp_csum_zero_err;
 	/** flags to set in mbuf when a vlan is detected. */
 	uint64_t            vlan_flags;
-	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
 	/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
 	struct rte_mbuf fake_mbuf;
 	/** hold packets to return to application */
@@ -227,7 +227,7 @@ struct ixgbe_tx_queue {
 	uint8_t             pthresh;       /**< Prefetch threshold register. */
 	uint8_t             hthresh;       /**< Host threshold register. */
 	uint8_t             wthresh;       /**< Write-back threshold reg. */
-	uint64_t offloads; /**< Tx offload flags of DEV_TX_OFFLOAD_* */
+	uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 	uint32_t            ctx_curr;      /**< Hardware context states. */
 	/** Hardware context0 history. */
 	struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 005e60668a8b..cd34d4098785 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -277,7 +277,7 @@ static inline int
 ixgbe_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
 {
 #ifndef RTE_LIBRTE_IEEE1588
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 
 	/* no fdir support */
 	if (fconf->mode != RTE_FDIR_MODE_NONE)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index ae03ea6e9db3..ac8976062fa7 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -119,14 +119,14 @@ ixgbe_tc_nb_get(struct rte_eth_dev *dev)
 	uint8_t nb_tcs = 0;
 
 	eth_conf = &dev->data->dev_conf;
-	if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+	if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 		nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
-	} else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	} else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
-		    ETH_32_POOLS)
-			nb_tcs = ETH_4_TCS;
+		    RTE_ETH_32_POOLS)
+			nb_tcs = RTE_ETH_4_TCS;
 		else
-			nb_tcs = ETH_8_TCS;
+			nb_tcs = RTE_ETH_8_TCS;
 	} else {
 		nb_tcs = 1;
 	}
@@ -375,10 +375,10 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 	if (vf_num) {
 		/* no DCB */
 		if (nb_tcs == 1) {
-			if (vf_num >= ETH_32_POOLS) {
+			if (vf_num >= RTE_ETH_32_POOLS) {
 				*nb = 2;
 				*base = vf_num * 2;
-			} else if (vf_num >= ETH_16_POOLS) {
+			} else if (vf_num >= RTE_ETH_16_POOLS) {
 				*nb = 4;
 				*base = vf_num * 4;
 			} else {
@@ -392,7 +392,7 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 		}
 	} else {
 		/* VT off */
-		if (nb_tcs == ETH_8_TCS) {
+		if (nb_tcs == RTE_ETH_8_TCS) {
 			switch (tc_node_no) {
 			case 0:
 				*base = 0;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index 9fa75984fb31..bd528ff346c7 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -58,20 +58,20 @@ ixgbe_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	/**< Maximum number of MAC addresses. */
 
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |	DEV_RX_OFFLOAD_UDP_CKSUM  |
-		DEV_RX_OFFLOAD_TCP_CKSUM;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	RTE_ETH_RX_OFFLOAD_UDP_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 	/**< Device RX offload capabilities. */
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	/**< Device TX offload capabilities. */
 
 	dev_info->speed_capa =
 		representor->pf_ethdev->data->dev_link.link_speed;
-	/**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+	/**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
 
 	dev_info->switch_info.name =
 		representor->pf_ethdev->device->name;
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.c b/drivers/net/ixgbe/rte_pmd_ixgbe.c
index cf089cd9aee5..9729f8575f53 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.c
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.c
@@ -303,10 +303,10 @@ rte_pmd_ixgbe_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
 	 */
 	if (hw->mac.type == ixgbe_mac_82598EB)
 		queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
-				  ETH_16_POOLS;
+				  RTE_ETH_16_POOLS;
 	else
 		queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
-				  ETH_64_POOLS;
+				  RTE_ETH_64_POOLS;
 
 	for (q = 0; q < queues_per_pool; q++)
 		(*dev->dev_ops->vlan_strip_queue_set)(dev,
@@ -736,14 +736,14 @@ rte_pmd_ixgbe_set_tc_bw_alloc(uint16_t port,
 	bw_conf = IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
 	eth_conf = &dev->data->dev_conf;
 
-	if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+	if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 		nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
-	} else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	} else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
-		    ETH_32_POOLS)
-			nb_tcs = ETH_4_TCS;
+		    RTE_ETH_32_POOLS)
+			nb_tcs = RTE_ETH_4_TCS;
 		else
-			nb_tcs = ETH_8_TCS;
+			nb_tcs = RTE_ETH_8_TCS;
 	} else {
 		nb_tcs = 1;
 	}
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.h b/drivers/net/ixgbe/rte_pmd_ixgbe.h
index 90fc8160b1f8..eef6f6661c74 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.h
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.h
@@ -285,8 +285,8 @@ int rte_pmd_ixgbe_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an,
 * @param rx_mask
 *    The RX mode mask, which is one or more of accepting Untagged Packets,
 *    packets that match the PFUTA table, Broadcast and Multicast Promiscuous.
-*    ETH_VMDQ_ACCEPT_UNTAG,ETH_VMDQ_ACCEPT_HASH_UC,
-*    ETH_VMDQ_ACCEPT_BROADCAST and ETH_VMDQ_ACCEPT_MULTICAST will be used
+*    RTE_ETH_VMDQ_ACCEPT_UNTAG, RTE_ETH_VMDQ_ACCEPT_HASH_UC,
+*    RTE_ETH_VMDQ_ACCEPT_BROADCAST and RTE_ETH_VMDQ_ACCEPT_MULTICAST will be used
 *    in rx_mode.
 * @param on
 *    1 - Enable a VF RX mode.
diff --git a/drivers/net/kni/rte_eth_kni.c b/drivers/net/kni/rte_eth_kni.c
index cb9f7c8e8200..c428caf44189 100644
--- a/drivers/net/kni/rte_eth_kni.c
+++ b/drivers/net/kni/rte_eth_kni.c
@@ -61,10 +61,10 @@ struct pmd_internals {
 };
 
 static const struct rte_eth_link pmd_link = {
-		.link_speed = ETH_SPEED_NUM_10G,
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_status = ETH_LINK_DOWN,
-		.link_autoneg = ETH_LINK_FIXED,
+		.link_speed = RTE_ETH_SPEED_NUM_10G,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_status = RTE_ETH_LINK_DOWN,
+		.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 static int is_kni_initialized;
 
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 0fc3f0ab66a9..90ffe31b9fda 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -384,15 +384,15 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
 	case PCI_SUBSYS_DEV_ID_CN2360_210SVPN3:
 	case PCI_SUBSYS_DEV_ID_CN2350_210SVPT:
 	case PCI_SUBSYS_DEV_ID_CN2360_210SVPT:
-		devinfo->speed_capa = ETH_LINK_SPEED_10G;
+		devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
 		break;
 	/* CN23xx 25G cards */
 	case PCI_SUBSYS_DEV_ID_CN2350_225:
 	case PCI_SUBSYS_DEV_ID_CN2360_225:
-		devinfo->speed_capa = ETH_LINK_SPEED_25G;
+		devinfo->speed_capa = RTE_ETH_LINK_SPEED_25G;
 		break;
 	default:
-		devinfo->speed_capa = ETH_LINK_SPEED_10G;
+		devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
 		lio_dev_err(lio_dev,
 			    "Unknown CN23XX subsystem device id. Setting 10G as default link speed.\n");
 		return -EINVAL;
@@ -406,27 +406,27 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
 
 	devinfo->max_mac_addrs = 1;
 
-	devinfo->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM		|
-				    DEV_RX_OFFLOAD_UDP_CKSUM		|
-				    DEV_RX_OFFLOAD_TCP_CKSUM		|
-				    DEV_RX_OFFLOAD_VLAN_STRIP		|
-				    DEV_RX_OFFLOAD_RSS_HASH);
-	devinfo->tx_offload_capa = (DEV_TX_OFFLOAD_IPV4_CKSUM		|
-				    DEV_TX_OFFLOAD_UDP_CKSUM		|
-				    DEV_TX_OFFLOAD_TCP_CKSUM		|
-				    DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM);
+	devinfo->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM		|
+				    RTE_ETH_RX_OFFLOAD_UDP_CKSUM		|
+				    RTE_ETH_RX_OFFLOAD_TCP_CKSUM		|
+				    RTE_ETH_RX_OFFLOAD_VLAN_STRIP		|
+				    RTE_ETH_RX_OFFLOAD_RSS_HASH);
+	devinfo->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
+				    RTE_ETH_TX_OFFLOAD_UDP_CKSUM		|
+				    RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
+				    RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM);
 
 	devinfo->rx_desc_lim = lio_rx_desc_lim;
 	devinfo->tx_desc_lim = lio_tx_desc_lim;
 
 	devinfo->reta_size = LIO_RSS_MAX_TABLE_SZ;
 	devinfo->hash_key_size = LIO_RSS_MAX_KEY_SZ;
-	devinfo->flow_type_rss_offloads = (ETH_RSS_IPV4			|
-					   ETH_RSS_NONFRAG_IPV4_TCP	|
-					   ETH_RSS_IPV6			|
-					   ETH_RSS_NONFRAG_IPV6_TCP	|
-					   ETH_RSS_IPV6_EX		|
-					   ETH_RSS_IPV6_TCP_EX);
+	devinfo->flow_type_rss_offloads = (RTE_ETH_RSS_IPV4			|
+					   RTE_ETH_RSS_NONFRAG_IPV4_TCP	|
+					   RTE_ETH_RSS_IPV6			|
+					   RTE_ETH_RSS_NONFRAG_IPV6_TCP	|
+					   RTE_ETH_RSS_IPV6_EX		|
+					   RTE_ETH_RSS_IPV6_TCP_EX);
 	return 0;
 }
 
@@ -519,10 +519,10 @@ lio_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
 	rss_param->param.flags &= ~LIO_RSS_PARAM_ITABLE_UNCHANGED;
 	rss_param->param.itablesize = LIO_RSS_MAX_TABLE_SZ;
 
-	for (i = 0; i < (reta_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask) & ((uint64_t)1 << j)) {
-				index = (i * RTE_RETA_GROUP_SIZE) + j;
+				index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
 				rss_state->itable[index] = reta_conf[i].reta[j];
 			}
 		}
@@ -562,12 +562,12 @@ lio_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	num = reta_size / RTE_RETA_GROUP_SIZE;
+	num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < num; i++) {
 		memcpy(reta_conf->reta,
-		       &rss_state->itable[i * RTE_RETA_GROUP_SIZE],
-		       RTE_RETA_GROUP_SIZE);
+		       &rss_state->itable[i * RTE_ETH_RETA_GROUP_SIZE],
+		       RTE_ETH_RETA_GROUP_SIZE);
 		reta_conf++;
 	}
 
@@ -595,17 +595,17 @@ lio_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
 		memcpy(hash_key, rss_state->hash_key, rss_state->hash_key_size);
 
 	if (rss_state->ip)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (rss_state->tcp_hash)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (rss_state->ipv6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (rss_state->ipv6_tcp_hash)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (rss_state->ipv6_ex)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (rss_state->ipv6_tcp_ex_hash)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 
 	rss_conf->rss_hf = rss_hf;
 
@@ -673,42 +673,42 @@ lio_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
 		if (rss_state->hash_disable)
 			return -EINVAL;
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
 			hashinfo |= LIO_RSS_HASH_IPV4;
 			rss_state->ip = 1;
 		} else {
 			rss_state->ip = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 			hashinfo |= LIO_RSS_HASH_TCP_IPV4;
 			rss_state->tcp_hash = 1;
 		} else {
 			rss_state->tcp_hash = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV6) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6) {
 			hashinfo |= LIO_RSS_HASH_IPV6;
 			rss_state->ipv6 = 1;
 		} else {
 			rss_state->ipv6 = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 			hashinfo |= LIO_RSS_HASH_TCP_IPV6;
 			rss_state->ipv6_tcp_hash = 1;
 		} else {
 			rss_state->ipv6_tcp_hash = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV6_EX) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX) {
 			hashinfo |= LIO_RSS_HASH_IPV6_EX;
 			rss_state->ipv6_ex = 1;
 		} else {
 			rss_state->ipv6_ex = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) {
 			hashinfo |= LIO_RSS_HASH_TCP_IPV6_EX;
 			rss_state->ipv6_tcp_ex_hash = 1;
 		} else {
@@ -757,7 +757,7 @@ lio_dev_udp_tunnel_add(struct rte_eth_dev *eth_dev,
 	if (udp_tnl == NULL)
 		return -EINVAL;
 
-	if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+	if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
 		lio_dev_err(lio_dev, "Unsupported tunnel type\n");
 		return -1;
 	}
@@ -814,7 +814,7 @@ lio_dev_udp_tunnel_del(struct rte_eth_dev *eth_dev,
 	if (udp_tnl == NULL)
 		return -EINVAL;
 
-	if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+	if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
 		lio_dev_err(lio_dev, "Unsupported tunnel type\n");
 		return -1;
 	}
@@ -912,10 +912,10 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
 
 	/* Initialize */
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	/* Return what we found */
 	if (lio_dev->linfo.link.s.link_up == 0) {
@@ -923,18 +923,18 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
 		return rte_eth_linkstatus_set(eth_dev, &link);
 	}
 
-	link.link_status = ETH_LINK_UP; /* Interface is up */
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP; /* Interface is up */
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	switch (lio_dev->linfo.link.s.speed) {
 	case LIO_LINK_SPEED_10000:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case LIO_LINK_SPEED_25000:
-		link.link_speed = ETH_SPEED_NUM_25G;
+		link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	default:
-		link.link_speed = ETH_SPEED_NUM_NONE;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	}
 
 	return rte_eth_linkstatus_set(eth_dev, &link);
@@ -1086,8 +1086,8 @@ lio_dev_rss_configure(struct rte_eth_dev *eth_dev)
 
 		q_idx = (uint8_t)((eth_dev->data->nb_rx_queues > 1) ?
 				  i % eth_dev->data->nb_rx_queues : 0);
-		conf_idx = i / RTE_RETA_GROUP_SIZE;
-		reta_idx = i % RTE_RETA_GROUP_SIZE;
+		conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
 		reta_conf[conf_idx].reta[reta_idx] = q_idx;
 		reta_conf[conf_idx].mask |= ((uint64_t)1 << reta_idx);
 	}
@@ -1103,10 +1103,10 @@ lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rss_conf rss_conf;
 
 	switch (eth_dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		lio_dev_rss_configure(eth_dev);
 		break;
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 	/* if mq_mode is none, disable rss mode. */
 	default:
 		memset(&rss_conf, 0, sizeof(rss_conf));
@@ -1484,7 +1484,7 @@ lio_dev_set_link_up(struct rte_eth_dev *eth_dev)
 	}
 
 	lio_dev->linfo.link.s.link_up = 1;
-	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -1505,11 +1505,11 @@ lio_dev_set_link_down(struct rte_eth_dev *eth_dev)
 	}
 
 	lio_dev->linfo.link.s.link_up = 0;
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	if (lio_send_rx_ctrl_cmd(eth_dev, 0)) {
 		lio_dev->linfo.link.s.link_up = 1;
-		eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+		eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 		lio_dev_err(lio_dev, "Unable to set Link Down\n");
 		return -1;
 	}
@@ -1721,9 +1721,9 @@ lio_dev_configure(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Inform firmware about change in number of queues to use.
 	 * Disable IO queues and reset registers for re-configuration.
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index 364e818d65c1..8533e39f6957 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -525,7 +525,7 @@ memif_disconnect(struct rte_eth_dev *dev)
 	int i;
 	int ret;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
 	pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTED;
 
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 980150293e86..9deb7a5f1360 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -55,10 +55,10 @@ static const char * const valid_arguments[] = {
 };
 
 static const struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_AUTONEG
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_AUTONEG
 };
 
 #define MEMIF_MP_SEND_REGION		"memif_mp_send_region"
@@ -199,7 +199,7 @@ memif_dev_info(struct rte_eth_dev *dev __rte_unused, struct rte_eth_dev_info *de
 	dev_info->max_rx_queues = ETH_MEMIF_MAX_NUM_Q_PAIRS;
 	dev_info->max_tx_queues = ETH_MEMIF_MAX_NUM_Q_PAIRS;
 	dev_info->min_rx_bufsize = 0;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -1219,7 +1219,7 @@ memif_connect(struct rte_eth_dev *dev)
 
 		pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
 		pmd->flags |= ETH_MEMIF_FLAG_CONNECTED;
-		dev->data->dev_link.link_status = ETH_LINK_UP;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	}
 	MIF_LOG(INFO, "Connected.");
 	return 0;
@@ -1381,10 +1381,10 @@ memif_link_update(struct rte_eth_dev *dev,
 
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		proc_private = dev->process_private;
-		if (dev->data->dev_link.link_status == ETH_LINK_UP &&
+		if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP &&
 				proc_private->regions_num == 0) {
 			memif_mp_request_regions(dev);
-		} else if (dev->data->dev_link.link_status == ETH_LINK_DOWN &&
+		} else if (dev->data->dev_link.link_status == RTE_ETH_LINK_DOWN &&
 				proc_private->regions_num > 0) {
 			memif_free_regions(dev);
 		}
diff --git a/drivers/net/mlx4/mlx4_ethdev.c b/drivers/net/mlx4/mlx4_ethdev.c
index 783ff94dce8d..d606ec8ca76d 100644
--- a/drivers/net/mlx4/mlx4_ethdev.c
+++ b/drivers/net/mlx4/mlx4_ethdev.c
@@ -657,11 +657,11 @@ mlx4_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 	info->if_index = priv->if_index;
 	info->hash_key_size = MLX4_RSS_HASH_KEY_SIZE;
 	info->speed_capa =
-			ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_10G |
-			ETH_LINK_SPEED_20G |
-			ETH_LINK_SPEED_40G |
-			ETH_LINK_SPEED_56G;
+			RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_20G |
+			RTE_ETH_LINK_SPEED_40G |
+			RTE_ETH_LINK_SPEED_56G;
 	info->flow_type_rss_offloads = mlx4_conv_rss_types(priv, 0, 1);
 
 	return 0;
@@ -821,13 +821,13 @@ mlx4_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 	}
 	link_speed = ethtool_cmd_speed(&edata);
 	if (link_speed == -1)
-		dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	else
 		dev_link.link_speed = link_speed;
 	dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
-				ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+				RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
 	dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				  ETH_LINK_SPEED_FIXED);
+				  RTE_ETH_LINK_SPEED_FIXED);
 	dev->data->dev_link = dev_link;
 	return 0;
 }
@@ -863,13 +863,13 @@ mlx4_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 	fc_conf->autoneg = ethpause.autoneg;
 	if (ethpause.rx_pause && ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (ethpause.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	ret = 0;
 out:
 	MLX4_ASSERT(ret >= 0);
@@ -899,13 +899,13 @@ mlx4_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	ifr.ifr_data = (void *)&ethpause;
 	ethpause.autoneg = fc_conf->autoneg;
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_RX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
 		ethpause.rx_pause = 1;
 	else
 		ethpause.rx_pause = 0;
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_TX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
 		ethpause.tx_pause = 1;
 	else
 		ethpause.tx_pause = 0;
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 71ea91b3fb82..2e1b6c87e983 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -109,21 +109,21 @@ mlx4_conv_rss_types(struct mlx4_priv *priv, uint64_t types, int verbs_to_dpdk)
 	};
 	static const uint64_t dpdk[] = {
 		[INNER] = 0,
-		[IPV4] = ETH_RSS_IPV4,
-		[IPV4_1] = ETH_RSS_FRAG_IPV4,
-		[IPV4_2] = ETH_RSS_NONFRAG_IPV4_OTHER,
-		[IPV6] = ETH_RSS_IPV6,
-		[IPV6_1] = ETH_RSS_FRAG_IPV6,
-		[IPV6_2] = ETH_RSS_NONFRAG_IPV6_OTHER,
-		[IPV6_3] = ETH_RSS_IPV6_EX,
+		[IPV4] = RTE_ETH_RSS_IPV4,
+		[IPV4_1] = RTE_ETH_RSS_FRAG_IPV4,
+		[IPV4_2] = RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+		[IPV6] = RTE_ETH_RSS_IPV6,
+		[IPV6_1] = RTE_ETH_RSS_FRAG_IPV6,
+		[IPV6_2] = RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+		[IPV6_3] = RTE_ETH_RSS_IPV6_EX,
 		[TCP] = 0,
 		[UDP] = 0,
-		[IPV4_TCP] = ETH_RSS_NONFRAG_IPV4_TCP,
-		[IPV4_UDP] = ETH_RSS_NONFRAG_IPV4_UDP,
-		[IPV6_TCP] = ETH_RSS_NONFRAG_IPV6_TCP,
-		[IPV6_TCP_1] = ETH_RSS_IPV6_TCP_EX,
-		[IPV6_UDP] = ETH_RSS_NONFRAG_IPV6_UDP,
-		[IPV6_UDP_1] = ETH_RSS_IPV6_UDP_EX,
+		[IPV4_TCP] = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+		[IPV4_UDP] = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+		[IPV6_TCP] = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+		[IPV6_TCP_1] = RTE_ETH_RSS_IPV6_TCP_EX,
+		[IPV6_UDP] = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+		[IPV6_UDP_1] = RTE_ETH_RSS_IPV6_UDP_EX,
 	};
 	static const uint64_t verbs[RTE_DIM(dpdk)] = {
 		[INNER] = IBV_RX_HASH_INNER,
@@ -1283,7 +1283,7 @@ mlx4_flow_internal_next_vlan(struct mlx4_priv *priv, uint16_t vlan)
  * - MAC flow rules are generated from @p dev->data->mac_addrs
  *   (@p priv->mac array).
  * - An additional flow rule for Ethernet broadcasts is also generated.
- * - All these are per-VLAN if @p DEV_RX_OFFLOAD_VLAN_FILTER
+ * - All these are per-VLAN if @p RTE_ETH_RX_OFFLOAD_VLAN_FILTER
  *   is enabled and VLAN filters are configured.
  *
  * @param priv
@@ -1358,7 +1358,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 	struct rte_ether_addr *rule_mac = &eth_spec.dst;
 	rte_be16_t *rule_vlan =
 		(ETH_DEV(priv)->data->dev_conf.rxmode.offloads &
-		 DEV_RX_OFFLOAD_VLAN_FILTER) &&
+		 RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 		!ETH_DEV(priv)->data->promiscuous ?
 		&vlan_spec.tci :
 		NULL;
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index d56009c41845..2aab0f60a7b5 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -118,7 +118,7 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
 static void
 mlx4_link_status_alarm(struct mlx4_priv *priv)
 {
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 
 	MLX4_ASSERT(priv->intr_alarm == 1);
@@ -183,7 +183,7 @@ mlx4_interrupt_handler(struct mlx4_priv *priv)
 	};
 	uint32_t caught[RTE_DIM(type)] = { 0 };
 	struct ibv_async_event event;
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 	unsigned int i;
 
@@ -280,7 +280,7 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
 int
 mlx4_intr_install(struct mlx4_priv *priv)
 {
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 	int rc;
 
@@ -386,7 +386,7 @@ mlx4_rx_intr_enable(struct rte_eth_dev *dev, uint16_t idx)
 int
 mlx4_rxq_intr_enable(struct mlx4_priv *priv)
 {
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 
 	if (intr_conf->rxq && mlx4_rx_intr_vec_enable(priv) < 0)
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index ee2d2b75e59a..781ee256df71 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -682,12 +682,12 @@ mlx4_rxq_detach(struct rxq *rxq)
 uint64_t
 mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
 {
-	uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
-			    DEV_RX_OFFLOAD_KEEP_CRC |
-			    DEV_RX_OFFLOAD_RSS_HASH;
+	uint64_t offloads = RTE_ETH_RX_OFFLOAD_SCATTER |
+			    RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+			    RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (priv->hw_csum)
-		offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 	return offloads;
 }
 
@@ -703,7 +703,7 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
 uint64_t
 mlx4_get_rx_port_offloads(struct mlx4_priv *priv)
 {
-	uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+	uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	(void)priv;
 	return offloads;
@@ -785,7 +785,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	}
 	/* By default, FCS (CRC) is stripped by hardware. */
 	crc_present = 0;
-	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		if (priv->hw_fcs_strip) {
 			crc_present = 1;
 		} else {
@@ -816,9 +816,9 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		.elts = elts,
 		/* Toggle Rx checksum offload if hardware supports it. */
 		.csum = priv->hw_csum &&
-			(offloads & DEV_RX_OFFLOAD_CHECKSUM),
+			(offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
 		.csum_l2tun = priv->hw_csum_l2tun &&
-			      (offloads & DEV_RX_OFFLOAD_CHECKSUM),
+			      (offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
 		.crc_present = crc_present,
 		.l2tun_offload = priv->hw_csum_l2tun,
 		.stats = {
@@ -832,7 +832,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
 	if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
 		;
-	} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+	} else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
 		uint32_t sges_n;
 
diff --git a/drivers/net/mlx4/mlx4_txq.c b/drivers/net/mlx4/mlx4_txq.c
index 7d8c4f2a2223..0db2e55befd3 100644
--- a/drivers/net/mlx4/mlx4_txq.c
+++ b/drivers/net/mlx4/mlx4_txq.c
@@ -273,20 +273,20 @@ mlx4_txq_fill_dv_obj_info(struct txq *txq, struct mlx4dv_obj *mlxdv)
 uint64_t
 mlx4_get_tx_port_offloads(struct mlx4_priv *priv)
 {
-	uint64_t offloads = DEV_TX_OFFLOAD_MULTI_SEGS;
+	uint64_t offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	if (priv->hw_csum) {
-		offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_UDP_CKSUM |
-			     DEV_TX_OFFLOAD_TCP_CKSUM);
+		offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
 	}
 	if (priv->tso)
-		offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+		offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	if (priv->hw_csum_l2tun) {
-		offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+		offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 		if (priv->tso)
-			offloads |= (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				     DEV_TX_OFFLOAD_GRE_TNL_TSO);
+			offloads |= (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
 	}
 	return offloads;
 }
@@ -394,12 +394,12 @@ mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		.elts_comp_cd_init =
 			RTE_MIN(MLX4_PMD_TX_PER_COMP_REQ, desc / 4),
 		.csum = priv->hw_csum &&
-			(offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-					   DEV_TX_OFFLOAD_UDP_CKSUM |
-					   DEV_TX_OFFLOAD_TCP_CKSUM)),
+			(offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+					   RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+					   RTE_ETH_TX_OFFLOAD_TCP_CKSUM)),
 		.csum_l2tun = priv->hw_csum_l2tun &&
 			      (offloads &
-			       DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM),
+			       RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM),
 		/* Enable Tx loopback for VF devices. */
 		.lb = !!priv->vf,
 		.bounce_buf = bounce_buf,
diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index f34133e2c641..79e27fe2d668 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -439,24 +439,24 @@ mlx5_link_update_unlocked_gset(struct rte_eth_dev *dev,
 	}
 	link_speed = ethtool_cmd_speed(&edata);
 	if (link_speed == -1)
-		dev_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		dev_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	else
 		dev_link.link_speed = link_speed;
 	priv->link_speed_capa = 0;
 	if (edata.supported & (SUPPORTED_1000baseT_Full |
 			       SUPPORTED_1000baseKX_Full))
-		priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (edata.supported & SUPPORTED_10000baseKR_Full)
-		priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (edata.supported & (SUPPORTED_40000baseKR4_Full |
 			       SUPPORTED_40000baseCR4_Full |
 			       SUPPORTED_40000baseSR4_Full |
 			       SUPPORTED_40000baseLR4_Full))
-		priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
-				ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+				RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
 	dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 	*link = dev_link;
 	return 0;
 }
@@ -545,45 +545,45 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
 		return ret;
 	}
 	dev_link.link_speed = (ecmd->speed == UINT32_MAX) ?
-				ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
+				RTE_ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
 	sc = ecmd->link_mode_masks[0] |
 		((uint64_t)ecmd->link_mode_masks[1] << 32);
 	priv->link_speed_capa = 0;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseT_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseKX_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKR_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseR_FEC_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseMLD2_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_20G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_20G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseCR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseSR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseLR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_56G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_56G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseCR_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseKR_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseSR_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_25G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_50G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_100G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
 
 	sc = ecmd->link_mode_masks[2] |
 		((uint64_t)ecmd->link_mode_masks[3] << 32);
@@ -591,11 +591,11 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
 		  MLX5_BITSHIFT
 		       (ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseDR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
 	dev_link.link_duplex = ((ecmd->duplex == DUPLEX_HALF) ?
-				ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+				RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
 	dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				  ETH_LINK_SPEED_FIXED);
+				  RTE_ETH_LINK_SPEED_FIXED);
 	*link = dev_link;
 	return 0;
 }
@@ -677,13 +677,13 @@ mlx5_dev_get_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 	fc_conf->autoneg = ethpause.autoneg;
 	if (ethpause.rx_pause && ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (ethpause.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	return 0;
 }
 
@@ -709,14 +709,14 @@ mlx5_dev_set_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	ifr.ifr_data = (void *)&ethpause;
 	ethpause.autoneg = fc_conf->autoneg;
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_RX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
 		ethpause.rx_pause = 1;
 	else
 		ethpause.rx_pause = 0;
 
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_TX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
 		ethpause.tx_pause = 1;
 	else
 		ethpause.tx_pause = 0;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 111a7597317a..23d9e0a476ac 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1310,8 +1310,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	 * Remove this check once DPDK supports larger/variable
 	 * indirection tables.
 	 */
-	if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
-		config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+	if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+		config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
 	DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
 		config->ind_table_max_size);
 	config->hw_vlan_strip = !!(sh->device_attr.raw_packet_caps &
@@ -1594,7 +1594,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	/*
 	 * If HW has bug working with tunnel packet decapsulation and
 	 * scatter FCS, and decapsulation is needed, clear the hw_fcs_strip
-	 * bit. Then DEV_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
+	 * bit. Then RTE_ETH_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
 	 */
 	if (config->hca_attr.scatter_fcs_w_decap_disable && config->decap_en)
 		config->hw_fcs_strip = 0;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 7263d354b180..3a9b716e438c 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1704,10 +1704,10 @@ mlx5_udp_tunnel_port_add(struct rte_eth_dev *dev __rte_unused,
 			 struct rte_eth_udp_tunnel *udp_tunnel)
 {
 	MLX5_ASSERT(udp_tunnel != NULL);
-	if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN &&
+	if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN &&
 	    udp_tunnel->udp_port == 4789)
 		return 0;
-	if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN_GPE &&
+	if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN_GPE &&
 	    udp_tunnel->udp_port == 4790)
 		return 0;
 	return -ENOTSUP;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 42cacd0bbe3b..52f03ada2ced 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1233,7 +1233,7 @@ TAILQ_HEAD(mlx5_legacy_flow_meters, mlx5_legacy_flow_meter);
 struct mlx5_flow_rss_desc {
 	uint32_t level;
 	uint32_t queue_num; /**< Number of entries in @p queue. */
-	uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+	uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
 	uint64_t hash_fields; /* Verbs Hash fields. */
 	uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
 	uint32_t key_len; /**< RSS hash key len. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index fe86bb40d351..12ddf4c7ff28 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -90,11 +90,11 @@
 #define MLX5_VPMD_DESCS_PER_LOOP      4
 
 /* Mask of RSS on source only or destination only. */
-#define MLX5_RSS_SRC_DST_ONLY (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | \
-			       ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define MLX5_RSS_SRC_DST_ONLY (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY | \
+			       RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
 
 /* Supported RSS */
-#define MLX5_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP | \
+#define MLX5_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP | \
 			    MLX5_RSS_SRC_DST_ONLY))
 
 /* Timeout in seconds to get a valid link status. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 82e2284d9866..f2b78c3cc69e 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -91,7 +91,7 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 	}
 
 	if ((dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
+			RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
 			rte_mbuf_dyn_tx_timestamp_register(NULL, NULL) != 0) {
 		DRV_LOG(ERR, "port %u cannot register Tx timestamp field/flag",
 			dev->data->port_id);
@@ -225,8 +225,8 @@ mlx5_set_default_params(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 	info->default_txportconf.ring_size = 256;
 	info->default_rxportconf.burst_size = MLX5_RX_DEFAULT_BURST;
 	info->default_txportconf.burst_size = MLX5_TX_DEFAULT_BURST;
-	if ((priv->link_speed_capa & ETH_LINK_SPEED_200G) |
-		(priv->link_speed_capa & ETH_LINK_SPEED_100G)) {
+	if ((priv->link_speed_capa & RTE_ETH_LINK_SPEED_200G) |
+		(priv->link_speed_capa & RTE_ETH_LINK_SPEED_100G)) {
 		info->default_rxportconf.nb_queues = 16;
 		info->default_txportconf.nb_queues = 16;
 		if (dev->data->nb_rx_queues > 2 ||
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 002449e993e7..d645fd48647e 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -98,7 +98,7 @@ struct mlx5_flow_expand_node {
 	uint64_t rss_types;
 	/**<
 	 * RSS types bit-field associated with this node
-	 * (see ETH_RSS_* definitions).
+	 * (see RTE_ETH_RSS_* definitions).
 	 */
 	uint64_t node_flags;
 	/**<
@@ -298,7 +298,7 @@ mlx5_flow_expand_rss_skip_explicit(const struct mlx5_flow_expand_node graph[],
  * @param[in] pattern
  *   User flow pattern.
  * @param[in] types
- *   RSS types to expand (see ETH_RSS_* definitions).
+ *   RSS types to expand (see RTE_ETH_RSS_* definitions).
  * @param[in] graph
  *   Input graph to expand @p pattern according to @p types.
  * @param[in] graph_root_index
@@ -560,8 +560,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 			 MLX5_EXPANSION_IPV4,
 			 MLX5_EXPANSION_IPV6),
 		.type = RTE_FLOW_ITEM_TYPE_IPV4,
-		.rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			ETH_RSS_NONFRAG_IPV4_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	},
 	[MLX5_EXPANSION_OUTER_IPV4_UDP] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -569,11 +569,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 						  MLX5_EXPANSION_MPLS,
 						  MLX5_EXPANSION_GTP),
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 	},
 	[MLX5_EXPANSION_OUTER_IPV4_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 	},
 	[MLX5_EXPANSION_OUTER_IPV6] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT
@@ -584,8 +584,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 			 MLX5_EXPANSION_GRE,
 			 MLX5_EXPANSION_NVGRE),
 		.type = RTE_FLOW_ITEM_TYPE_IPV6,
-		.rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-			ETH_RSS_NONFRAG_IPV6_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	},
 	[MLX5_EXPANSION_OUTER_IPV6_UDP] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -593,11 +593,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 						  MLX5_EXPANSION_MPLS,
 						  MLX5_EXPANSION_GTP),
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 	},
 	[MLX5_EXPANSION_OUTER_IPV6_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	},
 	[MLX5_EXPANSION_VXLAN] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_ETH,
@@ -659,32 +659,32 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4_UDP,
 						  MLX5_EXPANSION_IPV4_TCP),
 		.type = RTE_FLOW_ITEM_TYPE_IPV4,
-		.rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			ETH_RSS_NONFRAG_IPV4_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	},
 	[MLX5_EXPANSION_IPV4_UDP] = {
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 	},
 	[MLX5_EXPANSION_IPV4_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 	},
 	[MLX5_EXPANSION_IPV6] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV6_UDP,
 						  MLX5_EXPANSION_IPV6_TCP,
 						  MLX5_EXPANSION_IPV6_FRAG_EXT),
 		.type = RTE_FLOW_ITEM_TYPE_IPV6,
-		.rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-			ETH_RSS_NONFRAG_IPV6_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	},
 	[MLX5_EXPANSION_IPV6_UDP] = {
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 	},
 	[MLX5_EXPANSION_IPV6_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	},
 	[MLX5_EXPANSION_IPV6_FRAG_EXT] = {
 		.type = RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
@@ -1100,7 +1100,7 @@ mlx5_flow_item_acceptable(const struct rte_flow_item *item,
  * @param[in] tunnel
  *   1 when the hash field is for a tunnel item.
  * @param[in] layer_types
- *   ETH_RSS_* types.
+ *   RTE_ETH_RSS_* types.
  * @param[in] hash_fields
  *   Item hash fields.
  *
@@ -1653,14 +1653,14 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev,
 					  &rss->types,
 					  "some RSS protocols are not"
 					  " supported");
-	if ((rss->types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) &&
-	    !(rss->types & ETH_RSS_IP))
+	if ((rss->types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) &&
+	    !(rss->types & RTE_ETH_RSS_IP))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
 					  "L3 partial RSS requested but L3 RSS"
 					  " type not specified");
-	if ((rss->types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) &&
-	    !(rss->types & (ETH_RSS_UDP | ETH_RSS_TCP)))
+	if ((rss->types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) &&
+	    !(rss->types & (RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP)))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
 					  "L4 partial RSS requested but L4 RSS"
@@ -6427,8 +6427,8 @@ flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 		 * mlx5_flow_hashfields_adjust() in advance.
 		 */
 		rss_desc->level = rss->level;
-		/* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
-		rss_desc->types = !rss->types ? ETH_RSS_IP : rss->types;
+		/* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+		rss_desc->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
 	}
 	flow->dev_handles = 0;
 	if (rss && rss->types) {
@@ -7126,7 +7126,7 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev,
 	if (!priv->reta_idx_n || !priv->rxqs_n) {
 		return 0;
 	}
-	if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+	if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 		action_rss.types = 0;
 	for (i = 0; i != priv->reta_idx_n; ++i)
 		queue[i] = (*priv->reta_idx)[i];
@@ -8794,7 +8794,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
 				(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 				NULL, "invalid port configuration");
-		if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+		if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 			ctx->action_rss.types = 0;
 		for (i = 0; i != priv->reta_idx_n; ++i)
 			ctx->queue[i] = (*priv->reta_idx)[i];
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index f1a83d537d0c..4a16f30fb7a6 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -331,18 +331,18 @@ enum mlx5_feature_name {
 
 /* Valid layer type for IPV4 RSS. */
 #define MLX5_IPV4_LAYER_TYPES \
-	(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
-	 ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
-	 ETH_RSS_NONFRAG_IPV4_OTHER)
+	(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+	 RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	 RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
 
 /* IBV hash source bits  for IPV4. */
 #define MLX5_IPV4_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV4 | IBV_RX_HASH_DST_IPV4)
 
 /* Valid layer type for IPV6 RSS. */
 #define MLX5_IPV6_LAYER_TYPES \
-	(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP | \
-	 ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_EX  | ETH_RSS_IPV6_TCP_EX | \
-	 ETH_RSS_IPV6_UDP_EX | ETH_RSS_NONFRAG_IPV6_OTHER)
+	(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	 RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_EX  | RTE_ETH_RSS_IPV6_TCP_EX | \
+	 RTE_ETH_RSS_IPV6_UDP_EX | RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
 
 /* IBV hash source bits  for IPV6. */
 #define MLX5_IPV6_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV6 | IBV_RX_HASH_DST_IPV6)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 5bd90bfa2818..c4a5706532a9 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10862,9 +10862,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 	if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV4)) ||
 	    (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV4))) {
 		if (rss_types & MLX5_IPV4_LAYER_TYPES) {
-			if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV4;
-			else if (rss_types & ETH_RSS_L3_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV4;
 			else
 				dev_flow->hash_fields |= MLX5_IPV4_IBV_RX_HASH;
@@ -10872,9 +10872,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 	} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV6)) ||
 		   (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV6))) {
 		if (rss_types & MLX5_IPV6_LAYER_TYPES) {
-			if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV6;
-			else if (rss_types & ETH_RSS_L3_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV6;
 			else
 				dev_flow->hash_fields |= MLX5_IPV6_IBV_RX_HASH;
@@ -10888,11 +10888,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 		return;
 	if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_UDP)) ||
 	    (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_UDP))) {
-		if (rss_types & ETH_RSS_UDP) {
-			if (rss_types & ETH_RSS_L4_SRC_ONLY)
+		if (rss_types & RTE_ETH_RSS_UDP) {
+			if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_SRC_PORT_UDP;
-			else if (rss_types & ETH_RSS_L4_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_DST_PORT_UDP;
 			else
@@ -10900,11 +10900,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 		}
 	} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_TCP)) ||
 		   (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_TCP))) {
-		if (rss_types & ETH_RSS_TCP) {
-			if (rss_types & ETH_RSS_L4_SRC_ONLY)
+		if (rss_types & RTE_ETH_RSS_TCP) {
+			if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_SRC_PORT_TCP;
-			else if (rss_types & ETH_RSS_L4_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_DST_PORT_TCP;
 			else
@@ -14444,9 +14444,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV4:
 		if (rss_types & MLX5_IPV4_LAYER_TYPES) {
 			*hash_field &= ~MLX5_RSS_HASH_IPV4;
-			if (rss_types & ETH_RSS_L3_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_IPV4;
-			else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_IPV4;
 			else
 				*hash_field |= MLX5_RSS_HASH_IPV4;
@@ -14455,9 +14455,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV6:
 		if (rss_types & MLX5_IPV6_LAYER_TYPES) {
 			*hash_field &= ~MLX5_RSS_HASH_IPV6;
-			if (rss_types & ETH_RSS_L3_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_IPV6;
-			else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_IPV6;
 			else
 				*hash_field |= MLX5_RSS_HASH_IPV6;
@@ -14466,11 +14466,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV4_UDP:
 		/* fall-through. */
 	case MLX5_RSS_HASH_IPV6_UDP:
-		if (rss_types & ETH_RSS_UDP) {
+		if (rss_types & RTE_ETH_RSS_UDP) {
 			*hash_field &= ~MLX5_UDP_IBV_RX_HASH;
-			if (rss_types & ETH_RSS_L4_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_PORT_UDP;
-			else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_PORT_UDP;
 			else
 				*hash_field |= MLX5_UDP_IBV_RX_HASH;
@@ -14479,11 +14479,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV4_TCP:
 		/* fall-through. */
 	case MLX5_RSS_HASH_IPV6_TCP:
-		if (rss_types & ETH_RSS_TCP) {
+		if (rss_types & RTE_ETH_RSS_TCP) {
 			*hash_field &= ~MLX5_TCP_IBV_RX_HASH;
-			if (rss_types & ETH_RSS_L4_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_PORT_TCP;
-			else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_PORT_TCP;
 			else
 				*hash_field |= MLX5_TCP_IBV_RX_HASH;
@@ -14631,8 +14631,8 @@ __flow_dv_action_rss_create(struct rte_eth_dev *dev,
 	origin = &shared_rss->origin;
 	origin->func = rss->func;
 	origin->level = rss->level;
-	/* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
-	origin->types = !rss->types ? ETH_RSS_IP : rss->types;
+	/* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+	origin->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
 	/* NULL RSS key indicates default RSS key. */
 	rss_key = !rss->key ? rss_hash_default_key : rss->key;
 	memcpy(shared_rss->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 892abcb65779..f9010a674d7f 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1824,7 +1824,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
 			if (dev_flow->hash_fields != 0)
 				dev_flow->hash_fields |=
 					mlx5_flow_hashfields_adjust
-					(rss_desc, tunnel, ETH_RSS_TCP,
+					(rss_desc, tunnel, RTE_ETH_RSS_TCP,
 					 (IBV_RX_HASH_SRC_PORT_TCP |
 					  IBV_RX_HASH_DST_PORT_TCP));
 			item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
@@ -1837,7 +1837,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
 			if (dev_flow->hash_fields != 0)
 				dev_flow->hash_fields |=
 					mlx5_flow_hashfields_adjust
-					(rss_desc, tunnel, ETH_RSS_UDP,
+					(rss_desc, tunnel, RTE_ETH_RSS_UDP,
 					 (IBV_RX_HASH_SRC_PORT_UDP |
 					  IBV_RX_HASH_DST_PORT_UDP));
 			item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
index c32129cdc2b8..a4f690039e24 100644
--- a/drivers/net/mlx5/mlx5_rss.c
+++ b/drivers/net/mlx5/mlx5_rss.c
@@ -68,7 +68,7 @@ mlx5_rss_hash_update(struct rte_eth_dev *dev,
 		if (!(*priv->rxqs)[i])
 			continue;
 		(*priv->rxqs)[i]->rss_hash = !!rss_conf->rss_hf &&
-			!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS);
+			!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS);
 		++idx;
 	}
 	return 0;
@@ -170,8 +170,8 @@ mlx5_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 	/* Fill each entry of the table even if its bit is not set. */
 	for (idx = 0, i = 0; (i != reta_size); ++i) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		reta_conf[idx].reta[i % RTE_RETA_GROUP_SIZE] =
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
 			(*priv->reta_idx)[i];
 	}
 	return 0;
@@ -209,8 +209,8 @@ mlx5_dev_rss_reta_update(struct rte_eth_dev *dev,
 	if (ret)
 		return ret;
 	for (idx = 0, i = 0; (i != reta_size); ++i) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		pos = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (((reta_conf[idx].mask >> i) & 0x1) == 0)
 			continue;
 		MLX5_ASSERT(reta_conf[idx].reta[pos] < priv->rxqs_n);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 60673d014d02..14b9991c5fa8 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -333,22 +333,22 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_dev_config *config = &priv->config;
-	uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
-			     DEV_RX_OFFLOAD_TIMESTAMP |
-			     DEV_RX_OFFLOAD_RSS_HASH);
+	uint64_t offloads = (RTE_ETH_RX_OFFLOAD_SCATTER |
+			     RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+			     RTE_ETH_RX_OFFLOAD_RSS_HASH);
 
 	if (!config->mprq.enabled)
 		offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
 	if (config->hw_fcs_strip)
-		offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	if (config->hw_csum)
-		offloads |= (DEV_RX_OFFLOAD_IPV4_CKSUM |
-			     DEV_RX_OFFLOAD_UDP_CKSUM |
-			     DEV_RX_OFFLOAD_TCP_CKSUM);
+		offloads |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			     RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
 	if (config->hw_vlan_strip)
-		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	if (MLX5_LRO_SUPPORTED(dev))
-		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+		offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 	return offloads;
 }
 
@@ -362,7 +362,7 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
 uint64_t
 mlx5_get_rx_port_offloads(void)
 {
-	uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+	uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	return offloads;
 }
@@ -694,7 +694,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 				    dev->data->dev_conf.rxmode.offloads;
 
 		/* The offloads should be checked on rte_eth_dev layer. */
-		MLX5_ASSERT(offloads & DEV_RX_OFFLOAD_SCATTER);
+		MLX5_ASSERT(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
 		if (!(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
 			DRV_LOG(ERR, "port %u queue index %u split "
 				     "offload not configured",
@@ -1336,7 +1336,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	struct mlx5_dev_config *config = &priv->config;
 	uint64_t offloads = conf->offloads |
 			   dev->data->dev_conf.rxmode.offloads;
-	unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
+	unsigned int lro_on_queue = !!(offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO);
 	unsigned int max_rx_pktlen = lro_on_queue ?
 			dev->data->dev_conf.rxmode.max_lro_pkt_size :
 			dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
@@ -1439,7 +1439,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	} while (tail_len || !rte_is_power_of_2(tmpl->rxq.rxseg_n));
 	MLX5_ASSERT(tmpl->rxq.rxseg_n &&
 		    tmpl->rxq.rxseg_n <= MLX5_MAX_RXQ_NSEG);
-	if (tmpl->rxq.rxseg_n > 1 && !(offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	if (tmpl->rxq.rxseg_n > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
 			" configured and no enough mbuf space(%u) to contain "
 			"the maximum RX packet length(%u) with head-room(%u)",
@@ -1485,7 +1485,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 			config->mprq.stride_size_n : mprq_stride_size;
 		tmpl->rxq.strd_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT;
 		tmpl->rxq.strd_scatter_en =
-				!!(offloads & DEV_RX_OFFLOAD_SCATTER);
+				!!(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
 		tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
 				config->mprq.max_memcpy_len);
 		max_lro_size = RTE_MIN(max_rx_pktlen,
@@ -1500,7 +1500,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
 		tmpl->rxq.sges_n = 0;
 		max_lro_size = max_rx_pktlen;
-	} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+	} else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		unsigned int sges_n;
 
 		if (lro_on_queue && first_mb_free_size <
@@ -1561,9 +1561,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	}
 	mlx5_max_lro_msg_size_adjust(dev, idx, max_lro_size);
 	/* Toggle RX checksum offload if hardware supports it. */
-	tmpl->rxq.csum = !!(offloads & DEV_RX_OFFLOAD_CHECKSUM);
+	tmpl->rxq.csum = !!(offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM);
 	/* Configure Rx timestamp. */
-	tmpl->rxq.hw_timestamp = !!(offloads & DEV_RX_OFFLOAD_TIMESTAMP);
+	tmpl->rxq.hw_timestamp = !!(offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP);
 	tmpl->rxq.timestamp_rx_flag = 0;
 	if (tmpl->rxq.hw_timestamp && rte_mbuf_dyn_rx_timestamp_register(
 			&tmpl->rxq.timestamp_offset,
@@ -1572,11 +1572,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		goto error;
 	}
 	/* Configure VLAN stripping. */
-	tmpl->rxq.vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	tmpl->rxq.vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 	/* By default, FCS (CRC) is stripped by hardware. */
 	tmpl->rxq.crc_present = 0;
 	tmpl->rxq.lro = lro_on_queue;
-	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		if (config->hw_fcs_strip) {
 			/*
 			 * RQs used for LRO-enabled TIRs should not be
@@ -1606,7 +1606,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		tmpl->rxq.crc_present << 2);
 	/* Save port ID. */
 	tmpl->rxq.rss_hash = !!priv->rss_conf.rss_hf &&
-		(!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS));
+		(!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS));
 	tmpl->rxq.port_id = dev->data->port_id;
 	tmpl->priv = priv;
 	tmpl->rxq.mp = rx_seg[0].mp;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.h b/drivers/net/mlx5/mlx5_rxtx_vec.h
index 93b4f517bb3e..65d91bdf67e2 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.h
@@ -16,10 +16,10 @@
 
 /* HW checksum offload capabilities of vectorized Tx. */
 #define MLX5_VEC_TX_CKSUM_OFFLOAD_CAP \
-	(DEV_TX_OFFLOAD_IPV4_CKSUM | \
-	 DEV_TX_OFFLOAD_UDP_CKSUM | \
-	 DEV_TX_OFFLOAD_TCP_CKSUM | \
-	 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+	(RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+	 RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+	 RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+	 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 
 /*
  * Compile time sanity check for vectorized functions.
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index df671379e46d..12aeba60348a 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -523,36 +523,36 @@ mlx5_select_tx_function(struct rte_eth_dev *dev)
 	unsigned int diff = 0, olx = 0, i, m;
 
 	MLX5_ASSERT(priv);
-	if (tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
 		/* We should support Multi-Segment Packets. */
 		olx |= MLX5_TXOFF_CONFIG_MULTI;
 	}
-	if (tx_offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-			   DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			   DEV_TX_OFFLOAD_GRE_TNL_TSO |
-			   DEV_TX_OFFLOAD_IP_TNL_TSO |
-			   DEV_TX_OFFLOAD_UDP_TNL_TSO)) {
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+			   RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO)) {
 		/* We should support TCP Send Offload. */
 		olx |= MLX5_TXOFF_CONFIG_TSO;
 	}
-	if (tx_offloads & (DEV_TX_OFFLOAD_IP_TNL_TSO |
-			   DEV_TX_OFFLOAD_UDP_TNL_TSO |
-			   DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 		/* We should support Software Parser for Tunnels. */
 		olx |= MLX5_TXOFF_CONFIG_SWP;
 	}
-	if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			   DEV_TX_OFFLOAD_UDP_CKSUM |
-			   DEV_TX_OFFLOAD_TCP_CKSUM |
-			   DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 		/* We should support IP/TCP/UDP Checksums. */
 		olx |= MLX5_TXOFF_CONFIG_CSUM;
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) {
 		/* We should support VLAN insertion. */
 		olx |= MLX5_TXOFF_CONFIG_VLAN;
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
 	    rte_mbuf_dynflag_lookup
 			(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL) >= 0 &&
 	    rte_mbuf_dynfield_lookup
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 1f92250f5edd..02bb9307ae61 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -98,42 +98,42 @@ uint64_t
 mlx5_get_tx_port_offloads(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	uint64_t offloads = (DEV_TX_OFFLOAD_MULTI_SEGS |
-			     DEV_TX_OFFLOAD_VLAN_INSERT);
+	uint64_t offloads = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+			     RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
 	struct mlx5_dev_config *config = &priv->config;
 
 	if (config->hw_csum)
-		offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_UDP_CKSUM |
-			     DEV_TX_OFFLOAD_TCP_CKSUM);
+		offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
 	if (config->tso)
-		offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+		offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	if (config->tx_pp)
-		offloads |= DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP;
+		offloads |= RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP;
 	if (config->swp) {
 		if (config->swp & MLX5_SW_PARSING_CSUM_CAP)
-			offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+			offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 		if (config->swp & MLX5_SW_PARSING_TSO_CAP)
-			offloads |= (DEV_TX_OFFLOAD_IP_TNL_TSO |
-				     DEV_TX_OFFLOAD_UDP_TNL_TSO);
+			offloads |= (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 	}
 	if (config->tunnel_en) {
 		if (config->hw_csum)
-			offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+			offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 		if (config->tso) {
 			if (config->tunnel_en &
 				MLX5_TUNNELED_OFFLOADS_VXLAN_CAP)
-				offloads |= DEV_TX_OFFLOAD_VXLAN_TNL_TSO;
+				offloads |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
 			if (config->tunnel_en &
 				MLX5_TUNNELED_OFFLOADS_GRE_CAP)
-				offloads |= DEV_TX_OFFLOAD_GRE_TNL_TSO;
+				offloads |= RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO;
 			if (config->tunnel_en &
 				MLX5_TUNNELED_OFFLOADS_GENEVE_CAP)
-				offloads |= DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+				offloads |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
 		}
 	}
 	if (!config->mprq.enabled)
-		offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	return offloads;
 }
 
@@ -801,17 +801,17 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 	unsigned int inlen_mode; /* Minimal required Inline data. */
 	unsigned int txqs_inline; /* Min Tx queues to enable inline. */
 	uint64_t dev_txoff = priv->dev_data->dev_conf.txmode.offloads;
-	bool tso = txq_ctrl->txq.offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-					    DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-					    DEV_TX_OFFLOAD_GRE_TNL_TSO |
-					    DEV_TX_OFFLOAD_IP_TNL_TSO |
-					    DEV_TX_OFFLOAD_UDP_TNL_TSO);
+	bool tso = txq_ctrl->txq.offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+					    RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+					    RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+					    RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+					    RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 	bool vlan_inline;
 	unsigned int temp;
 
 	txq_ctrl->txq.fast_free =
-		!!((txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
-		   !(txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MULTI_SEGS) &&
+		!!((txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
+		   !(txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) &&
 		   !config->mprq.enabled);
 	if (config->txqs_inline == MLX5_ARG_UNSET)
 		txqs_inline =
@@ -870,7 +870,7 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 	 * tx_burst routine.
 	 */
 	txq_ctrl->txq.vlan_en = config->hw_vlan_insert;
-	vlan_inline = (dev_txoff & DEV_TX_OFFLOAD_VLAN_INSERT) &&
+	vlan_inline = (dev_txoff & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) &&
 		      !config->hw_vlan_insert;
 	/*
 	 * If there are few Tx queues it is prioritized
@@ -978,19 +978,19 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 						    MLX5_MAX_TSO_HEADER);
 		txq_ctrl->txq.tso_en = 1;
 	}
-	if (((DEV_TX_OFFLOAD_VXLAN_TNL_TSO & txq_ctrl->txq.offloads) &&
+	if (((RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO & txq_ctrl->txq.offloads) &&
 	    (config->tunnel_en & MLX5_TUNNELED_OFFLOADS_VXLAN_CAP)) |
-	   ((DEV_TX_OFFLOAD_GRE_TNL_TSO & txq_ctrl->txq.offloads) &&
+	   ((RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO & txq_ctrl->txq.offloads) &&
 	    (config->tunnel_en & MLX5_TUNNELED_OFFLOADS_GRE_CAP)) |
-	   ((DEV_TX_OFFLOAD_GENEVE_TNL_TSO & txq_ctrl->txq.offloads) &&
+	   ((RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO & txq_ctrl->txq.offloads) &&
 	    (config->tunnel_en & MLX5_TUNNELED_OFFLOADS_GENEVE_CAP)) |
 	   (config->swp  & MLX5_SW_PARSING_TSO_CAP))
 		txq_ctrl->txq.tunnel_en = 1;
-	txq_ctrl->txq.swp_en = (((DEV_TX_OFFLOAD_IP_TNL_TSO |
-				  DEV_TX_OFFLOAD_UDP_TNL_TSO) &
+	txq_ctrl->txq.swp_en = (((RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO) &
 				  txq_ctrl->txq.offloads) && (config->swp &
 				  MLX5_SW_PARSING_TSO_CAP)) |
-				((DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM &
+				((RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM &
 				 txq_ctrl->txq.offloads) && (config->swp &
 				 MLX5_SW_PARSING_CSUM_CAP));
 }
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 60f97f2d2d1f..07792fc5d94f 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -142,9 +142,9 @@ mlx5_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct mlx5_priv *priv = dev->data->dev_private;
 	unsigned int i;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		int hw_vlan_strip = !!(dev->data->dev_conf.rxmode.offloads &
-				       DEV_RX_OFFLOAD_VLAN_STRIP);
+				       RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 		if (!priv->config.hw_vlan_strip) {
 			DRV_LOG(ERR, "port %u VLAN stripping is not supported",
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 31c4d3276053..9a9069da7572 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -485,8 +485,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	 * Remove this check once DPDK supports larger/variable
 	 * indirection tables.
 	 */
-	if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
-		config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+	if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+		config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
 	DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
 		config->ind_table_max_size);
 	if (config->hw_padding) {
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index 2a0288087357..10fe6d828ccd 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -114,7 +114,7 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
 	struct mvneta_priv *priv = dev->data->dev_private;
 	struct neta_ppio_params *ppio_params;
 
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE) {
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) {
 		MVNETA_LOG(INFO, "Unsupported RSS and rx multi queue mode %d",
 			dev->data->dev_conf.rxmode.mq_mode);
 		if (dev->data->nb_rx_queues > 1)
@@ -126,7 +126,7 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		priv->multiseg = 1;
 
 	ppio_params = &priv->ppio_params;
@@ -151,10 +151,10 @@ static int
 mvneta_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
 		   struct rte_eth_dev_info *info)
 {
-	info->speed_capa = ETH_LINK_SPEED_10M |
-			   ETH_LINK_SPEED_100M |
-			   ETH_LINK_SPEED_1G |
-			   ETH_LINK_SPEED_2_5G;
+	info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			   RTE_ETH_LINK_SPEED_100M |
+			   RTE_ETH_LINK_SPEED_1G |
+			   RTE_ETH_LINK_SPEED_2_5G;
 
 	info->max_rx_queues = MRVL_NETA_RXQ_MAX;
 	info->max_tx_queues = MRVL_NETA_TXQ_MAX;
@@ -503,28 +503,28 @@ mvneta_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 
 	switch (ethtool_cmd_speed(&edata)) {
 	case SPEED_10:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case SPEED_100:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case SPEED_1000:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case SPEED_2500:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	default:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	}
 
-	dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
-							 ETH_LINK_HALF_DUPLEX;
-	dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
-							   ETH_LINK_FIXED;
+	dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+							 RTE_ETH_LINK_HALF_DUPLEX;
+	dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+							   RTE_ETH_LINK_FIXED;
 
 	neta_ppio_get_link_state(priv->ppio, &link_up);
-	dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index 126a9a0c11b9..ccb87d518d83 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -54,14 +54,14 @@
 #define MRVL_NETA_MRU_TO_MTU(mru)	((mru) - MRVL_NETA_HDRS_LEN)
 
 /** Rx offloads capabilities */
-#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_CHECKSUM)
+#define MVNETA_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_CHECKSUM)
 
 /** Tx offloads capabilities */
-#define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				    DEV_TX_OFFLOAD_UDP_CKSUM  | \
-				    DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MVNETA_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+				    RTE_ETH_TX_OFFLOAD_UDP_CKSUM  | \
+				    RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 #define MVNETA_TX_OFFLOADS (MVNETA_TX_OFFLOAD_CHECKSUM | \
-			    DEV_TX_OFFLOAD_MULTI_SEGS)
+			    RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define MVNETA_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
 				PKT_TX_TCP_CKSUM | \
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index 9836bb071a82..62d8aa586dae 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -734,7 +734,7 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	rxq->priv = priv;
 	rxq->mp = mp;
 	rxq->cksum_enabled = dev->data->dev_conf.rxmode.offloads &
-			     DEV_RX_OFFLOAD_IPV4_CKSUM;
+			     RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 	rxq->queue_id = idx;
 	rxq->port_id = dev->data->port_id;
 	rxq->size = desc;
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index a6458d2ce9b5..d0746b0d1215 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -58,15 +58,15 @@
 #define MRVL_COOKIE_HIGH_ADDR_MASK 0xffffff0000000000
 
 /** Port Rx offload capabilities */
-#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
-			  DEV_RX_OFFLOAD_CHECKSUM)
+#define MRVL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+			  RTE_ETH_RX_OFFLOAD_CHECKSUM)
 
 /** Port Tx offloads capabilities */
-#define MRVL_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				  DEV_TX_OFFLOAD_UDP_CKSUM  | \
-				  DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MRVL_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM  | \
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 #define MRVL_TX_OFFLOADS (MRVL_TX_OFFLOAD_CHECKSUM | \
-			  DEV_TX_OFFLOAD_MULTI_SEGS)
+			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define MRVL_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
 			      PKT_TX_TCP_CKSUM | \
@@ -442,14 +442,14 @@ mrvl_configure_rss(struct mrvl_priv *priv, struct rte_eth_rss_conf *rss_conf)
 
 	if (rss_conf->rss_hf == 0) {
 		priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
-	} else if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+	} else if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
 		priv->ppio_params.inqs_params.hash_type =
 			PP2_PPIO_HASH_T_2_TUPLE;
-	} else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	} else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		priv->ppio_params.inqs_params.hash_type =
 			PP2_PPIO_HASH_T_5_TUPLE;
 		priv->rss_hf_tcp = 1;
-	} else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	} else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		priv->ppio_params.inqs_params.hash_type =
 			PP2_PPIO_HASH_T_5_TUPLE;
 		priv->rss_hf_tcp = 0;
@@ -483,8 +483,8 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE &&
-	    dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE &&
+	    dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		MRVL_LOG(INFO, "Unsupported rx multi queue mode %d",
 			dev->data->dev_conf.rxmode.mq_mode);
 		return -EINVAL;
@@ -502,7 +502,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		priv->multiseg = 1;
 
 	ret = mrvl_configure_rxqs(priv, dev->data->port_id,
@@ -524,7 +524,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		return ret;
 
 	if (dev->data->nb_rx_queues == 1 &&
-	    dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	    dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		MRVL_LOG(WARNING, "Disabling hash for 1 rx queue");
 		priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
 		priv->configured = 1;
@@ -623,7 +623,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
 	int ret;
 
 	if (!priv->ppio) {
-		dev->data->dev_link.link_status = ETH_LINK_UP;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 		return 0;
 	}
 
@@ -644,7 +644,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -664,14 +664,14 @@ mrvl_dev_set_link_down(struct rte_eth_dev *dev)
 	int ret;
 
 	if (!priv->ppio) {
-		dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 		return 0;
 	}
 	ret = pp2_ppio_disable(priv->ppio);
 	if (ret)
 		return ret;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
@@ -893,7 +893,7 @@ mrvl_dev_start(struct rte_eth_dev *dev)
 	if (dev->data->all_multicast == 1)
 		mrvl_allmulticast_enable(dev);
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		ret = mrvl_populate_vlan_table(dev, 1);
 		if (ret) {
 			MRVL_LOG(ERR, "Failed to populate VLAN table");
@@ -929,11 +929,11 @@ mrvl_dev_start(struct rte_eth_dev *dev)
 		priv->flow_ctrl = 0;
 	}
 
-	if (dev->data->dev_link.link_status == ETH_LINK_UP) {
+	if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
 		ret = mrvl_dev_set_link_up(dev);
 		if (ret) {
 			MRVL_LOG(ERR, "Failed to set link up");
-			dev->data->dev_link.link_status = ETH_LINK_DOWN;
+			dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 			goto out;
 		}
 	}
@@ -1202,30 +1202,30 @@ mrvl_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 
 	switch (ethtool_cmd_speed(&edata)) {
 	case SPEED_10:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case SPEED_100:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case SPEED_1000:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case SPEED_2500:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	case SPEED_10000:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	default:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	}
 
-	dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
-							 ETH_LINK_HALF_DUPLEX;
-	dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
-							   ETH_LINK_FIXED;
+	dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+							 RTE_ETH_LINK_HALF_DUPLEX;
+	dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+							   RTE_ETH_LINK_FIXED;
 	pp2_ppio_get_link_state(priv->ppio, &link_up);
-	dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -1709,11 +1709,11 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
 {
 	struct mrvl_priv *priv = dev->data->dev_private;
 
-	info->speed_capa = ETH_LINK_SPEED_10M |
-			   ETH_LINK_SPEED_100M |
-			   ETH_LINK_SPEED_1G |
-			   ETH_LINK_SPEED_2_5G |
-			   ETH_LINK_SPEED_10G;
+	info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			   RTE_ETH_LINK_SPEED_100M |
+			   RTE_ETH_LINK_SPEED_1G |
+			   RTE_ETH_LINK_SPEED_2_5G |
+			   RTE_ETH_LINK_SPEED_10G;
 
 	info->max_rx_queues = MRVL_PP2_RXQ_MAX;
 	info->max_tx_queues = MRVL_PP2_TXQ_MAX;
@@ -1733,9 +1733,9 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
 	info->tx_offload_capa = MRVL_TX_OFFLOADS;
 	info->tx_queue_offload_capa = MRVL_TX_OFFLOADS;
 
-	info->flow_type_rss_offloads = ETH_RSS_IPV4 |
-				       ETH_RSS_NONFRAG_IPV4_TCP |
-				       ETH_RSS_NONFRAG_IPV4_UDP;
+	info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+				       RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				       RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	/* By default packets are dropped if no descriptors are available */
 	info->default_rxconf.rx_drop_en = 1;
@@ -1864,13 +1864,13 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
 	int ret;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		MRVL_LOG(ERR, "VLAN stripping is not supported\n");
 		return -ENOTSUP;
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ret = mrvl_populate_vlan_table(dev, 1);
 		else
 			ret = mrvl_populate_vlan_table(dev, 0);
@@ -1879,7 +1879,7 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			return ret;
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
 		MRVL_LOG(ERR, "Extend VLAN not supported\n");
 		return -ENOTSUP;
 	}
@@ -2022,7 +2022,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 
 	rxq->priv = priv;
 	rxq->mp = mp;
-	rxq->cksum_enabled = offloads & DEV_RX_OFFLOAD_IPV4_CKSUM;
+	rxq->cksum_enabled = offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 	rxq->queue_id = idx;
 	rxq->port_id = dev->data->port_id;
 	mrvl_port_to_bpool_lookup[rxq->port_id] = priv->bpool;
@@ -2182,7 +2182,7 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		return ret;
 	}
 
-	fc_conf->mode = en ? RTE_FC_RX_PAUSE : RTE_FC_NONE;
+	fc_conf->mode = en ? RTE_ETH_FC_RX_PAUSE : RTE_ETH_FC_NONE;
 
 	ret = pp2_ppio_get_tx_pause(priv->ppio, &en);
 	if (ret) {
@@ -2191,10 +2191,10 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	if (en) {
-		if (fc_conf->mode == RTE_FC_NONE)
-			fc_conf->mode = RTE_FC_TX_PAUSE;
+		if (fc_conf->mode == RTE_ETH_FC_NONE)
+			fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		else
-			fc_conf->mode = RTE_FC_FULL;
+			fc_conf->mode = RTE_ETH_FC_FULL;
 	}
 
 	return 0;
@@ -2240,19 +2240,19 @@ mrvl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		rx_en = 1;
 		tx_en = 1;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		rx_en = 0;
 		tx_en = 1;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		rx_en = 1;
 		tx_en = 0;
 		break;
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		rx_en = 0;
 		tx_en = 0;
 		break;
@@ -2329,11 +2329,11 @@ mrvl_rss_hash_conf_get(struct rte_eth_dev *dev,
 	if (hash_type == PP2_PPIO_HASH_T_NONE)
 		rss_conf->rss_hf = 0;
 	else if (hash_type == PP2_PPIO_HASH_T_2_TUPLE)
-		rss_conf->rss_hf = ETH_RSS_IPV4;
+		rss_conf->rss_hf = RTE_ETH_RSS_IPV4;
 	else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && priv->rss_hf_tcp)
-		rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && !priv->rss_hf_tcp)
-		rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	return 0;
 }
@@ -3152,7 +3152,7 @@ mrvl_eth_dev_create(struct rte_vdev_device *vdev, const char *name)
 	eth_dev->dev_ops = &mrvl_ops;
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	rte_eth_dev_probing_finish(eth_dev);
 	return 0;
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9e2a40597349..9c4ae80e7e16 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -40,16 +40,16 @@
 #include "hn_nvs.h"
 #include "ndis.h"
 
-#define HN_TX_OFFLOAD_CAPS (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-			    DEV_TX_OFFLOAD_TCP_CKSUM  | \
-			    DEV_TX_OFFLOAD_UDP_CKSUM  | \
-			    DEV_TX_OFFLOAD_TCP_TSO    | \
-			    DEV_TX_OFFLOAD_MULTI_SEGS | \
-			    DEV_TX_OFFLOAD_VLAN_INSERT)
+#define HN_TX_OFFLOAD_CAPS (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+			    RTE_ETH_TX_OFFLOAD_TCP_CKSUM  | \
+			    RTE_ETH_TX_OFFLOAD_UDP_CKSUM  | \
+			    RTE_ETH_TX_OFFLOAD_TCP_TSO    | \
+			    RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+			    RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 
-#define HN_RX_OFFLOAD_CAPS (DEV_RX_OFFLOAD_CHECKSUM | \
-			    DEV_RX_OFFLOAD_VLAN_STRIP | \
-			    DEV_RX_OFFLOAD_RSS_HASH)
+#define HN_RX_OFFLOAD_CAPS (RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+			    RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+			    RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define NETVSC_ARG_LATENCY "latency"
 #define NETVSC_ARG_RXBREAK "rx_copybreak"
@@ -238,21 +238,21 @@ hn_dev_link_update(struct rte_eth_dev *dev,
 	hn_rndis_get_linkspeed(hv);
 
 	link = (struct rte_eth_link) {
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_autoneg = ETH_LINK_SPEED_FIXED,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_autoneg = RTE_ETH_LINK_SPEED_FIXED,
 		.link_speed = hv->link_speed / 10000,
 	};
 
 	if (hv->link_status == NDIS_MEDIA_STATE_CONNECTED)
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 	else
-		link.link_status = ETH_LINK_DOWN;
+		link.link_status = RTE_ETH_LINK_DOWN;
 
 	if (old.link_status == link.link_status)
 		return 0;
 
 	PMD_INIT_LOG(DEBUG, "Port %d is %s", dev->data->port_id,
-		     (link.link_status == ETH_LINK_UP) ? "up" : "down");
+		     (link.link_status == RTE_ETH_LINK_UP) ? "up" : "down");
 
 	return rte_eth_linkstatus_set(dev, &link);
 }
@@ -263,14 +263,14 @@ static int hn_dev_info_get(struct rte_eth_dev *dev,
 	struct hn_data *hv = dev->data->dev_private;
 	int rc;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
 	dev_info->min_rx_bufsize = HN_MIN_RX_BUF_SIZE;
 	dev_info->max_rx_pktlen  = HN_MAX_XFER_LEN;
 	dev_info->max_mac_addrs  = 1;
 
 	dev_info->hash_key_size = NDIS_HASH_KEYSIZE_TOEPLITZ;
 	dev_info->flow_type_rss_offloads = hv->rss_offloads;
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 
 	dev_info->max_rx_queues = hv->max_queues;
 	dev_info->max_tx_queues = hv->max_queues;
@@ -306,8 +306,8 @@ static int hn_rss_reta_update(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < NDIS_HASH_INDCNT; i++) {
-		uint16_t idx = i / RTE_RETA_GROUP_SIZE;
-		uint16_t shift = i % RTE_RETA_GROUP_SIZE;
+		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint64_t mask = (uint64_t)1 << shift;
 
 		if (reta_conf[idx].mask & mask)
@@ -346,8 +346,8 @@ static int hn_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < NDIS_HASH_INDCNT; i++) {
-		uint16_t idx = i / RTE_RETA_GROUP_SIZE;
-		uint16_t shift = i % RTE_RETA_GROUP_SIZE;
+		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint64_t mask = (uint64_t)1 << shift;
 
 		if (reta_conf[idx].mask & mask)
@@ -362,17 +362,17 @@ static void hn_rss_hash_init(struct hn_data *hv,
 	/* Convert from DPDK RSS hash flags to NDIS hash flags */
 	hv->rss_hash = NDIS_HASH_FUNCTION_TOEPLITZ;
 
-	if (rss_conf->rss_hf & ETH_RSS_IPV4)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
 		hv->rss_hash |= NDIS_HASH_IPV4;
-	if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		hv->rss_hash |= NDIS_HASH_TCP_IPV4;
-	if (rss_conf->rss_hf & ETH_RSS_IPV6)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
 		hv->rss_hash |=  NDIS_HASH_IPV6;
-	if (rss_conf->rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX)
 		hv->rss_hash |=  NDIS_HASH_IPV6_EX;
-	if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		hv->rss_hash |= NDIS_HASH_TCP_IPV6;
-	if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		hv->rss_hash |= NDIS_HASH_TCP_IPV6_EX;
 
 	memcpy(hv->rss_key, rss_conf->rss_key ? : rss_default_key,
@@ -427,22 +427,22 @@ static int hn_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	rss_conf->rss_hf = 0;
 	if (hv->rss_hash & NDIS_HASH_IPV4)
-		rss_conf->rss_hf |= ETH_RSS_IPV4;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
 
 	if (hv->rss_hash & NDIS_HASH_TCP_IPV4)
-		rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 
 	if (hv->rss_hash & NDIS_HASH_IPV6)
-		rss_conf->rss_hf |= ETH_RSS_IPV6;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
 
 	if (hv->rss_hash & NDIS_HASH_IPV6_EX)
-		rss_conf->rss_hf |= ETH_RSS_IPV6_EX;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_EX;
 
 	if (hv->rss_hash & NDIS_HASH_TCP_IPV6)
-		rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 
 	if (hv->rss_hash & NDIS_HASH_TCP_IPV6_EX)
-		rss_conf->rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 
 	return 0;
 }
@@ -686,8 +686,8 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev_conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev_conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	unsupported = txmode->offloads & ~HN_TX_OFFLOAD_CAPS;
 	if (unsupported) {
@@ -705,7 +705,7 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	hv->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	hv->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 	err = hn_rndis_conf_offload(hv, txmode->offloads,
 				    rxmode->offloads);
diff --git a/drivers/net/netvsc/hn_rndis.c b/drivers/net/netvsc/hn_rndis.c
index 62ba39636cd8..1b63b27e0c3e 100644
--- a/drivers/net/netvsc/hn_rndis.c
+++ b/drivers/net/netvsc/hn_rndis.c
@@ -710,15 +710,15 @@ hn_rndis_query_rsscaps(struct hn_data *hv,
 
 	hv->rss_offloads = 0;
 	if (caps.ndis_caps & NDIS_RSS_CAP_IPV4)
-		hv->rss_offloads |= ETH_RSS_IPV4
-			| ETH_RSS_NONFRAG_IPV4_TCP
-			| ETH_RSS_NONFRAG_IPV4_UDP;
+		hv->rss_offloads |= RTE_ETH_RSS_IPV4
+			| RTE_ETH_RSS_NONFRAG_IPV4_TCP
+			| RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (caps.ndis_caps & NDIS_RSS_CAP_IPV6)
-		hv->rss_offloads |= ETH_RSS_IPV6
-			| ETH_RSS_NONFRAG_IPV6_TCP;
+		hv->rss_offloads |= RTE_ETH_RSS_IPV6
+			| RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (caps.ndis_caps & NDIS_RSS_CAP_IPV6_EX)
-		hv->rss_offloads |= ETH_RSS_IPV6_EX
-			| ETH_RSS_IPV6_TCP_EX;
+		hv->rss_offloads |= RTE_ETH_RSS_IPV6_EX
+			| RTE_ETH_RSS_IPV6_TCP_EX;
 
 	/* Commit! */
 	*rxr_cnt0 = rxr_cnt;
@@ -800,7 +800,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 		params.ndis_hdr.ndis_size = NDIS_OFFLOAD_PARAMS_SIZE;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_TCP4)
 			params.ndis_tcp4csum = NDIS_OFFLOAD_PARAM_TX;
 		else
@@ -812,7 +812,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) {
 		if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4)
 		    == NDIS_RXCSUM_CAP_TCP4)
 			params.ndis_tcp4csum |= NDIS_OFFLOAD_PARAM_RX;
@@ -826,7 +826,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4)
 			params.ndis_udp4csum = NDIS_OFFLOAD_PARAM_TX;
 		else
@@ -839,7 +839,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (rx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4)
 			params.ndis_udp4csum |= NDIS_OFFLOAD_PARAM_RX;
 		else
@@ -851,21 +851,21 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
 		if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_IP4)
 		    == NDIS_TXCSUM_CAP_IP4)
 			params.ndis_ip4csum = NDIS_OFFLOAD_PARAM_TX;
 		else
 			goto unsupported;
 	}
-	if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
 			params.ndis_ip4csum |= NDIS_OFFLOAD_PARAM_RX;
 		else
 			goto unsupported;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		if (hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023)
 			params.ndis_lsov2_ip4 = NDIS_OFFLOAD_LSOV2_ON;
 		else
@@ -907,41 +907,41 @@ int hn_rndis_get_offload(struct hn_data *hv,
 		return error;
 	}
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-				    DEV_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				    RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_IP4)
 	    == HN_NDIS_TXCSUM_CAP_IP4)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_TCP4)
 	    == HN_NDIS_TXCSUM_CAP_TCP4 &&
 	    (hwcaps.ndis_csum.ndis_ip6_txcsum & HN_NDIS_TXCSUM_CAP_TCP6)
 	    == HN_NDIS_TXCSUM_CAP_TCP6)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4) &&
 	    (hwcaps.ndis_csum.ndis_ip6_txcsum & NDIS_TXCSUM_CAP_UDP6))
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_UDP_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
 
 	if ((hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023) &&
 	    (hwcaps.ndis_lsov2.ndis_ip6_opts & HN_NDIS_LSOV2_CAP_IP6)
 	    == HN_NDIS_LSOV2_CAP_IP6)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
-				    DEV_RX_OFFLOAD_RSS_HASH;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				    RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4) &&
 	    (hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_TCP6))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4) &&
 	    (hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_UDP6))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_UDP_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
 
 	return 0;
 }
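
The netvsc hunks above are purely mechanical: each deprecated DEV_RX_/DEV_TX_
offload macro becomes its RTE_ETH_-prefixed equivalent with an unchanged value.
For readers following the rename, a minimal sketch of the application side of
these capability bits, assuming a DPDK 21.11+ build where <rte_ethdev.h>
already provides the new names (the helper name and error handling are
illustrative):

#include <rte_ethdev.h>

/* Enable TCP checksum offloads only where the port advertises them
 * under the renamed capability flags; a sketch, not a full setup path. */
static int
enable_tcp_cksum_if_supported(uint16_t port_id, struct rte_eth_conf *conf)
{
	struct rte_eth_dev_info dev_info;
	int ret = rte_eth_dev_info_get(port_id, &dev_info);

	if (ret != 0)
		return ret;
	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
	return 0;
}
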
diff --git a/drivers/net/nfb/nfb_ethdev.c b/drivers/net/nfb/nfb_ethdev.c
index 99d93ebf4667..3c39937816a4 100644
--- a/drivers/net/nfb/nfb_ethdev.c
+++ b/drivers/net/nfb/nfb_ethdev.c
@@ -200,7 +200,7 @@ nfb_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_rx_pktlen = (uint32_t)-1;
 	dev_info->max_rx_queues = dev->data->nb_rx_queues;
 	dev_info->max_tx_queues = dev->data->nb_tx_queues;
-	dev_info->speed_capa = ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -268,26 +268,26 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
 
 	status.speed = MAC_SPEED_UNKNOWN;
 
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_status = ETH_LINK_DOWN;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_autoneg = ETH_LINK_SPEED_FIXED;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_SPEED_FIXED;
 
 	if (internals->rxmac[0] != NULL) {
 		nc_rxmac_read_status(internals->rxmac[0], &status);
 
 		switch (status.speed) {
 		case MAC_SPEED_10G:
-			link.link_speed = ETH_SPEED_NUM_10G;
+			link.link_speed = RTE_ETH_SPEED_NUM_10G;
 			break;
 		case MAC_SPEED_40G:
-			link.link_speed = ETH_SPEED_NUM_40G;
+			link.link_speed = RTE_ETH_SPEED_NUM_40G;
 			break;
 		case MAC_SPEED_100G:
-			link.link_speed = ETH_SPEED_NUM_100G;
+			link.link_speed = RTE_ETH_SPEED_NUM_100G;
 			break;
 		default:
-			link.link_speed = ETH_SPEED_NUM_NONE;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 			break;
 		}
 	}
@@ -296,7 +296,7 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
 		nc_rxmac_read_status(internals->rxmac[i], &status);
 
 		if (status.enabled && status.link_up) {
-			link.link_status = ETH_LINK_UP;
+			link.link_status = RTE_ETH_LINK_UP;
 			break;
 		}
 	}
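
The nfb link_update above maps firmware speed codes onto the renamed
RTE_ETH_SPEED_NUM_* values and reports status and duplex with the
RTE_ETH_LINK_* constants. A sketch of the matching read path in an
application (port selection and output format illustrative):

#include <stdio.h>
#include <rte_ethdev.h>

/* Print the renamed link-status fields for one port; a sketch. */
static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;
	printf("port %u: %s, %u Mbps, %s\n", port_id,
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
	       link.link_speed,
	       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
	       "full-duplex" : "half-duplex");
}
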
diff --git a/drivers/net/nfb/nfb_rx.c b/drivers/net/nfb/nfb_rx.c
index 3ebb332ae46c..f76e2ba64621 100644
--- a/drivers/net/nfb/nfb_rx.c
+++ b/drivers/net/nfb/nfb_rx.c
@@ -42,7 +42,7 @@ nfb_check_timestamp(struct rte_devargs *devargs)
 	}
 	/* Timestamps are enabled when there is
 	 * key-value pair: enable_timestamp=1
-	 * TODO: timestamp should be enabled with DEV_RX_OFFLOAD_TIMESTAMP
+	 * TODO: timestamp should be enabled with RTE_ETH_RX_OFFLOAD_TIMESTAMP
 	 */
 	if (rte_kvargs_process(kvlist, TIMESTAMP_ARG,
 		timestamp_check_handler, NULL) < 0) {
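
The TODO above points at the post-rename way to request Rx timestamping. A
sketch of that path, assuming the port advertises the capability (reading the
per-mbuf timestamp additionally requires registering the timestamp dynfield,
omitted here; the function name is illustrative):

#include <errno.h>
#include <rte_ethdev.h>

/* Request Rx timestamping via the renamed offload flag; a sketch. */
static int
configure_with_timestamp(uint16_t port_id, struct rte_eth_conf *conf,
			 uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return -1;
	if (!(dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
		return -ENOTSUP;
	conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, conf);
}
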
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 0003fd54dde5..3ea697c54462 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -160,8 +160,8 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Checking TX mode */
 	if (txmode->mq_mode) {
@@ -170,7 +170,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	}
 
 	/* Checking RX mode */
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS &&
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS &&
 	    !(hw->cap & NFP_NET_CFG_CTRL_RSS)) {
 		PMD_INIT_LOG(INFO, "RSS not supported");
 		return -EINVAL;
@@ -359,19 +359,19 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
 		if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
 			ctrl |= NFP_NET_CFG_CTRL_RXCSUM;
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 		if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
 			ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
 	}
 
 	hw->mtu = dev->data->mtu;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
 
 	/* L2 broadcast */
@@ -383,13 +383,13 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 		ctrl |= NFP_NET_CFG_CTRL_L2MC;
 
 	/* TX checksum offload */
-	if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 		ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
 
 	/* LSO offload */
-	if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		if (hw->cap & NFP_NET_CFG_CTRL_LSO)
 			ctrl |= NFP_NET_CFG_CTRL_LSO;
 		else
@@ -397,7 +397,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	}
 
 	/* RX gather */
-	if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		ctrl |= NFP_NET_CFG_CTRL_GATHER;
 
 	return ctrl;
@@ -485,14 +485,14 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 	int ret;
 
 	static const uint32_t ls_to_ethtool[] = {
-		[NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = ETH_SPEED_NUM_NONE,
-		[NFP_NET_CFG_STS_LINK_RATE_UNKNOWN]     = ETH_SPEED_NUM_NONE,
-		[NFP_NET_CFG_STS_LINK_RATE_1G]          = ETH_SPEED_NUM_1G,
-		[NFP_NET_CFG_STS_LINK_RATE_10G]         = ETH_SPEED_NUM_10G,
-		[NFP_NET_CFG_STS_LINK_RATE_25G]         = ETH_SPEED_NUM_25G,
-		[NFP_NET_CFG_STS_LINK_RATE_40G]         = ETH_SPEED_NUM_40G,
-		[NFP_NET_CFG_STS_LINK_RATE_50G]         = ETH_SPEED_NUM_50G,
-		[NFP_NET_CFG_STS_LINK_RATE_100G]        = ETH_SPEED_NUM_100G,
+		[NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = RTE_ETH_SPEED_NUM_NONE,
+		[NFP_NET_CFG_STS_LINK_RATE_UNKNOWN]     = RTE_ETH_SPEED_NUM_NONE,
+		[NFP_NET_CFG_STS_LINK_RATE_1G]          = RTE_ETH_SPEED_NUM_1G,
+		[NFP_NET_CFG_STS_LINK_RATE_10G]         = RTE_ETH_SPEED_NUM_10G,
+		[NFP_NET_CFG_STS_LINK_RATE_25G]         = RTE_ETH_SPEED_NUM_25G,
+		[NFP_NET_CFG_STS_LINK_RATE_40G]         = RTE_ETH_SPEED_NUM_40G,
+		[NFP_NET_CFG_STS_LINK_RATE_50G]         = RTE_ETH_SPEED_NUM_50G,
+		[NFP_NET_CFG_STS_LINK_RATE_100G]        = RTE_ETH_SPEED_NUM_100G,
 	};
 
 	PMD_DRV_LOG(DEBUG, "Link update");
@@ -504,15 +504,15 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 	memset(&link, 0, sizeof(struct rte_eth_link));
 
 	if (nn_link_status & NFP_NET_CFG_STS_LINK)
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	nn_link_status = (nn_link_status >> NFP_NET_CFG_STS_LINK_RATE_SHIFT) &
 			 NFP_NET_CFG_STS_LINK_RATE_MASK;
 
 	if (nn_link_status >= RTE_DIM(ls_to_ethtool))
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	else
 		link.link_speed = ls_to_ethtool[nn_link_status];
 
@@ -701,26 +701,26 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = 1;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
-		dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+		dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM |
-					     DEV_RX_OFFLOAD_UDP_CKSUM |
-					     DEV_RX_OFFLOAD_TCP_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+					     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+					     RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)
-		dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_TXCSUM)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM |
-					     DEV_TX_OFFLOAD_UDP_CKSUM |
-					     DEV_TX_OFFLOAD_TCP_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+					     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+					     RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_LSO_ANY)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_GATHER)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -757,22 +757,22 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	};
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
-		dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
-						   ETH_RSS_NONFRAG_IPV4_TCP |
-						   ETH_RSS_NONFRAG_IPV4_UDP |
-						   ETH_RSS_IPV6 |
-						   ETH_RSS_NONFRAG_IPV6_TCP |
-						   ETH_RSS_NONFRAG_IPV6_UDP;
+		dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+						   RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+						   RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+						   RTE_ETH_RSS_IPV6 |
+						   RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+						   RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 		dev_info->reta_size = NFP_NET_CFG_RSS_ITBL_SZ;
 		dev_info->hash_key_size = NFP_NET_CFG_RSS_KEY_SZ;
 	}
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			       ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
-			       ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			       RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+			       RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -843,7 +843,7 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 	if (link.link_status)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 			    dev->data->port_id, link.link_speed,
-			    link.link_duplex == ETH_LINK_FULL_DUPLEX
+			    link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX
 			    ? "full-duplex" : "half-duplex");
 	else
 		PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -973,12 +973,12 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	new_ctrl = 0;
 
 	/* Enable vlan strip if it is not configured yet */
-	if ((mask & ETH_VLAN_STRIP_OFFLOAD) &&
+	if ((mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
 	    !(hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
 		new_ctrl = hw->ctrl | NFP_NET_CFG_CTRL_RXVLAN;
 
 	/* Disable vlan strip just if it is configured */
-	if (!(mask & ETH_VLAN_STRIP_OFFLOAD) &&
+	if (!(mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
 	    (hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
 		new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_RXVLAN;
 
@@ -1018,8 +1018,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 	 */
 	for (i = 0; i < reta_size; i += 4) {
 		/* Handling 4 RSS entries per loop */
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
 
 		if (!mask)
@@ -1099,8 +1099,8 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 	 */
 	for (i = 0; i < reta_size; i += 4) {
 		/* Handling 4 RSS entries per loop */
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
 
 		if (!mask)
@@ -1138,22 +1138,22 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 
 	rss_hf = rss_conf->rss_hf;
 
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_TCP;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_UDP;
 
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_TCP;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_UDP;
 
 	cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK;
@@ -1223,22 +1223,22 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 	cfg_rss_ctrl = nn_cfg_readl(hw, NFP_NET_CFG_RSS_CTRL);
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 	/* Propagate current RSS hash functions to caller */
 	rss_conf->rss_hf = rss_hf;
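
The nfp RETA hunks keep the standard indexing idiom over
struct rte_eth_rss_reta_entry64, now spelled with RTE_ETH_RETA_GROUP_SIZE
(still 64 entries per group). A sketch of the same idiom from the application
side, steering every entry to queue 0 — the queue choice is illustrative, and
reta_size is assumed to be at most RTE_ETH_RSS_RETA_SIZE_512:

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

/* Point every RETA entry at queue 0 using the renamed group-size macro. */
static int
reta_all_to_queue0(uint16_t port_id, uint16_t reta_size)
{
	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
						  RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < reta_size; i++) {
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

		reta_conf[idx].mask |= UINT64_C(1) << shift;
		reta_conf[idx].reta[shift] = 0; /* queue 0 */
	}
	return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
}
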
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 1169ea77a8c7..e08e594b04fe 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -141,7 +141,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 62cb3536e0c9..817fe64dbceb 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -103,7 +103,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS;
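
Both nfp start paths key default RSS setup off the renamed RTE_ETH_MQ_RX_RSS
mode bit. The matching application-side request is made at configure time; a
sketch (the hash-function selection is illustrative):

#include <rte_ethdev.h>

/* Request RSS distribution using the renamed mq-mode and hash names. */
static void
conf_enable_rss(struct rte_eth_conf *conf)
{
	conf->rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
	conf->rx_adv_conf.rss_conf.rss_key = NULL; /* let the PMD choose */
	conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
					    RTE_ETH_RSS_TCP |
					    RTE_ETH_RSS_UDP;
}
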
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3b5c6615adfa..fc76b84b5b66 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -409,7 +409,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 	dev->data->dev_link.link_status = link_up;
 
 	link_speeds = &dev->data->dev_conf.link_speeds;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG)
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG)
 		negotiate = true;
 
 	err = hw->mac.get_link_capabilities(hw, &speed, &negotiate);
@@ -418,11 +418,11 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 
 	allowed_speeds = 0;
 	if (hw->mac.default_speeds & NGBE_LINK_SPEED_1GB_FULL)
-		allowed_speeds |= ETH_LINK_SPEED_1G;
+		allowed_speeds |= RTE_ETH_LINK_SPEED_1G;
 	if (hw->mac.default_speeds & NGBE_LINK_SPEED_100M_FULL)
-		allowed_speeds |= ETH_LINK_SPEED_100M;
+		allowed_speeds |= RTE_ETH_LINK_SPEED_100M;
 	if (hw->mac.default_speeds & NGBE_LINK_SPEED_10M_FULL)
-		allowed_speeds |= ETH_LINK_SPEED_10M;
+		allowed_speeds |= RTE_ETH_LINK_SPEED_10M;
 
 	if (*link_speeds & ~allowed_speeds) {
 		PMD_INIT_LOG(ERR, "Invalid link setting");
@@ -430,14 +430,14 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 	}
 
 	speed = 0x0;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		speed = hw->mac.default_speeds;
 	} else {
-		if (*link_speeds & ETH_LINK_SPEED_1G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed |= NGBE_LINK_SPEED_1GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_100M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed |= NGBE_LINK_SPEED_100M_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_10M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
 			speed |= NGBE_LINK_SPEED_10M_FULL;
 	}
 
@@ -653,8 +653,8 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->rx_desc_lim = rx_desc_lim;
 	dev_info->tx_desc_lim = tx_desc_lim;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_10M;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_10M;
 
 	/* Driver-preferred Rx/Tx parameters */
 	dev_info->default_rxportconf.burst_size = 32;
@@ -682,11 +682,11 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 	int wait = 1;
 
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			~ETH_LINK_SPEED_AUTONEG);
+			~RTE_ETH_LINK_SPEED_AUTONEG);
 
 	hw->mac.get_link_status = true;
 
@@ -699,8 +699,8 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 
 	err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
 	if (err != 0) {
-		link.link_speed = ETH_SPEED_NUM_NONE;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
@@ -708,27 +708,27 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 		return rte_eth_linkstatus_set(dev, &link);
 
 	intr->flags &= ~NGBE_FLAG_NEED_LINK_CONFIG;
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (link_speed) {
 	default:
 	case NGBE_LINK_SPEED_UNKNOWN:
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 
 	case NGBE_LINK_SPEED_10M_FULL:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		lan_speed = 0;
 		break;
 
 	case NGBE_LINK_SPEED_100M_FULL:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		lan_speed = 1;
 		break;
 
 	case NGBE_LINK_SPEED_1GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		lan_speed = 2;
 		break;
 	}
@@ -912,11 +912,11 @@ ngbe_dev_link_status_print(struct rte_eth_dev *dev)
 
 	rte_eth_linkstatus_get(dev, &link);
 
-	if (link.link_status == ETH_LINK_UP) {
+	if (link.link_status == RTE_ETH_LINK_UP) {
 		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned int)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -956,7 +956,7 @@ ngbe_dev_interrupt_action(struct rte_eth_dev *dev)
 		ngbe_dev_link_update(dev, 0);
 
 		/* likely to up */
-		if (link.link_status != ETH_LINK_UP)
+		if (link.link_status != RTE_ETH_LINK_UP)
 			/* handle it 1 sec later, wait it being stable */
 			timeout = NGBE_LINK_UP_CHECK_TIMEOUT;
 		/* likely to down */
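
The ngbe start path above validates the renamed RTE_ETH_LINK_SPEED_* request
bitmap against what the MAC supports before programming a speed. On the
application side the bitmap is dev_conf.link_speeds; a sketch of forcing a
fixed 1 Gb/s link (per the ethdev contract, RTE_ETH_LINK_SPEED_FIXED plus
exactly one speed bit disables autonegotiation):

#include <rte_ethdev.h>

/* Request a fixed 1 Gb/s link instead of autonegotiation; a sketch. */
static void
conf_fixed_1g(struct rte_eth_conf *conf)
{
	/* RTE_ETH_LINK_SPEED_AUTONEG (0) would keep negotiation enabled. */
	conf->link_speeds = RTE_ETH_LINK_SPEED_FIXED | RTE_ETH_LINK_SPEED_1G;
}
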
diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index 25b9e5b1ce1b..ca03469d0e6d 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -61,16 +61,16 @@ struct pmd_internals {
 	rte_spinlock_t rss_lock;
 
 	uint16_t reta_size;
-	struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_128 /
-			RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_128 /
+			RTE_ETH_RETA_GROUP_SIZE];
 
 	uint8_t rss_key[40];                /**< 40-byte hash key. */
 };
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(eth_null_logtype, NOTICE);
@@ -189,7 +189,7 @@ eth_dev_start(struct rte_eth_dev *dev)
 	if (dev == NULL)
 		return -EINVAL;
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -199,7 +199,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
 	if (dev == NULL)
 		return 0;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -391,9 +391,9 @@ eth_rss_reta_update(struct rte_eth_dev *dev,
 	rte_spinlock_lock(&internal->rss_lock);
 
 	/* Copy RETA table */
-	for (i = 0; i < (internal->reta_size / RTE_RETA_GROUP_SIZE); i++) {
+	for (i = 0; i < (internal->reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
 		internal->reta_conf[i].mask = reta_conf[i].mask;
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				internal->reta_conf[i].reta[j] = reta_conf[i].reta[j];
 	}
@@ -416,8 +416,8 @@ eth_rss_reta_query(struct rte_eth_dev *dev,
 	rte_spinlock_lock(&internal->rss_lock);
 
 	/* Copy RETA table */
-	for (i = 0; i < (internal->reta_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (internal->reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = internal->reta_conf[i].reta[j];
 	}
@@ -548,8 +548,8 @@ eth_dev_null_create(struct rte_vdev_device *dev, struct pmd_options *args)
 	internals->port_id = eth_dev->data->port_id;
 	rte_eth_random_addr(internals->eth_addr.addr_bytes);
 
-	internals->flow_type_rss_offloads =  ETH_RSS_PROTO_MASK;
-	internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_RETA_GROUP_SIZE;
+	internals->flow_type_rss_offloads =  RTE_ETH_RSS_PROTO_MASK;
+	internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_ETH_RETA_GROUP_SIZE;
 
 	rte_memcpy(internals->rss_key, default_rss_key, 40);
 
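The null PMD above sizes its RETA storage at compile time:
RTE_ETH_RSS_RETA_SIZE_128 / RTE_ETH_RETA_GROUP_SIZE yields two 64-entry
groups, and RTE_DIM() turns that back into a reta_size of 128. The same
arithmetic in isolation, as a sketch:

#include <stdint.h>
#include <rte_common.h>
#include <rte_ethdev.h>

/* Two 64-entry groups back a 128-entry RSS indirection table. */
static struct rte_eth_rss_reta_entry64
	reta_conf[RTE_ETH_RSS_RETA_SIZE_128 / RTE_ETH_RETA_GROUP_SIZE];

static const uint16_t reta_size =
	RTE_DIM(reta_conf) * RTE_ETH_RETA_GROUP_SIZE; /* == 128 */
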
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index f578123ed00b..5b8cbec67b5d 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -158,7 +158,7 @@ octeontx_link_status_print(struct rte_eth_dev *eth_dev,
 		octeontx_log_info("Port %u: Link Up - speed %u Mbps - %s",
 			  (eth_dev->data->port_id),
 			  link->link_speed,
-			  link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+			  link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 			  "full-duplex" : "half-duplex");
 	else
 		octeontx_log_info("Port %d: Link Down",
@@ -171,38 +171,38 @@ octeontx_link_status_update(struct octeontx_nic *nic,
 {
 	memset(link, 0, sizeof(*link));
 
-	link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	switch (nic->speed) {
 	case OCTEONTX_LINK_SPEED_SGMII:
-		link->link_speed = ETH_SPEED_NUM_1G;
+		link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case OCTEONTX_LINK_SPEED_XAUI:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 
 	case OCTEONTX_LINK_SPEED_RXAUI:
 	case OCTEONTX_LINK_SPEED_10G_R:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case OCTEONTX_LINK_SPEED_QSGMII:
-		link->link_speed = ETH_SPEED_NUM_5G;
+		link->link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 	case OCTEONTX_LINK_SPEED_40G_R:
-		link->link_speed = ETH_SPEED_NUM_40G;
+		link->link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 
 	case OCTEONTX_LINK_SPEED_RESERVE1:
 	case OCTEONTX_LINK_SPEED_RESERVE2:
 	default:
-		link->link_speed = ETH_SPEED_NUM_NONE;
+		link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 		octeontx_log_err("incorrect link speed %d", nic->speed);
 		break;
 	}
 
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
-	link->link_autoneg = ETH_LINK_AUTONEG;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 }
 
 static void
@@ -355,20 +355,20 @@ octeontx_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
 	uint16_t flags = 0;
 
-	if (nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= OCCTX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (nic->tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= OCCTX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(nic->tx_offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= OCCTX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (nic->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= OCCTX_TX_MULTI_SEG_F;
 
 	return flags;
@@ -380,21 +380,21 @@ octeontx_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
 	uint16_t flags = 0;
 
-	if (nic->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
-			 DEV_RX_OFFLOAD_UDP_CKSUM))
+	if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			 RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= OCCTX_RX_OFFLOAD_CSUM_F;
 
-	if (nic->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= OCCTX_RX_OFFLOAD_CSUM_F;
 
-	if (nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		flags |= OCCTX_RX_MULTI_SEG_F;
 		eth_dev->data->scattered_rx = 1;
 		/* If scatter mode is enabled, TX should also be in multi
 		 * seg mode, else memory leak will occur
 		 */
-		nic->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		nic->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	}
 
 	return flags;
@@ -423,18 +423,18 @@ octeontx_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-		rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+		rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		octeontx_log_err("unsupported rx qmode %d", rxmode->mq_mode);
 		return -EINVAL;
 	}
 
-	if (!(txmode->offloads & DEV_TX_OFFLOAD_MT_LOCKFREE)) {
+	if (!(txmode->offloads & RTE_ETH_TX_OFFLOAD_MT_LOCKFREE)) {
 		PMD_INIT_LOG(NOTICE, "cant disable lockfree tx");
-		txmode->offloads |= DEV_TX_OFFLOAD_MT_LOCKFREE;
+		txmode->offloads |= RTE_ETH_TX_OFFLOAD_MT_LOCKFREE;
 	}
 
-	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		octeontx_log_err("setting link speed/duplex not supported");
 		return -EINVAL;
 	}
@@ -530,13 +530,13 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	 * when this feature has not been enabled before.
 	 */
 	if (data->dev_started && frame_size > buffsz &&
-	    !(nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	    !(nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		octeontx_log_err("Scatter mode is disabled");
 		return -EINVAL;
 	}
 
 	/* Check <seg size> * <max_seg>  >= max_frame */
-	if ((nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER)	&&
+	if ((nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	&&
 	    (frame_size > buffsz * OCCTX_RX_NB_SEG_MAX))
 		return -EINVAL;
 
@@ -571,7 +571,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
 
 	/* Setup scatter mode if needed by jumbo */
 	if (data->mtu > buffsz) {
-		nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+		nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 		nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
 		nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
 	}
@@ -843,10 +843,10 @@ octeontx_dev_info(struct rte_eth_dev *dev,
 	struct octeontx_nic *nic = octeontx_pmd_priv(dev);
 
 	/* Autonegotiation may be disabled */
-	dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
-	dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			ETH_LINK_SPEED_40G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_40G;
 
 	/* Min/Max MTU supported */
 	dev_info->min_rx_bufsize = OCCTX_MIN_FRS;
@@ -1356,7 +1356,7 @@ octeontx_create(struct rte_vdev_device *dev, int port, uint8_t evdev,
 	nic->ev_ports = 1;
 	nic->print_flag = -1;
 
-	data->dev_link.link_status = ETH_LINK_DOWN;
+	data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	data->dev_started = 0;
 	data->promiscuous = 0;
 	data->all_multicast = 0;
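
octeontx ties scattered Rx to the MTU/buffer relationship: once a frame no
longer fits one mbuf, RTE_ETH_RX_OFFLOAD_SCATTER must be on, and the driver
also forces multi-segment Tx to avoid leaking chained mbufs. A sketch of the
same check done up front by an application (the L2 overhead constant is
illustrative):

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define L2_OVERHEAD 18 /* illustrative: Ethernet header + CRC + one VLAN */

/* Enable scattered Rx when one mbuf cannot hold a full frame. */
static void
conf_scatter_if_needed(struct rte_eth_conf *conf, uint16_t mtu,
		       uint16_t mbuf_data_room)
{
	uint32_t buffsz = mbuf_data_room - RTE_PKTMBUF_HEADROOM;

	if ((uint32_t)mtu + L2_OVERHEAD > buffsz) {
		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
	}
}
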
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index 3a02824e3948..c493fa7a03ed 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -55,23 +55,22 @@
 #define OCCTX_MAX_MTU		(OCCTX_MAX_FRS - OCCTX_L2_OVERHEAD)
 
 #define OCTEONTX_RX_OFFLOADS		(				   \
-					 DEV_RX_OFFLOAD_CHECKSUM	 | \
-					 DEV_RX_OFFLOAD_SCTP_CKSUM       | \
-					 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-					 DEV_RX_OFFLOAD_SCATTER	         | \
-					 DEV_RX_OFFLOAD_SCATTER		 | \
-					 DEV_RX_OFFLOAD_VLAN_FILTER)
+					 RTE_ETH_RX_OFFLOAD_CHECKSUM	 | \
+					 RTE_ETH_RX_OFFLOAD_SCTP_CKSUM       | \
+					 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+					 RTE_ETH_RX_OFFLOAD_SCATTER	         | \
+					 RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 
 #define OCTEONTX_TX_OFFLOADS		(				   \
-					 DEV_TX_OFFLOAD_MBUF_FAST_FREE	 | \
-					 DEV_TX_OFFLOAD_MT_LOCKFREE	 | \
-					 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-					 DEV_TX_OFFLOAD_OUTER_UDP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_IPV4_CKSUM	 | \
-					 DEV_TX_OFFLOAD_TCP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_UDP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_SCTP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_MULTI_SEGS)
+					 RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE	 | \
+					 RTE_ETH_TX_OFFLOAD_MT_LOCKFREE	 | \
+					 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+					 RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_TCP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_UDP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 static inline struct octeontx_nic *
 octeontx_pmd_priv(struct rte_eth_dev *dev)
diff --git a/drivers/net/octeontx/octeontx_ethdev_ops.c b/drivers/net/octeontx/octeontx_ethdev_ops.c
index dbe13ce3826b..6ec2b71b0672 100644
--- a/drivers/net/octeontx/octeontx_ethdev_ops.c
+++ b/drivers/net/octeontx/octeontx_ethdev_ops.c
@@ -43,20 +43,20 @@ octeontx_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	rxmode = &dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 			rc = octeontx_vlan_hw_filter(nic, true);
 			if (rc)
 				goto done;
 
-			nic->rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+			nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			nic->rx_offload_flags |= OCCTX_RX_VLAN_FLTR_F;
 		} else {
 			rc = octeontx_vlan_hw_filter(nic, false);
 			if (rc)
 				goto done;
 
-			nic->rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+			nic->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			nic->rx_offload_flags &= ~OCCTX_RX_VLAN_FLTR_F;
 		}
 	}
@@ -139,7 +139,7 @@ octeontx_dev_vlan_offload_init(struct rte_eth_dev *dev)
 
 	TAILQ_INIT(&nic->vlan_info.fltr_tbl);
 
-	rc = octeontx_dev_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+	rc = octeontx_dev_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
 	if (rc)
 		octeontx_log_err("Failed to set vlan offload rc=%d", rc);
 
@@ -219,13 +219,13 @@ octeontx_dev_flow_ctrl_get(struct rte_eth_dev *dev,
 		return rc;
 
 	if (conf.rx_pause && conf.tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (conf.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (conf.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	/* low_water & high_water values are in Bytes */
 	fc_conf->low_water = conf.low_water;
@@ -272,10 +272,10 @@ octeontx_dev_flow_ctrl_set(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-			(fc_conf->mode == RTE_FC_RX_PAUSE);
-	tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-			(fc_conf->mode == RTE_FC_TX_PAUSE);
+	rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+			(fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+	tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+			(fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
 
 	conf.high_water = fc_conf->high_water;
 	conf.low_water = fc_conf->low_water;
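
The octeontx flow-control ops above show the canonical two-way mapping
between the (rx_pause, tx_pause) pair and the renamed enum rte_eth_fc_mode
values. One direction of that mapping as a standalone sketch:

#include <stdbool.h>
#include <rte_ethdev.h>

/* Map pause-frame enables onto the renamed flow-control modes. */
static enum rte_eth_fc_mode
fc_mode_from_pause(bool rx_pause, bool tx_pause)
{
	if (rx_pause && tx_pause)
		return RTE_ETH_FC_FULL;
	if (rx_pause)
		return RTE_ETH_FC_RX_PAUSE;
	if (tx_pause)
		return RTE_ETH_FC_TX_PAUSE;
	return RTE_ETH_FC_NONE;
}
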
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index f491e20e95c1..060d267f5de5 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -21,7 +21,7 @@ nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
 
 	if (otx2_dev_is_vf(dev) ||
 	    dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG)
-		capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+		capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	return capa;
 }
@@ -33,10 +33,10 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
 
 	/* TSO not supported for earlier chip revisions */
 	if (otx2_dev_is_96xx_A0(dev) || otx2_dev_is_95xx_Ax(dev))
-		capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
-			  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			  DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-			  DEV_TX_OFFLOAD_GRE_TNL_TSO);
+		capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+			  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
 	return capa;
 }
 
@@ -66,8 +66,8 @@ nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
 	req->npa_func = otx2_npa_pf_func_get();
 	req->sso_func = otx2_sso_pf_func_get();
 	req->rx_cfg = BIT_ULL(35 /* DIS_APAD */);
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
-			 DEV_RX_OFFLOAD_UDP_CKSUM)) {
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			 RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
 		req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */);
 		req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */);
 	}
@@ -373,7 +373,7 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
 
 	aq->rq.sso_ena = 0;
 
-	if (rxq->offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		aq->rq.ipsech_ena = 1;
 
 	aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */
@@ -665,7 +665,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
 	 * These are needed in deriving raw clock value from tsc counter.
 	 * read_clock eth op returns raw clock value.
 	 */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
 	    otx2_ethdev_is_ptp_en(dev)) {
 		rc = otx2_nix_raw_clock_tsc_conv(dev);
 		if (rc) {
@@ -692,7 +692,7 @@ nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
 	 * Maximum three segments can be supported with W8, Choose
 	 * NIX_MAXSQESZ_W16 for multi segment offload.
 	 */
-	if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		return NIX_MAXSQESZ_W16;
 	else
 		return NIX_MAXSQESZ_W8;
@@ -707,29 +707,29 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rxmode *rxmode = &conf->rxmode;
 	uint16_t flags = 0;
 
-	if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
-			(dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+			(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		flags |= NIX_RX_OFFLOAD_RSS_F;
 
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
-			 DEV_RX_OFFLOAD_UDP_CKSUM))
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			 RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		flags |= NIX_RX_MULTI_SEG_F;
 
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
-				DEV_RX_OFFLOAD_QINQ_STRIP))
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				RTE_ETH_RX_OFFLOAD_QINQ_STRIP))
 		flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		flags |= NIX_RX_OFFLOAD_SECURITY_F;
 
 	if (!dev->ptype_disable)
@@ -768,43 +768,43 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
 			 offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
 
-	if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
-	    conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+	    conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
 
-	if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= NIX_TX_MULTI_SEG_F;
 
 	/* Enable Inner checksum for TSO */
-	if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+	if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		flags |= (NIX_TX_OFFLOAD_TSO_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
 	/* Enable Inner and Outer checksum for Tunnel TSO */
-	if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		    DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		    DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		flags |= (NIX_TX_OFFLOAD_TSO_F |
 			  NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
-	if (conf & DEV_TX_OFFLOAD_SECURITY)
+	if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
 		flags |= NIX_TX_OFFLOAD_SECURITY_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
 	return flags;
@@ -914,8 +914,8 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
 	buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
 
 	if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
-		dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
-		dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+		dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 		/* Setting up the rx[tx]_offload_flags due to change
 		 * in rx[tx]_offloads.
@@ -1848,21 +1848,21 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
 		goto fail_configure;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-	    rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+	    rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode);
 		goto fail_configure;
 	}
 
-	if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		otx2_err("Unsupported mq tx mode %d", txmode->mq_mode);
 		goto fail_configure;
 	}
 
 	if (otx2_dev_is_Ax(dev) &&
-	    (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
-	    ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
-	    (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+	    ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
 		otx2_err("Outer IP and SCTP checksum unsupported");
 		goto fail_configure;
 	}
@@ -2235,7 +2235,7 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
 	 * enabled in PF owning this VF
 	 */
 	memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info));
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
 	    otx2_ethdev_is_ptp_en(dev))
 		otx2_nix_timesync_enable(eth_dev);
 	else
@@ -2563,8 +2563,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
 	rc = otx2_eth_sec_ctx_create(eth_dev);
 	if (rc)
 		goto free_mac_addrs;
-	dev->tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
-	dev->rx_offload_capa |= DEV_RX_OFFLOAD_SECURITY;
+	dev->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
+	dev->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SECURITY;
 
 	/* Initialize rte-flow */
 	rc = otx2_flow_init(dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 4557a0ee1945..a5282c6c1231 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -117,43 +117,43 @@
 #define CQ_TIMER_THRESH_DEFAULT	0xAULL /* ~1usec i.e (0xA * 100nsec) */
 #define CQ_TIMER_THRESH_MAX     255
 
-#define NIX_RSS_L3_L4_SRC_DST  (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY \
-				| ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define NIX_RSS_L3_L4_SRC_DST  (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY \
+				| RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
 
-#define NIX_RSS_OFFLOAD		(ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
-				 ETH_RSS_TCP | ETH_RSS_SCTP | \
-				 ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD | \
-				 NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | \
-				 ETH_RSS_C_VLAN)
+#define NIX_RSS_OFFLOAD		(RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |\
+				 RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | \
+				 RTE_ETH_RSS_TUNNEL | RTE_ETH_RSS_L2_PAYLOAD | \
+				 NIX_RSS_L3_L4_SRC_DST | RTE_ETH_RSS_LEVEL_MASK | \
+				 RTE_ETH_RSS_C_VLAN)
 
 #define NIX_TX_OFFLOAD_CAPA ( \
-	DEV_TX_OFFLOAD_MBUF_FAST_FREE	| \
-	DEV_TX_OFFLOAD_MT_LOCKFREE	| \
-	DEV_TX_OFFLOAD_VLAN_INSERT	| \
-	DEV_TX_OFFLOAD_QINQ_INSERT	| \
-	DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM	| \
-	DEV_TX_OFFLOAD_OUTER_UDP_CKSUM	| \
-	DEV_TX_OFFLOAD_TCP_CKSUM	| \
-	DEV_TX_OFFLOAD_UDP_CKSUM	| \
-	DEV_TX_OFFLOAD_SCTP_CKSUM	| \
-	DEV_TX_OFFLOAD_TCP_TSO		| \
-	DEV_TX_OFFLOAD_VXLAN_TNL_TSO    | \
-	DEV_TX_OFFLOAD_GENEVE_TNL_TSO   | \
-	DEV_TX_OFFLOAD_GRE_TNL_TSO	| \
-	DEV_TX_OFFLOAD_MULTI_SEGS	| \
-	DEV_TX_OFFLOAD_IPV4_CKSUM)
+	RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE	| \
+	RTE_ETH_TX_OFFLOAD_MT_LOCKFREE	| \
+	RTE_ETH_TX_OFFLOAD_VLAN_INSERT	| \
+	RTE_ETH_TX_OFFLOAD_QINQ_INSERT	| \
+	RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_TCP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_UDP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_SCTP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_TCP_TSO		| \
+	RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO    | \
+	RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO   | \
+	RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO	| \
+	RTE_ETH_TX_OFFLOAD_MULTI_SEGS	| \
+	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 
 #define NIX_RX_OFFLOAD_CAPA ( \
-	DEV_RX_OFFLOAD_CHECKSUM		| \
-	DEV_RX_OFFLOAD_SCTP_CKSUM	| \
-	DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-	DEV_RX_OFFLOAD_SCATTER		| \
-	DEV_RX_OFFLOAD_OUTER_UDP_CKSUM	| \
-	DEV_RX_OFFLOAD_VLAN_STRIP	| \
-	DEV_RX_OFFLOAD_VLAN_FILTER	| \
-	DEV_RX_OFFLOAD_QINQ_STRIP	| \
-	DEV_RX_OFFLOAD_TIMESTAMP	| \
-	DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RX_OFFLOAD_CHECKSUM		| \
+	RTE_ETH_RX_OFFLOAD_SCTP_CKSUM	| \
+	RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+	RTE_ETH_RX_OFFLOAD_SCATTER		| \
+	RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM	| \
+	RTE_ETH_RX_OFFLOAD_VLAN_STRIP	| \
+	RTE_ETH_RX_OFFLOAD_VLAN_FILTER	| \
+	RTE_ETH_RX_OFFLOAD_QINQ_STRIP	| \
+	RTE_ETH_RX_OFFLOAD_TIMESTAMP	| \
+	RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define NIX_DEFAULT_RSS_CTX_GROUP  0
 #define NIX_DEFAULT_RSS_MCAM_IDX  -1
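
otx2 keeps its advertised Rx/Tx capabilities as single OR-mask macros, the
pattern most PMDs use; after the rename they are composed from
RTE_ETH_RX_OFFLOAD_*/RTE_ETH_TX_OFFLOAD_*. A condensed sketch of advertising
such masks (macro names and contents are illustrative, not the otx2
definitions):

#include <rte_ethdev.h>

#define SKETCH_RX_OFFLOAD_CAPA (RTE_ETH_RX_OFFLOAD_CHECKSUM | \
				RTE_ETH_RX_OFFLOAD_SCATTER  | \
				RTE_ETH_RX_OFFLOAD_RSS_HASH)
#define SKETCH_TX_OFFLOAD_CAPA (RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
				RTE_ETH_TX_OFFLOAD_TCP_CKSUM)

/* Fill the capability fields a dev_infos_get callback would report. */
static void
sketch_fill_dev_info(struct rte_eth_dev_info *dev_info)
{
	dev_info->rx_offload_capa = SKETCH_RX_OFFLOAD_CAPA;
	dev_info->tx_offload_capa = SKETCH_TX_OFFLOAD_CAPA;
}
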
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index 83f905315b38..60bf6c3f5f05 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -49,12 +49,12 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
 
 	val = atoi(value);
 
-	if (val <= ETH_RSS_RETA_SIZE_64)
-		val = ETH_RSS_RETA_SIZE_64;
-	else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
-		val = ETH_RSS_RETA_SIZE_128;
-	else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
-		val = ETH_RSS_RETA_SIZE_256;
+	if (val <= RTE_ETH_RSS_RETA_SIZE_64)
+		val = RTE_ETH_RSS_RETA_SIZE_64;
+	else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
+		val = RTE_ETH_RSS_RETA_SIZE_128;
+	else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
+		val = RTE_ETH_RSS_RETA_SIZE_256;
 	else
 		val = NIX_RSS_RETA_SIZE;
 
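The devargs parser above rounds a requested RETA size up to the nearest
standard tier; the renamed constants are plain integers (64, 128, 256), so
the clamp is simple arithmetic. A standalone sketch (the fallback for
oversized requests is illustrative — the driver substitutes its own
NIX_RSS_RETA_SIZE there):

#include <stdint.h>
#include <rte_ethdev.h>

/* Round a requested RETA size up to a standard tier; a sketch. */
static uint16_t
clamp_reta_size(uint16_t val)
{
	if (val <= RTE_ETH_RSS_RETA_SIZE_64)
		return RTE_ETH_RSS_RETA_SIZE_64;
	if (val <= RTE_ETH_RSS_RETA_SIZE_128)
		return RTE_ETH_RSS_RETA_SIZE_128;
	return RTE_ETH_RSS_RETA_SIZE_256; /* fallback tier; illustrative */
}
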
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 22a8af5cba45..d5caaa326a5a 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -26,11 +26,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	 * when this feature has not been enabled before.
 	 */
 	if (data->dev_started && frame_size > buffsz &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER))
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER))
 		return -EINVAL;
 
 	/* Check <seg size> * <max_seg>  >= max_frame */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)	&&
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	&&
 	    (frame_size > buffsz * NIX_RX_NB_SEG_MAX))
 		return -EINVAL;
 
@@ -568,17 +568,17 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
 	};
 
 	/* Auto negotiation disabled */
-	devinfo->speed_capa = ETH_LINK_SPEED_FIXED;
+	devinfo->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
 	if (!otx2_dev_is_vf_or_sdp(dev) && !otx2_dev_is_lbk(dev)) {
-		devinfo->speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G;
+		devinfo->speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G;
 
 		/* 50G and 100G to be supported for board version C0
 		 * and above.
 		 */
 		if (!otx2_dev_is_Ax(dev))
-			devinfo->speed_capa |= ETH_LINK_SPEED_50G |
-					       ETH_LINK_SPEED_100G;
+			devinfo->speed_capa |= RTE_ETH_LINK_SPEED_50G |
+					       RTE_ETH_LINK_SPEED_100G;
 	}
 
 	devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.c b/drivers/net/octeontx2/otx2_ethdev_sec.c
index 7bd1ed6da043..4d40184de46d 100644
--- a/drivers/net/octeontx2/otx2_ethdev_sec.c
+++ b/drivers/net/octeontx2/otx2_ethdev_sec.c
@@ -869,8 +869,8 @@ otx2_eth_sec_init(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(sa_width < 32 || sa_width > 512 ||
 			 !RTE_IS_POWER_OF_2(sa_width));
 
-	if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+	if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
 		return 0;
 
 	if (rte_security_dynfield_register() < 0)
@@ -912,8 +912,8 @@ otx2_eth_sec_fini(struct rte_eth_dev *eth_dev)
 	uint16_t port = eth_dev->data->port_id;
 	char name[RTE_MEMZONE_NAMESIZE];
 
-	if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+	if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
 		return;
 
 	lookup_mem_sa_tbl_clear(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 6df0732189eb..1d0fe4e950d4 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -625,7 +625,7 @@ otx2_flow_create(struct rte_eth_dev *dev,
 		goto err_exit;
 	}
 
-	if (hw->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (hw->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		rc = flow_update_sec_tt(dev, actions);
 		if (rc != 0) {
 			rte_flow_error_set(error, EIO,
diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c
index 76bf48100183..071740de86a7 100644
--- a/drivers/net/octeontx2/otx2_flow_ctrl.c
+++ b/drivers/net/octeontx2/otx2_flow_ctrl.c
@@ -54,7 +54,7 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 	int rc;
 
 	if (otx2_dev_is_lbk(dev)) {
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		return 0;
 	}
 
@@ -66,13 +66,13 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 		goto done;
 
 	if (rsp->rx_pause && rsp->tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rsp->rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (rsp->tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 done:
 	return rc;
@@ -159,10 +159,10 @@ otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	if (fc_conf->mode == fc->mode)
 		return 0;
 
-	rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_RX_PAUSE);
-	tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_TX_PAUSE);
+	rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+	tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
 
 	/* Check if TX pause frame is already enabled or not */
 	if (fc->tx_pause ^ tx_pause) {
@@ -212,11 +212,11 @@ otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev)
 	/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
 	if (otx2_dev_is_Ax(dev) &&
 	    (dev->npc_flow.switch_header_type != OTX2_PRIV_FLAGS_HIGIG) &&
-	    (fc_conf.mode == RTE_FC_FULL || fc_conf.mode == RTE_FC_RX_PAUSE)) {
+	    (fc_conf.mode == RTE_ETH_FC_FULL || fc_conf.mode == RTE_ETH_FC_RX_PAUSE)) {
 		fc_conf.mode =
-				(fc_conf.mode == RTE_FC_FULL ||
-				fc_conf.mode == RTE_FC_TX_PAUSE) ?
-				RTE_FC_TX_PAUSE : RTE_FC_NONE;
+				(fc_conf.mode == RTE_ETH_FC_FULL ||
+				fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ?
+				RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
 	}
 
 	return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf);
@@ -234,7 +234,7 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
 		return 0;
 
 	memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
-	/* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+	/* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
 	 * by AF driver, update those info in PMD structure.
 	 */
 	rc = otx2_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -242,10 +242,10 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
 		goto exit;
 
 	fc->mode = fc_conf.mode;
-	fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_RX_PAUSE);
-	fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_TX_PAUSE);
+	fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+	fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
 
 exit:
 	return rc;
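
All the flow-control hunks above reduce to one mapping: an RTE_ETH_FC_*
mode decomposes into two independent pause booleans. A minimal sketch
of that convention (fc_mode_to_pause() is a hypothetical helper, not
part of the patch):

    static void fc_mode_to_pause(enum rte_eth_fc_mode mode,
                                 bool *rx_pause, bool *tx_pause)
    {
            /* FULL enables both directions, RX_PAUSE/TX_PAUSE one each,
             * NONE neither.
             */
            *rx_pause = (mode == RTE_ETH_FC_FULL) ||
                        (mode == RTE_ETH_FC_RX_PAUSE);
            *tx_pause = (mode == RTE_ETH_FC_FULL) ||
                        (mode == RTE_ETH_FC_TX_PAUSE);
    }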
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index 79b92fda8a4a..91267bbb8182 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -852,7 +852,7 @@ parse_rss_action(struct rte_eth_dev *dev,
 					  attr, "No support of RSS in egress");
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS)
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ACTION,
 					  act, "multi-queue mode is disabled");
@@ -1186,7 +1186,7 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev,
 		 *FLOW_KEY_ALG index. So, till we update the action with
 		 *flow_key_alg index, set the action to drop.
 		 */
-		if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+		if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 			flow->npc_action = NIX_RX_ACTIONOP_DROP;
 		else
 			flow->npc_action = NIX_RX_ACTIONOP_UCAST;
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
index 81dd6243b977..8f5d0eed92b6 100644
--- a/drivers/net/octeontx2/otx2_link.c
+++ b/drivers/net/octeontx2/otx2_link.c
@@ -41,7 +41,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
 		otx2_info("Port %d: Link Up - speed %u Mbps - %s",
 			  (int)(eth_dev->data->port_id),
 			  (uint32_t)link->link_speed,
-			  link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+			  link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 			  "full-duplex" : "half-duplex");
 	else
 		otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id));
@@ -92,7 +92,7 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
 
 	eth_link.link_status = link->link_up;
 	eth_link.link_speed = link->speed;
-	eth_link.link_autoneg = ETH_LINK_AUTONEG;
+	eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	eth_link.link_duplex = link->full_duplex;
 
 	otx2_dev->speed = link->speed;
@@ -111,10 +111,10 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
 static int
 lbk_link_update(struct rte_eth_link *link)
 {
-	link->link_status = ETH_LINK_UP;
-	link->link_speed = ETH_SPEED_NUM_100G;
-	link->link_autoneg = ETH_LINK_FIXED;
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_status = RTE_ETH_LINK_UP;
+	link->link_speed = RTE_ETH_SPEED_NUM_100G;
+	link->link_autoneg = RTE_ETH_LINK_FIXED;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	return 0;
 }
 
@@ -131,7 +131,7 @@ cgx_link_update(struct otx2_eth_dev *dev, struct rte_eth_link *link)
 
 	link->link_status = rsp->link_info.link_up;
 	link->link_speed = rsp->link_info.speed;
-	link->link_autoneg = ETH_LINK_AUTONEG;
+	link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	if (rsp->link_info.full_duplex)
 		link->link_duplex = rsp->link_info.full_duplex;
@@ -233,22 +233,22 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
 
 	/* 50G and 100G to be supported for board version C0 and above */
 	if (!otx2_dev_is_Ax(dev)) {
-		if (link_speeds & ETH_LINK_SPEED_100G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_100G)
 			link_speed = 100000;
-		if (link_speeds & ETH_LINK_SPEED_50G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_50G)
 			link_speed = 50000;
 	}
-	if (link_speeds & ETH_LINK_SPEED_40G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
 		link_speed = 40000;
-	if (link_speeds & ETH_LINK_SPEED_25G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
 		link_speed = 25000;
-	if (link_speeds & ETH_LINK_SPEED_20G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_20G)
 		link_speed = 20000;
-	if (link_speeds & ETH_LINK_SPEED_10G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 		link_speed = 10000;
-	if (link_speeds & ETH_LINK_SPEED_5G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_5G)
 		link_speed = 5000;
-	if (link_speeds & ETH_LINK_SPEED_1G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
 		link_speed = 1000;
 
 	return link_speed;
@@ -257,11 +257,11 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
 static inline uint8_t
 nix_parse_eth_link_duplex(uint32_t link_speeds)
 {
-	if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
-			(link_speeds & ETH_LINK_SPEED_100M_HD))
-		return ETH_LINK_HALF_DUPLEX;
+	if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+			(link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+		return RTE_ETH_LINK_HALF_DUPLEX;
 	else
-		return ETH_LINK_FULL_DUPLEX;
+		return RTE_ETH_LINK_FULL_DUPLEX;
 }
 
 int
@@ -279,7 +279,7 @@ otx2_apply_link_speed(struct rte_eth_dev *eth_dev)
 	cfg.speed = nix_parse_link_speeds(dev, conf->link_speeds);
 	if (cfg.speed != SPEED_NONE && cfg.speed != dev->speed) {
 		cfg.duplex = nix_parse_eth_link_duplex(conf->link_speeds);
-		cfg.an = (conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+		cfg.an = (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 		return cgx_change_mode(dev, &cfg);
 	}
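
Note how nix_parse_link_speeds() tests the RTE_ETH_LINK_SPEED_* bits
from the highest speed down, each match overwriting the previous one,
so the lowest advertised speed ends up selected, and how
RTE_ETH_LINK_SPEED_FIXED only controls autonegotiation. A hedged sketch
of the application side these hunks serve (field names from
rte_ethdev.h):

    struct rte_eth_conf conf = {0};

    /* Fixed 10 Gb/s link: disable autoneg, advertise a single speed.
     * otx2_apply_link_speed() above then computes cfg.an == 0.
     */
    conf.link_speeds = RTE_ETH_LINK_SPEED_FIXED | RTE_ETH_LINK_SPEED_10G;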
diff --git a/drivers/net/octeontx2/otx2_mcast.c b/drivers/net/octeontx2/otx2_mcast.c
index f84aa1bf570c..b9c63ad3bc21 100644
--- a/drivers/net/octeontx2/otx2_mcast.c
+++ b/drivers/net/octeontx2/otx2_mcast.c
@@ -100,7 +100,7 @@ nix_hw_update_mc_addr_list(struct rte_eth_dev *eth_dev)
 
 		action = NIX_RX_ACTIONOP_UCAST;
 
-		if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+		if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 			action = NIX_RX_ACTIONOP_RSS;
 			action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
 		}
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
index 91e5c0f6bd11..abb213058792 100644
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -250,7 +250,7 @@ otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
 	/* System time should be already on by default */
 	nix_start_timecounters(eth_dev);
 
-	dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+	dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 	dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
@@ -287,7 +287,7 @@ otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
 	if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
 		return -EINVAL;
 
-	dev->rx_offloads &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+	dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F;
 	dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F;
 
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
index 7dbe5f69ae65..68cef1caa394 100644
--- a/drivers/net/octeontx2/otx2_rss.c
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -85,8 +85,8 @@ otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Copy RETA table */
-	for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask >> j) & 0x01)
 				rss->ind_tbl[idx] = reta_conf[i].reta[j];
 			idx++;
@@ -118,8 +118,8 @@ otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Copy RETA table */
-	for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = rss->ind_tbl[j];
 	}
@@ -178,23 +178,23 @@ rss_get_key(struct otx2_eth_dev *dev, uint8_t *key)
 }
 
 #define RSS_IPV4_ENABLE ( \
-			  ETH_RSS_IPV4 | \
-			  ETH_RSS_FRAG_IPV4 | \
-			  ETH_RSS_NONFRAG_IPV4_UDP | \
-			  ETH_RSS_NONFRAG_IPV4_TCP | \
-			  ETH_RSS_NONFRAG_IPV4_SCTP)
+			  RTE_ETH_RSS_IPV4 | \
+			  RTE_ETH_RSS_FRAG_IPV4 | \
+			  RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
 #define RSS_IPV6_ENABLE ( \
-			  ETH_RSS_IPV6 | \
-			  ETH_RSS_FRAG_IPV6 | \
-			  ETH_RSS_NONFRAG_IPV6_UDP | \
-			  ETH_RSS_NONFRAG_IPV6_TCP | \
-			  ETH_RSS_NONFRAG_IPV6_SCTP)
+			  RTE_ETH_RSS_IPV6 | \
+			  RTE_ETH_RSS_FRAG_IPV6 | \
+			  RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 #define RSS_IPV6_EX_ENABLE ( \
-			     ETH_RSS_IPV6_EX | \
-			     ETH_RSS_IPV6_TCP_EX | \
-			     ETH_RSS_IPV6_UDP_EX)
+			     RTE_ETH_RSS_IPV6_EX | \
+			     RTE_ETH_RSS_IPV6_TCP_EX | \
+			     RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define RSS_MAX_LEVELS   3
 
@@ -233,24 +233,24 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
 
 	dev->rss_info.nix_rss = ethdev_rss;
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
 	    dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) {
 		flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
 	}
 
-	if (ethdev_rss & ETH_RSS_C_VLAN)
+	if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
 
-	if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
 
-	if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
 
-	if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
 
-	if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
 
 	if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -259,34 +259,34 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
 	if (ethdev_rss & RSS_IPV6_ENABLE)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
 
-	if (ethdev_rss & ETH_RSS_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_TCP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_UDP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_SCTP)
+	if (ethdev_rss & RTE_ETH_RSS_SCTP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
 
 	if (ethdev_rss & RSS_IPV6_EX_ENABLE)
 		flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
 
-	if (ethdev_rss & ETH_RSS_PORT)
+	if (ethdev_rss & RTE_ETH_RSS_PORT)
 		flowkey_cfg |= FLOW_KEY_TYPE_PORT;
 
-	if (ethdev_rss & ETH_RSS_NVGRE)
+	if (ethdev_rss & RTE_ETH_RSS_NVGRE)
 		flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
 
-	if (ethdev_rss & ETH_RSS_VXLAN)
+	if (ethdev_rss & RTE_ETH_RSS_VXLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
 
-	if (ethdev_rss & ETH_RSS_GENEVE)
+	if (ethdev_rss & RTE_ETH_RSS_GENEVE)
 		flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
 
-	if (ethdev_rss & ETH_RSS_GTPU)
+	if (ethdev_rss & RTE_ETH_RSS_GTPU)
 		flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
 
 	return flowkey_cfg;
@@ -343,7 +343,7 @@ otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
 		otx2_nix_rss_set_key(dev, rss_conf->rss_key,
 				     (uint32_t)rss_conf->rss_key_len);
 
-	rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 	flowkey_cfg =
@@ -390,7 +390,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
 	int rc;
 
 	/* Skip further configuration if selected mode is not RSS */
-	if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS || !qcnt)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS || !qcnt)
 		return 0;
 
 	/* Update default RSS key and cfg */
@@ -408,7 +408,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
 	}
 
 	rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
-	rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 	flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, rss_hash_level);
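
The RETA loops converted above all follow the indexing convention
implied by the renamed RTE_ETH_RETA_GROUP_SIZE (64): the indirection
table is exposed as an array of 64-entry groups, each guarded by a
64-bit validity mask. A minimal sketch, with queue_for_entry() a
hypothetical placeholder:

    /* Entry i of an indirection table of reta_size entries: */
    uint16_t idx   = i / RTE_ETH_RETA_GROUP_SIZE;  /* group index     */
    uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;  /* bit in the mask */

    if (reta_conf[idx].mask & (1ULL << shift))
            reta_conf[idx].reta[shift] = queue_for_entry(i);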
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index 0d85c898bfe7..2c18483b98fd 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -414,12 +414,12 @@ NIX_RX_FASTPATH_MODES
 	/* For PTP enabled, scalar rx function should be chosen as most of the
 	 * PTP apps are implemented to rx burst 1 pkt.
 	 */
-	if (dev->scalar_ena || dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (dev->scalar_ena || dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 		pick_rx_func(eth_dev, nix_eth_rx_burst);
 	else
 		pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
 
 	/* Copy multi seg version with no offload for tear down sequence */
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index ad704d745b04..135615580bbf 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -1070,7 +1070,7 @@ NIX_TX_FASTPATH_MODES
 	else
 		pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
 
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
 
 	rte_mb();
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index f5161e17a16d..cce643b7b51d 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -50,7 +50,7 @@ nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
 
 	action = NIX_RX_ACTIONOP_UCAST;
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		action = NIX_RX_ACTIONOP_RSS;
 		action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
 	}
@@ -99,7 +99,7 @@ nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type,
 	 * Take offset from LA since in case of untagged packet,
 	 * lbptr is zero.
 	 */
-	if (type == ETH_VLAN_TYPE_OUTER) {
+	if (type == RTE_ETH_VLAN_TYPE_OUTER) {
 		vtag_action.act.vtag0_def = vtag_index;
 		vtag_action.act.vtag0_lid = NPC_LID_LA;
 		vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
@@ -413,7 +413,7 @@ nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
 		if (vlan->strip_on ||
 		    (vlan->qinq_on && !vlan->qinq_before_def)) {
 			if (eth_dev->data->dev_conf.rxmode.mq_mode ==
-								ETH_MQ_RX_RSS)
+								RTE_ETH_MQ_RX_RSS)
 				vlan->def_rx_mcam_ent.action |=
 							NIX_RX_ACTIONOP_RSS;
 			else
@@ -717,48 +717,48 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 
 	rxmode = &eth_dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
-			offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
+			offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			rc = nix_vlan_hw_strip(eth_dev, true);
 		} else {
-			offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+			offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			rc = nix_vlan_hw_strip(eth_dev, false);
 		}
 		if (rc)
 			goto done;
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
-			offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
+			offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			rc = nix_vlan_hw_filter(eth_dev, true, 0);
 		} else {
-			offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+			offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			rc = nix_vlan_hw_filter(eth_dev, false, 0);
 		}
 		if (rc)
 			goto done;
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) {
 		if (!dev->vlan_info.qinq_on) {
-			offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+			offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 			rc = otx2_nix_config_double_vlan(eth_dev, true);
 			if (rc)
 				goto done;
 		}
 	} else {
 		if (dev->vlan_info.qinq_on) {
-			offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+			offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 			rc = otx2_nix_config_double_vlan(eth_dev, false);
 			if (rc)
 				goto done;
 		}
 	}
 
-	if (offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
-			DEV_RX_OFFLOAD_QINQ_STRIP)) {
+	if (offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+			RTE_ETH_RX_OFFLOAD_QINQ_STRIP)) {
 		dev->rx_offloads |= offloads;
 		dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
 		otx2_eth_set_rx_function(eth_dev);
@@ -780,7 +780,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
 	tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox);
 
 	tpid_cfg->tpid = tpid;
-	if (type == ETH_VLAN_TYPE_OUTER)
+	if (type == RTE_ETH_VLAN_TYPE_OUTER)
 		tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER;
 	else
 		tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER;
@@ -789,7 +789,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
 	if (rc)
 		return rc;
 
-	if (type == ETH_VLAN_TYPE_OUTER)
+	if (type == RTE_ETH_VLAN_TYPE_OUTER)
 		dev->vlan_info.outer_vlan_tpid = tpid;
 	else
 		dev->vlan_info.inner_vlan_tpid = tpid;
@@ -864,7 +864,7 @@ otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 		vlan->outer_vlan_idx = 0;
 	}
 
-	rc = nix_vlan_handle_default_tx_entry(dev, ETH_VLAN_TYPE_OUTER,
+	rc = nix_vlan_handle_default_tx_entry(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					      vtag_index, on);
 	if (rc < 0) {
 		printf("Default tx entry failed with rc %d\n", rc);
@@ -986,12 +986,12 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
 	} else {
 		/* Reinstall all mcam entries now if filter offload is set */
 		if (eth_dev->data->dev_conf.rxmode.offloads &
-		    DEV_RX_OFFLOAD_VLAN_FILTER)
+		    RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			nix_vlan_reinstall_vlan_filters(eth_dev);
 	}
 
 	mask =
-	    ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+	    RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
 	rc = otx2_nix_vlan_offload_set(eth_dev, mask);
 	if (rc) {
 		otx2_err("Failed to set vlan offload rc=%d", rc);
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index 698d22e22685..74dc36a17648 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -33,14 +33,14 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
 
 	otx_epvf = OTX_EP_DEV(eth_dev);
 
-	devinfo->speed_capa = ETH_LINK_SPEED_10G;
+	devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
 	devinfo->max_rx_queues = otx_epvf->max_rx_queues;
 	devinfo->max_tx_queues = otx_epvf->max_tx_queues;
 
 	devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
 	devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
-	devinfo->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
-	devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+	devinfo->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
+	devinfo->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
 
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index aa4dcd33cc79..9338b30672ec 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -563,7 +563,7 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
 			struct otx_ep_buf_free_info *finfo;
 			int j, frags, num_sg;
 
-			if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+			if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
 				goto xmit_fail;
 
 			finfo = (struct otx_ep_buf_free_info *)rte_malloc(NULL,
@@ -697,7 +697,7 @@ otx2_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
 			struct otx_ep_buf_free_info *finfo;
 			int j, frags, num_sg;
 
-			if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+			if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
 				goto xmit_fail;
 
 			finfo = (struct otx_ep_buf_free_info *)
@@ -954,7 +954,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
 	droq_pkt->l4_len = hdr_lens.l4_len;
 
 	if (droq_pkt->nb_segs > 1 &&
-	    !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	    !(otx_ep->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		rte_pktmbuf_free(droq_pkt);
 		goto oq_read_fail;
 	}
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index d695c5eef7b0..ec29fd6bc53c 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -136,10 +136,10 @@ static const char *valid_arguments[] = {
 };
 
 static struct rte_eth_link pmd_link = {
-		.link_speed = ETH_SPEED_NUM_10G,
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_status = ETH_LINK_DOWN,
-		.link_autoneg = ETH_LINK_FIXED,
+		.link_speed = RTE_ETH_SPEED_NUM_10G,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_status = RTE_ETH_LINK_DOWN,
+		.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(eth_pcap_logtype, NOTICE);
@@ -659,7 +659,7 @@ eth_dev_start(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_tx_queues; i++)
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -714,7 +714,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_tx_queues; i++)
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index 4cc002ee8fab..047010e15ed0 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -22,15 +22,15 @@ struct pfe_vdev_init_params {
 static struct pfe *g_pfe;
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 /* Supported Tx offloads */
 static uint64_t dev_tx_offloads_sup =
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 /* TODO: make pfe_svr a runtime option.
  * Driver should be able to get the SVR
@@ -601,9 +601,9 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 	}
 
 	link.link_status = lstatus;
-	link.link_speed = ETH_LINK_SPEED_1G;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_speed = RTE_ETH_LINK_SPEED_1G;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	pfe_eth_atomic_write_link_status(dev, &link);
 
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 6667c2d7ab6d..511742c6a1b3 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -65,8 +65,8 @@ typedef u32 offsize_t;      /* In DWORDS !!! */
 struct eth_phy_cfg {
 /* 0 = autoneg, 1000/10000/20000/25000/40000/50000/100000 */
 	u32 speed;
-#define ETH_SPEED_AUTONEG   0
-#define ETH_SPEED_SMARTLINQ  0x8 /* deprecated - use link_modes field instead */
+#define RTE_ETH_SPEED_AUTONEG   0
+#define RTE_ETH_SPEED_SMARTLINQ  0x8 /* deprecated - use link_modes field instead */
 
 	u32 pause;      /* bitmask */
 #define ETH_PAUSE_NONE		0x0
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 27f6932dc74e..c907d7fd8312 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -342,9 +342,9 @@ qede_assign_rxtx_handlers(struct rte_eth_dev *dev, bool is_dummy)
 	}
 
 	use_tx_offload = !!(tx_offloads &
-			    (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
-			     DEV_TX_OFFLOAD_TCP_TSO | /* tso */
-			     DEV_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
+			    (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
+			     RTE_ETH_TX_OFFLOAD_TCP_TSO | /* tso */
+			     RTE_ETH_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
 
 	if (use_tx_offload) {
 		DP_INFO(edev, "Assigning qede_xmit_pkts\n");
@@ -1002,16 +1002,16 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
 	uint64_t rx_offloads = eth_dev->data->dev_conf.rxmode.offloads;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			(void)qede_vlan_stripping(eth_dev, 1);
 		else
 			(void)qede_vlan_stripping(eth_dev, 0);
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* VLAN filtering kicks in when a VLAN is added */
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 			qede_vlan_filter_set(eth_dev, 0, 1);
 		} else {
 			if (qdev->configured_vlans > 1) { /* Excluding VLAN0 */
@@ -1022,7 +1022,7 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 				 * enabled
 				 */
 				eth_dev->data->dev_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_VLAN_FILTER;
+						RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			} else {
 				qede_vlan_filter_set(eth_dev, 0, 0);
 			}
@@ -1069,11 +1069,11 @@ int qede_config_rss(struct rte_eth_dev *eth_dev)
 	/* Configure default RETA */
 	memset(reta_conf, 0, sizeof(reta_conf));
 	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++)
-		reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
 
 	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
-		id = i / RTE_RETA_GROUP_SIZE;
-		pos = i % RTE_RETA_GROUP_SIZE;
+		id = i / RTE_ETH_RETA_GROUP_SIZE;
+		pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		q = i % QEDE_RSS_COUNT(eth_dev);
 		reta_conf[id].reta[pos] = q;
 	}
@@ -1112,12 +1112,12 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
 	}
 
 	/* Configure TPA parameters */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		if (qede_enable_tpa(eth_dev, true))
 			return -EINVAL;
 		/* Enable scatter mode for LRO */
 		if (!eth_dev->data->scattered_rx)
-			rxmode->offloads |= DEV_RX_OFFLOAD_SCATTER;
+			rxmode->offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 	}
 
 	/* Start queues */
@@ -1132,7 +1132,7 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
 	 * Also, we would like to retain similar behavior in PF case, so we
 	 * don't do PF/VF specific check here.
 	 */
-	if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 		if (qede_config_rss(eth_dev))
 			goto err;
 
@@ -1272,8 +1272,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE(edev);
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* We need to have min 1 RX queue. There is no min check in
 	 * rte_eth_dev_configure(), so we are checking it here.
@@ -1291,8 +1291,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 		DP_NOTICE(edev, false,
 			  "Invalid devargs supplied, requested change will not take effect\n");
 
-	if (!(rxmode->mq_mode == ETH_MQ_RX_NONE ||
-	      rxmode->mq_mode == ETH_MQ_RX_RSS)) {
+	if (!(rxmode->mq_mode == RTE_ETH_MQ_RX_NONE ||
+	      rxmode->mq_mode == RTE_ETH_MQ_RX_RSS)) {
 		DP_ERR(edev, "Unsupported multi-queue mode\n");
 		return -ENOTSUP;
 	}
@@ -1312,7 +1312,7 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 			return -ENOMEM;
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		eth_dev->data->scattered_rx = 1;
 
 	if (qede_start_vport(qdev, eth_dev->data->mtu))
@@ -1321,8 +1321,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 	qdev->mtu = eth_dev->data->mtu;
 
 	/* Enable VLAN offloads by default */
-	ret = qede_vlan_offload_set(eth_dev, ETH_VLAN_STRIP_MASK  |
-					     ETH_VLAN_FILTER_MASK);
+	ret = qede_vlan_offload_set(eth_dev, RTE_ETH_VLAN_STRIP_MASK  |
+					     RTE_ETH_VLAN_FILTER_MASK);
 	if (ret)
 		return ret;
 
@@ -1385,34 +1385,34 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->reta_size = ECORE_RSS_IND_TABLE_SIZE;
 	dev_info->hash_key_size = ECORE_RSS_KEY_SIZE * sizeof(uint32_t);
 	dev_info->flow_type_rss_offloads = (uint64_t)QEDE_RSS_OFFLOAD_ALL;
-	dev_info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM	|
-				     DEV_RX_OFFLOAD_UDP_CKSUM	|
-				     DEV_RX_OFFLOAD_TCP_CKSUM	|
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				     DEV_RX_OFFLOAD_TCP_LRO	|
-				     DEV_RX_OFFLOAD_KEEP_CRC    |
-				     DEV_RX_OFFLOAD_SCATTER	|
-				     DEV_RX_OFFLOAD_VLAN_FILTER |
-				     DEV_RX_OFFLOAD_VLAN_STRIP  |
-				     DEV_RX_OFFLOAD_RSS_HASH);
+	dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM	|
+				     RTE_ETH_RX_OFFLOAD_UDP_CKSUM	|
+				     RTE_ETH_RX_OFFLOAD_TCP_CKSUM	|
+				     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     RTE_ETH_RX_OFFLOAD_TCP_LRO	|
+				     RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+				     RTE_ETH_RX_OFFLOAD_SCATTER	|
+				     RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				     RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+				     RTE_ETH_RX_OFFLOAD_RSS_HASH);
 	dev_info->rx_queue_offload_capa = 0;
 
 	/* TX offloads are on a per-packet basis, so it is applicable
 	 * to both at port and queue levels.
 	 */
-	dev_info->tx_offload_capa = (DEV_TX_OFFLOAD_VLAN_INSERT	|
-				     DEV_TX_OFFLOAD_IPV4_CKSUM	|
-				     DEV_TX_OFFLOAD_UDP_CKSUM	|
-				     DEV_TX_OFFLOAD_TCP_CKSUM	|
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				     DEV_TX_OFFLOAD_MULTI_SEGS  |
-				     DEV_TX_OFFLOAD_TCP_TSO	|
-				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO);
+	dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_VLAN_INSERT	|
+				     RTE_ETH_TX_OFFLOAD_IPV4_CKSUM	|
+				     RTE_ETH_TX_OFFLOAD_UDP_CKSUM	|
+				     RTE_ETH_TX_OFFLOAD_TCP_CKSUM	|
+				     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+				     RTE_ETH_TX_OFFLOAD_TCP_TSO	|
+				     RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO);
 	dev_info->tx_queue_offload_capa = dev_info->tx_offload_capa;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
-		.offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+		.offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
 	};
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1424,17 +1424,17 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
 	memset(&link, 0, sizeof(struct qed_link_output));
 	qdev->ops->common->get_link(edev, &link);
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G)
-		speed_cap |= ETH_LINK_SPEED_1G;
+		speed_cap |= RTE_ETH_LINK_SPEED_1G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G)
-		speed_cap |= ETH_LINK_SPEED_10G;
+		speed_cap |= RTE_ETH_LINK_SPEED_10G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G)
-		speed_cap |= ETH_LINK_SPEED_25G;
+		speed_cap |= RTE_ETH_LINK_SPEED_25G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G)
-		speed_cap |= ETH_LINK_SPEED_40G;
+		speed_cap |= RTE_ETH_LINK_SPEED_40G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G)
-		speed_cap |= ETH_LINK_SPEED_50G;
+		speed_cap |= RTE_ETH_LINK_SPEED_50G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G)
-		speed_cap |= ETH_LINK_SPEED_100G;
+		speed_cap |= RTE_ETH_LINK_SPEED_100G;
 	dev_info->speed_capa = speed_cap;
 
 	return 0;
@@ -1461,10 +1461,10 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
 	/* Link Mode */
 	switch (q_link.duplex) {
 	case QEDE_DUPLEX_HALF:
-		link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case QEDE_DUPLEX_FULL:
-		link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case QEDE_DUPLEX_UNKNOWN:
 	default:
@@ -1473,11 +1473,11 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
 	link.link_duplex = link_duplex;
 
 	/* Link Status */
-	link.link_status = q_link.link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	link.link_status = q_link.link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	/* AN */
 	link.link_autoneg = (q_link.supported_caps & QEDE_SUPPORTED_AUTONEG) ?
-			     ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+			     RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
 
 	DP_INFO(edev, "Link - Speed %u Mode %u AN %u Status %u\n",
 		link.link_speed, link.link_duplex,
@@ -2012,12 +2012,12 @@ static int qede_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Pause is assumed to be supported (SUPPORTED_Pause) */
-	if (fc_conf->mode == RTE_FC_FULL)
+	if (fc_conf->mode == RTE_ETH_FC_FULL)
 		params.pause_config |= (QED_LINK_PAUSE_TX_ENABLE |
 					QED_LINK_PAUSE_RX_ENABLE);
-	if (fc_conf->mode == RTE_FC_TX_PAUSE)
+	if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
 		params.pause_config |= QED_LINK_PAUSE_TX_ENABLE;
-	if (fc_conf->mode == RTE_FC_RX_PAUSE)
+	if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
 		params.pause_config |= QED_LINK_PAUSE_RX_ENABLE;
 
 	params.link_up = true;
@@ -2041,13 +2041,13 @@ static int qede_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 
 	if (current_link.pause_config & (QED_LINK_PAUSE_RX_ENABLE |
 					 QED_LINK_PAUSE_TX_ENABLE))
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (current_link.pause_config & QED_LINK_PAUSE_RX_ENABLE)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (current_link.pause_config & QED_LINK_PAUSE_TX_ENABLE)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -2088,14 +2088,14 @@ qede_dev_supported_ptypes_get(struct rte_eth_dev *eth_dev)
 static void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf)
 {
 	*rss_caps = 0;
-	*rss_caps |= (hf & ETH_RSS_IPV4)              ? ECORE_RSS_IPV4 : 0;
-	*rss_caps |= (hf & ETH_RSS_IPV6)              ? ECORE_RSS_IPV6 : 0;
-	*rss_caps |= (hf & ETH_RSS_IPV6_EX)           ? ECORE_RSS_IPV6 : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_TCP)  ? ECORE_RSS_IPV4_TCP : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_TCP)  ? ECORE_RSS_IPV6_TCP : 0;
-	*rss_caps |= (hf & ETH_RSS_IPV6_TCP_EX)       ? ECORE_RSS_IPV6_TCP : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_UDP)  ? ECORE_RSS_IPV4_UDP : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_UDP)  ? ECORE_RSS_IPV6_UDP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV4)              ? ECORE_RSS_IPV4 : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV6)              ? ECORE_RSS_IPV6 : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV6_EX)           ? ECORE_RSS_IPV6 : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)  ? ECORE_RSS_IPV4_TCP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)  ? ECORE_RSS_IPV6_TCP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV6_TCP_EX)       ? ECORE_RSS_IPV6_TCP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)  ? ECORE_RSS_IPV4_UDP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)  ? ECORE_RSS_IPV6_UDP : 0;
 }
 
 int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
@@ -2221,7 +2221,7 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 	uint8_t entry;
 	int rc = 0;
 
-	if (reta_size > ETH_RSS_RETA_SIZE_128) {
+	if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
 		DP_ERR(edev, "reta_size %d is not supported by hardware\n",
 		       reta_size);
 		return -EINVAL;
@@ -2245,8 +2245,8 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 
 	for_each_hwfn(edev, i) {
 		for (j = 0; j < reta_size; j++) {
-			idx = j / RTE_RETA_GROUP_SIZE;
-			shift = j % RTE_RETA_GROUP_SIZE;
+			idx = j / RTE_ETH_RETA_GROUP_SIZE;
+			shift = j % RTE_ETH_RETA_GROUP_SIZE;
 			if (reta_conf[idx].mask & (1ULL << shift)) {
 				entry = reta_conf[idx].reta[shift];
 				fid = entry * edev->num_hwfns + i;
@@ -2282,15 +2282,15 @@ static int qede_rss_reta_query(struct rte_eth_dev *eth_dev,
 	uint16_t i, idx, shift;
 	uint8_t entry;
 
-	if (reta_size > ETH_RSS_RETA_SIZE_128) {
+	if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
 		DP_ERR(edev, "reta_size %d is not supported\n",
 		       reta_size);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift)) {
 			entry = qdev->rss_ind_table[i];
 			reta_conf[idx].reta[shift] = entry;
@@ -2718,16 +2718,16 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 	adapter->ipgre.num_filters = 0;
 	if (is_vf) {
 		adapter->vxlan.enable = true;
-		adapter->vxlan.filter_type = ETH_TUNNEL_FILTER_IMAC |
-					     ETH_TUNNEL_FILTER_IVLAN;
+		adapter->vxlan.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+					     RTE_ETH_TUNNEL_FILTER_IVLAN;
 		adapter->vxlan.udp_port = QEDE_VXLAN_DEF_PORT;
 		adapter->geneve.enable = true;
-		adapter->geneve.filter_type = ETH_TUNNEL_FILTER_IMAC |
-					      ETH_TUNNEL_FILTER_IVLAN;
+		adapter->geneve.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+					      RTE_ETH_TUNNEL_FILTER_IVLAN;
 		adapter->geneve.udp_port = QEDE_GENEVE_DEF_PORT;
 		adapter->ipgre.enable = true;
-		adapter->ipgre.filter_type = ETH_TUNNEL_FILTER_IMAC |
-					     ETH_TUNNEL_FILTER_IVLAN;
+		adapter->ipgre.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+					     RTE_ETH_TUNNEL_FILTER_IVLAN;
 	} else {
 		adapter->vxlan.enable = false;
 		adapter->geneve.enable = false;
diff --git a/drivers/net/qede/qede_filter.c b/drivers/net/qede/qede_filter.c
index c756594bfc4b..440440423a32 100644
--- a/drivers/net/qede/qede_filter.c
+++ b/drivers/net/qede/qede_filter.c
@@ -20,97 +20,97 @@ const struct _qede_udp_tunn_types {
 	const char *string;
 } qede_tunn_types[] = {
 	{
-		ETH_TUNNEL_FILTER_OMAC,
+		RTE_ETH_TUNNEL_FILTER_OMAC,
 		ECORE_FILTER_MAC,
 		ECORE_TUNN_CLSS_MAC_VLAN,
 		"outer-mac"
 	},
 	{
-		ETH_TUNNEL_FILTER_TENID,
+		RTE_ETH_TUNNEL_FILTER_TENID,
 		ECORE_FILTER_VNI,
 		ECORE_TUNN_CLSS_MAC_VNI,
 		"vni"
 	},
 	{
-		ETH_TUNNEL_FILTER_IMAC,
+		RTE_ETH_TUNNEL_FILTER_IMAC,
 		ECORE_FILTER_INNER_MAC,
 		ECORE_TUNN_CLSS_INNER_MAC_VLAN,
 		"inner-mac"
 	},
 	{
-		ETH_TUNNEL_FILTER_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_IVLAN,
 		ECORE_FILTER_INNER_VLAN,
 		ECORE_TUNN_CLSS_INNER_MAC_VLAN,
 		"inner-vlan"
 	},
 	{
-		ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_TENID,
+		RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_TENID,
 		ECORE_FILTER_MAC_VNI_PAIR,
 		ECORE_TUNN_CLSS_MAC_VNI,
 		"outer-mac and vni"
 	},
 	{
-		ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_IMAC,
+		RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_IMAC,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"outer-mac and inner-mac"
 	},
 	{
-		ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"outer-mac and inner-vlan"
 	},
 	{
-		ETH_TUNNEL_FILTER_TENID | ETH_TUNNEL_FILTER_IMAC,
+		RTE_ETH_TUNNEL_FILTER_TENID | RTE_ETH_TUNNEL_FILTER_IMAC,
 		ECORE_FILTER_INNER_MAC_VNI_PAIR,
 		ECORE_TUNN_CLSS_INNER_MAC_VNI,
 		"vni and inner-mac",
 	},
 	{
-		ETH_TUNNEL_FILTER_TENID | ETH_TUNNEL_FILTER_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_TENID | RTE_ETH_TUNNEL_FILTER_IVLAN,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"vni and inner-vlan",
 	},
 	{
-		ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
 		ECORE_FILTER_INNER_PAIR,
 		ECORE_TUNN_CLSS_INNER_MAC_VLAN,
 		"inner-mac and inner-vlan",
 	},
 	{
-		ETH_TUNNEL_FILTER_OIP,
+		RTE_ETH_TUNNEL_FILTER_OIP,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"outer-IP"
 	},
 	{
-		ETH_TUNNEL_FILTER_IIP,
+		RTE_ETH_TUNNEL_FILTER_IIP,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"inner-IP"
 	},
 	{
-		RTE_TUNNEL_FILTER_IMAC_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"IMAC_IVLAN"
 	},
 	{
-		RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID,
+		RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"IMAC_IVLAN_TENID"
 	},
 	{
-		RTE_TUNNEL_FILTER_IMAC_TENID,
+		RTE_ETH_TUNNEL_FILTER_IMAC_TENID,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"IMAC_TENID"
 	},
 	{
-		RTE_TUNNEL_FILTER_OMAC_TENID_IMAC,
+		RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"OMAC_TENID_IMAC"
@@ -144,7 +144,7 @@ int qede_check_fdir_support(struct rte_eth_dev *eth_dev)
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct rte_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
 
 	/* check FDIR modes */
 	switch (fdir->mode) {
@@ -542,7 +542,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
 	memset(&tunn, 0, sizeof(tunn));
 
 	switch (tunnel_udp->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (qdev->vxlan.udp_port != tunnel_udp->udp_port) {
 			DP_ERR(edev, "UDP port %u doesn't exist\n",
 				tunnel_udp->udp_port);
@@ -570,7 +570,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
 					ECORE_TUNN_CLSS_MAC_VLAN, false);
 
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (qdev->geneve.udp_port != tunnel_udp->udp_port) {
 			DP_ERR(edev, "UDP port %u doesn't exist\n",
 				tunnel_udp->udp_port);
@@ -622,7 +622,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
 	memset(&tunn, 0, sizeof(tunn));
 
 	switch (tunnel_udp->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (qdev->vxlan.udp_port == tunnel_udp->udp_port) {
 			DP_INFO(edev,
 				"UDP port %u for VXLAN was already configured\n",
@@ -659,7 +659,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
 
 		qdev->vxlan.udp_port = udp_port;
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (qdev->geneve.udp_port == tunnel_udp->udp_port) {
 			DP_INFO(edev,
 				"UDP port %u for GENEVE was already configured\n",
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index c2263787b4ec..d585db8b61e8 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -249,7 +249,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
 	bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
 	/* cache align the mbuf size to simplify rx_buf_size calculation */
 	bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
-	if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)	||
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	||
 	    (max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) {
 		if (!dev->data->scattered_rx) {
 			DP_INFO(edev, "Forcing scatter-gather mode\n");
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index c9334448c887..15112b83f4f7 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -73,14 +73,14 @@
 #define QEDE_MAX_ETHER_HDR_LEN	(RTE_ETHER_HDR_LEN + QEDE_ETH_OVERHEAD)
 #define QEDE_ETH_MAX_LEN	(RTE_ETHER_MTU + QEDE_MAX_ETHER_HDR_LEN)
 
-#define QEDE_RSS_OFFLOAD_ALL    (ETH_RSS_IPV4			|\
-				 ETH_RSS_NONFRAG_IPV4_TCP	|\
-				 ETH_RSS_NONFRAG_IPV4_UDP	|\
-				 ETH_RSS_IPV6			|\
-				 ETH_RSS_NONFRAG_IPV6_TCP	|\
-				 ETH_RSS_NONFRAG_IPV6_UDP	|\
-				 ETH_RSS_VXLAN			|\
-				 ETH_RSS_GENEVE)
+#define QEDE_RSS_OFFLOAD_ALL    (RTE_ETH_RSS_IPV4			|\
+				 RTE_ETH_RSS_NONFRAG_IPV4_TCP	|\
+				 RTE_ETH_RSS_NONFRAG_IPV4_UDP	|\
+				 RTE_ETH_RSS_IPV6			|\
+				 RTE_ETH_RSS_NONFRAG_IPV6_TCP	|\
+				 RTE_ETH_RSS_NONFRAG_IPV6_UDP	|\
+				 RTE_ETH_RSS_VXLAN			|\
+				 RTE_ETH_RSS_GENEVE)
 
 #define QEDE_RXTX_MAX(qdev) \
 	(RTE_MAX(qdev->num_rx_queues, qdev->num_tx_queues))
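
On the application side the same renamed RTE_ETH_RSS_* bits select the
hash inputs, and QEDE_RSS_OFFLOAD_ALL above is what the PMD reports as
supported. A minimal, device-agnostic configuration sketch (assuming
the port supports the requested hash types):

    struct rte_eth_conf port_conf = {
            .rxmode = {
                    .mq_mode = RTE_ETH_MQ_RX_RSS,
            },
            .rx_adv_conf = {
                    .rss_conf = {
                            .rss_key = NULL, /* keep the PMD default key */
                            .rss_hf = RTE_ETH_RSS_IPV4 |
                                      RTE_ETH_RSS_NONFRAG_IPV4_TCP |
                                      RTE_ETH_RSS_NONFRAG_IPV4_UDP,
                    },
            },
    };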
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 0440019e07e1..db10f035dfcb 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -56,10 +56,10 @@ struct pmd_internals {
 };
 
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(eth_ring_logtype, NOTICE);
@@ -102,7 +102,7 @@ eth_dev_configure(struct rte_eth_dev *dev __rte_unused) { return 0; }
 static int
 eth_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -110,21 +110,21 @@ static int
 eth_dev_stop(struct rte_eth_dev *dev)
 {
 	dev->data->dev_started = 0;
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
 static int
 eth_dev_set_link_down(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
 static int
 eth_dev_set_link_up(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -163,8 +163,8 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_mac_addrs = 1;
 	dev_info->max_rx_pktlen = (uint32_t)-1;
 	dev_info->max_rx_queues = (uint16_t)internals->max_rx_queues;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	dev_info->max_tx_queues = (uint16_t)internals->max_tx_queues;
 	dev_info->min_rx_bufsize = 0;
 
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 431c42f508d0..9c1be10ac93d 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -106,13 +106,13 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
 {
 	uint32_t phy_caps = 0;
 
-	if (~speeds & ETH_LINK_SPEED_FIXED) {
+	if (~speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		phy_caps |= (1 << EFX_PHY_CAP_AN);
 		/*
 		 * If no speeds are specified in the mask, any supported
 		 * may be negotiated
 		 */
-		if (speeds == ETH_LINK_SPEED_AUTONEG)
+		if (speeds == RTE_ETH_LINK_SPEED_AUTONEG)
 			phy_caps |=
 				(1 << EFX_PHY_CAP_1000FDX) |
 				(1 << EFX_PHY_CAP_10000FDX) |
@@ -121,17 +121,17 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
 				(1 << EFX_PHY_CAP_50000FDX) |
 				(1 << EFX_PHY_CAP_100000FDX);
 	}
-	if (speeds & ETH_LINK_SPEED_1G)
+	if (speeds & RTE_ETH_LINK_SPEED_1G)
 		phy_caps |= (1 << EFX_PHY_CAP_1000FDX);
-	if (speeds & ETH_LINK_SPEED_10G)
+	if (speeds & RTE_ETH_LINK_SPEED_10G)
 		phy_caps |= (1 << EFX_PHY_CAP_10000FDX);
-	if (speeds & ETH_LINK_SPEED_25G)
+	if (speeds & RTE_ETH_LINK_SPEED_25G)
 		phy_caps |= (1 << EFX_PHY_CAP_25000FDX);
-	if (speeds & ETH_LINK_SPEED_40G)
+	if (speeds & RTE_ETH_LINK_SPEED_40G)
 		phy_caps |= (1 << EFX_PHY_CAP_40000FDX);
-	if (speeds & ETH_LINK_SPEED_50G)
+	if (speeds & RTE_ETH_LINK_SPEED_50G)
 		phy_caps |= (1 << EFX_PHY_CAP_50000FDX);
-	if (speeds & ETH_LINK_SPEED_100G)
+	if (speeds & RTE_ETH_LINK_SPEED_100G)
 		phy_caps |= (1 << EFX_PHY_CAP_100000FDX);
 
 	return phy_caps;
@@ -401,10 +401,10 @@ sfc_set_fw_subvariant(struct sfc_adapter *sa)
 			tx_offloads |= txq_info->offloads;
 	}
 
-	if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			   DEV_TX_OFFLOAD_TCP_CKSUM |
-			   DEV_TX_OFFLOAD_UDP_CKSUM |
-			   DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
 		req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_DEFAULT;
 	else
 		req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_NO_TX_CSUM;
@@ -899,7 +899,7 @@ sfc_attach(struct sfc_adapter *sa)
 	sa->priv.shared->tunnel_encaps =
 		encp->enc_tunnel_encapsulations_supported;
 
-	if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		sa->tso = encp->enc_fw_assisted_tso_v2_enabled ||
 			  encp->enc_tso_v3_enabled;
 		if (!sa->tso)
@@ -908,8 +908,8 @@ sfc_attach(struct sfc_adapter *sa)
 
 	if (sa->tso &&
 	    (sfc_dp_tx_offload_capa(sa->priv.dp_tx) &
-	     (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-	      DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
+	     (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+	      RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
 		sa->tso_encap = encp->enc_fw_assisted_tso_v2_encap_enabled ||
 				encp->enc_tso_v3_enabled;
 		if (!sa->tso_encap)
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index d958fd642fb1..eeb73a7530ef 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -979,11 +979,11 @@ struct sfc_dp_rx sfc_ef100_rx = {
 				  SFC_DP_RX_FEAT_INTR |
 				  SFC_DP_RX_FEAT_STATS,
 	.dev_offload_capa	= 0,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-				  DEV_RX_OFFLOAD_SCATTER |
-				  DEV_RX_OFFLOAD_RSS_HASH,
+	.queue_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				  RTE_ETH_RX_OFFLOAD_SCATTER |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
 	.get_dev_info		= sfc_ef100_rx_get_dev_info,
 	.qsize_up_rings		= sfc_ef100_rx_qsize_up_rings,
 	.qcreate		= sfc_ef100_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index e166fda888b1..67980a587fe4 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -971,16 +971,16 @@ struct sfc_dp_tx sfc_ef100_tx = {
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS |
 				  SFC_DP_TX_FEAT_STATS,
 	.dev_offload_capa	= 0,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_VLAN_INSERT |
-				  DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_MULTI_SEGS |
-				  DEV_TX_OFFLOAD_TCP_TSO |
-				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				  RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				  RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
 	.get_dev_info		= sfc_ef100_get_dev_info,
 	.qsize_up_rings		= sfc_ef100_tx_qsize_up_rings,
 	.qcreate		= sfc_ef100_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 991329e86f01..9ea207cca163 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -746,8 +746,8 @@ struct sfc_dp_rx sfc_ef10_essb_rx = {
 	},
 	.features		= SFC_DP_RX_FEAT_FLOW_FLAG |
 				  SFC_DP_RX_FEAT_FLOW_MARK,
-	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_RSS_HASH,
+	.dev_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
 	.queue_offload_capa	= 0,
 	.get_dev_info		= sfc_ef10_essb_rx_get_dev_info,
 	.pool_ops_supported	= sfc_ef10_essb_rx_pool_ops_supported,
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 49a7d4fb42fd..9aaabd30eee6 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -819,10 +819,10 @@ struct sfc_dp_rx sfc_ef10_rx = {
 	},
 	.features		= SFC_DP_RX_FEAT_MULTI_PROCESS |
 				  SFC_DP_RX_FEAT_INTR,
-	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_RX_OFFLOAD_RSS_HASH,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_SCATTER,
+	.dev_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
+	.queue_offload_capa	= RTE_ETH_RX_OFFLOAD_SCATTER,
 	.get_dev_info		= sfc_ef10_rx_get_dev_info,
 	.qsize_up_rings		= sfc_ef10_rx_qsize_up_rings,
 	.qcreate		= sfc_ef10_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index ed43adb4ca5c..e7da4608bcb0 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -958,9 +958,9 @@ sfc_ef10_tx_qcreate(uint16_t port_id, uint16_t queue_id,
 	if (txq->sw_ring == NULL)
 		goto fail_sw_ring_alloc;
 
-	if (info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-			      DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			      DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) {
+	if (info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+			      RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			      RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) {
 		txq->tsoh = rte_calloc_socket("sfc-ef10-txq-tsoh",
 					      info->txq_entries,
 					      SFC_TSOH_STD_LEN,
@@ -1125,14 +1125,14 @@ struct sfc_dp_tx sfc_ef10_tx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_EF10,
 	},
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
-	.dev_offload_capa	= DEV_TX_OFFLOAD_MULTI_SEGS,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_TSO |
-				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+	.dev_offload_capa	= RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
 	.get_dev_info		= sfc_ef10_get_dev_info,
 	.qsize_up_rings		= sfc_ef10_tx_qsize_up_rings,
 	.qcreate		= sfc_ef10_tx_qcreate,
@@ -1152,11 +1152,11 @@ struct sfc_dp_tx sfc_ef10_simple_tx = {
 		.type		= SFC_DP_TX,
 	},
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
-	.dev_offload_capa	= DEV_TX_OFFLOAD_MBUF_FAST_FREE,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM,
+	.dev_offload_capa	= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM,
 	.get_dev_info		= sfc_ef10_get_dev_info,
 	.qsize_up_rings		= sfc_ef10_tx_qsize_up_rings,
 	.qcreate		= sfc_ef10_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index f5986b610fff..833d833a0408 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -105,19 +105,19 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_vfs = sa->sriov.num_vfs;
 
 	/* Autonegotiation may be disabled */
-	dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_1000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_1G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_10000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_10G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_25000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_25G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_40000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_50000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_100000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
 
 	dev_info->max_rx_queues = sa->rxq_max;
 	dev_info->max_tx_queues = sa->txq_max;
@@ -145,8 +145,8 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->tx_offload_capa = sfc_tx_get_dev_offload_caps(sa) |
 				    dev_info->tx_queue_offload_capa;
 
-	if (dev_info->tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
-		txq_offloads_def |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	if (dev_info->tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+		txq_offloads_def |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->default_txconf.offloads |= txq_offloads_def;
 
@@ -989,16 +989,16 @@ sfc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	switch (link_fc) {
 	case 0:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		break;
 	case EFX_FCNTL_RESPOND:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case EFX_FCNTL_GENERATE:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case (EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE):
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	default:
 		sfc_err(sa, "%s: unexpected flow control value %#x",
@@ -1029,16 +1029,16 @@ sfc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		fcntl = 0;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		fcntl = EFX_FCNTL_RESPOND;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		fcntl = EFX_FCNTL_GENERATE;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		fcntl = EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE;
 		break;
 	default:
@@ -1313,7 +1313,7 @@ sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 	qinfo->conf.rx_deferred_start = rxq_info->deferred_start;
 	qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads;
 	if (rxq_info->type_flags & EFX_RXQ_FLAG_SCATTER) {
-		qinfo->conf.offloads |= DEV_RX_OFFLOAD_SCATTER;
+		qinfo->conf.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 		qinfo->scattered_rx = 1;
 	}
 	qinfo->nb_desc = rxq_info->entries;
@@ -1523,9 +1523,9 @@ static efx_tunnel_protocol_t
 sfc_tunnel_rte_type_to_efx_udp_proto(enum rte_eth_tunnel_type rte_type)
 {
 	switch (rte_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		return EFX_TUNNEL_PROTOCOL_VXLAN;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		return EFX_TUNNEL_PROTOCOL_GENEVE;
 	default:
 		return EFX_TUNNEL_NPROTOS;
@@ -1652,7 +1652,7 @@ sfc_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	/*
 	 * Mapping of hash configuration between RTE and EFX is not one-to-one,
-	 * hence, conversion is done here to derive a correct set of ETH_RSS
+	 * hence, conversion is done here to derive a correct set of RTE_ETH_RSS
 	 * flags which corresponds to the active EFX configuration stored
 	 * locally in 'sfc_adapter' and kept up-to-date
 	 */
@@ -1778,8 +1778,8 @@ sfc_dev_rss_reta_query(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	for (entry = 0; entry < reta_size; entry++) {
-		int grp = entry / RTE_RETA_GROUP_SIZE;
-		int grp_idx = entry % RTE_RETA_GROUP_SIZE;
+		int grp = entry / RTE_ETH_RETA_GROUP_SIZE;
+		int grp_idx = entry % RTE_ETH_RETA_GROUP_SIZE;
 
 		if ((reta_conf[grp].mask >> grp_idx) & 1)
 			reta_conf[grp].reta[grp_idx] = rss->tbl[entry];
@@ -1828,10 +1828,10 @@ sfc_dev_rss_reta_update(struct rte_eth_dev *dev,
 	rte_memcpy(rss_tbl_new, rss->tbl, sizeof(rss->tbl));
 
 	for (entry = 0; entry < reta_size; entry++) {
-		int grp_idx = entry % RTE_RETA_GROUP_SIZE;
+		int grp_idx = entry % RTE_ETH_RETA_GROUP_SIZE;
 		struct rte_eth_rss_reta_entry64 *grp;
 
-		grp = &reta_conf[entry / RTE_RETA_GROUP_SIZE];
+		grp = &reta_conf[entry / RTE_ETH_RETA_GROUP_SIZE];
 
 		if (grp->mask & (1ull << grp_idx)) {
 			if (grp->reta[grp_idx] >= rss->channels) {
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 8096af56739f..be2dfe778a0d 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -392,7 +392,7 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = NULL;
 	const struct rte_flow_item_vlan *mask = NULL;
 	const struct rte_flow_item_vlan supp_mask = {
-		.tci = rte_cpu_to_be_16(ETH_VLAN_ID_MAX),
+		.tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
 		.inner_type = RTE_BE16(0xffff),
 	};
 
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index 5320d8903dac..27b02b1119fb 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -573,66 +573,66 @@ sfc_port_link_mode_to_info(efx_link_mode_t link_mode,
 
 	memset(link_info, 0, sizeof(*link_info));
 	if ((link_mode == EFX_LINK_DOWN) || (link_mode == EFX_LINK_UNKNOWN))
-		link_info->link_status = ETH_LINK_DOWN;
+		link_info->link_status = RTE_ETH_LINK_DOWN;
 	else
-		link_info->link_status = ETH_LINK_UP;
+		link_info->link_status = RTE_ETH_LINK_UP;
 
 	switch (link_mode) {
 	case EFX_LINK_10HDX:
-		link_info->link_speed  = ETH_SPEED_NUM_10M;
-		link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_10M;
+		link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case EFX_LINK_10FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_10M;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_10M;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_100HDX:
-		link_info->link_speed  = ETH_SPEED_NUM_100M;
-		link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_100M;
+		link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case EFX_LINK_100FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_100M;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_100M;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_1000HDX:
-		link_info->link_speed  = ETH_SPEED_NUM_1G;
-		link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_1G;
+		link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case EFX_LINK_1000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_1G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_1G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_10000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_10G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_10G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_25000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_25G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_25G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_40000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_40G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_40G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_50000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_50G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_50G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_100000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_100G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_100G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	default:
 		SFC_ASSERT(B_FALSE);
 		/* FALLTHROUGH */
 	case EFX_LINK_UNKNOWN:
 	case EFX_LINK_DOWN:
-		link_info->link_speed  = ETH_SPEED_NUM_NONE;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_NONE;
 		link_info->link_duplex = 0;
 		break;
 	}
 
-	link_info->link_autoneg = ETH_LINK_AUTONEG;
+	link_info->link_autoneg = RTE_ETH_LINK_AUTONEG;
 }
 
 int
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index 2500b14cb006..9d88d554c1ba 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -405,7 +405,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
 	}
 
 	switch (conf->rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		if (nb_rx_queues != 1) {
 			sfcr_err(sr, "Rx RSS is not supported with %u queues",
 				 nb_rx_queues);
@@ -420,7 +420,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
 			ret = -EINVAL;
 		}
 		break;
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 		break;
 	default:
 		sfcr_err(sr, "Rx mode MQ modes other than RSS not supported");
@@ -428,7 +428,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
 		break;
 	}
 
-	if (conf->txmode.mq_mode != ETH_MQ_TX_NONE) {
+	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
 		sfcr_err(sr, "Tx mode MQ modes not supported");
 		ret = -EINVAL;
 	}
@@ -553,8 +553,8 @@ sfc_repr_dev_link_update(struct rte_eth_dev *dev,
 		sfc_port_link_mode_to_info(EFX_LINK_UNKNOWN, &link);
 	} else {
 		memset(&link, 0, sizeof(link));
-		link.link_status = ETH_LINK_UP;
-		link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		link.link_status = RTE_ETH_LINK_UP;
+		link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index c60ef17a922a..23df27c8f45a 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -648,9 +648,9 @@ struct sfc_dp_rx sfc_efx_rx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_RX_EFX,
 	},
 	.features		= SFC_DP_RX_FEAT_INTR,
-	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_RSS_HASH,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_SCATTER,
+	.dev_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
+	.queue_offload_capa	= RTE_ETH_RX_OFFLOAD_SCATTER,
 	.qsize_up_rings		= sfc_efx_rx_qsize_up_rings,
 	.qcreate		= sfc_efx_rx_qcreate,
 	.qdestroy		= sfc_efx_rx_qdestroy,
@@ -931,7 +931,7 @@ sfc_rx_get_offload_mask(struct sfc_adapter *sa)
 	uint64_t no_caps = 0;
 
 	if (encp->enc_tunnel_encapsulations_supported == 0)
-		no_caps |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		no_caps |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 	return ~no_caps;
 }
@@ -1140,7 +1140,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 
 	if (!sfc_rx_check_scatter(sa->port.pdu, buf_size,
 				  encp->enc_rx_prefix_size,
-				  (offloads & DEV_RX_OFFLOAD_SCATTER),
+				  (offloads & RTE_ETH_RX_OFFLOAD_SCATTER),
 				  encp->enc_rx_scatter_max,
 				  &error)) {
 		sfc_err(sa, "RxQ %d (internal %u) MTU check failed: %s",
@@ -1166,15 +1166,15 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 		rxq_info->type = EFX_RXQ_TYPE_DEFAULT;
 
 	rxq_info->type_flags |=
-		(offloads & DEV_RX_OFFLOAD_SCATTER) ?
+		(offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ?
 		EFX_RXQ_FLAG_SCATTER : EFX_RXQ_FLAG_NONE;
 
 	if ((encp->enc_tunnel_encapsulations_supported != 0) &&
 	    (sfc_dp_rx_offload_capa(sa->priv.dp_rx) &
-	     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+	     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
 		rxq_info->type_flags |= EFX_RXQ_FLAG_INNER_CLASSES;
 
-	if (offloads & DEV_RX_OFFLOAD_RSS_HASH)
+	if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)
 		rxq_info->type_flags |= EFX_RXQ_FLAG_RSS_HASH;
 
 	if ((sa->negotiated_rx_metadata & RTE_ETH_RX_METADATA_USER_FLAG) != 0)
@@ -1211,7 +1211,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 	rxq_info->refill_mb_pool = mb_pool;
 
 	if (rss->hash_support == EFX_RX_HASH_AVAILABLE && rss->channels > 0 &&
-	    (offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	    (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		rxq_info->rxq_flags = SFC_RXQ_FLAG_RSS_HASH;
 	else
 		rxq_info->rxq_flags = 0;
@@ -1313,19 +1313,19 @@ sfc_rx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
  * Mapping between RTE RSS hash functions and their EFX counterparts.
  */
 static const struct sfc_rss_hf_rte_to_efx sfc_rss_hf_map[] = {
-	{ ETH_RSS_NONFRAG_IPV4_TCP,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 	  EFX_RX_HASH(IPV4_TCP, 4TUPLE) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 	  EFX_RX_HASH(IPV4_UDP, 4TUPLE) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX,
 	  EFX_RX_HASH(IPV6_TCP, 4TUPLE) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX,
 	  EFX_RX_HASH(IPV6_UDP, 4TUPLE) },
-	{ ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER,
+	{ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	  EFX_RX_HASH(IPV4_TCP, 2TUPLE) | EFX_RX_HASH(IPV4_UDP, 2TUPLE) |
 	  EFX_RX_HASH(IPV4, 2TUPLE) },
-	{ ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER |
-	  ETH_RSS_IPV6_EX,
+	{ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+	  RTE_ETH_RSS_IPV6_EX,
 	  EFX_RX_HASH(IPV6_TCP, 2TUPLE) | EFX_RX_HASH(IPV6_UDP, 2TUPLE) |
 	  EFX_RX_HASH(IPV6, 2TUPLE) }
 };
@@ -1645,10 +1645,10 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
 	int rc = 0;
 
 	switch (rxmode->mq_mode) {
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 		/* No special checks are required */
 		break;
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		if (rss->context_type == EFX_RX_SCALE_UNAVAILABLE) {
 			sfc_err(sa, "RSS is not available");
 			rc = EINVAL;
@@ -1665,16 +1665,16 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
 	 * so unsupported offloads cannot be added as the result of
 	 * below check.
 	 */
-	if ((rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM) !=
-	    (offloads_supported & DEV_RX_OFFLOAD_CHECKSUM)) {
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM) !=
+	    (offloads_supported & RTE_ETH_RX_OFFLOAD_CHECKSUM)) {
 		sfc_warn(sa, "Rx checksum offloads cannot be disabled - always on (IPv4/TCP/UDP)");
-		rxmode->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 	}
 
-	if ((offloads_supported & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
-	    (~rxmode->offloads & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+	if ((offloads_supported & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+	    (~rxmode->offloads & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 		sfc_warn(sa, "Rx outer IPv4 checksum offload cannot be disabled - always on");
-		rxmode->offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 	}
 
 	return rc;
@@ -1820,7 +1820,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	}
 
 configure_rss:
-	rss->channels = (dev_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) ?
+	rss->channels = (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) ?
 			 MIN(sas->ethdev_rxq_count, EFX_MAXRSS) : 0;
 
 	if (rss->channels > 0) {
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 13392cdd5a09..0273788c20ce 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -54,23 +54,23 @@ sfc_tx_get_offload_mask(struct sfc_adapter *sa)
 	uint64_t no_caps = 0;
 
 	if (!encp->enc_hw_tx_insert_vlan_enabled)
-		no_caps |= DEV_TX_OFFLOAD_VLAN_INSERT;
+		no_caps |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if (!encp->enc_tunnel_encapsulations_supported)
-		no_caps |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+		no_caps |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 	if (!sa->tso)
-		no_caps |= DEV_TX_OFFLOAD_TCP_TSO;
+		no_caps |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (!sa->tso_encap ||
 	    (encp->enc_tunnel_encapsulations_supported &
 	     (1u << EFX_TUNNEL_PROTOCOL_VXLAN)) == 0)
-		no_caps |= DEV_TX_OFFLOAD_VXLAN_TNL_TSO;
+		no_caps |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
 
 	if (!sa->tso_encap ||
 	    (encp->enc_tunnel_encapsulations_supported &
 	     (1u << EFX_TUNNEL_PROTOCOL_GENEVE)) == 0)
-		no_caps |= DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+		no_caps |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
 
 	return ~no_caps;
 }
@@ -114,8 +114,8 @@ sfc_tx_qcheck_conf(struct sfc_adapter *sa, unsigned int txq_max_fill_level,
 	}
 
 	/* We either perform both TCP and UDP offload, or no offload at all */
-	if (((offloads & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) !=
-	    ((offloads & DEV_TX_OFFLOAD_UDP_CKSUM) == 0)) {
+	if (((offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) !=
+	    ((offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0)) {
 		sfc_err(sa, "TCP and UDP offloads can't be set independently");
 		rc = EINVAL;
 	}
@@ -309,7 +309,7 @@ sfc_tx_check_mode(struct sfc_adapter *sa, const struct rte_eth_txmode *txmode)
 	int rc = 0;
 
 	switch (txmode->mq_mode) {
-	case ETH_MQ_TX_NONE:
+	case RTE_ETH_MQ_TX_NONE:
 		break;
 	default:
 		sfc_err(sa, "Tx multi-queue mode %u not supported",
@@ -529,23 +529,23 @@ sfc_tx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 	if (rc != 0)
 		goto fail_ev_qstart;
 
-	if (txq_info->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+	if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		flags |= EFX_TXQ_CKSUM_IPV4;
 
-	if (txq_info->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+	if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 		flags |= EFX_TXQ_CKSUM_INNER_IPV4;
 
-	if ((txq_info->offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
-	    (txq_info->offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+	if ((txq_info->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+	    (txq_info->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
 		flags |= EFX_TXQ_CKSUM_TCPUDP;
 
-		if (offloads_supported & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+		if (offloads_supported & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 			flags |= EFX_TXQ_CKSUM_INNER_TCPUDP;
 	}
 
-	if (txq_info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+	if (txq_info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
 		flags |= EFX_TXQ_FATSOV2;
 
 	rc = efx_tx_qcreate(sa->nic, txq->hw_index, 0, &txq->mem,
@@ -876,9 +876,9 @@ sfc_efx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		/*
 		 * Here VLAN TCI is expected to be zero in case if no
-		 * DEV_TX_OFFLOAD_VLAN_INSERT capability is advertised;
+		 * RTE_ETH_TX_OFFLOAD_VLAN_INSERT capability is advertised;
 		 * if the calling app ignores the absence of
-		 * DEV_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
+		 * RTE_ETH_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
 		 * TX_ERROR will occur
 		 */
 		pkt_descs += sfc_efx_tx_maybe_insert_tag(txq, m_seg, &pend);
@@ -1242,13 +1242,13 @@ struct sfc_dp_tx sfc_efx_tx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_TX_EFX,
 	},
 	.features		= 0,
-	.dev_offload_capa	= DEV_TX_OFFLOAD_VLAN_INSERT |
-				  DEV_TX_OFFLOAD_MULTI_SEGS,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_TSO,
+	.dev_offload_capa	= RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				  RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_TSO,
 	.qsize_up_rings		= sfc_efx_tx_qsize_up_rings,
 	.qcreate		= sfc_efx_tx_qcreate,
 	.qdestroy		= sfc_efx_tx_qdestroy,
diff --git a/drivers/net/softnic/rte_eth_softnic.c b/drivers/net/softnic/rte_eth_softnic.c
index b3b55b9035b1..3ef33818a9e0 100644
--- a/drivers/net/softnic/rte_eth_softnic.c
+++ b/drivers/net/softnic/rte_eth_softnic.c
@@ -173,7 +173,7 @@ pmd_dev_start(struct rte_eth_dev *dev)
 		return status;
 
 	/* Link UP */
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -184,7 +184,7 @@ pmd_dev_stop(struct rte_eth_dev *dev)
 	struct pmd_internals *p = dev->data->dev_private;
 
 	/* Link DOWN */
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	/* Firmware */
 	softnic_pipeline_disable_all(p);
@@ -386,10 +386,10 @@ pmd_ethdev_register(struct rte_vdev_device *vdev,
 
 	/* dev->data */
 	dev->data->dev_private = dev_private;
-	dev->data->dev_link.link_speed = ETH_SPEED_NUM_100G;
-	dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+	dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	dev->data->mac_addrs = &eth_addr;
 	dev->data->promiscuous = 1;
 	dev->data->numa_node = params->cpu_id;
diff --git a/drivers/net/szedata2/rte_eth_szedata2.c b/drivers/net/szedata2/rte_eth_szedata2.c
index 3c6a285e3c5e..6a084e3e1b1b 100644
--- a/drivers/net/szedata2/rte_eth_szedata2.c
+++ b/drivers/net/szedata2/rte_eth_szedata2.c
@@ -1042,7 +1042,7 @@ static int
 eth_dev_configure(struct rte_eth_dev *dev)
 {
 	struct rte_eth_dev_data *data = dev->data;
-	if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		dev->rx_pkt_burst = eth_szedata2_rx_scattered;
 		data->scattered_rx = 1;
 	} else {
@@ -1064,11 +1064,11 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_rx_queues = internals->max_rx_queues;
 	dev_info->max_tx_queues = internals->max_tx_queues;
 	dev_info->min_rx_bufsize = 0;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
 	dev_info->tx_offload_capa = 0;
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->tx_queue_offload_capa = 0;
-	dev_info->speed_capa = ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -1202,10 +1202,10 @@ eth_link_update(struct rte_eth_dev *dev,
 
 	memset(&link, 0, sizeof(link));
 
-	link.link_speed = ETH_SPEED_NUM_100G;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_status = ETH_LINK_UP;
-	link.link_autoneg = ETH_LINK_FIXED;
+	link.link_speed = RTE_ETH_SPEED_NUM_100G;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 
 	rte_eth_linkstatus_set(dev, &link);
 	return 0;
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index e4f1ad45219e..5d5350d78e03 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -70,16 +70,16 @@
 
 #define TAP_IOV_DEFAULT_MAX 1024
 
-#define TAP_RX_OFFLOAD (DEV_RX_OFFLOAD_SCATTER |	\
-			DEV_RX_OFFLOAD_IPV4_CKSUM |	\
-			DEV_RX_OFFLOAD_UDP_CKSUM |	\
-			DEV_RX_OFFLOAD_TCP_CKSUM)
+#define TAP_RX_OFFLOAD (RTE_ETH_RX_OFFLOAD_SCATTER |	\
+			RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	\
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
 
-#define TAP_TX_OFFLOAD (DEV_TX_OFFLOAD_MULTI_SEGS |	\
-			DEV_TX_OFFLOAD_IPV4_CKSUM |	\
-			DEV_TX_OFFLOAD_UDP_CKSUM |	\
-			DEV_TX_OFFLOAD_TCP_CKSUM |	\
-			DEV_TX_OFFLOAD_TCP_TSO)
+#define TAP_TX_OFFLOAD (RTE_ETH_TX_OFFLOAD_MULTI_SEGS |	\
+			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |	\
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |	\
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM |	\
+			RTE_ETH_TX_OFFLOAD_TCP_TSO)
 
 static int tap_devices_count;
 
@@ -97,10 +97,10 @@ static const char *valid_arguments[] = {
 static volatile uint32_t tap_trigger;	/* Rx trigger */
 
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 static void
@@ -433,7 +433,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 		len = readv(process_private->rxq_fds[rxq->queue_id],
 			*rxq->iovecs,
-			1 + (rxq->rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ?
+			1 + (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ?
 			     rxq->nb_rx_desc : 1));
 		if (len < (int)sizeof(struct tun_pi))
 			break;
@@ -489,7 +489,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		seg->next = NULL;
 		mbuf->packet_type = rte_net_get_ptype(mbuf, NULL,
 						      RTE_PTYPE_ALL_MASK);
-		if (rxq->rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+		if (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 			tap_verify_csum(mbuf);
 
 		/* account for the receive frame */
@@ -866,7 +866,7 @@ tap_link_set_down(struct rte_eth_dev *dev)
 	struct pmd_internals *pmd = dev->data->dev_private;
 	struct ifreq ifr = { .ifr_flags = IFF_UP };
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 0, LOCAL_ONLY);
 }
 
@@ -876,7 +876,7 @@ tap_link_set_up(struct rte_eth_dev *dev)
 	struct pmd_internals *pmd = dev->data->dev_private;
 	struct ifreq ifr = { .ifr_flags = IFF_UP };
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 1, LOCAL_AND_REMOTE);
 }
 
@@ -956,30 +956,30 @@ tap_dev_speed_capa(void)
 	uint32_t speed = pmd_link.link_speed;
 	uint32_t capa = 0;
 
-	if (speed >= ETH_SPEED_NUM_10M)
-		capa |= ETH_LINK_SPEED_10M;
-	if (speed >= ETH_SPEED_NUM_100M)
-		capa |= ETH_LINK_SPEED_100M;
-	if (speed >= ETH_SPEED_NUM_1G)
-		capa |= ETH_LINK_SPEED_1G;
-	if (speed >= ETH_SPEED_NUM_5G)
-		capa |= ETH_LINK_SPEED_2_5G;
-	if (speed >= ETH_SPEED_NUM_5G)
-		capa |= ETH_LINK_SPEED_5G;
-	if (speed >= ETH_SPEED_NUM_10G)
-		capa |= ETH_LINK_SPEED_10G;
-	if (speed >= ETH_SPEED_NUM_20G)
-		capa |= ETH_LINK_SPEED_20G;
-	if (speed >= ETH_SPEED_NUM_25G)
-		capa |= ETH_LINK_SPEED_25G;
-	if (speed >= ETH_SPEED_NUM_40G)
-		capa |= ETH_LINK_SPEED_40G;
-	if (speed >= ETH_SPEED_NUM_50G)
-		capa |= ETH_LINK_SPEED_50G;
-	if (speed >= ETH_SPEED_NUM_56G)
-		capa |= ETH_LINK_SPEED_56G;
-	if (speed >= ETH_SPEED_NUM_100G)
-		capa |= ETH_LINK_SPEED_100G;
+	if (speed >= RTE_ETH_SPEED_NUM_10M)
+		capa |= RTE_ETH_LINK_SPEED_10M;
+	if (speed >= RTE_ETH_SPEED_NUM_100M)
+		capa |= RTE_ETH_LINK_SPEED_100M;
+	if (speed >= RTE_ETH_SPEED_NUM_1G)
+		capa |= RTE_ETH_LINK_SPEED_1G;
+	if (speed >= RTE_ETH_SPEED_NUM_5G)
+		capa |= RTE_ETH_LINK_SPEED_2_5G;
+	if (speed >= RTE_ETH_SPEED_NUM_5G)
+		capa |= RTE_ETH_LINK_SPEED_5G;
+	if (speed >= RTE_ETH_SPEED_NUM_10G)
+		capa |= RTE_ETH_LINK_SPEED_10G;
+	if (speed >= RTE_ETH_SPEED_NUM_20G)
+		capa |= RTE_ETH_LINK_SPEED_20G;
+	if (speed >= RTE_ETH_SPEED_NUM_25G)
+		capa |= RTE_ETH_LINK_SPEED_25G;
+	if (speed >= RTE_ETH_SPEED_NUM_40G)
+		capa |= RTE_ETH_LINK_SPEED_40G;
+	if (speed >= RTE_ETH_SPEED_NUM_50G)
+		capa |= RTE_ETH_LINK_SPEED_50G;
+	if (speed >= RTE_ETH_SPEED_NUM_56G)
+		capa |= RTE_ETH_LINK_SPEED_56G;
+	if (speed >= RTE_ETH_SPEED_NUM_100G)
+		capa |= RTE_ETH_LINK_SPEED_100G;
 
 	return capa;
 }
@@ -1196,15 +1196,15 @@ tap_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 		tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, REMOTE_ONLY);
 		if (!(ifr.ifr_flags & IFF_UP) ||
 		    !(ifr.ifr_flags & IFF_RUNNING)) {
-			dev_link->link_status = ETH_LINK_DOWN;
+			dev_link->link_status = RTE_ETH_LINK_DOWN;
 			return 0;
 		}
 	}
 	tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, LOCAL_ONLY);
 	dev_link->link_status =
 		((ifr.ifr_flags & IFF_UP) && (ifr.ifr_flags & IFF_RUNNING) ?
-		 ETH_LINK_UP :
-		 ETH_LINK_DOWN);
+		 RTE_ETH_LINK_UP :
+		 RTE_ETH_LINK_DOWN);
 	return 0;
 }
 
@@ -1391,7 +1391,7 @@ tap_gso_ctx_setup(struct rte_gso_ctx *gso_ctx, struct rte_eth_dev *dev)
 	int ret;
 
 	/* initialize GSO context */
-	gso_types = DEV_TX_OFFLOAD_TCP_TSO;
+	gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	if (!pmd->gso_ctx_mp) {
 		/*
 		 * Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE
@@ -1606,9 +1606,9 @@ tap_tx_queue_setup(struct rte_eth_dev *dev,
 
 	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
 	txq->csum = !!(offloads &
-			(DEV_TX_OFFLOAD_IPV4_CKSUM |
-			 DEV_TX_OFFLOAD_UDP_CKSUM |
-			 DEV_TX_OFFLOAD_TCP_CKSUM));
+			(RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			 RTE_ETH_TX_OFFLOAD_TCP_CKSUM));
 
 	ret = tap_setup_queue(dev, internals, tx_queue_id, 0);
 	if (ret == -1)
@@ -1760,7 +1760,7 @@ static int
 tap_flow_ctrl_get(struct rte_eth_dev *dev __rte_unused,
 		  struct rte_eth_fc_conf *fc_conf)
 {
-	fc_conf->mode = RTE_FC_NONE;
+	fc_conf->mode = RTE_ETH_FC_NONE;
 	return 0;
 }
 
@@ -1768,7 +1768,7 @@ static int
 tap_flow_ctrl_set(struct rte_eth_dev *dev __rte_unused,
 		  struct rte_eth_fc_conf *fc_conf)
 {
-	if (fc_conf->mode != RTE_FC_NONE)
+	if (fc_conf->mode != RTE_ETH_FC_NONE)
 		return -ENOTSUP;
 	return 0;
 }
@@ -2262,7 +2262,7 @@ rte_pmd_tun_probe(struct rte_vdev_device *dev)
 			}
 		}
 	}
-	pmd_link.link_speed = ETH_SPEED_NUM_10G;
+	pmd_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 
 	TAP_LOG(DEBUG, "Initializing pmd_tun for %s", name);
 
@@ -2436,7 +2436,7 @@ rte_pmd_tap_probe(struct rte_vdev_device *dev)
 		return 0;
 	}
 
-	speed = ETH_SPEED_NUM_10G;
+	speed = RTE_ETH_SPEED_NUM_10G;
 
 	/* use tap%d which causes kernel to choose next available */
 	strlcpy(tap_name, DEFAULT_TAP_NAME "%d", RTE_ETH_NAME_MAX_LEN);
diff --git a/drivers/net/tap/tap_rss.h b/drivers/net/tap/tap_rss.h
index 176e7180bdaa..48c151cf6b68 100644
--- a/drivers/net/tap/tap_rss.h
+++ b/drivers/net/tap/tap_rss.h
@@ -13,7 +13,7 @@
 #define TAP_RSS_HASH_KEY_SIZE 40
 
 /* Supported RSS */
-#define TAP_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP))
+#define TAP_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP))
 
 /* hashed fields for RSS */
 enum hash_field {
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 8ce9a99dc074..762647e3b6ee 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -61,14 +61,14 @@ nicvf_link_status_update(struct nicvf *nic,
 {
 	memset(link, 0, sizeof(*link));
 
-	link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	if (nic->duplex == NICVF_HALF_DUPLEX)
-		link->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	else if (nic->duplex == NICVF_FULL_DUPLEX)
-		link->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link->link_speed = nic->speed;
-	link->link_autoneg = ETH_LINK_AUTONEG;
+	link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 }
 
 static void
@@ -134,7 +134,7 @@ nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		/* rte_eth_link_get() might need to wait up to 9 seconds */
 		for (i = 0; i < MAX_CHECK_TIME; i++) {
 			nicvf_link_status_update(nic, &link);
-			if (link.link_status == ETH_LINK_UP)
+			if (link.link_status == RTE_ETH_LINK_UP)
 				break;
 			rte_delay_ms(CHECK_INTERVAL);
 		}
@@ -390,35 +390,35 @@ nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
 {
 	uint64_t nic_rss = 0;
 
-	if (ethdev_rss & ETH_RSS_IPV4)
+	if (ethdev_rss & RTE_ETH_RSS_IPV4)
 		nic_rss |= RSS_IP_ENA;
 
-	if (ethdev_rss & ETH_RSS_IPV6)
+	if (ethdev_rss & RTE_ETH_RSS_IPV6)
 		nic_rss |= RSS_IP_ENA;
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
 
-	if (ethdev_rss & ETH_RSS_PORT)
+	if (ethdev_rss & RTE_ETH_RSS_PORT)
 		nic_rss |= RSS_L2_EXTENDED_HASH_ENA;
 
 	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
-		if (ethdev_rss & ETH_RSS_VXLAN)
+		if (ethdev_rss & RTE_ETH_RSS_VXLAN)
 			nic_rss |= RSS_TUN_VXLAN_ENA;
 
-		if (ethdev_rss & ETH_RSS_GENEVE)
+		if (ethdev_rss & RTE_ETH_RSS_GENEVE)
 			nic_rss |= RSS_TUN_GENEVE_ENA;
 
-		if (ethdev_rss & ETH_RSS_NVGRE)
+		if (ethdev_rss & RTE_ETH_RSS_NVGRE)
 			nic_rss |= RSS_TUN_NVGRE_ENA;
 	}
 
@@ -431,28 +431,28 @@ nicvf_rss_nic_to_ethdev(struct nicvf *nic,  uint64_t nic_rss)
 	uint64_t ethdev_rss = 0;
 
 	if (nic_rss & RSS_IP_ENA)
-		ethdev_rss |= (ETH_RSS_IPV4 | ETH_RSS_IPV6);
+		ethdev_rss |= (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6);
 
 	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_TCP_ENA))
-		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_TCP |
-				ETH_RSS_NONFRAG_IPV6_TCP);
+		ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP);
 
 	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_UDP_ENA))
-		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_UDP |
-				ETH_RSS_NONFRAG_IPV6_UDP);
+		ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP);
 
 	if (nic_rss & RSS_L2_EXTENDED_HASH_ENA)
-		ethdev_rss |= ETH_RSS_PORT;
+		ethdev_rss |= RTE_ETH_RSS_PORT;
 
 	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
 		if (nic_rss & RSS_TUN_VXLAN_ENA)
-			ethdev_rss |= ETH_RSS_VXLAN;
+			ethdev_rss |= RTE_ETH_RSS_VXLAN;
 
 		if (nic_rss & RSS_TUN_GENEVE_ENA)
-			ethdev_rss |= ETH_RSS_GENEVE;
+			ethdev_rss |= RTE_ETH_RSS_GENEVE;
 
 		if (nic_rss & RSS_TUN_NVGRE_ENA)
-			ethdev_rss |= ETH_RSS_NVGRE;
+			ethdev_rss |= RTE_ETH_RSS_NVGRE;
 	}
 	return ethdev_rss;
 }
@@ -479,8 +479,8 @@ nicvf_dev_reta_query(struct rte_eth_dev *dev,
 		return ret;
 
 	/* Copy RETA table */
-	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = tbl[j];
 	}
@@ -509,8 +509,8 @@ nicvf_dev_reta_update(struct rte_eth_dev *dev,
 		return ret;
 
 	/* Copy RETA table */
-	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				tbl[j] = reta_conf[i].reta[j];
 	}
@@ -807,9 +807,9 @@ nicvf_configure_rss(struct rte_eth_dev *dev)
 		    dev->data->nb_rx_queues,
 		    dev->data->dev_conf.lpbk_mode, rsshf);
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
 		ret = nicvf_rss_term(nic);
-	else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+	else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 		ret = nicvf_rss_config(nic, dev->data->nb_rx_queues, rsshf);
 	if (ret)
 		PMD_INIT_LOG(ERR, "Failed to configure RSS %d", ret);
@@ -870,7 +870,7 @@ nicvf_set_tx_function(struct rte_eth_dev *dev)
 
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		txq = dev->data->tx_queues[i];
-		if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
 			multiseg = true;
 			break;
 		}
@@ -992,7 +992,7 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
 	txq->offloads = offloads;
 
-	is_single_pool = !!(offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE);
+	is_single_pool = !!(offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE);
 
 	/* Choose optimum free threshold value for multipool case */
 	if (!is_single_pool) {
@@ -1382,11 +1382,11 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	PMD_INIT_FUNC_TRACE();
 
 	/* Autonegotiation may be disabled */
-	dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
-	dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
-				 ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+				 RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
 	if (nicvf_hw_version(nic) != PCI_SUB_DEVICE_ID_CN81XX_NICVF)
-		dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
 
 	dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU;
 	dev_info->max_rx_pktlen = NIC_HW_MAX_MTU + RTE_ETHER_HDR_LEN;
@@ -1415,10 +1415,10 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = NICVF_DEFAULT_TX_FREE_THRESH,
-		.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE |
-			DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM   |
-			DEV_TX_OFFLOAD_UDP_CKSUM          |
-			DEV_TX_OFFLOAD_TCP_CKSUM,
+		.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+			RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM   |
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM          |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM,
 	};
 
 	return 0;
@@ -1582,8 +1582,8 @@ nicvf_vf_start(struct rte_eth_dev *dev, struct nicvf *nic, uint32_t rbdrsz)
 		     nic->rbdr->tail, nb_rbdr_desc, nic->vf_id);
 
 	/* Configure VLAN Strip */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	ret = nicvf_vlan_offload_config(dev, mask);
 
 	/* Based on the packet type(IPv4 or IPv6), the nicvf HW aligns L3 data
@@ -1711,7 +1711,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
 	/* Setup scatter mode if needed by jumbo */
 	if (dev->data->mtu + (uint32_t)NIC_HW_L2_OVERHEAD + 2 * VLAN_TAG_SIZE > buffsz)
 		dev->data->scattered_rx = 1;
-	if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
+	if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) != 0)
 		dev->data->scattered_rx = 1;
 
 	/* Setup MTU */
@@ -1896,8 +1896,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (!rte_eal_has_hugepages()) {
 		PMD_INIT_LOG(INFO, "Huge page is not configured");
@@ -1909,8 +1909,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-		rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+		rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		PMD_INIT_LOG(INFO, "Unsupported rx qmode %d", rxmode->mq_mode);
 		return -EINVAL;
 	}
@@ -1920,7 +1920,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(INFO, "Setting link speed/duplex not supported");
 		return -EINVAL;
 	}
@@ -1955,7 +1955,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		nic->offload_cksum = 1;
 
 	PMD_INIT_LOG(DEBUG, "Configured ethdev port%d hwcap=0x%" PRIx64,
@@ -2032,8 +2032,8 @@ nicvf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	struct nicvf *nic = nicvf_pmd_priv(dev);
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			nicvf_vlan_hw_strip(nic, true);
 		else
 			nicvf_vlan_hw_strip(nic, false);
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index 5d38750d6313..cb474e26b81e 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -16,32 +16,32 @@
 #define NICVF_UNKNOWN_DUPLEX		0xff
 
 #define NICVF_RSS_OFFLOAD_PASS1 ( \
-	ETH_RSS_PORT | \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_PORT | \
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define NICVF_RSS_OFFLOAD_TUNNEL ( \
-	ETH_RSS_VXLAN | \
-	ETH_RSS_GENEVE | \
-	ETH_RSS_NVGRE)
+	RTE_ETH_RSS_VXLAN | \
+	RTE_ETH_RSS_GENEVE | \
+	RTE_ETH_RSS_NVGRE)
 
 #define NICVF_TX_OFFLOAD_CAPA ( \
-	DEV_TX_OFFLOAD_IPV4_CKSUM       | \
-	DEV_TX_OFFLOAD_UDP_CKSUM        | \
-	DEV_TX_OFFLOAD_TCP_CKSUM        | \
-	DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-	DEV_TX_OFFLOAD_MBUF_FAST_FREE   | \
-	DEV_TX_OFFLOAD_MULTI_SEGS)
+	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM       | \
+	RTE_ETH_TX_OFFLOAD_UDP_CKSUM        | \
+	RTE_ETH_TX_OFFLOAD_TCP_CKSUM        | \
+	RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+	RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE   | \
+	RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define NICVF_RX_OFFLOAD_CAPA ( \
-	DEV_RX_OFFLOAD_CHECKSUM    | \
-	DEV_RX_OFFLOAD_VLAN_STRIP  | \
-	DEV_RX_OFFLOAD_SCATTER     | \
-	DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RX_OFFLOAD_CHECKSUM    | \
+	RTE_ETH_RX_OFFLOAD_VLAN_STRIP  | \
+	RTE_ETH_RX_OFFLOAD_SCATTER     | \
+	RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define NICVF_DEFAULT_RX_FREE_THRESH    224
 #define NICVF_DEFAULT_TX_FREE_THRESH    224
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 7b46ffb68635..0b0f9db7cb2a 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -998,7 +998,7 @@ txgbe_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 	rxbal = rd32(hw, TXGBE_RXBAL(rxq->reg_idx));
 	rxbah = rd32(hw, TXGBE_RXBAH(rxq->reg_idx));
 	rxcfg = rd32(hw, TXGBE_RXCFG(rxq->reg_idx));
-	if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 		restart = (rxcfg & TXGBE_RXCFG_ENA) &&
 			!(rxcfg & TXGBE_RXCFG_VLAN);
 		rxcfg |= TXGBE_RXCFG_VLAN;
@@ -1033,7 +1033,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 	vlan_ext = (portctrl & TXGBE_PORTCTL_VLANEXT);
 	qinq = vlan_ext && (portctrl & TXGBE_PORTCTL_QINQ);
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
+	case RTE_ETH_VLAN_TYPE_INNER:
 		if (vlan_ext) {
 			wr32m(hw, TXGBE_VLANCTL,
 				TXGBE_VLANCTL_TPID_MASK,
@@ -1053,7 +1053,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 				TXGBE_TAGTPID_LSB(tpid));
 		}
 		break;
-	case ETH_VLAN_TYPE_OUTER:
+	case RTE_ETH_VLAN_TYPE_OUTER:
 		if (vlan_ext) {
 			/* Only the high 16-bits is valid */
 			wr32m(hw, TXGBE_EXTAG,
@@ -1138,10 +1138,10 @@ txgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 
 	if (on) {
 		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
-		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
 		rxq->vlan_flags = PKT_RX_VLAN;
-		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
 
@@ -1240,7 +1240,7 @@ txgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
 
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			txgbe_vlan_strip_queue_set(dev, i, 1);
 		else
 			txgbe_vlan_strip_queue_set(dev, i, 0);
@@ -1254,17 +1254,17 @@ txgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	struct txgbe_rx_queue *rxq;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		rxmode = &dev->data->dev_conf.rxmode;
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 		else
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 	}
 }
@@ -1275,25 +1275,25 @@ txgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	rxmode = &dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_STRIP_MASK)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK)
 		txgbe_vlan_hw_strip_config(dev);
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			txgbe_vlan_hw_filter_enable(dev);
 		else
 			txgbe_vlan_hw_filter_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			txgbe_vlan_hw_extend_enable(dev);
 		else
 			txgbe_vlan_hw_extend_disable(dev);
 	}
 
-	if (mask & ETH_QINQ_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+	if (mask & RTE_ETH_QINQ_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
 			txgbe_qinq_hw_strip_enable(dev);
 		else
 			txgbe_qinq_hw_strip_disable(dev);
@@ -1331,10 +1331,10 @@ txgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
 	switch (nb_rx_q) {
 	case 1:
 	case 2:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
 		break;
 	case 4:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
 		break;
 	default:
 		return -EINVAL;
@@ -1357,18 +1357,18 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
 		/* check multi-queue mode */
 		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
 			break;
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
 			/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
 			PMD_INIT_LOG(ERR, "SRIOV active,"
 					" unsupported mq_mode rx %d.",
 					dev_conf->rxmode.mq_mode);
 			return -EINVAL;
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
 			if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
 				if (txgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
 					PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -1378,13 +1378,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 					return -EINVAL;
 				}
 			break;
-		case ETH_MQ_RX_VMDQ_ONLY:
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_NONE:
 			/* if nothing mq mode configure, use default scheme */
 			dev->data->dev_conf.rxmode.mq_mode =
-				ETH_MQ_RX_VMDQ_ONLY;
+				RTE_ETH_MQ_RX_VMDQ_ONLY;
 			break;
-		default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+		default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB*/
 			/* SRIOV only works in VMDq enable mode */
 			PMD_INIT_LOG(ERR, "SRIOV is active,"
 					" wrong mq_mode rx %d.",
@@ -1393,13 +1393,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 		}
 
 		switch (dev_conf->txmode.mq_mode) {
-		case ETH_MQ_TX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+		case RTE_ETH_MQ_TX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+			dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
 			break;
-		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
+		default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
 			dev->data->dev_conf.txmode.mq_mode =
-				ETH_MQ_TX_VMDQ_ONLY;
+				RTE_ETH_MQ_TX_VMDQ_ONLY;
 			break;
 		}
 
@@ -1414,13 +1414,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 			return -EINVAL;
 		}
 	} else {
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
 			PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
 					  " not supported.");
 			return -EINVAL;
 		}
 		/* check configuration for vmdb+dcb mode */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_conf *conf;
 
 			if (nb_rx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1429,15 +1429,15 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools must be %d or %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_tx_conf *conf;
 
 			if (nb_tx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1446,39 +1446,39 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools != %d and"
 						" nb_queue_pools != %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
 
 		/* For DCB mode check our configuration before we go further */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
 			const struct rte_eth_dcb_rx_conf *conf;
 
 			conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
 
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 			const struct rte_eth_dcb_tx_conf *conf;
 
 			conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
@@ -1495,8 +1495,8 @@ txgbe_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multiple queue mode checking */
 	ret  = txgbe_check_mq_mode(dev);
@@ -1694,15 +1694,15 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 		goto error;
 	}
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = txgbe_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
 		goto error;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
 		/* Enable vlan filtering for VMDq */
 		txgbe_vmdq_vlan_hw_filter_enable(dev);
 	}
@@ -1763,8 +1763,8 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 	if (err)
 		goto error;
 
-	allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_10G;
+	allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G;
 
 	link_speeds = &dev->data->dev_conf.link_speeds;
 	if (((*link_speeds) >> 1) & ~(allowed_speeds >> 1)) {
@@ -1773,20 +1773,20 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 	}
 
 	speed = 0x0;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		speed = (TXGBE_LINK_SPEED_100M_FULL |
 			 TXGBE_LINK_SPEED_1GB_FULL |
 			 TXGBE_LINK_SPEED_10GB_FULL);
 	} else {
-		if (*link_speeds & ETH_LINK_SPEED_10G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
 			speed |= TXGBE_LINK_SPEED_10GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
 			speed |= TXGBE_LINK_SPEED_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_2_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
 			speed |= TXGBE_LINK_SPEED_2_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_1G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed |= TXGBE_LINK_SPEED_1GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_100M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed |= TXGBE_LINK_SPEED_100M_FULL;
 	}
 
@@ -2601,7 +2601,7 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
-	dev_info->max_vmdq_pools = ETH_64_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->vmdq_queue_num = dev_info->max_rx_queues;
 	dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
@@ -2634,11 +2634,11 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->tx_desc_lim = tx_desc_lim;
 
 	dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
-	dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
 
 	/* Driver-preferred Rx/Tx parameters */
 	dev_info->default_rxportconf.burst_size = 32;
@@ -2695,11 +2695,11 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	int wait = 1;
 
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 
 	hw->mac.get_link_status = true;
 
@@ -2713,8 +2713,8 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
 
 	if (err != 0) {
-		link.link_speed = ETH_SPEED_NUM_100M;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
@@ -2733,34 +2733,34 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	}
 
 	intr->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (link_speed) {
 	default:
 	case TXGBE_LINK_SPEED_UNKNOWN:
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	case TXGBE_LINK_SPEED_100M_FULL:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	case TXGBE_LINK_SPEED_1GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case TXGBE_LINK_SPEED_2_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_2_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 
 	case TXGBE_LINK_SPEED_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 
 	case TXGBE_LINK_SPEED_10GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	}
 
@@ -2990,7 +2990,7 @@ txgbe_dev_link_status_print(struct rte_eth_dev *dev)
 		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned int)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3221,13 +3221,13 @@ txgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		tx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -3359,16 +3359,16 @@ txgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 		return -ENOTSUP;
 	}
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += 4) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
 		if (!mask)
 			continue;
@@ -3400,16 +3400,16 @@ txgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += 4) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
 		if (!mask)
 			continue;
@@ -3576,12 +3576,12 @@ txgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
 		return -ENOTSUP;
 
 	if (on) {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = ~0;
 			wr32(hw, TXGBE_UCADDRTBL(i), ~0);
 		}
 	} else {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = 0;
 			wr32(hw, TXGBE_UCADDRTBL(i), 0);
 		}
@@ -3605,15 +3605,15 @@ txgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
 {
 	uint32_t new_val = orig_val;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
 		new_val |= TXGBE_POOLETHCTL_UTA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
 		new_val |= TXGBE_POOLETHCTL_MCHA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 		new_val |= TXGBE_POOLETHCTL_UCHA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 		new_val |= TXGBE_POOLETHCTL_BCA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 		new_val |= TXGBE_POOLETHCTL_MCP;
 
 	return new_val;
@@ -4264,15 +4264,15 @@ txgbe_start_timecounters(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 
 	switch (link.link_speed) {
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		incval = TXGBE_INCVAL_100;
 		shift = TXGBE_INCVAL_SHIFT_100;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		incval = TXGBE_INCVAL_1GB;
 		shift = TXGBE_INCVAL_SHIFT_1GB;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 	default:
 		incval = TXGBE_INCVAL_10GB;
 		shift = TXGBE_INCVAL_SHIFT_10GB;
@@ -4628,7 +4628,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	uint8_t nb_tcs;
 	uint8_t i, j;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
 	else
 		dcb_info->nb_tcs = 1;
@@ -4639,7 +4639,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	if (dcb_config->vt_mode) { /* vt is enabled */
 		struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
 		if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
 			for (j = 0; j < nb_tcs; j++) {
@@ -4663,9 +4663,9 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	} else { /* vt is disabled */
 		struct rte_eth_dcb_rx_conf *rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
-		if (dcb_info->nb_tcs == ETH_4_TCS) {
+		if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4678,7 +4678,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 			dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
 			dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
 			dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
-		} else if (dcb_info->nb_tcs == ETH_8_TCS) {
+		} else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4908,7 +4908,7 @@ txgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = txgbe_e_tag_filter_add(dev, l2_tunnel);
 		break;
 	default:
@@ -4939,7 +4939,7 @@ txgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 		return ret;
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = txgbe_e_tag_filter_del(dev, l2_tunnel);
 		break;
 	default:
@@ -4979,7 +4979,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
 			ret = -EINVAL;
@@ -4987,7 +4987,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_VXLANPORT, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add Geneve port 0 is not allowed.");
 			ret = -EINVAL;
@@ -4995,7 +4995,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_GENEVEPORT, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add Teredo port 0 is not allowed.");
 			ret = -EINVAL;
@@ -5003,7 +5003,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_TEREDOPORT, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
 			ret = -EINVAL;
@@ -5035,7 +5035,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORT);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5045,7 +5045,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_VXLANPORT, 0);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		cur_port = (uint16_t)rd32(hw, TXGBE_GENEVEPORT);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5055,7 +5055,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_GENEVEPORT, 0);
 		break;
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		cur_port = (uint16_t)rd32(hw, TXGBE_TEREDOPORT);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5065,7 +5065,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_TEREDOPORT, 0);
 		break;
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORTGPE);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index fd65d89ffe7d..8304b68292da 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -60,15 +60,15 @@
 #define TXGBE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
 
 #define TXGBE_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define TXGBE_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
 #define TXGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 43dc0ed39b75..283b52e8f3db 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -486,14 +486,14 @@ txgbevf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
-	dev_info->max_vmdq_pools = ETH_64_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
 				     dev_info->rx_queue_offload_capa);
 	dev_info->tx_queue_offload_capa = txgbe_get_tx_queue_offloads(dev);
 	dev_info->tx_offload_capa = txgbe_get_tx_port_offloads(dev);
 	dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -574,22 +574,22 @@ txgbevf_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
 		     dev->data->port_id);
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/*
 	 * VF has no ability to enable/disable HW CRC
 	 * Keep the persistent behavior the same as Host PF
 	 */
 #ifndef RTE_LIBRTE_TXGBE_PF_DISABLE_STRIP_CRC
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
-		conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #else
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
 		PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #endif
 
@@ -647,8 +647,8 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
 	txgbevf_set_vfta_all(dev, 1);
 
 	/* Set HW strip */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = txgbevf_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -891,10 +891,10 @@ txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	int on = 0;
 
 	/* VF function only support hw strip feature, others are not support */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
-			on = !!(rxq->offloads &	DEV_RX_OFFLOAD_VLAN_STRIP);
+			on = !!(rxq->offloads &	RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 			txgbevf_vlan_strip_queue_set(dev, i, on);
 		}
 	}
diff --git a/drivers/net/txgbe/txgbe_fdir.c b/drivers/net/txgbe/txgbe_fdir.c
index 8abb86228608..e303d87176ed 100644
--- a/drivers/net/txgbe/txgbe_fdir.c
+++ b/drivers/net/txgbe/txgbe_fdir.c
@@ -102,22 +102,22 @@ txgbe_fdir_enable(struct txgbe_hw *hw, uint32_t fdirctrl)
  * flexbytes matching field, and drop queue (only for perfect matching mode).
  */
 static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf,
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf,
 		     uint32_t *fdirctrl, uint32_t *flex)
 {
 	*fdirctrl = 0;
 	*flex = 0;
 
 	switch (conf->pballoc) {
-	case RTE_FDIR_PBALLOC_64K:
+	case RTE_ETH_FDIR_PBALLOC_64K:
 		/* 8k - 1 signature filters */
 		*fdirctrl |= TXGBE_FDIRCTL_BUF_64K;
 		break;
-	case RTE_FDIR_PBALLOC_128K:
+	case RTE_ETH_FDIR_PBALLOC_128K:
 		/* 16k - 1 signature filters */
 		*fdirctrl |= TXGBE_FDIRCTL_BUF_128K;
 		break;
-	case RTE_FDIR_PBALLOC_256K:
+	case RTE_ETH_FDIR_PBALLOC_256K:
 		/* 32k - 1 signature filters */
 		*fdirctrl |= TXGBE_FDIRCTL_BUF_256K;
 		break;
@@ -521,15 +521,15 @@ txgbe_atr_compute_hash(struct txgbe_atr_input *atr_input,
 
 static uint32_t
 atr_compute_perfect_hash(struct txgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
 	uint32_t bucket_hash;
 
 	bucket_hash = txgbe_atr_compute_hash(input,
 				TXGBE_ATR_BUCKET_HASH_KEY);
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		bucket_hash &= PERFECT_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		bucket_hash &= PERFECT_BUCKET_128KB_HASH_MASK;
 	else
 		bucket_hash &= PERFECT_BUCKET_64KB_HASH_MASK;
@@ -564,15 +564,15 @@ txgbe_fdir_check_cmd_complete(struct txgbe_hw *hw, uint32_t *fdircmd)
  */
 static uint32_t
 atr_compute_signature_hash(struct txgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
 	uint32_t bucket_hash, sig_hash;
 
 	bucket_hash = txgbe_atr_compute_hash(input,
 				TXGBE_ATR_BUCKET_HASH_KEY);
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		bucket_hash &= SIG_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		bucket_hash &= SIG_BUCKET_128KB_HASH_MASK;
 	else
 		bucket_hash &= SIG_BUCKET_64KB_HASH_MASK;
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index eae400b14176..6d7fd1842843 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -1215,7 +1215,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
 		return -rte_errno;
 	}
 
-	filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+	filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
 	/**
 	 * grp and e_cid_base are bit fields and only use 14 bits.
 	 * e-tag id is taken as little endian by HW.
diff --git a/drivers/net/txgbe/txgbe_ipsec.c b/drivers/net/txgbe/txgbe_ipsec.c
index ccd747973ba2..445733f3ba46 100644
--- a/drivers/net/txgbe/txgbe_ipsec.c
+++ b/drivers/net/txgbe/txgbe_ipsec.c
@@ -372,7 +372,7 @@ txgbe_crypto_create_session(void *device,
 	aead_xform = &conf->crypto_xform->aead;
 
 	if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 			ic_session->op = TXGBE_OP_AUTHENTICATED_DECRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -380,7 +380,7 @@ txgbe_crypto_create_session(void *device,
 			return -ENOTSUP;
 		}
 	} else {
-		if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+		if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 			ic_session->op = TXGBE_OP_AUTHENTICATED_ENCRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -611,11 +611,11 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	tx_offloads = dev->data->dev_conf.txmode.offloads;
 
 	/* sanity checks */
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
 		return -1;
 	}
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
 		return -1;
 	}
@@ -634,7 +634,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	reg |= TXGBE_SECRXCTL_CRCSTRIP;
 	wr32(hw, TXGBE_SECRXCTL, reg);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		wr32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA, 0);
 		reg = rd32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA);
 		if (reg != 0) {
@@ -642,7 +642,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 			return -1;
 		}
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 		wr32(hw, TXGBE_SECTXCTL, TXGBE_SECTXCTL_STFWD);
 		reg = rd32(hw, TXGBE_SECTXCTL);
 		if (reg != TXGBE_SECTXCTL_STFWD) {
diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c
index a48972b1a381..30be2873307a 100644
--- a/drivers/net/txgbe/txgbe_pf.c
+++ b/drivers/net/txgbe/txgbe_pf.c
@@ -101,15 +101,15 @@ int txgbe_pf_host_init(struct rte_eth_dev *eth_dev)
 	memset(uta_info, 0, sizeof(struct txgbe_uta_info));
 	hw->mac.mc_filter_type = 0;
 
-	if (vf_num >= ETH_32_POOLS) {
+	if (vf_num >= RTE_ETH_32_POOLS) {
 		nb_queue = 2;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
-	} else if (vf_num >= ETH_16_POOLS) {
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+	} else if (vf_num >= RTE_ETH_16_POOLS) {
 		nb_queue = 4;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
 	} else {
 		nb_queue = 8;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
 	}
 
 	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -256,13 +256,13 @@ int txgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
 	gcr_ext &= ~TXGBE_PORTCTL_NUMVT_MASK;
 
 	switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		gcr_ext |= TXGBE_PORTCTL_NUMVT_64;
 		break;
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		gcr_ext |= TXGBE_PORTCTL_NUMVT_32;
 		break;
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		gcr_ext |= TXGBE_PORTCTL_NUMVT_16;
 		break;
 	}
@@ -611,29 +611,29 @@ txgbe_get_vf_queues(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf)
 	/* Notify VF of number of DCB traffic classes */
 	eth_conf = &eth_dev->data->dev_conf;
 	switch (eth_conf->txmode.mq_mode) {
-	case ETH_MQ_TX_NONE:
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_NONE:
+	case RTE_ETH_MQ_TX_DCB:
 		PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
 			", but its tx mode = %d\n", vf,
 			eth_conf->txmode.mq_mode);
 		return -1;
 
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
 		switch (vmdq_dcb_tx_conf->nb_queue_pools) {
-		case ETH_16_POOLS:
-			num_tcs = ETH_8_TCS;
+		case RTE_ETH_16_POOLS:
+			num_tcs = RTE_ETH_8_TCS;
 			break;
-		case ETH_32_POOLS:
-			num_tcs = ETH_4_TCS;
+		case RTE_ETH_32_POOLS:
+			num_tcs = RTE_ETH_4_TCS;
 			break;
 		default:
 			return -1;
 		}
 		break;
 
-	/* ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
-	case ETH_MQ_TX_VMDQ_ONLY:
+	/* RTE_ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
+	case RTE_ETH_MQ_TX_VMDQ_ONLY:
 		hw = TXGBE_DEV_HW(eth_dev);
 		vmvir = rd32(hw, TXGBE_POOLTAG(vf));
 		vlana = vmvir & TXGBE_POOLTAG_ACT_MASK;
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 7e18dcce0a86..1204dc5499a5 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1960,7 +1960,7 @@ txgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
 uint64_t
 txgbe_get_rx_queue_offloads(struct rte_eth_dev *dev __rte_unused)
 {
-	return DEV_RX_OFFLOAD_VLAN_STRIP;
+	return RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 }
 
 uint64_t
@@ -1970,34 +1970,34 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
 	struct rte_eth_dev_sriov *sriov = &RTE_ETH_DEV_SRIOV(dev);
 
-	offloads = DEV_RX_OFFLOAD_IPV4_CKSUM  |
-		   DEV_RX_OFFLOAD_UDP_CKSUM   |
-		   DEV_RX_OFFLOAD_TCP_CKSUM   |
-		   DEV_RX_OFFLOAD_KEEP_CRC    |
-		   DEV_RX_OFFLOAD_VLAN_FILTER |
-		   DEV_RX_OFFLOAD_RSS_HASH |
-		   DEV_RX_OFFLOAD_SCATTER;
+	offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+		   RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+		   RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		   RTE_ETH_RX_OFFLOAD_RSS_HASH |
+		   RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	if (!txgbe_is_vf(dev))
-		offloads |= (DEV_RX_OFFLOAD_VLAN_FILTER |
-			     DEV_RX_OFFLOAD_QINQ_STRIP |
-			     DEV_RX_OFFLOAD_VLAN_EXTEND);
+		offloads |= (RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+			     RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+			     RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
 
 	/*
 	 * RSC is only supported by PF devices in a non-SR-IOV
 	 * mode.
 	 */
 	if (hw->mac.type == txgbe_mac_raptor && !sriov->active)
-		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+		offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 
 	if (hw->mac.type == txgbe_mac_raptor)
-		offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
 
-	offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+	offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		offloads |= DEV_RX_OFFLOAD_SECURITY;
+		offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
 #endif
 
 	return offloads;
@@ -2222,32 +2222,32 @@ txgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
 	uint64_t tx_offload_capa;
 
 	tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM   |
-		DEV_TX_OFFLOAD_SCTP_CKSUM  |
-		DEV_TX_OFFLOAD_TCP_TSO     |
-		DEV_TX_OFFLOAD_UDP_TSO	   |
-		DEV_TX_OFFLOAD_UDP_TNL_TSO	|
-		DEV_TX_OFFLOAD_IP_TNL_TSO	|
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO	|
-		DEV_TX_OFFLOAD_GRE_TNL_TSO	|
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO	|
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO	|
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO     |
+		RTE_ETH_TX_OFFLOAD_UDP_TSO	   |
+		RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_IP_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	if (!txgbe_is_vf(dev))
-		tx_offload_capa |= DEV_TX_OFFLOAD_QINQ_INSERT;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
 
-	tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+	tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 
-	tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-			   DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+	tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
 #endif
 	return tx_offload_capa;
 }
@@ -2349,7 +2349,7 @@ txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_deferred_start = tx_conf->tx_deferred_start;
 #ifdef RTE_LIB_SECURITY
 	txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SECURITY);
+			RTE_ETH_TX_OFFLOAD_SECURITY);
 #endif
 
 	/* Modification to set tail pointer for virtual function
@@ -2599,7 +2599,7 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
 		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -2900,20 +2900,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 	if (hw->mac.type == txgbe_mac_raptor_vf) {
 		mrqc = rd32(hw, TXGBE_VFPLCFG);
 		mrqc &= ~TXGBE_VFPLCFG_RSSMASK;
-		if (rss_hf & ETH_RSS_IPV4)
+		if (rss_hf & RTE_ETH_RSS_IPV4)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV4TCP;
-		if (rss_hf & ETH_RSS_IPV6 ||
-		    rss_hf & ETH_RSS_IPV6_EX)
+		if (rss_hf & RTE_ETH_RSS_IPV6 ||
+		    rss_hf & RTE_ETH_RSS_IPV6_EX)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV6;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
-		    rss_hf & ETH_RSS_IPV6_TCP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV6TCP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV4UDP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
-		    rss_hf & ETH_RSS_IPV6_UDP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV6UDP;
 
 		if (rss_hf)
@@ -2930,20 +2930,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 	} else {
 		mrqc = rd32(hw, TXGBE_RACTL);
 		mrqc &= ~TXGBE_RACTL_RSSMASK;
-		if (rss_hf & ETH_RSS_IPV4)
+		if (rss_hf & RTE_ETH_RSS_IPV4)
 			mrqc |= TXGBE_RACTL_RSSIPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			mrqc |= TXGBE_RACTL_RSSIPV4TCP;
-		if (rss_hf & ETH_RSS_IPV6 ||
-		    rss_hf & ETH_RSS_IPV6_EX)
+		if (rss_hf & RTE_ETH_RSS_IPV6 ||
+		    rss_hf & RTE_ETH_RSS_IPV6_EX)
 			mrqc |= TXGBE_RACTL_RSSIPV6;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
-		    rss_hf & ETH_RSS_IPV6_TCP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 			mrqc |= TXGBE_RACTL_RSSIPV6TCP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 			mrqc |= TXGBE_RACTL_RSSIPV4UDP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
-		    rss_hf & ETH_RSS_IPV6_UDP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 			mrqc |= TXGBE_RACTL_RSSIPV6UDP;
 
 		if (rss_hf)
@@ -2984,39 +2984,39 @@ txgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	if (hw->mac.type == txgbe_mac_raptor_vf) {
 		mrqc = rd32(hw, TXGBE_VFPLCFG);
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV4)
-			rss_hf |= ETH_RSS_IPV4;
+			rss_hf |= RTE_ETH_RSS_IPV4;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV4TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV6)
-			rss_hf |= ETH_RSS_IPV6 |
-				  ETH_RSS_IPV6_EX;
+			rss_hf |= RTE_ETH_RSS_IPV6 |
+				  RTE_ETH_RSS_IPV6_EX;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV6TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
-				  ETH_RSS_IPV6_TCP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				  RTE_ETH_RSS_IPV6_TCP_EX;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV4UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV6UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
-				  ETH_RSS_IPV6_UDP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				  RTE_ETH_RSS_IPV6_UDP_EX;
 		if (!(mrqc & TXGBE_VFPLCFG_RSSENA))
 			rss_hf = 0;
 	} else {
 		mrqc = rd32(hw, TXGBE_RACTL);
 		if (mrqc & TXGBE_RACTL_RSSIPV4)
-			rss_hf |= ETH_RSS_IPV4;
+			rss_hf |= RTE_ETH_RSS_IPV4;
 		if (mrqc & TXGBE_RACTL_RSSIPV4TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		if (mrqc & TXGBE_RACTL_RSSIPV6)
-			rss_hf |= ETH_RSS_IPV6 |
-				  ETH_RSS_IPV6_EX;
+			rss_hf |= RTE_ETH_RSS_IPV6 |
+				  RTE_ETH_RSS_IPV6_EX;
 		if (mrqc & TXGBE_RACTL_RSSIPV6TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
-				  ETH_RSS_IPV6_TCP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				  RTE_ETH_RSS_IPV6_TCP_EX;
 		if (mrqc & TXGBE_RACTL_RSSIPV4UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 		if (mrqc & TXGBE_RACTL_RSSIPV6UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
-				  ETH_RSS_IPV6_UDP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				  RTE_ETH_RSS_IPV6_UDP_EX;
 		if (!(mrqc & TXGBE_RACTL_RSSENA))
 			rss_hf = 0;
 	}
@@ -3046,7 +3046,7 @@ txgbe_rss_configure(struct rte_eth_dev *dev)
 	 */
 	if (adapter->rss_reta_updated == 0) {
 		reta = 0;
-		for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+		for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
 			if (j == dev->data->nb_rx_queues)
 				j = 0;
 			reta = (reta >> 8) | LS32(j, 24, 0xFF);
@@ -3083,12 +3083,12 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
 	num_pools = cfg->nb_queue_pools;
 	/* Check we have a valid number of pools */
-	if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+	if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
 		txgbe_rss_disable(dev);
 		return;
 	}
 	/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
-	nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+	nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
 
 	/*
 	 * split rx buffer up into sections, each for 1 traffic class
@@ -3103,7 +3103,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
 	}
 	/* zero alloc all unused TCs */
-	for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		uint32_t rxpbsize = rd32(hw, TXGBE_PBRXSIZE(i));
 
 		rxpbsize &= (~(0x3FF << 10));
@@ -3111,7 +3111,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
 	}
 
-	if (num_pools == ETH_16_POOLS) {
+	if (num_pools == RTE_ETH_16_POOLS) {
 		mrqc = TXGBE_PORTCTL_NUMTC_8;
 		mrqc |= TXGBE_PORTCTL_NUMVT_16;
 	} else {
@@ -3130,7 +3130,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	wr32(hw, TXGBE_POOLCTL, vt_ctl);
 
 	queue_mapping = 0;
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 		/*
 		 * mapping is done with 3 bits per priority,
 		 * so shift by i*3 each time
@@ -3151,7 +3151,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		wr32(hw, TXGBE_VLANTBL(i), 0xFFFFFFFF);
 
 	wr32(hw, TXGBE_POOLRXENA(0),
-			num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+			num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	wr32(hw, TXGBE_ETHADDRIDX, 0);
 	wr32(hw, TXGBE_ETHADDRASSL, 0xFFFFFFFF);
@@ -3221,7 +3221,7 @@ txgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
 	/*PF VF Transmit Enable*/
 	wr32(hw, TXGBE_POOLTXENA(0),
 		vmdq_tx_conf->nb_queue_pools ==
-				ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+				RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	/*Configure general DCB TX parameters*/
 	txgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3237,12 +3237,12 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
-	if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3252,7 +3252,7 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3270,12 +3270,12 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
-	if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3285,7 +3285,7 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3312,7 +3312,7 @@ txgbe_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3339,7 +3339,7 @@ txgbe_dcb_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3475,7 +3475,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(dev);
 
 	switch (dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_VMDQ_DCB:
+	case RTE_ETH_MQ_RX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/*
@@ -3486,8 +3486,8 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		/*Configure general VMDQ and DCB RX parameters*/
 		txgbe_vmdq_dcb_configure(dev);
 		break;
-	case ETH_MQ_RX_DCB:
-	case ETH_MQ_RX_DCB_RSS:
+	case RTE_ETH_MQ_RX_DCB:
+	case RTE_ETH_MQ_RX_DCB_RSS:
 		dcb_config->vt_mode = false;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -3500,7 +3500,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		break;
 	}
 	switch (dev->data->dev_conf.txmode.mq_mode) {
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/* get DCB and VT TX configuration parameters
@@ -3511,7 +3511,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		txgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
 		break;
 
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_DCB:
 		dcb_config->vt_mode = false;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/* get DCB TX configuration parameters from rte_eth_conf */
@@ -3527,15 +3527,15 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	nb_tcs = dcb_config->num_tcs.pfc_tcs;
 	/* Unpack map */
 	txgbe_dcb_unpack_map_cee(dcb_config, TXGBE_DCB_RX_CONFIG, map);
-	if (nb_tcs == ETH_4_TCS) {
+	if (nb_tcs == RTE_ETH_4_TCS) {
 		/* Avoid un-configured priority mapping to TC0 */
 		uint8_t j = 4;
 		uint8_t mask = 0xFF;
 
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
 			mask = (uint8_t)(mask & (~(1 << map[i])));
 		for (i = 0; mask && (i < TXGBE_DCB_TC_MAX); i++) {
-			if ((mask & 0x1) && j < ETH_DCB_NUM_USER_PRIORITIES)
+			if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
 				map[j++] = i;
 			mask >>= 1;
 		}
@@ -3576,7 +3576,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
 
 		/* zero alloc all unused TCs */
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			wr32(hw, TXGBE_PBRXSIZE(i), 0);
 	}
 	if (config_dcb_tx) {
@@ -3592,7 +3592,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			wr32(hw, TXGBE_PBTXDMATH(i), txpbthresh);
 		}
 		/* Clear unused TCs, if any, to zero buffer size*/
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			wr32(hw, TXGBE_PBTXSIZE(i), 0);
 			wr32(hw, TXGBE_PBTXDMATH(i), 0);
 		}
@@ -3634,7 +3634,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	txgbe_dcb_config_tc_stats_raptor(hw, dcb_config);
 
 	/* Check if the PFC is supported */
-	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
 		for (i = 0; i < nb_tcs; i++) {
 			/* If the TC count is 8,
@@ -3648,7 +3648,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			tc->pfc = txgbe_dcb_pfc_enabled;
 		}
 		txgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
-		if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+		if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
 			pfc_en &= 0x0F;
 		ret = txgbe_dcb_config_pfc(hw, pfc_en, map);
 	}
@@ -3719,12 +3719,12 @@ void txgbe_configure_dcb(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	/* check support mq_mode for DCB */
-	if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB &&
-	    dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB &&
-	    dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS)
+	if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
 		return;
 
-	if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+	if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
 		return;
 
 	/** Configure DCB hardware **/
@@ -3780,7 +3780,7 @@ txgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 
 	/* pool enabling for receive - 64 */
 	wr32(hw, TXGBE_POOLRXENA(0), UINT32_MAX);
-	if (num_pools == ETH_64_POOLS)
+	if (num_pools == RTE_ETH_64_POOLS)
 		wr32(hw, TXGBE_POOLRXENA(1), UINT32_MAX);
 
 	/*
@@ -3904,11 +3904,11 @@ txgbe_config_vf_rss(struct rte_eth_dev *dev)
 	mrqc = rd32(hw, TXGBE_PORTCTL);
 	mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_64;
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_32;
 		break;
 
@@ -3931,15 +3931,15 @@ txgbe_config_vf_default(struct rte_eth_dev *dev)
 	mrqc = rd32(hw, TXGBE_PORTCTL);
 	mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_64;
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_32;
 		break;
 
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_16;
 		break;
 	default:
@@ -3962,21 +3962,21 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * any DCB/RSS w/o VMDq multi-queue setting
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_DCB_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			txgbe_rss_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
 			txgbe_vmdq_dcb_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
 			txgbe_vmdq_rx_hw_configure(dev);
 			break;
 
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_NONE:
 		default:
 			/* if mq_mode is none, disable rss mode.*/
 			txgbe_rss_disable(dev);
@@ -3987,18 +3987,18 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * Support RSS together with SRIOV.
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			txgbe_config_vf_rss(dev);
 			break;
-		case ETH_MQ_RX_VMDQ_DCB:
-		case ETH_MQ_RX_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_DCB:
 		/* In SRIOV, the configuration is the same as VMDq case */
 			txgbe_vmdq_dcb_configure(dev);
 			break;
 		/* DCB/RSS together with SRIOV is not supported */
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
-		case ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
 			PMD_INIT_LOG(ERR,
 				"Could not support DCB/RSS with VMDq & SRIOV");
 			return -1;
@@ -4028,7 +4028,7 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV inactive scheme
 		 * any DCB w/o VMDq multi-queue setting
 		 */
-		if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+		if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
 			txgbe_vmdq_tx_hw_configure(hw);
 		else
 			wr32m(hw, TXGBE_PORTCTL, TXGBE_PORTCTL_NUMVT_MASK, 0);
@@ -4038,13 +4038,13 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV active scheme
 		 * FIXME if support DCB together with VMDq & SRIOV
 		 */
-		case ETH_64_POOLS:
+		case RTE_ETH_64_POOLS:
 			mtqc = TXGBE_PORTCTL_NUMVT_64;
 			break;
-		case ETH_32_POOLS:
+		case RTE_ETH_32_POOLS:
 			mtqc = TXGBE_PORTCTL_NUMVT_32;
 			break;
-		case ETH_16_POOLS:
+		case RTE_ETH_16_POOLS:
 			mtqc = TXGBE_PORTCTL_NUMVT_16;
 			break;
 		default:
@@ -4107,10 +4107,10 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* Sanity check */
 	dev->dev_ops->dev_infos_get(dev, &dev_info);
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		rsc_capable = true;
 
-	if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
 				   "support it");
 		return -EINVAL;
@@ -4118,22 +4118,22 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* RSC global configuration */
 
-	if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
-	     (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+	     (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		PMD_INIT_LOG(CRIT, "LRO can't be enabled when HW CRC "
 				    "is disabled");
 		return -EINVAL;
 	}
 
 	rfctl = rd32(hw, TXGBE_PSRCTL);
-	if (rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if (rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		rfctl &= ~TXGBE_PSRCTL_RSCDIA;
 	else
 		rfctl |= TXGBE_PSRCTL_RSCDIA;
 	wr32(hw, TXGBE_PSRCTL, rfctl);
 
 	/* If LRO hasn't been requested - we are done here. */
-	if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		return 0;
 
 	/* Set PSRCTL.RSCACK bit */
@@ -4273,7 +4273,7 @@ txgbe_set_rx_function(struct rte_eth_dev *dev)
 		struct txgbe_rx_queue *rxq = dev->data->rx_queues[i];
 
 		rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_SECURITY);
+				RTE_ETH_RX_OFFLOAD_SECURITY);
 	}
 #endif
 }
@@ -4316,7 +4316,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Configure CRC stripping, if any.
 	 */
 	hlreg0 = rd32(hw, TXGBE_SECRXCTL);
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		hlreg0 &= ~TXGBE_SECRXCTL_CRCSTRIP;
 	else
 		hlreg0 |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4344,7 +4344,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first .
 	 */
-	rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -4354,7 +4354,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 * call to configure.
 		 */
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -4391,11 +4391,11 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 		if (dev->data->mtu + TXGBE_ETH_OVERHEAD +
 				2 * TXGBE_VLAN_TAG_SIZE > buf_size)
 			dev->data->scattered_rx = 1;
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		dev->data->scattered_rx = 1;
 
 	/*
@@ -4410,7 +4410,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 */
 	rxcsum = rd32(hw, TXGBE_PSRCTL);
 	rxcsum |= TXGBE_PSRCTL_PCSD;
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= TXGBE_PSRCTL_L4CSUM;
 	else
 		rxcsum &= ~TXGBE_PSRCTL_L4CSUM;
@@ -4419,7 +4419,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 
 	if (hw->mac.type == txgbe_mac_raptor) {
 		rdrxctl = rd32(hw, TXGBE_SECRXCTL);
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rdrxctl &= ~TXGBE_SECRXCTL_CRCSTRIP;
 		else
 			rdrxctl |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4542,8 +4542,8 @@ txgbe_dev_rxtx_start(struct rte_eth_dev *dev)
 		txgbe_setup_loopback_link_raptor(hw);
 
 #ifdef RTE_LIB_SECURITY
-	if ((dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) ||
-	    (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_SECURITY)) {
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) ||
+	    (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY)) {
 		ret = txgbe_crypto_enable_ipsec(dev);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR,
@@ -4851,7 +4851,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first .
 	 */
-	rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	/* Set PSR type for VF RSS according to max Rx queue */
 	psrtype = TXGBE_VFPLCFG_PSRL4HDR |
@@ -4903,7 +4903,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
 		 */
 		wr32(hw, TXGBE_RXCFG(i), srrctl);
 
-		if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
 		    /* It adds dual VLAN length for supporting dual VLAN */
 		    (dev->data->mtu + TXGBE_ETH_OVERHEAD +
 				2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
@@ -4912,8 +4912,8 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
 			dev->data->scattered_rx = 1;
 		}
 
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	/*
@@ -5084,7 +5084,7 @@ txgbe_config_rss_filter(struct rte_eth_dev *dev,
 	 * little-endian order.
 	 */
 	reta = 0;
-	for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+	for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
 		if (j == conf->conf.queue_num)
 			j = 0;
 		reta = (reta >> 8) | LS32(conf->conf.queue[j], 24, 0xFF);
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index b96f58a3f848..27d4c842c0e7 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -309,7 +309,7 @@ struct txgbe_rx_queue {
 	uint8_t             rx_deferred_start; /**< not in global dev start. */
 	/** flags to set in mbuf when a vlan is detected. */
 	uint64_t            vlan_flags;
-	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
 	/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
 	struct rte_mbuf fake_mbuf;
 	/** hold packets to return to application */
@@ -392,7 +392,7 @@ struct txgbe_tx_queue {
 	uint8_t             pthresh;       /**< Prefetch threshold register. */
 	uint8_t             hthresh;       /**< Host threshold register. */
 	uint8_t             wthresh;       /**< Write-back threshold reg. */
-	uint64_t            offloads; /* Tx offload flags of DEV_TX_OFFLOAD_* */
+	uint64_t            offloads; /* Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 	uint32_t            ctx_curr;      /**< Hardware context states. */
 	/** Hardware context0 history. */
 	struct txgbe_ctx_info ctx_cache[TXGBE_CTX_NUM];
diff --git a/drivers/net/txgbe/txgbe_tm.c b/drivers/net/txgbe/txgbe_tm.c
index 3abe3959eb1a..3171be73d05d 100644
--- a/drivers/net/txgbe/txgbe_tm.c
+++ b/drivers/net/txgbe/txgbe_tm.c
@@ -118,14 +118,14 @@ txgbe_tc_nb_get(struct rte_eth_dev *dev)
 	uint8_t nb_tcs = 0;
 
 	eth_conf = &dev->data->dev_conf;
-	if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+	if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 		nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
-	} else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	} else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
-		    ETH_32_POOLS)
-			nb_tcs = ETH_4_TCS;
+		    RTE_ETH_32_POOLS)
+			nb_tcs = RTE_ETH_4_TCS;
 		else
-			nb_tcs = ETH_8_TCS;
+			nb_tcs = RTE_ETH_8_TCS;
 	} else {
 		nb_tcs = 1;
 	}
@@ -364,10 +364,10 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 	if (vf_num) {
 		/* no DCB */
 		if (nb_tcs == 1) {
-			if (vf_num >= ETH_32_POOLS) {
+			if (vf_num >= RTE_ETH_32_POOLS) {
 				*nb = 2;
 				*base = vf_num * 2;
-			} else if (vf_num >= ETH_16_POOLS) {
+			} else if (vf_num >= RTE_ETH_16_POOLS) {
 				*nb = 4;
 				*base = vf_num * 4;
 			} else {
@@ -381,7 +381,7 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 		}
 	} else {
 		/* VT off */
-		if (nb_tcs == ETH_8_TCS) {
+		if (nb_tcs == RTE_ETH_8_TCS) {
 			switch (tc_node_no) {
 			case 0:
 				*base = 0;
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 86498365e149..17b6a1a1ceec 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -125,8 +125,8 @@ static pthread_mutex_t internal_list_lock = PTHREAD_MUTEX_INITIALIZER;
 
 static struct rte_eth_link pmd_link = {
 		.link_speed = 10000,
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_status = ETH_LINK_DOWN
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_status = RTE_ETH_LINK_DOWN
 };
 
 struct rte_vhost_vring_state {
@@ -817,7 +817,7 @@ new_device(int vid)
 
 	rte_vhost_get_mtu(vid, &eth_dev->data->mtu);
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	rte_atomic32_set(&internal->dev_attached, 1);
 	update_queuing_status(eth_dev);
@@ -852,7 +852,7 @@ destroy_device(int vid)
 	rte_atomic32_set(&internal->dev_attached, 0);
 	update_queuing_status(eth_dev);
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	if (eth_dev->data->rx_queues && eth_dev->data->tx_queues) {
 		for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1118,7 +1118,7 @@ eth_dev_configure(struct rte_eth_dev *dev)
 	if (vhost_driver_setup(dev) < 0)
 		return -1;
 
-	internal->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	internal->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 	return 0;
 }
@@ -1267,9 +1267,9 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_tx_queues = internal->max_queues;
 	dev_info->min_rx_bufsize = 0;
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-				DEV_TX_OFFLOAD_VLAN_INSERT;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	return 0;
 }
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index ddf0e26ab4db..94120b349023 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -712,7 +712,7 @@ int
 virtio_dev_close(struct rte_eth_dev *dev)
 {
 	struct virtio_hw *hw = dev->data->dev_private;
-	struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+	struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
 
 	PMD_INIT_LOG(DEBUG, "virtio_dev_close");
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1774,7 +1774,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 		     hw->mac_addr[0], hw->mac_addr[1], hw->mac_addr[2],
 		     hw->mac_addr[3], hw->mac_addr[4], hw->mac_addr[5]);
 
-	if (hw->speed == ETH_SPEED_NUM_UNKNOWN) {
+	if (hw->speed == RTE_ETH_SPEED_NUM_UNKNOWN) {
 		if (virtio_with_feature(hw, VIRTIO_NET_F_SPEED_DUPLEX)) {
 			config = &local_config;
 			virtio_read_dev_config(hw,
@@ -1788,7 +1788,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 		}
 	}
 	if (hw->duplex == DUPLEX_UNKNOWN)
-		hw->duplex = ETH_LINK_FULL_DUPLEX;
+		hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	PMD_INIT_LOG(DEBUG, "link speed = %d, duplex = %d",
 		hw->speed, hw->duplex);
 	if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ)) {
@@ -1887,7 +1887,7 @@ int
 eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
 {
 	struct virtio_hw *hw = eth_dev->data->dev_private;
-	uint32_t speed = ETH_SPEED_NUM_UNKNOWN;
+	uint32_t speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	int vectorized = 0;
 	int ret;
 
@@ -1958,22 +1958,22 @@ static uint32_t
 virtio_dev_speed_capa_get(uint32_t speed)
 {
 	switch (speed) {
-	case ETH_SPEED_NUM_10G:
-		return ETH_LINK_SPEED_10G;
-	case ETH_SPEED_NUM_20G:
-		return ETH_LINK_SPEED_20G;
-	case ETH_SPEED_NUM_25G:
-		return ETH_LINK_SPEED_25G;
-	case ETH_SPEED_NUM_40G:
-		return ETH_LINK_SPEED_40G;
-	case ETH_SPEED_NUM_50G:
-		return ETH_LINK_SPEED_50G;
-	case ETH_SPEED_NUM_56G:
-		return ETH_LINK_SPEED_56G;
-	case ETH_SPEED_NUM_100G:
-		return ETH_LINK_SPEED_100G;
-	case ETH_SPEED_NUM_200G:
-		return ETH_LINK_SPEED_200G;
+	case RTE_ETH_SPEED_NUM_10G:
+		return RTE_ETH_LINK_SPEED_10G;
+	case RTE_ETH_SPEED_NUM_20G:
+		return RTE_ETH_LINK_SPEED_20G;
+	case RTE_ETH_SPEED_NUM_25G:
+		return RTE_ETH_LINK_SPEED_25G;
+	case RTE_ETH_SPEED_NUM_40G:
+		return RTE_ETH_LINK_SPEED_40G;
+	case RTE_ETH_SPEED_NUM_50G:
+		return RTE_ETH_LINK_SPEED_50G;
+	case RTE_ETH_SPEED_NUM_56G:
+		return RTE_ETH_LINK_SPEED_56G;
+	case RTE_ETH_SPEED_NUM_100G:
+		return RTE_ETH_LINK_SPEED_100G;
+	case RTE_ETH_SPEED_NUM_200G:
+		return RTE_ETH_LINK_SPEED_200G;
 	default:
 		return 0;
 	}
@@ -2089,14 +2089,14 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "configure");
 	req_features = VIRTIO_PMD_DEFAULT_GUEST_FEATURES;
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) {
 		PMD_DRV_LOG(ERR,
 			"Unsupported Rx multi queue mode %d",
 			rxmode->mq_mode);
 		return -EINVAL;
 	}
 
-	if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		PMD_DRV_LOG(ERR,
 			"Unsupported Tx multi queue mode %d",
 			txmode->mq_mode);
@@ -2114,20 +2114,20 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 
 	hw->max_rx_pkt_len = ether_hdr_len + rxmode->mtu;
 
-	if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-			   DEV_RX_OFFLOAD_TCP_CKSUM))
+	if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
 		req_features |= (1ULL << VIRTIO_NET_F_GUEST_CSUM);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		req_features |=
 			(1ULL << VIRTIO_NET_F_GUEST_TSO4) |
 			(1ULL << VIRTIO_NET_F_GUEST_TSO6);
 
-	if (tx_offloads & (DEV_TX_OFFLOAD_UDP_CKSUM |
-			   DEV_TX_OFFLOAD_TCP_CKSUM))
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM))
 		req_features |= (1ULL << VIRTIO_NET_F_CSUM);
 
-	if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		req_features |=
 			(1ULL << VIRTIO_NET_F_HOST_TSO4) |
 			(1ULL << VIRTIO_NET_F_HOST_TSO6);
@@ -2139,15 +2139,15 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 			return ret;
 	}
 
-	if ((rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-			    DEV_RX_OFFLOAD_TCP_CKSUM)) &&
+	if ((rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			    RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) &&
 		!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_CSUM)) {
 		PMD_DRV_LOG(ERR,
 			"rx checksum not available on this host");
 		return -ENOTSUP;
 	}
 
-	if ((rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) &&
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
 		(!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO4) ||
 		 !virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO6))) {
 		PMD_DRV_LOG(ERR,
@@ -2159,12 +2159,12 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 	if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
 		virtio_dev_cq_start(dev);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 		hw->vlan_strip = 1;
 
-	hw->rx_ol_scatter = (rx_offloads & DEV_RX_OFFLOAD_SCATTER);
+	hw->rx_ol_scatter = (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
 
-	if ((rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 			!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
 		PMD_DRV_LOG(ERR,
 			    "vlan filtering not available on this host");
@@ -2217,7 +2217,7 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 				hw->use_vec_rx = 0;
 			}
 
-			if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+			if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 				PMD_DRV_LOG(INFO,
 					"disabled packed ring vectorized rx for TCP_LRO enabled");
 				hw->use_vec_rx = 0;
@@ -2244,10 +2244,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 				hw->use_vec_rx = 0;
 			}
 
-			if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-					   DEV_RX_OFFLOAD_TCP_CKSUM |
-					   DEV_RX_OFFLOAD_TCP_LRO |
-					   DEV_RX_OFFLOAD_VLAN_STRIP)) {
+			if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+					   RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+					   RTE_ETH_RX_OFFLOAD_TCP_LRO |
+					   RTE_ETH_RX_OFFLOAD_VLAN_STRIP)) {
 				PMD_DRV_LOG(INFO,
 					"disabled split ring vectorized rx for offloading enabled");
 				hw->use_vec_rx = 0;
@@ -2440,7 +2440,7 @@ virtio_dev_stop(struct rte_eth_dev *dev)
 {
 	struct virtio_hw *hw = dev->data->dev_private;
 	struct rte_eth_link link;
-	struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+	struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
 
 	PMD_INIT_LOG(DEBUG, "stop");
 	dev->data->dev_started = 0;
@@ -2481,28 +2481,28 @@ virtio_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complet
 	memset(&link, 0, sizeof(link));
 	link.link_duplex = hw->duplex;
 	link.link_speed  = hw->speed;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	if (!hw->started) {
-		link.link_status = ETH_LINK_DOWN;
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	} else if (virtio_with_feature(hw, VIRTIO_NET_F_STATUS)) {
 		PMD_INIT_LOG(DEBUG, "Get link status from hw");
 		virtio_read_dev_config(hw,
 				offsetof(struct virtio_net_config, status),
 				&status, sizeof(status));
 		if ((status & VIRTIO_NET_S_LINK_UP) == 0) {
-			link.link_status = ETH_LINK_DOWN;
-			link.link_speed = ETH_SPEED_NUM_NONE;
+			link.link_status = RTE_ETH_LINK_DOWN;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 			PMD_INIT_LOG(DEBUG, "Port %d is down",
 				     dev->data->port_id);
 		} else {
-			link.link_status = ETH_LINK_UP;
+			link.link_status = RTE_ETH_LINK_UP;
 			PMD_INIT_LOG(DEBUG, "Port %d is up",
 				     dev->data->port_id);
 		}
 	} else {
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
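
Seen from the application side, the link_update path above reduces to a short status dump; a sketch with the prefixed constants (hypothetical helper, not from the tree):

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Print the current link state of a port. */
    static void
    report_link(uint16_t port_id)
    {
    	struct rte_eth_link link;

    	if (rte_eth_link_get_nowait(port_id, &link) < 0)
    		return;
    	if (link.link_status == RTE_ETH_LINK_UP)
    		printf("port %u up, %s, %s-duplex\n", port_id,
    			rte_eth_link_speed_to_str(link.link_speed),
    			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
    			"full" : "half");
    	else
    		printf("port %u down\n", port_id);
    }
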
@@ -2515,8 +2515,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct virtio_hw *hw = dev->data->dev_private;
 	uint64_t offloads = rxmode->offloads;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if ((offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if ((offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 				!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
 
 			PMD_DRV_LOG(NOTICE,
@@ -2526,8 +2526,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		}
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK)
-		hw->vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	if (mask & RTE_ETH_VLAN_STRIP_MASK)
+		hw->vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 	return 0;
 }
@@ -2549,32 +2549,32 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = hw->max_mtu;
 
 	host_features = VIRTIO_OPS(hw)->get_features(hw);
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	if (host_features & (1ULL << VIRTIO_NET_F_MRG_RXBUF))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SCATTER;
 	if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
 		dev_info->rx_offload_capa |=
-			DEV_RX_OFFLOAD_TCP_CKSUM |
-			DEV_RX_OFFLOAD_UDP_CKSUM;
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
 	}
 	if (host_features & (1ULL << VIRTIO_NET_F_CTRL_VLAN))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_FILTER;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	tso_mask = (1ULL << VIRTIO_NET_F_GUEST_TSO4) |
 		(1ULL << VIRTIO_NET_F_GUEST_TSO6);
 	if ((host_features & tso_mask) == tso_mask)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_LRO;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-				    DEV_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				    RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	if (host_features & (1ULL << VIRTIO_NET_F_CSUM)) {
 		dev_info->tx_offload_capa |=
-			DEV_TX_OFFLOAD_UDP_CKSUM |
-			DEV_TX_OFFLOAD_TCP_CKSUM;
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 	}
 	tso_mask = (1ULL << VIRTIO_NET_F_HOST_TSO4) |
 		(1ULL << VIRTIO_NET_F_HOST_TSO6);
 	if ((host_features & tso_mask) == tso_mask)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (host_features & (1ULL << VIRTIO_F_RING_PACKED)) {
 		/*
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index a19895af1f17..26d9edf5319c 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -41,20 +41,20 @@
 #define	VMXNET3_TX_MAX_SEG	UINT8_MAX
 
 #define VMXNET3_TX_OFFLOAD_CAP		\
-	(DEV_TX_OFFLOAD_VLAN_INSERT |	\
-	 DEV_TX_OFFLOAD_TCP_CKSUM |	\
-	 DEV_TX_OFFLOAD_UDP_CKSUM |	\
-	 DEV_TX_OFFLOAD_TCP_TSO |	\
-	 DEV_TX_OFFLOAD_MULTI_SEGS)
+	(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |	\
+	 RTE_ETH_TX_OFFLOAD_TCP_CKSUM |	\
+	 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |	\
+	 RTE_ETH_TX_OFFLOAD_TCP_TSO |	\
+	 RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define VMXNET3_RX_OFFLOAD_CAP		\
-	(DEV_RX_OFFLOAD_VLAN_STRIP |	\
-	 DEV_RX_OFFLOAD_VLAN_FILTER |   \
-	 DEV_RX_OFFLOAD_SCATTER |	\
-	 DEV_RX_OFFLOAD_UDP_CKSUM |	\
-	 DEV_RX_OFFLOAD_TCP_CKSUM |	\
-	 DEV_RX_OFFLOAD_TCP_LRO |	\
-	 DEV_RX_OFFLOAD_RSS_HASH)
+	(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |	\
+	 RTE_ETH_RX_OFFLOAD_VLAN_FILTER |   \
+	 RTE_ETH_RX_OFFLOAD_SCATTER |	\
+	 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+	 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |	\
+	 RTE_ETH_RX_OFFLOAD_TCP_LRO |	\
+	 RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 int vmxnet3_segs_dynfield_offset = -1;
 
@@ -398,9 +398,9 @@ eth_vmxnet3_dev_init(struct rte_eth_dev *eth_dev)
 
 	/* set the initial link status */
 	memset(&link, 0, sizeof(link));
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_speed = ETH_SPEED_NUM_10G;
-	link.link_autoneg = ETH_LINK_FIXED;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 	rte_eth_linkstatus_set(eth_dev, &link);
 
 	return 0;
@@ -486,8 +486,8 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (dev->data->nb_tx_queues > VMXNET3_MAX_TX_QUEUES ||
 	    dev->data->nb_rx_queues > VMXNET3_MAX_RX_QUEUES) {
@@ -547,7 +547,7 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
 	hw->queueDescPA = mz->iova;
 	hw->queue_desc_len = (uint16_t)size;
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		/* Allocate memory structure for UPT1_RSSConf and configure */
 		mz = gpa_zone_reserve(dev, sizeof(struct VMXNET3_RSSConf),
 				      "rss_conf", rte_socket_id(),
@@ -843,15 +843,15 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
 	devRead->rxFilterConf.rxMode = 0;
 
 	/* Setting up feature flags */
-	if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		devRead->misc.uptFeatures |= VMXNET3_F_RXCSUM;
 
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		devRead->misc.uptFeatures |= VMXNET3_F_LRO;
 		devRead->misc.maxNumRxSG = 0;
 	}
 
-	if (port_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (port_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		ret = vmxnet3_rss_configure(dev);
 		if (ret != VMXNET3_SUCCESS)
 			return ret;
@@ -863,7 +863,7 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
 	}
 
 	ret = vmxnet3_dev_vlan_offload_set(dev,
-			ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+			RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
 	if (ret)
 		return ret;
 
@@ -930,7 +930,7 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
 	}
 
 	if (VMXNET3_VERSION_GE_4(hw) &&
-	    dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	    dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		/* Check for additional RSS  */
 		ret = vmxnet3_v4_rss_configure(dev);
 		if (ret != VMXNET3_SUCCESS) {
@@ -1039,9 +1039,9 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
 
 	/* Clear recorded link status */
 	memset(&link, 0, sizeof(link));
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_speed = ETH_SPEED_NUM_10G;
-	link.link_autoneg = ETH_LINK_FIXED;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 	rte_eth_linkstatus_set(dev, &link);
 
 	hw->adapter_stopped = 1;
@@ -1365,7 +1365,7 @@ vmxnet3_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
 	dev_info->min_mtu = VMXNET3_MIN_MTU;
 	dev_info->max_mtu = VMXNET3_MAX_MTU;
-	dev_info->speed_capa = ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
 	dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
 
 	dev_info->flow_type_rss_offloads = VMXNET3_RSS_OFFLOAD_ALL;
@@ -1447,10 +1447,10 @@ __vmxnet3_dev_link_update(struct rte_eth_dev *dev,
 	ret = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
 
 	if (ret & 0x1)
-		link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_speed = ETH_SPEED_NUM_10G;
-	link.link_autoneg = ETH_LINK_FIXED;
+		link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 
 	return rte_eth_linkstatus_set(dev, &link);
 }
@@ -1503,7 +1503,7 @@ vmxnet3_dev_promiscuous_disable(struct rte_eth_dev *dev)
 	uint32_t *vf_table = hw->shared->devRead.rxFilterConf.vfTable;
 	uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 		memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
 	else
 		memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
@@ -1573,8 +1573,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	uint32_t *vf_table = devRead->rxFilterConf.vfTable;
 	uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			devRead->misc.uptFeatures |= UPT1_F_RXVLAN;
 		else
 			devRead->misc.uptFeatures &= ~UPT1_F_RXVLAN;
@@ -1583,8 +1583,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 				       VMXNET3_CMD_UPDATE_FEATURE);
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
 		else
 			memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
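
For context, the runtime counterpart of this PMD callback on the application side uses the renamed mask bits; a minimal sketch (hypothetical helper, assumes the PMD supports both offloads):

    #include <rte_ethdev.h>

    /* Request VLAN strip + filter on; VLAN offloads whose bit is
     * clear in the mask are requested off. */
    static int
    set_vlan_offloads(uint16_t port_id)
    {
    	return rte_eth_dev_set_vlan_offload(port_id,
    			RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
    }
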
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.h b/drivers/net/vmxnet3/vmxnet3_ethdev.h
index 8950175460f0..ef858ac9512f 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.h
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.h
@@ -32,18 +32,18 @@
 				VMXNET3_MAX_RX_QUEUES + 1)
 
 #define VMXNET3_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 
 #define VMXNET3_V4_RSS_MASK ( \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define VMXNET3_MANDATORY_V4_RSS ( \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP)
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 
 /* RSS configuration structure - shared with device through GPA */
 typedef struct VMXNET3_RSSConf {
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index b01c4c01f9c9..870100fa4f11 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -1326,13 +1326,13 @@ vmxnet3_v4_rss_configure(struct rte_eth_dev *dev)
 	rss_hf = port_rss_conf->rss_hf &
 		(VMXNET3_V4_RSS_MASK | VMXNET3_RSS_OFFLOAD_ALL);
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP6;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP6;
 
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
@@ -1389,13 +1389,13 @@ vmxnet3_rss_configure(struct rte_eth_dev *dev)
 	/* loading hashType */
 	dev_rss_conf->hashType = 0;
 	rss_hf = port_rss_conf->rss_hf & VMXNET3_RSS_OFFLOAD_ALL;
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV4;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV6;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV6;
 
 	return VMXNET3_SUCCESS;
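
Putting the renamed RSS flags together, the configure-time counterpart of the hash-type mapping above looks roughly like this (sketch only; a real application would first mask rss_hf against dev_info.flow_type_rss_offloads):

    #include <string.h>
    #include <rte_ethdev.h>

    /* Configure two Rx queues with IP/TCP RSS hashing. */
    static int
    configure_rss(uint16_t port_id)
    {
    	struct rte_eth_conf conf;

    	memset(&conf, 0, sizeof(conf));
    	conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
    	conf.rx_adv_conf.rss_conf.rss_key = NULL;
    	conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
    			RTE_ETH_RSS_NONFRAG_IPV4_TCP |
    			RTE_ETH_RSS_NONFRAG_IPV6_TCP;
    	conf.txmode.mq_mode = RTE_ETH_MQ_TX_NONE;
    	return rte_eth_dev_configure(port_id, 2, 1, &conf);
    }
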
diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
index a26076b312e5..ecafc5e4f1a9 100644
--- a/examples/bbdev_app/main.c
+++ b/examples/bbdev_app/main.c
@@ -70,11 +70,11 @@ mbuf_input(struct rte_mbuf *mbuf)
 
 static const struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -327,7 +327,7 @@ check_port_link_status(uint16_t port_id)
 
 		if (link_get_err >= 0 && link.link_status) {
 			const char *dp = (link.link_duplex ==
-				ETH_LINK_FULL_DUPLEX) ?
+				RTE_ETH_LINK_FULL_DUPLEX) ?
 				"full-duplex" : "half-duplex";
 			printf("\nPort %u Link Up - speed %s - %s\n",
 				port_id,
diff --git a/examples/bond/main.c b/examples/bond/main.c
index fd8fd767c811..1087b0dad125 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -114,17 +114,17 @@ static struct rte_mempool *mbuf_pool;
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -148,9 +148,9 @@ slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
 			"Error during getting device (port %u) info: %s\n",
 			portid, strerror(-retval));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
@@ -240,9 +240,9 @@ bond_port_init(struct rte_mempool *mbuf_pool)
 			"Error during getting device (port %u) info: %s\n",
 			BOND_PORT, strerror(-retval));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	retval = rte_eth_dev_configure(BOND_PORT, 1, 1, &local_port_conf);
 	if (retval != 0)
 		rte_exit(EXIT_FAILURE, "port %u: configuration failed (res=%d)\n",
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index 8c4a8feec0c2..c681e237ea46 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -80,15 +80,15 @@ struct app_stats prev_app_stats;
 
 static const struct rte_eth_conf port_conf_default = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
-			.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
-				ETH_RSS_TCP | ETH_RSS_SCTP,
+			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+				RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
 		}
 	},
 };
@@ -126,9 +126,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
diff --git a/examples/ethtool/ethtool-app/main.c b/examples/ethtool/ethtool-app/main.c
index 1bc675962bf3..cdd9e9b60bd8 100644
--- a/examples/ethtool/ethtool-app/main.c
+++ b/examples/ethtool/ethtool-app/main.c
@@ -98,7 +98,7 @@ static void setup_ports(struct app_config *app_cfg, int cnt_ports)
 	int ret;
 
 	memset(&cfg_port, 0, sizeof(cfg_port));
-	cfg_port.txmode.mq_mode = ETH_MQ_TX_NONE;
+	cfg_port.txmode.mq_mode = RTE_ETH_MQ_TX_NONE;
 
 	for (idx_port = 0; idx_port < cnt_ports; idx_port++) {
 		struct app_port *ptr_port = &app_cfg->ports[idx_port];
diff --git a/examples/ethtool/lib/rte_ethtool.c b/examples/ethtool/lib/rte_ethtool.c
index 413251630709..e7cdf8d5775b 100644
--- a/examples/ethtool/lib/rte_ethtool.c
+++ b/examples/ethtool/lib/rte_ethtool.c
@@ -233,13 +233,13 @@ rte_ethtool_get_pauseparam(uint16_t port_id,
 	pause_param->tx_pause = 0;
 	pause_param->rx_pause = 0;
 	switch (fc_conf.mode) {
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		pause_param->rx_pause = 1;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		pause_param->tx_pause = 1;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		pause_param->rx_pause = 1;
 		pause_param->tx_pause = 1;
 	default:
@@ -277,14 +277,14 @@ rte_ethtool_set_pauseparam(uint16_t port_id,
 
 	if (pause_param->tx_pause) {
 		if (pause_param->rx_pause)
-			fc_conf.mode = RTE_FC_FULL;
+			fc_conf.mode = RTE_ETH_FC_FULL;
 		else
-			fc_conf.mode = RTE_FC_TX_PAUSE;
+			fc_conf.mode = RTE_ETH_FC_TX_PAUSE;
 	} else {
 		if (pause_param->rx_pause)
-			fc_conf.mode = RTE_FC_RX_PAUSE;
+			fc_conf.mode = RTE_ETH_FC_RX_PAUSE;
 		else
-			fc_conf.mode = RTE_FC_NONE;
+			fc_conf.mode = RTE_ETH_FC_NONE;
 	}
 
 	status = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
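
The mapping above is symmetric with the get path earlier in the file; condensed into one hypothetical helper with the new mode names:

    #include <rte_ethdev.h>

    /* Translate rx/tx pause booleans into a flow-control mode. */
    static int
    set_pause(uint16_t port_id, int rx_pause, int tx_pause)
    {
    	struct rte_eth_fc_conf fc_conf;
    	int ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);

    	if (ret != 0)
    		return ret;
    	if (rx_pause && tx_pause)
    		fc_conf.mode = RTE_ETH_FC_FULL;
    	else if (rx_pause)
    		fc_conf.mode = RTE_ETH_FC_RX_PAUSE;
    	else if (tx_pause)
    		fc_conf.mode = RTE_ETH_FC_TX_PAUSE;
    	else
    		fc_conf.mode = RTE_ETH_FC_NONE;
    	return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
    }
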
@@ -398,12 +398,12 @@ rte_ethtool_net_set_rx_mode(uint16_t port_id)
 	for (vf = 0; vf < num_vfs; vf++) {
 #ifdef RTE_NET_IXGBE
 		rte_pmd_ixgbe_set_vf_rxmode(port_id, vf,
-			ETH_VMDQ_ACCEPT_UNTAG, 0);
+			RTE_ETH_VMDQ_ACCEPT_UNTAG, 0);
 #endif
 	}
 
	/* Enable Rx VLAN filter; an unsupported status on VFs is discarded */
-	ret = rte_eth_dev_set_vlan_offload(port_id, ETH_VLAN_FILTER_MASK);
+	ret = rte_eth_dev_set_vlan_offload(port_id, RTE_ETH_VLAN_FILTER_MASK);
 	if (ret != 0)
 		return ret;
 
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index e26be8edf28f..193a16463449 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -283,13 +283,13 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 	struct rte_eth_rxconf rx_conf;
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
-				.rss_hf = ETH_RSS_IP |
-					  ETH_RSS_TCP |
-					  ETH_RSS_UDP,
+				.rss_hf = RTE_ETH_RSS_IP |
+					  RTE_ETH_RSS_TCP |
+					  RTE_ETH_RSS_UDP,
 			}
 		}
 	};
@@ -311,12 +311,12 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_RSS_HASH)
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_RSS_HASH)
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	rx_conf = dev_info.default_rxconf;
 	rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index 476b147bdfcc..1b841d46ad93 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -614,13 +614,13 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 	struct rte_eth_rxconf rx_conf;
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
-				.rss_hf = ETH_RSS_IP |
-					  ETH_RSS_TCP |
-					  ETH_RSS_UDP,
+				.rss_hf = RTE_ETH_RSS_IP |
+					  RTE_ETH_RSS_TCP |
+					  RTE_ETH_RSS_UDP,
 			}
 		}
 	};
@@ -642,9 +642,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	rx_conf = dev_info.default_rxconf;
 	rx_conf.offloads = port_conf.rxmode.offloads;
 
diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
index 8a43f6ac0f92..6185b340600c 100644
--- a/examples/flow_classify/flow_classify.c
+++ b/examples/flow_classify/flow_classify.c
@@ -212,9 +212,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/flow_filtering/main.c b/examples/flow_filtering/main.c
index dd8a33d036ee..bfc1949c8428 100644
--- a/examples/flow_filtering/main.c
+++ b/examples/flow_filtering/main.c
@@ -113,7 +113,7 @@ assert_link_status(void)
 	memset(&link, 0, sizeof(link));
 	do {
 		link_get_err = rte_eth_link_get(port_id, &link);
-		if (link_get_err == 0 && link.link_status == ETH_LINK_UP)
+		if (link_get_err == 0 && link.link_status == RTE_ETH_LINK_UP)
 			break;
 		rte_delay_ms(CHECK_INTERVAL);
 	} while (--rep_cnt);
@@ -121,7 +121,7 @@ assert_link_status(void)
 	if (link_get_err < 0)
 		rte_exit(EXIT_FAILURE, ":: error: link get is failing: %s\n",
 			 rte_strerror(-link_get_err));
-	if (link.link_status == ETH_LINK_DOWN)
+	if (link.link_status == RTE_ETH_LINK_DOWN)
 		rte_exit(EXIT_FAILURE, ":: error: link is still down\n");
 }
 
@@ -138,12 +138,12 @@ init_port(void)
 		},
 		.txmode = {
 			.offloads =
-				DEV_TX_OFFLOAD_VLAN_INSERT |
-				DEV_TX_OFFLOAD_IPV4_CKSUM  |
-				DEV_TX_OFFLOAD_UDP_CKSUM   |
-				DEV_TX_OFFLOAD_TCP_CKSUM   |
-				DEV_TX_OFFLOAD_SCTP_CKSUM  |
-				DEV_TX_OFFLOAD_TCP_TSO,
+				RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_TCP_TSO,
 		},
 	};
 	struct rte_eth_txconf txq_conf;
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index ccfee585f850..b1aa2767a0af 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -819,12 +819,12 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
 	/* Configuring port to use RSS for multiple RX queues. 8< */
 	static const struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_PROTO_MASK,
+				.rss_hf = RTE_ETH_RSS_PROTO_MASK,
 			}
 		}
 	};
@@ -852,9 +852,9 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
 
 	local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(portid, nb_queues, 1, &local_port_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Cannot configure device:"
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index d51133199c42..4ffe997baf23 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -148,13 +148,13 @@ static struct rte_eth_conf port_conf = {
 		.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
 			RTE_ETHER_CRC_LEN,
 		.split_hdr_size = 0,
-		.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
-			     DEV_RX_OFFLOAD_SCATTER),
+		.offloads = (RTE_ETH_RX_OFFLOAD_CHECKSUM |
+			     RTE_ETH_RX_OFFLOAD_SCATTER),
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_MULTI_SEGS),
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
 	},
 };
 
@@ -623,7 +623,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 9ba02e687adb..0290767af473 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -45,7 +45,7 @@ link_next(struct link *link)
 static struct rte_eth_conf port_conf_default = {
 	.link_speeds = 0,
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
 		.split_hdr_size = 0, /* Header split buffer size */
 	},
@@ -57,12 +57,12 @@ static struct rte_eth_conf port_conf_default = {
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
 
-#define RETA_CONF_SIZE     (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE     (RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE)
 
 static int
 rss_setup(uint16_t port_id,
@@ -77,11 +77,11 @@ rss_setup(uint16_t port_id,
 	memset(reta_conf, 0, sizeof(reta_conf));
 
 	for (i = 0; i < reta_size; i++)
-		reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
 
 	for (i = 0; i < reta_size; i++) {
-		uint32_t reta_id = i / RTE_RETA_GROUP_SIZE;
-		uint32_t reta_pos = i % RTE_RETA_GROUP_SIZE;
+		uint32_t reta_id = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint32_t reta_pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint32_t rss_qs_pos = i % rss->n_queues;
 
 		reta_conf[reta_id].reta[reta_pos] =
@@ -139,7 +139,7 @@ link_create(const char *name, struct link_params *params)
 	rss = params->rx.rss;
 	if (rss) {
 		if ((port_info.reta_size == 0) ||
-			(port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+			(port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
 			return NULL;
 
 		if ((rss->n_queues == 0) ||
@@ -157,9 +157,9 @@ link_create(const char *name, struct link_params *params)
 	/* Port */
 	memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
 	if (rss) {
-		port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+		port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
 		port_conf.rx_adv_conf.rss_conf.rss_hf =
-			(ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+			(RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
 			port_info.flow_type_rss_offloads;
 	}
 
@@ -267,5 +267,5 @@ link_is_up(const char *name)
 	if (rte_eth_link_get(link->port_id, &link_params) < 0)
 		return 0;
 
-	return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+	return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
 }
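
The rss_setup() conversion above is also where the series renames RTE_RETA_GROUP_SIZE; a standalone sketch of the same round-robin RETA fill (hypothetical helper, assumes reta_size <= RTE_ETH_RSS_RETA_SIZE_512):

    #include <stdint.h>
    #include <string.h>
    #include <rte_ethdev.h>

    /* Fill the redirection table round-robin over n_queues. */
    static int
    setup_reta(uint16_t port_id, uint32_t reta_size, uint32_t n_queues)
    {
    	struct rte_eth_rss_reta_entry64
    		reta_conf[RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE];
    	uint32_t i;

    	memset(reta_conf, 0, sizeof(reta_conf));
    	for (i = 0; i < reta_size; i++) {
    		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
    		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].reta[
    				i % RTE_ETH_RETA_GROUP_SIZE] = i % n_queues;
    	}
    	return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
    }
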
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 06dc42799314..41e35593867b 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -160,22 +160,22 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_RSS,
+		.mq_mode        = RTE_ETH_MQ_RX_RSS,
 		.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
 			RTE_ETHER_CRC_LEN,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_MULTI_SEGS),
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
 	},
 };
 
@@ -737,7 +737,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -1095,9 +1095,9 @@ main(int argc, char **argv)
 		n_tx_queue = nb_lcores;
 		if (n_tx_queue > MAX_TX_QUEUE_PER_PORT)
 			n_tx_queue = MAX_TX_QUEUE_PER_PORT;
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index a10e330f5003..1c60ac28e317 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -233,19 +233,19 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode	= ETH_MQ_RX_RSS,
+		.mq_mode	= RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
-				ETH_RSS_TCP | ETH_RSS_SCTP,
+			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+				RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -1444,10 +1444,10 @@ print_usage(const char *prgname)
 		"               \"parallel\" : Parallel\n"
 		"  --" CMD_LINE_OPT_RX_OFFLOAD
 		": bitmask of the RX HW offload capabilities to enable/use\n"
-		"                         (DEV_RX_OFFLOAD_*)\n"
+		"                         (RTE_ETH_RX_OFFLOAD_*)\n"
 		"  --" CMD_LINE_OPT_TX_OFFLOAD
 		": bitmask of the TX HW offload capabilities to enable/use\n"
-		"                         (DEV_TX_OFFLOAD_*)\n"
+		"                         (RTE_ETH_TX_OFFLOAD_*)\n"
 		"  --" CMD_LINE_OPT_REASSEMBLE " NUM"
 		": max number of entries in reassemble(fragment) table\n"
 		"    (zero (default value) disables reassembly)\n"
@@ -1898,7 +1898,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2201,8 +2201,8 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 	local_port_conf.rxmode.mtu = mtu_size;
 
 	if (multi_seg_required()) {
-		local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
-		local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		local_port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+		local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	}
 
 	local_port_conf.rxmode.offloads |= req_rx_offloads;
@@ -2225,12 +2225,12 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 			portid, local_port_conf.txmode.offloads,
 			dev_info.tx_offload_capa);
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM)
-		local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
+		local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 
 	printf("port %u configurng rx_offloads=0x%" PRIx64
 		", tx_offloads=0x%" PRIx64 "\n",
@@ -2288,7 +2288,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 		/* Pre-populate pkt offloads based on capabilities */
 		qconf->outbound.ipv4_offloads = PKT_TX_IPV4;
 		qconf->outbound.ipv6_offloads = PKT_TX_IPV6;
-		if (local_port_conf.txmode.offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+		if (local_port_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 			qconf->outbound.ipv4_offloads |= PKT_TX_IP_CKSUM;
 
 		tx_queueid++;
@@ -2649,7 +2649,7 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads)
 	struct rte_flow *flow;
 	int ret;
 
-	if (!(rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+	if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
 		return;
 
 	/* Add the default rte_flow to enable SECURITY for all ESP packets */
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 17a28556c971..5cdd794f017f 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -986,7 +986,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
 
 	if (inbound) {
 		if ((dev_info.rx_offload_capa &
-				DEV_RX_OFFLOAD_SECURITY) == 0) {
+				RTE_ETH_RX_OFFLOAD_SECURITY) == 0) {
 			RTE_LOG(WARNING, PORT,
 				"hardware RX IPSec offload is not supported\n");
 			return -EINVAL;
@@ -994,7 +994,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
 
 	} else { /* outbound */
 		if ((dev_info.tx_offload_capa &
-				DEV_TX_OFFLOAD_SECURITY) == 0) {
+				RTE_ETH_TX_OFFLOAD_SECURITY) == 0) {
 			RTE_LOG(WARNING, PORT,
 				"hardware TX IPSec offload is not supported\n");
 			return -EINVAL;
@@ -1628,7 +1628,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
 				rule_type ==
 				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
 				&& rule->portid == port_id)
-			*rx_offloads |= DEV_RX_OFFLOAD_SECURITY;
+			*rx_offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
 	}
 
 	/* Check for outbound rules that use offloads and use this port */
@@ -1639,7 +1639,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
 				rule_type ==
 				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
 				&& rule->portid == port_id)
-			*tx_offloads |= DEV_TX_OFFLOAD_SECURITY;
+			*tx_offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
 	}
 	return 0;
 }
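
The two checks above collapse into a single capability probe; a sketch with the prefixed security offload bits (helper name invented for illustration):

    #include <rte_ethdev.h>

    /* Return non-zero if the port supports inline IPsec in the
     * requested direction. */
    static int
    port_has_inline_ipsec(uint16_t port_id, int inbound)
    {
    	struct rte_eth_dev_info dev_info;

    	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
    		return 0;
    	if (inbound)
    		return !!(dev_info.rx_offload_capa &
    				RTE_ETH_RX_OFFLOAD_SECURITY);
    	return !!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SECURITY);
    }
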
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 73391ce1a96d..bdcaa3bcd1ca 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -114,8 +114,8 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
 	},
 };
 
@@ -619,7 +619,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/kni/main.c b/examples/kni/main.c
index 69a0afced6cc..d324ee224109 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -94,7 +94,7 @@ static struct kni_port_params *kni_port_params_array[RTE_MAX_ETHPORTS];
 /* Options for configuring ethernet port */
 static struct rte_eth_conf port_conf = {
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -607,9 +607,9 @@ init_port(uint16_t port)
 			"Error during getting device (port %u) info: %s\n",
 			port, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(port, 1, 1, &local_port_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Could not configure port%u (%d)\n",
@@ -687,7 +687,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 6e2016752fca..04a3bdace20c 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -215,11 +215,11 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -1807,7 +1807,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2631,9 +2631,9 @@ initialize_ports(struct l2fwd_crypto_options *options)
 			return retval;
 		}
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		retval = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (retval < 0) {
 			printf("Cannot configure device: err=%d, port=%u\n",
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index 9040be5ed9b6..cf3d1b8aaf40 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -14,7 +14,7 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 			.split_hdr_size = 0,
 		},
 		.txmode = {
-			.mq_mode = ETH_MQ_TX_NONE,
+			.mq_mode = RTE_ETH_MQ_TX_NONE,
 		},
 	};
 	uint16_t nb_ports_available = 0;
@@ -22,9 +22,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 	int ret;
 
 	if (rsrc->event_mode) {
-		port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+		port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
 		port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
-		port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
+		port_conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;
 	}
 
 	/* Initialise each port */
@@ -60,9 +60,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 				local_port_conf.rx_adv_conf.rss_conf.rss_hf);
 		}
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure RX and TX queue. 8< */
 		ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 1db89f2bd139..9806204b81d1 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -395,7 +395,7 @@ check_all_ports_link_status(struct l2fwd_resources *rsrc,
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c
index 62981663ea78..d8eabe4c869e 100644
--- a/examples/l2fwd-jobstats/main.c
+++ b/examples/l2fwd-jobstats/main.c
@@ -93,7 +93,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -725,7 +725,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -868,9 +868,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure the RX and TX queues. 8< */
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/l2fwd-keepalive/main.c b/examples/l2fwd-keepalive/main.c
index af59d51b3ec4..78fc48f781fc 100644
--- a/examples/l2fwd-keepalive/main.c
+++ b/examples/l2fwd-keepalive/main.c
@@ -82,7 +82,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -477,7 +477,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -649,9 +649,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
 			rte_exit(EXIT_FAILURE,
diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index 8feb50e0f542..c9d8d4918a34 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -94,7 +94,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -605,7 +605,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -791,9 +791,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure the number of queues for a port. */
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 410ec94b4131..1fb180723582 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -123,19 +123,19 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode	= ETH_MQ_RX_RSS,
+		.mq_mode	= RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
-				ETH_RSS_TCP | ETH_RSS_SCTP,
+			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+				RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -1935,7 +1935,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2003,7 +2003,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -2087,9 +2087,9 @@ main(int argc, char **argv)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index 05385807e83e..7f00c65609ed 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -111,17 +111,17 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -607,7 +607,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* Clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -731,7 +731,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -828,9 +828,9 @@ main(int argc, char **argv)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 39624993b081..21c79567b1f7 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -249,18 +249,18 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_RSS,
+		.mq_mode        = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_UDP,
+			.rss_hf = RTE_ETH_RSS_UDP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	}
 };
 
@@ -2196,7 +2196,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2509,7 +2509,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -2637,9 +2637,9 @@ main(int argc, char **argv)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 961860ea18ef..7c7613a83aad 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -75,9 +75,9 @@ l3fwd_eth_dev_port_setup(struct rte_eth_conf *port_conf)
 			rte_panic("Error during getting device (port %u) info:"
 				  "%s\n", port_id, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+						RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 						dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 202ef78b6e95..5dd3e4136ea1 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -119,18 +119,18 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -902,7 +902,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -987,7 +987,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -1052,15 +1052,15 @@ l3fwd_poll_resource_setup(void)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
 
 		if (dev_info.max_rx_queues == 1)
-			local_port_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+			local_port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
 
 		if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
 				port_conf.rx_adv_conf.rss_conf.rss_hf) {
diff --git a/examples/link_status_interrupt/main.c b/examples/link_status_interrupt/main.c
index ce8ae059d789..551f0524da79 100644
--- a/examples/link_status_interrupt/main.c
+++ b/examples/link_status_interrupt/main.c
@@ -82,7 +82,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.intr_conf = {
 		.lsc = 1, /**< lsc interrupt feature enabled */
@@ -146,7 +146,7 @@ print_stats(void)
 			   link_get_err < 0 ? "0" :
 			   rte_eth_link_speed_to_str(link.link_speed),
 			   link_get_err < 0 ? "Link get failed" :
-			   (link.link_duplex == ETH_LINK_FULL_DUPLEX ? \
+			   (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex"),
 			   port_statistics[portid].tx,
 			   port_statistics[portid].rx,
@@ -506,7 +506,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -633,9 +633,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure RX and TX queues. 8< */
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/multi_process/client_server_mp/mp_server/init.c b/examples/multi_process/client_server_mp/mp_server/init.c
index be669c2bcc06..a4d7a3e5436a 100644
--- a/examples/multi_process/client_server_mp/mp_server/init.c
+++ b/examples/multi_process/client_server_mp/mp_server/init.c
@@ -93,7 +93,7 @@ init_port(uint16_t port_num)
 	/* for port configuration all features are off by default */
 	const struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS
+			.mq_mode = RTE_ETH_MQ_RX_RSS
 		}
 	};
 	const uint16_t rx_rings = 1, tx_rings = num_clients;
@@ -212,7 +212,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/multi_process/symmetric_mp/main.c b/examples/multi_process/symmetric_mp/main.c
index a66328ba0caf..b35886a77b00 100644
--- a/examples/multi_process/symmetric_mp/main.c
+++ b/examples/multi_process/symmetric_mp/main.c
@@ -175,18 +175,18 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
 {
 	struct rte_eth_conf port_conf = {
 			.rxmode = {
-				.mq_mode	= ETH_MQ_RX_RSS,
+				.mq_mode	= RTE_ETH_MQ_RX_RSS,
 				.split_hdr_size = 0,
-				.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+				.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 			},
 			.rx_adv_conf = {
 				.rss_conf = {
 					.rss_key = NULL,
-					.rss_hf = ETH_RSS_IP,
+					.rss_hf = RTE_ETH_RSS_IP,
 				},
 			},
 			.txmode = {
-				.mq_mode = ETH_MQ_TX_NONE,
+				.mq_mode = RTE_ETH_MQ_TX_NONE,
 			}
 	};
 	const uint16_t rx_rings = num_queues, tx_rings = num_queues;
@@ -217,9 +217,9 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
 
 	info.default_rxconf.rx_drop_en = 1;
 
-	if (info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
 	port_conf.rx_adv_conf.rss_conf.rss_hf &= info.flow_type_rss_offloads;
@@ -391,7 +391,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
index e9a388710647..f110fc129f55 100644
--- a/examples/ntb/ntb_fwd.c
+++ b/examples/ntb/ntb_fwd.c
@@ -89,17 +89,17 @@ static uint16_t pkt_burst = NTB_DFLT_PKT_BURST;
 
 static struct rte_eth_conf eth_port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c
index 4f6982bc1289..b01ac60fd196 100644
--- a/examples/packet_ordering/main.c
+++ b/examples/packet_ordering/main.c
@@ -294,9 +294,9 @@ configure_eth_port(uint16_t port_id)
 		return ret;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(port_id, rxRings, txRings, &port_conf);
 	if (ret != 0)
 		return ret;
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 74e016e1d20d..3a6a33bda3b0 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -306,18 +306,18 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_TCP,
+			.rss_hf = RTE_ETH_RSS_TCP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -3437,7 +3437,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -3490,7 +3490,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -3589,9 +3589,9 @@ main(int argc, char **argv)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
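
The rss_hf masking just above is the other recurring step: requested hash types are
clamped to what the device reports. A sketch of that step in isolation, assuming
dev_info and local_port_conf are initialised as in the example:

	uint64_t req = local_port_conf.rx_adv_conf.rss_conf.rss_hf;

	/* Keep only the hash types the PMD actually supports. */
	local_port_conf.rx_adv_conf.rss_conf.rss_hf =
		req & dev_info.flow_type_rss_offloads;
	if (local_port_conf.rx_adv_conf.rss_conf.rss_hf != req)
		printf("warning: some requested RSS hash types are unsupported\n");
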
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 4f20dfc4be06..569207a79d62 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -133,7 +133,7 @@ mempool_find(struct obj *obj, const char *name)
 static struct rte_eth_conf port_conf_default = {
 	.link_speeds = 0,
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
 		.split_hdr_size = 0, /* Header split buffer size */
 	},
@@ -145,12 +145,12 @@ static struct rte_eth_conf port_conf_default = {
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
 
-#define RETA_CONF_SIZE     (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE     (RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE)
 
 static int
 rss_setup(uint16_t port_id,
@@ -165,11 +165,11 @@ rss_setup(uint16_t port_id,
 	memset(reta_conf, 0, sizeof(reta_conf));
 
 	for (i = 0; i < reta_size; i++)
-		reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
 
 	for (i = 0; i < reta_size; i++) {
-		uint32_t reta_id = i / RTE_RETA_GROUP_SIZE;
-		uint32_t reta_pos = i % RTE_RETA_GROUP_SIZE;
+		uint32_t reta_id = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint32_t reta_pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint32_t rss_qs_pos = i % rss->n_queues;
 
 		reta_conf[reta_id].reta[reta_pos] =
@@ -227,7 +227,7 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
 	rss = params->rx.rss;
 	if (rss) {
 		if ((port_info.reta_size == 0) ||
-			(port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+			(port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
 			return NULL;
 
 		if ((rss->n_queues == 0) ||
@@ -245,9 +245,9 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
 	/* Port */
 	memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
 	if (rss) {
-		port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+		port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
 		port_conf.rx_adv_conf.rss_conf.rss_hf =
-			(ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+			(RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
 			port_info.flow_type_rss_offloads;
 	}
 
@@ -356,7 +356,7 @@ link_is_up(struct obj *obj, const char *name)
 	if (rte_eth_link_get(link->port_id, &link_params) < 0)
 		return 0;
 
-	return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+	return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
 }
 
 struct link *
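
The rss_setup() hunk above is the least mechanical RETA change: entries are grouped
64 at a time, and each group carries its own enable mask. A sketch of the indexing,
assuming port_id, reta_size, n_queues and a hypothetical queue_ids[] array are
provided by the caller (this would sit inside a function like rss_setup()):

	struct rte_eth_rss_reta_entry64 reta_conf[RETA_CONF_SIZE];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < reta_size; i++) {
		uint32_t grp = i / RTE_ETH_RETA_GROUP_SIZE;
		uint32_t pos = i % RTE_ETH_RETA_GROUP_SIZE;

		/* The mask bit must be set for the entry to be written. */
		reta_conf[grp].mask |= UINT64_C(1) << pos;
		reta_conf[grp].reta[pos] = queue_ids[i % n_queues];
	}
	rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
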
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 229a277032cb..979d9eb9e9d0 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -193,14 +193,14 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	/* Force full Tx path in the driver, required for IEEE1588 */
-	port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index c32d2e12e633..743bae2da50a 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -51,18 +51,18 @@ static struct rte_mempool *pool = NULL;
  ***/
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode	= ETH_MQ_RX_RSS,
+		.mq_mode	= RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_DCB_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -332,8 +332,8 @@ main(int argc, char **argv)
 			"Error during getting device (port %u) info: %s\n",
 			port_rx, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
-		conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+		conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
 	if (conf.rx_adv_conf.rss_conf.rss_hf !=
@@ -378,8 +378,8 @@ main(int argc, char **argv)
 			"Error during getting device (port %u) info: %s\n",
 			port_tx, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
-		conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+		conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
 	if (conf.rx_adv_conf.rss_conf.rss_hf !=
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 1367569c65db..9b34e4a76b1b 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -60,7 +60,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_DCB_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -105,9 +105,9 @@ app_init_port(uint16_t portid, struct rte_mempool *mp)
 			"Error during getting device (port %u) info: %s\n",
 			portid, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE,
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index 6845c396b8d9..1903d8b095a1 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -141,17 +141,17 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	if (hw_timestamping) {
-		if (!(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)) {
+		if (!(dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
 			printf("\nERROR: Port %u does not support hardware timestamping\n"
 					, port);
 			return -1;
 		}
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 		rte_mbuf_dyn_rx_timestamp_register(&hwts_dynfield_offset, NULL);
 		if (hwts_dynfield_offset < 0) {
 			printf("ERROR: Failed to register timestamp field\n");
diff --git a/examples/server_node_efd/server/init.c b/examples/server_node_efd/server/init.c
index a19934dbe0c8..0e5e3b5a9815 100644
--- a/examples/server_node_efd/server/init.c
+++ b/examples/server_node_efd/server/init.c
@@ -95,7 +95,7 @@ init_port(uint16_t port_num)
 	/* for port configuration all features are off by default */
 	struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 	};
 	const uint16_t rx_rings = 1, tx_rings = num_nodes;
@@ -114,9 +114,9 @@ init_port(uint16_t port_num)
 	if (retval != 0)
 		return retval;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/*
 	 * Standard DPDK port initialisation - config port, then set up
@@ -276,7 +276,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index fd7207aee758..16435ee3ccc2 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -49,9 +49,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 97218917067e..44376417f83d 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -110,23 +110,23 @@ static int nb_sockets;
 /* empty vmdq configuration structure. Filled in programmatically */
 static struct rte_eth_conf vmdq_conf_default = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
+		.mq_mode        = RTE_ETH_MQ_RX_VMDQ_ONLY,
 		.split_hdr_size = 0,
 		/*
 		 * VLAN strip is necessary for 1G NIC such as I350,
 		 * this fixes a bug where ipv4 forwarding in the guest can't
 		 * forward packets from one virtio dev to another virtio dev.
 		 */
-		.offloads = DEV_RX_OFFLOAD_VLAN_STRIP,
+		.offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP,
 	},
 
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_TCP_CKSUM |
-			     DEV_TX_OFFLOAD_VLAN_INSERT |
-			     DEV_TX_OFFLOAD_MULTI_SEGS |
-			     DEV_TX_OFFLOAD_TCP_TSO),
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+			     RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+			     RTE_ETH_TX_OFFLOAD_TCP_TSO),
 	},
 	.rx_adv_conf = {
 		/*
@@ -134,7 +134,7 @@ static struct rte_eth_conf vmdq_conf_default = {
 		 * appropriate values
 		 */
 		.vmdq_rx_conf = {
-			.nb_queue_pools = ETH_8_POOLS,
+			.nb_queue_pools = RTE_ETH_8_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -291,9 +291,9 @@ port_init(uint16_t port)
 		return -1;
 
 	rx_rings = (uint16_t)dev_info.max_rx_queues;
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	/* Configure ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
 	if (retval != 0) {
@@ -557,8 +557,8 @@ us_vhost_parse_args(int argc, char **argv)
 		case 'P':
 			promiscuous = 1;
 			vmdq_conf_default.rx_adv_conf.vmdq_rx_conf.rx_mode =
-				ETH_VMDQ_ACCEPT_BROADCAST |
-				ETH_VMDQ_ACCEPT_MULTICAST;
+				RTE_ETH_VMDQ_ACCEPT_BROADCAST |
+				RTE_ETH_VMDQ_ACCEPT_MULTICAST;
 			break;
 
 		case OPT_VM2VM_NUM:
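
The vhost hunks above OR several Tx offloads together, and none of them is
guaranteed by every PMD. A cautious sketch of validating the request before
rte_eth_dev_configure(), assuming dev_info was already fetched with
rte_eth_dev_info_get() as in port_init():

	uint64_t req = vmdq_conf_default.txmode.offloads;

	/* Fail early instead of letting configure reject the port. */
	if ((dev_info.tx_offload_capa & req) != req)
		rte_exit(EXIT_FAILURE,
			"port %u: requested Tx offloads not fully supported\n",
			port);
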
diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
index e19d79a40802..b159291d77ce 100644
--- a/examples/vm_power_manager/main.c
+++ b/examples/vm_power_manager/main.c
@@ -73,9 +73,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
@@ -270,7 +270,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 		       /* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index 85996bf864b7..feee642f594d 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -65,12 +65,12 @@ static uint8_t rss_enable;
 /* empty vmdq configuration structure. Filled in programmatically */
 static const struct rte_eth_conf vmdq_conf_default = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
+		.mq_mode        = RTE_ETH_MQ_RX_VMDQ_ONLY,
 		.split_hdr_size = 0,
 	},
 
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.rx_adv_conf = {
 		/*
@@ -78,7 +78,7 @@ static const struct rte_eth_conf vmdq_conf_default = {
 		 * appropriate values
 		 */
 		.vmdq_rx_conf = {
-			.nb_queue_pools = ETH_8_POOLS,
+			.nb_queue_pools = RTE_ETH_8_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -156,11 +156,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t num_pools)
 	(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_rx_conf, &conf,
 		   sizeof(eth_conf->rx_adv_conf.vmdq_rx_conf)));
 	if (rss_enable) {
-		eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
-		eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
-							ETH_RSS_UDP |
-							ETH_RSS_TCP |
-							ETH_RSS_SCTP;
+		eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
+		eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+							RTE_ETH_RSS_UDP |
+							RTE_ETH_RSS_TCP |
+							RTE_ETH_RSS_SCTP;
 	}
 	return 0;
 }
@@ -258,9 +258,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	retval = rte_eth_dev_configure(port, rxRings, txRings, &port_conf);
 	if (retval != 0)
 		return retval;
diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c
index be0179fdeaf0..d2218f2cf741 100644
--- a/examples/vmdq_dcb/main.c
+++ b/examples/vmdq_dcb/main.c
@@ -59,8 +59,8 @@ static uint16_t ports[RTE_MAX_ETHPORTS];
 static unsigned num_ports;
 
 /* number of pools (if user does not specify any, 32 by default) */
-static enum rte_eth_nb_pools num_pools = ETH_32_POOLS;
-static enum rte_eth_nb_tcs   num_tcs   = ETH_4_TCS;
+static enum rte_eth_nb_pools num_pools = RTE_ETH_32_POOLS;
+static enum rte_eth_nb_tcs   num_tcs   = RTE_ETH_4_TCS;
 static uint16_t num_queues, num_vmdq_queues;
 static uint16_t vmdq_pool_base, vmdq_queue_base;
 static uint8_t rss_enable;
@@ -68,11 +68,11 @@ static uint8_t rss_enable;
 /* Empty vmdq+dcb configuration structure. Filled in programmatically. 8< */
 static const struct rte_eth_conf vmdq_dcb_conf_default = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_VMDQ_DCB,
+		.mq_mode        = RTE_ETH_MQ_RX_VMDQ_DCB,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_VMDQ_DCB,
+		.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB,
 	},
 	/*
 	 * should be overridden separately in code with
@@ -80,7 +80,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 	 */
 	.rx_adv_conf = {
 		.vmdq_dcb_conf = {
-			.nb_queue_pools = ETH_32_POOLS,
+			.nb_queue_pools = RTE_ETH_32_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -88,12 +88,12 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 			.dcb_tc = {0},
 		},
 		.dcb_rx_conf = {
-				.nb_tcs = ETH_4_TCS,
+				.nb_tcs = RTE_ETH_4_TCS,
 				/** Traffic class each UP mapped to. */
 				.dcb_tc = {0},
 		},
 		.vmdq_rx_conf = {
-			.nb_queue_pools = ETH_32_POOLS,
+			.nb_queue_pools = RTE_ETH_32_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -102,7 +102,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 	},
 	.tx_adv_conf = {
 		.vmdq_dcb_tx_conf = {
-			.nb_queue_pools = ETH_32_POOLS,
+			.nb_queue_pools = RTE_ETH_32_POOLS,
 			.dcb_tc = {0},
 		},
 	},
@@ -156,7 +156,7 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
 		conf.pool_map[i].pools = 1UL << i;
 		vmdq_conf.pool_map[i].pools = 1UL << i;
 	}
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		conf.dcb_tc[i] = i % num_tcs;
 		dcb_conf.dcb_tc[i] = i % num_tcs;
 		tx_conf.dcb_tc[i] = i % num_tcs;
@@ -172,11 +172,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
 	(void)(rte_memcpy(&eth_conf->tx_adv_conf.vmdq_dcb_tx_conf, &tx_conf,
 			  sizeof(tx_conf)));
 	if (rss_enable) {
-		eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
-		eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
-							ETH_RSS_UDP |
-							ETH_RSS_TCP |
-							ETH_RSS_SCTP;
+		eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
+		eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+							RTE_ETH_RSS_UDP |
+							RTE_ETH_RSS_TCP |
+							RTE_ETH_RSS_SCTP;
 	}
 	return 0;
 }
@@ -270,9 +270,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
 	port_conf.rx_adv_conf.rss_conf.rss_hf &=
@@ -381,9 +381,9 @@ vmdq_parse_num_pools(const char *q_arg)
 	if (n != 16 && n != 32)
 		return -1;
 	if (n == 16)
-		num_pools = ETH_16_POOLS;
+		num_pools = RTE_ETH_16_POOLS;
 	else
-		num_pools = ETH_32_POOLS;
+		num_pools = RTE_ETH_32_POOLS;
 
 	return 0;
 }
@@ -403,9 +403,9 @@ vmdq_parse_num_tcs(const char *q_arg)
 	if (n != 4 && n != 8)
 		return -1;
 	if (n == 4)
-		num_tcs = ETH_4_TCS;
+		num_tcs = RTE_ETH_4_TCS;
 	else
-		num_tcs = ETH_8_TCS;
+		num_tcs = RTE_ETH_8_TCS;
 
 	return 0;
 }
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index b530ac6e320a..dcbffd4265fa 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -114,7 +114,7 @@ struct rte_eth_dev_data {
 	/** Device Ethernet link address. @see rte_eth_dev_release_port() */
 	struct rte_ether_addr *mac_addrs;
 	/** Bitmap associating MAC addresses to pools */
-	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+	uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
 	/**
 	 * Device Ethernet MAC addresses of hash filtering.
 	 * @see rte_eth_dev_release_port()
@@ -1700,23 +1700,23 @@ struct rte_eth_syn_filter {
 /**
  * filter type of tunneling packet
  */
-#define ETH_TUNNEL_FILTER_OMAC  0x01 /**< filter by outer MAC addr */
-#define ETH_TUNNEL_FILTER_OIP   0x02 /**< filter by outer IP Addr */
-#define ETH_TUNNEL_FILTER_TENID 0x04 /**< filter by tenant ID */
-#define ETH_TUNNEL_FILTER_IMAC  0x08 /**< filter by inner MAC addr */
-#define ETH_TUNNEL_FILTER_IVLAN 0x10 /**< filter by inner VLAN ID */
-#define ETH_TUNNEL_FILTER_IIP   0x20 /**< filter by inner IP addr */
-
-#define RTE_TUNNEL_FILTER_IMAC_IVLAN (ETH_TUNNEL_FILTER_IMAC | \
-					ETH_TUNNEL_FILTER_IVLAN)
-#define RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID (ETH_TUNNEL_FILTER_IMAC | \
-					ETH_TUNNEL_FILTER_IVLAN | \
-					ETH_TUNNEL_FILTER_TENID)
-#define RTE_TUNNEL_FILTER_IMAC_TENID (ETH_TUNNEL_FILTER_IMAC | \
-					ETH_TUNNEL_FILTER_TENID)
-#define RTE_TUNNEL_FILTER_OMAC_TENID_IMAC (ETH_TUNNEL_FILTER_OMAC | \
-					ETH_TUNNEL_FILTER_TENID | \
-					ETH_TUNNEL_FILTER_IMAC)
+#define RTE_ETH_TUNNEL_FILTER_OMAC  0x01 /**< filter by outer MAC addr */
+#define RTE_ETH_TUNNEL_FILTER_OIP   0x02 /**< filter by outer IP Addr */
+#define RTE_ETH_TUNNEL_FILTER_TENID 0x04 /**< filter by tenant ID */
+#define RTE_ETH_TUNNEL_FILTER_IMAC  0x08 /**< filter by inner MAC addr */
+#define RTE_ETH_TUNNEL_FILTER_IVLAN 0x10 /**< filter by inner VLAN ID */
+#define RTE_ETH_TUNNEL_FILTER_IIP   0x20 /**< filter by inner IP addr */
+
+#define RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN (RTE_ETH_TUNNEL_FILTER_IMAC | \
+					  RTE_ETH_TUNNEL_FILTER_IVLAN)
+#define RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID (RTE_ETH_TUNNEL_FILTER_IMAC | \
+						RTE_ETH_TUNNEL_FILTER_IVLAN | \
+						RTE_ETH_TUNNEL_FILTER_TENID)
+#define RTE_ETH_TUNNEL_FILTER_IMAC_TENID (RTE_ETH_TUNNEL_FILTER_IMAC | \
+					  RTE_ETH_TUNNEL_FILTER_TENID)
+#define RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC (RTE_ETH_TUNNEL_FILTER_OMAC | \
+					       RTE_ETH_TUNNEL_FILTER_TENID | \
+					       RTE_ETH_TUNNEL_FILTER_IMAC)
 
 /**
  *  Select IPv4 or IPv6 for tunnel filters.
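
The combined filter macros above are plain ORs of the single-field bits, so a
driver can test combinations inclusively. Hypothetical sketch (filter_type is
assumed to come from the application's tunnel filter configuration):

	if ((filter_type & RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID) ==
			RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID) {
		/* Inner MAC, inner VLAN and tenant ID must all match. */
	} else if (filter_type & RTE_ETH_TUNNEL_FILTER_IMAC) {
		/* Match on the inner MAC only. */
	}
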
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 4ea5a657e003..9b6007803dd8 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -101,9 +101,6 @@ static const struct rte_eth_xstats_name_off eth_dev_txq_stats_strings[] = {
 #define RTE_NB_TXQ_STATS RTE_DIM(eth_dev_txq_stats_strings)
 
 #define RTE_RX_OFFLOAD_BIT2STR(_name)	\
-	{ DEV_RX_OFFLOAD_##_name, #_name }
-
-#define RTE_ETH_RX_OFFLOAD_BIT2STR(_name)	\
 	{ RTE_ETH_RX_OFFLOAD_##_name, #_name }
 
 static const struct {
@@ -128,14 +125,14 @@ static const struct {
 	RTE_RX_OFFLOAD_BIT2STR(SCTP_CKSUM),
 	RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
 	RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
-	RTE_ETH_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
+	RTE_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
 };
 
 #undef RTE_RX_OFFLOAD_BIT2STR
 #undef RTE_ETH_RX_OFFLOAD_BIT2STR
 
 #define RTE_TX_OFFLOAD_BIT2STR(_name)	\
-	{ DEV_TX_OFFLOAD_##_name, #_name }
+	{ RTE_ETH_TX_OFFLOAD_##_name, #_name }
 
 static const struct {
 	uint64_t offload;
@@ -1182,32 +1179,32 @@ uint32_t
 rte_eth_speed_bitflag(uint32_t speed, int duplex)
 {
 	switch (speed) {
-	case ETH_SPEED_NUM_10M:
-		return duplex ? ETH_LINK_SPEED_10M : ETH_LINK_SPEED_10M_HD;
-	case ETH_SPEED_NUM_100M:
-		return duplex ? ETH_LINK_SPEED_100M : ETH_LINK_SPEED_100M_HD;
-	case ETH_SPEED_NUM_1G:
-		return ETH_LINK_SPEED_1G;
-	case ETH_SPEED_NUM_2_5G:
-		return ETH_LINK_SPEED_2_5G;
-	case ETH_SPEED_NUM_5G:
-		return ETH_LINK_SPEED_5G;
-	case ETH_SPEED_NUM_10G:
-		return ETH_LINK_SPEED_10G;
-	case ETH_SPEED_NUM_20G:
-		return ETH_LINK_SPEED_20G;
-	case ETH_SPEED_NUM_25G:
-		return ETH_LINK_SPEED_25G;
-	case ETH_SPEED_NUM_40G:
-		return ETH_LINK_SPEED_40G;
-	case ETH_SPEED_NUM_50G:
-		return ETH_LINK_SPEED_50G;
-	case ETH_SPEED_NUM_56G:
-		return ETH_LINK_SPEED_56G;
-	case ETH_SPEED_NUM_100G:
-		return ETH_LINK_SPEED_100G;
-	case ETH_SPEED_NUM_200G:
-		return ETH_LINK_SPEED_200G;
+	case RTE_ETH_SPEED_NUM_10M:
+		return duplex ? RTE_ETH_LINK_SPEED_10M : RTE_ETH_LINK_SPEED_10M_HD;
+	case RTE_ETH_SPEED_NUM_100M:
+		return duplex ? RTE_ETH_LINK_SPEED_100M : RTE_ETH_LINK_SPEED_100M_HD;
+	case RTE_ETH_SPEED_NUM_1G:
+		return RTE_ETH_LINK_SPEED_1G;
+	case RTE_ETH_SPEED_NUM_2_5G:
+		return RTE_ETH_LINK_SPEED_2_5G;
+	case RTE_ETH_SPEED_NUM_5G:
+		return RTE_ETH_LINK_SPEED_5G;
+	case RTE_ETH_SPEED_NUM_10G:
+		return RTE_ETH_LINK_SPEED_10G;
+	case RTE_ETH_SPEED_NUM_20G:
+		return RTE_ETH_LINK_SPEED_20G;
+	case RTE_ETH_SPEED_NUM_25G:
+		return RTE_ETH_LINK_SPEED_25G;
+	case RTE_ETH_SPEED_NUM_40G:
+		return RTE_ETH_LINK_SPEED_40G;
+	case RTE_ETH_SPEED_NUM_50G:
+		return RTE_ETH_LINK_SPEED_50G;
+	case RTE_ETH_SPEED_NUM_56G:
+		return RTE_ETH_LINK_SPEED_56G;
+	case RTE_ETH_SPEED_NUM_100G:
+		return RTE_ETH_LINK_SPEED_100G;
+	case RTE_ETH_SPEED_NUM_200G:
+		return RTE_ETH_LINK_SPEED_200G;
 	default:
 		return 0;
 	}
@@ -1528,7 +1525,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 * If LRO is enabled, check that the maximum aggregated packet
 	 * size is supported by the configured device.
 	 */
-	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		uint32_t max_rx_pktlen;
 		uint32_t overhead_len;
 
@@ -1585,12 +1582,12 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	}
 
 	/* Check if Rx RSS distribution is disabled but RSS hash is enabled. */
-	if (((dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) == 0) &&
-	    (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
+	    (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		RTE_ETHDEV_LOG(ERR,
 			"Ethdev port_id=%u config invalid Rx mq_mode without RSS but %s offload is requested\n",
 			port_id,
-			rte_eth_dev_rx_offload_name(DEV_RX_OFFLOAD_RSS_HASH));
+			rte_eth_dev_rx_offload_name(RTE_ETH_RX_OFFLOAD_RSS_HASH));
 		ret = -EINVAL;
 		goto rollback;
 	}
@@ -2213,7 +2210,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	 * size is supported by the configured device.
 	 */
 	/* Get the real Ethernet overhead length */
-	if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (local_conf.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		uint32_t overhead_len;
 		uint32_t max_rx_pktlen;
 		int ret;
@@ -2793,21 +2790,21 @@ const char *
 rte_eth_link_speed_to_str(uint32_t link_speed)
 {
 	switch (link_speed) {
-	case ETH_SPEED_NUM_NONE: return "None";
-	case ETH_SPEED_NUM_10M:  return "10 Mbps";
-	case ETH_SPEED_NUM_100M: return "100 Mbps";
-	case ETH_SPEED_NUM_1G:   return "1 Gbps";
-	case ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
-	case ETH_SPEED_NUM_5G:   return "5 Gbps";
-	case ETH_SPEED_NUM_10G:  return "10 Gbps";
-	case ETH_SPEED_NUM_20G:  return "20 Gbps";
-	case ETH_SPEED_NUM_25G:  return "25 Gbps";
-	case ETH_SPEED_NUM_40G:  return "40 Gbps";
-	case ETH_SPEED_NUM_50G:  return "50 Gbps";
-	case ETH_SPEED_NUM_56G:  return "56 Gbps";
-	case ETH_SPEED_NUM_100G: return "100 Gbps";
-	case ETH_SPEED_NUM_200G: return "200 Gbps";
-	case ETH_SPEED_NUM_UNKNOWN: return "Unknown";
+	case RTE_ETH_SPEED_NUM_NONE: return "None";
+	case RTE_ETH_SPEED_NUM_10M:  return "10 Mbps";
+	case RTE_ETH_SPEED_NUM_100M: return "100 Mbps";
+	case RTE_ETH_SPEED_NUM_1G:   return "1 Gbps";
+	case RTE_ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
+	case RTE_ETH_SPEED_NUM_5G:   return "5 Gbps";
+	case RTE_ETH_SPEED_NUM_10G:  return "10 Gbps";
+	case RTE_ETH_SPEED_NUM_20G:  return "20 Gbps";
+	case RTE_ETH_SPEED_NUM_25G:  return "25 Gbps";
+	case RTE_ETH_SPEED_NUM_40G:  return "40 Gbps";
+	case RTE_ETH_SPEED_NUM_50G:  return "50 Gbps";
+	case RTE_ETH_SPEED_NUM_56G:  return "56 Gbps";
+	case RTE_ETH_SPEED_NUM_100G: return "100 Gbps";
+	case RTE_ETH_SPEED_NUM_200G: return "200 Gbps";
+	case RTE_ETH_SPEED_NUM_UNKNOWN: return "Unknown";
 	default: return "Invalid";
 	}
 }
@@ -2831,14 +2828,14 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
 		return -EINVAL;
 	}
 
-	if (eth_link->link_status == ETH_LINK_DOWN)
+	if (eth_link->link_status == RTE_ETH_LINK_DOWN)
 		return snprintf(str, len, "Link down");
 	else
 		return snprintf(str, len, "Link up at %s %s %s",
 			rte_eth_link_speed_to_str(eth_link->link_speed),
-			(eth_link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+			(eth_link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 			"FDX" : "HDX",
-			(eth_link->link_autoneg == ETH_LINK_AUTONEG) ?
+			(eth_link->link_autoneg == RTE_ETH_LINK_AUTONEG) ?
 			"Autoneg" : "Fixed");
 }
 
@@ -3745,7 +3742,7 @@ rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on)
 	dev = &rte_eth_devices[port_id];
 
 	if (!(dev->data->dev_conf.rxmode.offloads &
-	      DEV_RX_OFFLOAD_VLAN_FILTER)) {
+	      RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
 		RTE_ETHDEV_LOG(ERR, "Port %u: VLAN-filtering disabled\n",
 			port_id);
 		return -ENOSYS;
@@ -3832,44 +3829,44 @@ rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask)
 	dev_offloads = orig_offloads;
 
 	/* check which option changed by application */
-	cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	cur = !!(offload_mask & RTE_ETH_VLAN_STRIP_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
-		mask |= ETH_VLAN_STRIP_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+		mask |= RTE_ETH_VLAN_STRIP_MASK;
 	}
 
-	cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+	cur = !!(offload_mask & RTE_ETH_VLAN_FILTER_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
-		mask |= ETH_VLAN_FILTER_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+		mask |= RTE_ETH_VLAN_FILTER_MASK;
 	}
 
-	cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND);
+	cur = !!(offload_mask & RTE_ETH_VLAN_EXTEND_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
-		mask |= ETH_VLAN_EXTEND_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
+		mask |= RTE_ETH_VLAN_EXTEND_MASK;
 	}
 
-	cur = !!(offload_mask & ETH_QINQ_STRIP_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP);
+	cur = !!(offload_mask & RTE_ETH_QINQ_STRIP_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
-		mask |= ETH_QINQ_STRIP_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
+		mask |= RTE_ETH_QINQ_STRIP_MASK;
 	}
 
 	/*no change*/
@@ -3914,17 +3911,17 @@ rte_eth_dev_get_vlan_offload(uint16_t port_id)
 	dev = &rte_eth_devices[port_id];
 	dev_offloads = &dev->data->dev_conf.rxmode.offloads;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-		ret |= ETH_VLAN_STRIP_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+		ret |= RTE_ETH_VLAN_STRIP_OFFLOAD;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
-		ret |= ETH_VLAN_FILTER_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+		ret |= RTE_ETH_VLAN_FILTER_OFFLOAD;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
-		ret |= ETH_VLAN_EXTEND_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
+		ret |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
-		ret |= ETH_QINQ_STRIP_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+		ret |= RTE_ETH_QINQ_STRIP_OFFLOAD;
 
 	return ret;
 }
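
From the application side the two functions above pair up: read the current
RTE_ETH_VLAN_*_OFFLOAD mask, flip bits, write it back. A small sketch (port_id
assumed valid):

	int mask = rte_eth_dev_get_vlan_offload(port_id);

	if (mask < 0)
		return mask;
	mask |= RTE_ETH_VLAN_STRIP_OFFLOAD;	/* enable stripping */
	mask &= ~RTE_ETH_QINQ_STRIP_OFFLOAD;	/* leave QinQ stripping off */
	return rte_eth_dev_set_vlan_offload(port_id, mask);
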
@@ -4001,7 +3998,7 @@ rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (pfc_conf->priority > (ETH_DCB_NUM_USER_PRIORITIES - 1)) {
+	if (pfc_conf->priority > (RTE_ETH_DCB_NUM_USER_PRIORITIES - 1)) {
 		RTE_ETHDEV_LOG(ERR, "Invalid priority, only 0-7 allowed\n");
 		return -EINVAL;
 	}
@@ -4019,7 +4016,7 @@ eth_check_reta_mask(struct rte_eth_rss_reta_entry64 *reta_conf,
 {
 	uint16_t i, num;
 
-	num = (reta_size + RTE_RETA_GROUP_SIZE - 1) / RTE_RETA_GROUP_SIZE;
+	num = (reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) / RTE_ETH_RETA_GROUP_SIZE;
 	for (i = 0; i < num; i++) {
 		if (reta_conf[i].mask)
 			return 0;
@@ -4041,8 +4038,8 @@ eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & RTE_BIT64(shift)) &&
 			(reta_conf[idx].reta[shift] >= max_rxq)) {
 			RTE_ETHDEV_LOG(ERR,
@@ -4198,7 +4195,7 @@ rte_eth_dev_udp_tunnel_port_add(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+	if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
 		RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
 		return -EINVAL;
 	}
@@ -4224,7 +4221,7 @@ rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+	if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
 		RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
 		return -EINVAL;
 	}
@@ -4365,8 +4362,8 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr,
 			port_id);
 		return -EINVAL;
 	}
-	if (pool >= ETH_64_POOLS) {
-		RTE_ETHDEV_LOG(ERR, "Pool ID must be 0-%d\n", ETH_64_POOLS - 1);
+	if (pool >= RTE_ETH_64_POOLS) {
+		RTE_ETHDEV_LOG(ERR, "Pool ID must be 0-%d\n", RTE_ETH_64_POOLS - 1);
 		return -EINVAL;
 	}
 
@@ -6275,7 +6272,7 @@ eth_dev_handle_port_link_status(const char *cmd __rte_unused,
 	rte_tel_data_add_dict_string(d, status_str, "UP");
 	rte_tel_data_add_dict_u64(d, "speed", link.link_speed);
 	rte_tel_data_add_dict_string(d, "duplex",
-			(link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+			(link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 				"full-duplex" : "half-duplex");
 	return 0;
 }
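
For reference, a minimal caller of the renamed link helpers above (port_id assumed
valid and started; stdio assumed):

	struct rte_eth_link link;
	char text[RTE_ETH_LINK_MAX_STR_LEN];

	if (rte_eth_link_get_nowait(port_id, &link) == 0 &&
			link.link_status == RTE_ETH_LINK_UP) {
		rte_eth_link_to_str(text, sizeof(text), &link);
		printf("port %u: %s\n", port_id, text);
	}
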
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 21f570832921..1de810d5cdbf 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -250,7 +250,7 @@ void rte_eth_iterator_cleanup(struct rte_dev_iterator *iter);
  * field is not supported, its value is 0.
  * All byte-related statistics do not include Ethernet FCS regardless
  * of whether these bytes have been delivered to the application
- * (see DEV_RX_OFFLOAD_KEEP_CRC).
+ * (see RTE_ETH_RX_OFFLOAD_KEEP_CRC).
  */
 struct rte_eth_stats {
 	uint64_t ipackets;  /**< Total number of successfully received packets. */
@@ -281,43 +281,75 @@ struct rte_eth_stats {
 /**@{@name Link speed capabilities
  * Device supported speeds bitmap flags
  */
-#define ETH_LINK_SPEED_AUTONEG 0             /**< Autonegotiate (all speeds) */
-#define ETH_LINK_SPEED_FIXED   RTE_BIT32(0)  /**< Disable autoneg (fixed speed) */
-#define ETH_LINK_SPEED_10M_HD  RTE_BIT32(1)  /**<  10 Mbps half-duplex */
-#define ETH_LINK_SPEED_10M     RTE_BIT32(2)  /**<  10 Mbps full-duplex */
-#define ETH_LINK_SPEED_100M_HD RTE_BIT32(3)  /**< 100 Mbps half-duplex */
-#define ETH_LINK_SPEED_100M    RTE_BIT32(4)  /**< 100 Mbps full-duplex */
-#define ETH_LINK_SPEED_1G      RTE_BIT32(5)  /**<   1 Gbps */
-#define ETH_LINK_SPEED_2_5G    RTE_BIT32(6)  /**< 2.5 Gbps */
-#define ETH_LINK_SPEED_5G      RTE_BIT32(7)  /**<   5 Gbps */
-#define ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**<  10 Gbps */
-#define ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**<  20 Gbps */
-#define ETH_LINK_SPEED_25G     RTE_BIT32(10) /**<  25 Gbps */
-#define ETH_LINK_SPEED_40G     RTE_BIT32(11) /**<  40 Gbps */
-#define ETH_LINK_SPEED_50G     RTE_BIT32(12) /**<  50 Gbps */
-#define ETH_LINK_SPEED_56G     RTE_BIT32(13) /**<  56 Gbps */
-#define ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
-#define ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
+#define RTE_ETH_LINK_SPEED_AUTONEG 0             /**< Autonegotiate (all speeds) */
+#define ETH_LINK_SPEED_AUTONEG     RTE_ETH_LINK_SPEED_AUTONEG
+#define RTE_ETH_LINK_SPEED_FIXED   RTE_BIT32(0)  /**< Disable autoneg (fixed speed) */
+#define ETH_LINK_SPEED_FIXED       RTE_ETH_LINK_SPEED_FIXED
+#define RTE_ETH_LINK_SPEED_10M_HD  RTE_BIT32(1)  /**<  10 Mbps half-duplex */
+#define ETH_LINK_SPEED_10M_HD      RTE_ETH_LINK_SPEED_10M_HD
+#define RTE_ETH_LINK_SPEED_10M     RTE_BIT32(2)  /**<  10 Mbps full-duplex */
+#define ETH_LINK_SPEED_10M         RTE_ETH_LINK_SPEED_10M
+#define RTE_ETH_LINK_SPEED_100M_HD RTE_BIT32(3)  /**< 100 Mbps half-duplex */
+#define ETH_LINK_SPEED_100M_HD     RTE_ETH_LINK_SPEED_100M_HD
+#define RTE_ETH_LINK_SPEED_100M    RTE_BIT32(4)  /**< 100 Mbps full-duplex */
+#define ETH_LINK_SPEED_100M        RTE_ETH_LINK_SPEED_100M
+#define RTE_ETH_LINK_SPEED_1G      RTE_BIT32(5)  /**<   1 Gbps */
+#define ETH_LINK_SPEED_1G          RTE_ETH_LINK_SPEED_1G
+#define RTE_ETH_LINK_SPEED_2_5G    RTE_BIT32(6)  /**< 2.5 Gbps */
+#define ETH_LINK_SPEED_2_5G        RTE_ETH_LINK_SPEED_2_5G
+#define RTE_ETH_LINK_SPEED_5G      RTE_BIT32(7)  /**<   5 Gbps */
+#define ETH_LINK_SPEED_5G          RTE_ETH_LINK_SPEED_5G
+#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**<  10 Gbps */
+#define ETH_LINK_SPEED_10G         RTE_ETH_LINK_SPEED_10G
+#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**<  20 Gbps */
+#define ETH_LINK_SPEED_20G         RTE_ETH_LINK_SPEED_20G
+#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**<  25 Gbps */
+#define ETH_LINK_SPEED_25G         RTE_ETH_LINK_SPEED_25G
+#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**<  40 Gbps */
+#define ETH_LINK_SPEED_40G         RTE_ETH_LINK_SPEED_40G
+#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**<  50 Gbps */
+#define ETH_LINK_SPEED_50G         RTE_ETH_LINK_SPEED_50G
+#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**<  56 Gbps */
+#define ETH_LINK_SPEED_56G         RTE_ETH_LINK_SPEED_56G
+#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
+#define ETH_LINK_SPEED_100G        RTE_ETH_LINK_SPEED_100G
+#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
+#define ETH_LINK_SPEED_200G        RTE_ETH_LINK_SPEED_200G
 /**@}*/
 
 /**@{@name Link speed
  * Ethernet numeric link speeds in Mbps
  */
-#define ETH_SPEED_NUM_NONE         0 /**< Not defined */
-#define ETH_SPEED_NUM_10M         10 /**<  10 Mbps */
-#define ETH_SPEED_NUM_100M       100 /**< 100 Mbps */
-#define ETH_SPEED_NUM_1G        1000 /**<   1 Gbps */
-#define ETH_SPEED_NUM_2_5G      2500 /**< 2.5 Gbps */
-#define ETH_SPEED_NUM_5G        5000 /**<   5 Gbps */
-#define ETH_SPEED_NUM_10G      10000 /**<  10 Gbps */
-#define ETH_SPEED_NUM_20G      20000 /**<  20 Gbps */
-#define ETH_SPEED_NUM_25G      25000 /**<  25 Gbps */
-#define ETH_SPEED_NUM_40G      40000 /**<  40 Gbps */
-#define ETH_SPEED_NUM_50G      50000 /**<  50 Gbps */
-#define ETH_SPEED_NUM_56G      56000 /**<  56 Gbps */
-#define ETH_SPEED_NUM_100G    100000 /**< 100 Gbps */
-#define ETH_SPEED_NUM_200G    200000 /**< 200 Gbps */
-#define ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define RTE_ETH_SPEED_NUM_NONE         0 /**< Not defined */
+#define ETH_SPEED_NUM_NONE        RTE_ETH_SPEED_NUM_NONE
+#define RTE_ETH_SPEED_NUM_10M         10 /**<  10 Mbps */
+#define ETH_SPEED_NUM_10M         RTE_ETH_SPEED_NUM_10M
+#define RTE_ETH_SPEED_NUM_100M       100 /**< 100 Mbps */
+#define ETH_SPEED_NUM_100M        RTE_ETH_SPEED_NUM_100M
+#define RTE_ETH_SPEED_NUM_1G        1000 /**<   1 Gbps */
+#define ETH_SPEED_NUM_1G          RTE_ETH_SPEED_NUM_1G
+#define RTE_ETH_SPEED_NUM_2_5G      2500 /**< 2.5 Gbps */
+#define ETH_SPEED_NUM_2_5G        RTE_ETH_SPEED_NUM_2_5G
+#define RTE_ETH_SPEED_NUM_5G        5000 /**<   5 Gbps */
+#define ETH_SPEED_NUM_5G          RTE_ETH_SPEED_NUM_5G
+#define RTE_ETH_SPEED_NUM_10G      10000 /**<  10 Gbps */
+#define ETH_SPEED_NUM_10G         RTE_ETH_SPEED_NUM_10G
+#define RTE_ETH_SPEED_NUM_20G      20000 /**<  20 Gbps */
+#define ETH_SPEED_NUM_20G         RTE_ETH_SPEED_NUM_20G
+#define RTE_ETH_SPEED_NUM_25G      25000 /**<  25 Gbps */
+#define ETH_SPEED_NUM_25G         RTE_ETH_SPEED_NUM_25G
+#define RTE_ETH_SPEED_NUM_40G      40000 /**<  40 Gbps */
+#define ETH_SPEED_NUM_40G         RTE_ETH_SPEED_NUM_40G
+#define RTE_ETH_SPEED_NUM_50G      50000 /**<  50 Gbps */
+#define ETH_SPEED_NUM_50G         RTE_ETH_SPEED_NUM_50G
+#define RTE_ETH_SPEED_NUM_56G      56000 /**<  56 Gbps */
+#define ETH_SPEED_NUM_56G         RTE_ETH_SPEED_NUM_56G
+#define RTE_ETH_SPEED_NUM_100G    100000 /**< 100 Gbps */
+#define ETH_SPEED_NUM_100G        RTE_ETH_SPEED_NUM_100G
+#define RTE_ETH_SPEED_NUM_200G    200000 /**< 200 Gbps */
+#define ETH_SPEED_NUM_200G        RTE_ETH_SPEED_NUM_200G
+#define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define ETH_SPEED_NUM_UNKNOWN     RTE_ETH_SPEED_NUM_UNKNOWN
 /**@}*/
 
 /**
@@ -325,21 +357,27 @@ struct rte_eth_stats {
  */
 __extension__
 struct rte_eth_link {
-	uint32_t link_speed;        /**< ETH_SPEED_NUM_ */
-	uint16_t link_duplex  : 1;  /**< ETH_LINK_[HALF/FULL]_DUPLEX */
-	uint16_t link_autoneg : 1;  /**< ETH_LINK_[AUTONEG/FIXED] */
-	uint16_t link_status  : 1;  /**< ETH_LINK_[DOWN/UP] */
+	uint32_t link_speed;        /**< RTE_ETH_SPEED_NUM_ */
+	uint16_t link_duplex  : 1;  /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+	uint16_t link_autoneg : 1;  /**< RTE_ETH_LINK_[AUTONEG/FIXED] */
+	uint16_t link_status  : 1;  /**< RTE_ETH_LINK_[DOWN/UP] */
 } __rte_aligned(8);      /**< aligned for atomic64 read/write */
 
 /**@{@name Link negotiation
  * Constants used in link management.
  */
-#define ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
-#define ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
-#define ETH_LINK_DOWN        0 /**< Link is down (see link_status). */
-#define ETH_LINK_UP          1 /**< Link is up (see link_status). */
-#define ETH_LINK_FIXED       0 /**< No autonegotiation (see link_autoneg). */
-#define ETH_LINK_AUTONEG     1 /**< Autonegotiated (see link_autoneg). */
+#define RTE_ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
+#define ETH_LINK_HALF_DUPLEX     RTE_ETH_LINK_HALF_DUPLEX
+#define RTE_ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
+#define ETH_LINK_FULL_DUPLEX     RTE_ETH_LINK_FULL_DUPLEX
+#define RTE_ETH_LINK_DOWN        0 /**< Link is down (see link_status). */
+#define ETH_LINK_DOWN            RTE_ETH_LINK_DOWN
+#define RTE_ETH_LINK_UP          1 /**< Link is up (see link_status). */
+#define ETH_LINK_UP              RTE_ETH_LINK_UP
+#define RTE_ETH_LINK_FIXED       0 /**< No autonegotiation (see link_autoneg). */
+#define ETH_LINK_FIXED           RTE_ETH_LINK_FIXED
+#define RTE_ETH_LINK_AUTONEG     1 /**< Autonegotiated (see link_autoneg). */
+#define ETH_LINK_AUTONEG         RTE_ETH_LINK_AUTONEG
 #define RTE_ETH_LINK_MAX_STR_LEN 40 /**< Max length of default link string. */
 /**@}*/
 
@@ -356,9 +394,12 @@ struct rte_eth_thresh {
 /**@{@name Multi-queue mode
  * @see rte_eth_conf.rxmode.mq_mode.
  */
-#define ETH_MQ_RX_RSS_FLAG  0x1 /**< Enable RSS. @see rte_eth_rss_conf */
-#define ETH_MQ_RX_DCB_FLAG  0x2 /**< Enable DCB. */
-#define ETH_MQ_RX_VMDQ_FLAG 0x4 /**< Enable VMDq. */
+#define RTE_ETH_MQ_RX_RSS_FLAG  0x1
+#define ETH_MQ_RX_RSS_FLAG      RTE_ETH_MQ_RX_RSS_FLAG
+#define RTE_ETH_MQ_RX_DCB_FLAG  0x2
+#define ETH_MQ_RX_DCB_FLAG      RTE_ETH_MQ_RX_DCB_FLAG
+#define RTE_ETH_MQ_RX_VMDQ_FLAG 0x4
+#define ETH_MQ_RX_VMDQ_FLAG     RTE_ETH_MQ_RX_VMDQ_FLAG
 /**@}*/
 
 /**
@@ -367,50 +408,49 @@ struct rte_eth_thresh {
  */
 enum rte_eth_rx_mq_mode {
 	/** None of DCB, RSS or VMDq mode */
-	ETH_MQ_RX_NONE = 0,
+	RTE_ETH_MQ_RX_NONE = 0,
 
 	/** For Rx side, only RSS is on */
-	ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
+	RTE_ETH_MQ_RX_RSS = RTE_ETH_MQ_RX_RSS_FLAG,
 	/** For Rx side, only DCB is on. */
-	ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
+	RTE_ETH_MQ_RX_DCB = RTE_ETH_MQ_RX_DCB_FLAG,
 	/** Both DCB and RSS enable */
-	ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG,
+	RTE_ETH_MQ_RX_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
 
 	/** Only VMDq, no RSS nor DCB */
-	ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_ONLY = RTE_ETH_MQ_RX_VMDQ_FLAG,
 	/** RSS mode with VMDq */
-	ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG,
 	/** Use VMDq+DCB to route traffic to queues */
-	ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_DCB = RTE_ETH_MQ_RX_VMDQ_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
 	/** Enable both VMDq and DCB in VMDq */
-	ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG |
-				 ETH_MQ_RX_VMDQ_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG |
+				 RTE_ETH_MQ_RX_VMDQ_FLAG,
 };
 
-/**
- * for Rx mq mode backward compatible
- */
-#define ETH_RSS                       ETH_MQ_RX_RSS
-#define VMDQ_DCB                      ETH_MQ_RX_VMDQ_DCB
-#define ETH_DCB_RX                    ETH_MQ_RX_DCB
+#define ETH_MQ_RX_NONE		RTE_ETH_MQ_RX_NONE
+#define ETH_MQ_RX_RSS		RTE_ETH_MQ_RX_RSS
+#define ETH_MQ_RX_DCB		RTE_ETH_MQ_RX_DCB
+#define ETH_MQ_RX_DCB_RSS	RTE_ETH_MQ_RX_DCB_RSS
+#define ETH_MQ_RX_VMDQ_ONLY	RTE_ETH_MQ_RX_VMDQ_ONLY
+#define ETH_MQ_RX_VMDQ_RSS	RTE_ETH_MQ_RX_VMDQ_RSS
+#define ETH_MQ_RX_VMDQ_DCB	RTE_ETH_MQ_RX_VMDQ_DCB
+#define ETH_MQ_RX_VMDQ_DCB_RSS	RTE_ETH_MQ_RX_VMDQ_DCB_RSS
 
 /**
  * A set of values to identify what method is to be used to transmit
  * packets using multi-TCs.
  */
 enum rte_eth_tx_mq_mode {
-	ETH_MQ_TX_NONE    = 0,  /**< It is in neither DCB nor VT mode. */
-	ETH_MQ_TX_DCB,          /**< For Tx side,only DCB is on. */
-	ETH_MQ_TX_VMDQ_DCB,	/**< For Tx side,both DCB and VT is on. */
-	ETH_MQ_TX_VMDQ_ONLY,    /**< Only VT on, no DCB */
+	RTE_ETH_MQ_TX_NONE    = 0,  /**< It is in neither DCB nor VT mode. */
+	RTE_ETH_MQ_TX_DCB,          /**< For Tx side, only DCB is on. */
+	RTE_ETH_MQ_TX_VMDQ_DCB,     /**< For Tx side, both DCB and VT are on. */
+	RTE_ETH_MQ_TX_VMDQ_ONLY,    /**< Only VT on, no DCB */
 };
-
-/**
- * for Tx mq mode backward compatible
- */
-#define ETH_DCB_NONE                ETH_MQ_TX_NONE
-#define ETH_VMDQ_DCB_TX             ETH_MQ_TX_VMDQ_DCB
-#define ETH_DCB_TX                  ETH_MQ_TX_DCB
+#define ETH_MQ_TX_NONE		RTE_ETH_MQ_TX_NONE
+#define ETH_MQ_TX_DCB		RTE_ETH_MQ_TX_DCB
+#define ETH_MQ_TX_VMDQ_DCB	RTE_ETH_MQ_TX_VMDQ_DCB
+#define ETH_MQ_TX_VMDQ_ONLY	RTE_ETH_MQ_TX_VMDQ_ONLY
 
 /**
  * A structure used to configure the Rx features of an Ethernet port.
@@ -423,7 +463,7 @@ struct rte_eth_rxmode {
 	uint32_t max_lro_pkt_size;
 	uint16_t split_hdr_size;  /**< hdr buf size (header_split enabled).*/
 	/**
-	 * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Per-port Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
 	 * Only offloads set on rx_offload_capa field on rte_eth_dev_info
 	 * structure are allowed to be set.
 	 */
@@ -438,12 +478,17 @@ struct rte_eth_rxmode {
  * Note that single VLAN is treated the same as inner VLAN.
  */
 enum rte_vlan_type {
-	ETH_VLAN_TYPE_UNKNOWN = 0,
-	ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
-	ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
-	ETH_VLAN_TYPE_MAX,
+	RTE_ETH_VLAN_TYPE_UNKNOWN = 0,
+	RTE_ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
+	RTE_ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
+	RTE_ETH_VLAN_TYPE_MAX,
 };
 
+#define ETH_VLAN_TYPE_UNKNOWN	RTE_ETH_VLAN_TYPE_UNKNOWN
+#define ETH_VLAN_TYPE_INNER	RTE_ETH_VLAN_TYPE_INNER
+#define ETH_VLAN_TYPE_OUTER	RTE_ETH_VLAN_TYPE_OUTER
+#define ETH_VLAN_TYPE_MAX	RTE_ETH_VLAN_TYPE_MAX
+
 /**
  * A structure used to describe a VLAN filter.
  * If the bit corresponding to a VID is set, such VID is on.
@@ -514,38 +559,70 @@ struct rte_eth_rss_conf {
  * Below macros are defined for RSS offload types, they can be used to
  * fill rte_eth_rss_conf.rss_hf or rte_flow_action_rss.types.
  */
-#define ETH_RSS_IPV4               RTE_BIT64(2)
-#define ETH_RSS_FRAG_IPV4          RTE_BIT64(3)
-#define ETH_RSS_NONFRAG_IPV4_TCP   RTE_BIT64(4)
-#define ETH_RSS_NONFRAG_IPV4_UDP   RTE_BIT64(5)
-#define ETH_RSS_NONFRAG_IPV4_SCTP  RTE_BIT64(6)
-#define ETH_RSS_NONFRAG_IPV4_OTHER RTE_BIT64(7)
-#define ETH_RSS_IPV6               RTE_BIT64(8)
-#define ETH_RSS_FRAG_IPV6          RTE_BIT64(9)
-#define ETH_RSS_NONFRAG_IPV6_TCP   RTE_BIT64(10)
-#define ETH_RSS_NONFRAG_IPV6_UDP   RTE_BIT64(11)
-#define ETH_RSS_NONFRAG_IPV6_SCTP  RTE_BIT64(12)
-#define ETH_RSS_NONFRAG_IPV6_OTHER RTE_BIT64(13)
-#define ETH_RSS_L2_PAYLOAD         RTE_BIT64(14)
-#define ETH_RSS_IPV6_EX            RTE_BIT64(15)
-#define ETH_RSS_IPV6_TCP_EX        RTE_BIT64(16)
-#define ETH_RSS_IPV6_UDP_EX        RTE_BIT64(17)
-#define ETH_RSS_PORT               RTE_BIT64(18)
-#define ETH_RSS_VXLAN              RTE_BIT64(19)
-#define ETH_RSS_GENEVE             RTE_BIT64(20)
-#define ETH_RSS_NVGRE              RTE_BIT64(21)
-#define ETH_RSS_GTPU               RTE_BIT64(23)
-#define ETH_RSS_ETH                RTE_BIT64(24)
-#define ETH_RSS_S_VLAN             RTE_BIT64(25)
-#define ETH_RSS_C_VLAN             RTE_BIT64(26)
-#define ETH_RSS_ESP                RTE_BIT64(27)
-#define ETH_RSS_AH                 RTE_BIT64(28)
-#define ETH_RSS_L2TPV3             RTE_BIT64(29)
-#define ETH_RSS_PFCP               RTE_BIT64(30)
-#define ETH_RSS_PPPOE              RTE_BIT64(31)
-#define ETH_RSS_ECPRI              RTE_BIT64(32)
-#define ETH_RSS_MPLS               RTE_BIT64(33)
-#define ETH_RSS_IPV4_CHKSUM        RTE_BIT64(34)
+#define RTE_ETH_RSS_IPV4               RTE_BIT64(2)
+#define ETH_RSS_IPV4                   RTE_ETH_RSS_IPV4
+#define RTE_ETH_RSS_FRAG_IPV4          RTE_BIT64(3)
+#define ETH_RSS_FRAG_IPV4              RTE_ETH_RSS_FRAG_IPV4
+#define RTE_ETH_RSS_NONFRAG_IPV4_TCP   RTE_BIT64(4)
+#define ETH_RSS_NONFRAG_IPV4_TCP       RTE_ETH_RSS_NONFRAG_IPV4_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV4_UDP   RTE_BIT64(5)
+#define ETH_RSS_NONFRAG_IPV4_UDP       RTE_ETH_RSS_NONFRAG_IPV4_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV4_SCTP  RTE_BIT64(6)
+#define ETH_RSS_NONFRAG_IPV4_SCTP      RTE_ETH_RSS_NONFRAG_IPV4_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV4_OTHER RTE_BIT64(7)
+#define ETH_RSS_NONFRAG_IPV4_OTHER     RTE_ETH_RSS_NONFRAG_IPV4_OTHER
+#define RTE_ETH_RSS_IPV6               RTE_BIT64(8)
+#define ETH_RSS_IPV6                   RTE_ETH_RSS_IPV6
+#define RTE_ETH_RSS_FRAG_IPV6          RTE_BIT64(9)
+#define ETH_RSS_FRAG_IPV6              RTE_ETH_RSS_FRAG_IPV6
+#define RTE_ETH_RSS_NONFRAG_IPV6_TCP   RTE_BIT64(10)
+#define ETH_RSS_NONFRAG_IPV6_TCP       RTE_ETH_RSS_NONFRAG_IPV6_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV6_UDP   RTE_BIT64(11)
+#define ETH_RSS_NONFRAG_IPV6_UDP       RTE_ETH_RSS_NONFRAG_IPV6_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV6_SCTP  RTE_BIT64(12)
+#define ETH_RSS_NONFRAG_IPV6_SCTP      RTE_ETH_RSS_NONFRAG_IPV6_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV6_OTHER RTE_BIT64(13)
+#define ETH_RSS_NONFRAG_IPV6_OTHER     RTE_ETH_RSS_NONFRAG_IPV6_OTHER
+#define RTE_ETH_RSS_L2_PAYLOAD         RTE_BIT64(14)
+#define ETH_RSS_L2_PAYLOAD             RTE_ETH_RSS_L2_PAYLOAD
+#define RTE_ETH_RSS_IPV6_EX            RTE_BIT64(15)
+#define ETH_RSS_IPV6_EX                RTE_ETH_RSS_IPV6_EX
+#define RTE_ETH_RSS_IPV6_TCP_EX        RTE_BIT64(16)
+#define ETH_RSS_IPV6_TCP_EX            RTE_ETH_RSS_IPV6_TCP_EX
+#define RTE_ETH_RSS_IPV6_UDP_EX        RTE_BIT64(17)
+#define ETH_RSS_IPV6_UDP_EX            RTE_ETH_RSS_IPV6_UDP_EX
+#define RTE_ETH_RSS_PORT               RTE_BIT64(18)
+#define ETH_RSS_PORT                   RTE_ETH_RSS_PORT
+#define RTE_ETH_RSS_VXLAN              RTE_BIT64(19)
+#define ETH_RSS_VXLAN                  RTE_ETH_RSS_VXLAN
+#define RTE_ETH_RSS_GENEVE             RTE_BIT64(20)
+#define ETH_RSS_GENEVE                 RTE_ETH_RSS_GENEVE
+#define RTE_ETH_RSS_NVGRE              RTE_BIT64(21)
+#define ETH_RSS_NVGRE                  RTE_ETH_RSS_NVGRE
+#define RTE_ETH_RSS_GTPU               RTE_BIT64(23)
+#define ETH_RSS_GTPU                   RTE_ETH_RSS_GTPU
+#define RTE_ETH_RSS_ETH                RTE_BIT64(24)
+#define ETH_RSS_ETH                    RTE_ETH_RSS_ETH
+#define RTE_ETH_RSS_S_VLAN             RTE_BIT64(25)
+#define ETH_RSS_S_VLAN                 RTE_ETH_RSS_S_VLAN
+#define RTE_ETH_RSS_C_VLAN             RTE_BIT64(26)
+#define ETH_RSS_C_VLAN                 RTE_ETH_RSS_C_VLAN
+#define RTE_ETH_RSS_ESP                RTE_BIT64(27)
+#define ETH_RSS_ESP                    RTE_ETH_RSS_ESP
+#define RTE_ETH_RSS_AH                 RTE_BIT64(28)
+#define ETH_RSS_AH                     RTE_ETH_RSS_AH
+#define RTE_ETH_RSS_L2TPV3             RTE_BIT64(29)
+#define ETH_RSS_L2TPV3                 RTE_ETH_RSS_L2TPV3
+#define RTE_ETH_RSS_PFCP               RTE_BIT64(30)
+#define ETH_RSS_PFCP                   RTE_ETH_RSS_PFCP
+#define RTE_ETH_RSS_PPPOE              RTE_BIT64(31)
+#define ETH_RSS_PPPOE                  RTE_ETH_RSS_PPPOE
+#define RTE_ETH_RSS_ECPRI              RTE_BIT64(32)
+#define ETH_RSS_ECPRI                  RTE_ETH_RSS_ECPRI
+#define RTE_ETH_RSS_MPLS               RTE_BIT64(33)
+#define ETH_RSS_MPLS                   RTE_ETH_RSS_MPLS
+#define RTE_ETH_RSS_IPV4_CHKSUM        RTE_BIT64(34)
+#define ETH_RSS_IPV4_CHKSUM            RTE_ETH_RSS_IPV4_CHKSUM
 
 /**
  * The ETH_RSS_L4_CHKSUM works on checksum field of any L4 header.
@@ -554,41 +631,48 @@ struct rte_eth_rss_conf {
  * checksum type for constructing the use of RSS offload bits.
  *
  * Due to above reason, some old APIs (and configuration) don't support
- * ETH_RSS_L4_CHKSUM. The rte_flow RSS API supports it.
+ * RTE_ETH_RSS_L4_CHKSUM. The rte_flow RSS API supports it.
  *
  * For the case that checksum is not used in an UDP header,
  * it takes the reserved value 0 as input for the hash function.
  */
-#define ETH_RSS_L4_CHKSUM          RTE_BIT64(35)
+#define RTE_ETH_RSS_L4_CHKSUM          RTE_BIT64(35)
+#define ETH_RSS_L4_CHKSUM              RTE_ETH_RSS_L4_CHKSUM
 
 /*
- * We use the following macros to combine with above ETH_RSS_* for
+ * We use the following macros to combine with above RTE_ETH_RSS_* for
  * more specific input set selection. These bits are defined starting
  * from the high end of the 64 bits.
- * Note: If we use above ETH_RSS_* without SRC/DST_ONLY, it represents
+ * Note: If we use above RTE_ETH_RSS_* without SRC/DST_ONLY, it represents
  * both SRC and DST are taken into account. If SRC_ONLY and DST_ONLY of
  * the same level are used simultaneously, it is the same case as none of
  * them are added.
  */
-#define ETH_RSS_L3_SRC_ONLY        RTE_BIT64(63)
-#define ETH_RSS_L3_DST_ONLY        RTE_BIT64(62)
-#define ETH_RSS_L4_SRC_ONLY        RTE_BIT64(61)
-#define ETH_RSS_L4_DST_ONLY        RTE_BIT64(60)
-#define ETH_RSS_L2_SRC_ONLY        RTE_BIT64(59)
-#define ETH_RSS_L2_DST_ONLY        RTE_BIT64(58)
+#define RTE_ETH_RSS_L3_SRC_ONLY        RTE_BIT64(63)
+#define ETH_RSS_L3_SRC_ONLY            RTE_ETH_RSS_L3_SRC_ONLY
+#define RTE_ETH_RSS_L3_DST_ONLY        RTE_BIT64(62)
+#define ETH_RSS_L3_DST_ONLY            RTE_ETH_RSS_L3_DST_ONLY
+#define RTE_ETH_RSS_L4_SRC_ONLY        RTE_BIT64(61)
+#define ETH_RSS_L4_SRC_ONLY            RTE_ETH_RSS_L4_SRC_ONLY
+#define RTE_ETH_RSS_L4_DST_ONLY        RTE_BIT64(60)
+#define ETH_RSS_L4_DST_ONLY            RTE_ETH_RSS_L4_DST_ONLY
+#define RTE_ETH_RSS_L2_SRC_ONLY        RTE_BIT64(59)
+#define ETH_RSS_L2_SRC_ONLY            RTE_ETH_RSS_L2_SRC_ONLY
+#define RTE_ETH_RSS_L2_DST_ONLY        RTE_BIT64(58)
+#define ETH_RSS_L2_DST_ONLY            RTE_ETH_RSS_L2_DST_ONLY
 
 /*
  * Only select IPV6 address prefix as RSS input set according to
- * https://tools.ietf.org/html/rfc6052
- * Must be combined with ETH_RSS_IPV6, ETH_RSS_NONFRAG_IPV6_UDP,
- * ETH_RSS_NONFRAG_IPV6_TCP, ETH_RSS_NONFRAG_IPV6_SCTP.
+ * https://tools.ietf.org/html/rfc6052
+ * Must be combined with RTE_ETH_RSS_IPV6, RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ * RTE_ETH_RSS_NONFRAG_IPV6_TCP, RTE_ETH_RSS_NONFRAG_IPV6_SCTP.
  */
-#define RTE_ETH_RSS_L3_PRE32	   RTE_BIT64(57)
-#define RTE_ETH_RSS_L3_PRE40	   RTE_BIT64(56)
-#define RTE_ETH_RSS_L3_PRE48	   RTE_BIT64(55)
-#define RTE_ETH_RSS_L3_PRE56	   RTE_BIT64(54)
-#define RTE_ETH_RSS_L3_PRE64	   RTE_BIT64(53)
-#define RTE_ETH_RSS_L3_PRE96	   RTE_BIT64(52)
+#define RTE_ETH_RSS_L3_PRE32           RTE_BIT64(57)
+#define RTE_ETH_RSS_L3_PRE40           RTE_BIT64(56)
+#define RTE_ETH_RSS_L3_PRE48           RTE_BIT64(55)
+#define RTE_ETH_RSS_L3_PRE56           RTE_BIT64(54)
+#define RTE_ETH_RSS_L3_PRE64           RTE_BIT64(53)
+#define RTE_ETH_RSS_L3_PRE96           RTE_BIT64(52)
 
 /*
  * Use the following macros to combine with the above layers
@@ -603,22 +687,27 @@ struct rte_eth_rss_conf {
  * It basically stands for the innermost encapsulation level RSS
  * can be performed on according to PMD and device capabilities.
  */
-#define ETH_RSS_LEVEL_PMD_DEFAULT       (0ULL << 50)
+#define RTE_ETH_RSS_LEVEL_PMD_DEFAULT  (0ULL << 50)
+#define ETH_RSS_LEVEL_PMD_DEFAULT      RTE_ETH_RSS_LEVEL_PMD_DEFAULT
 
 /**
  * level 1, requests RSS to be performed on the outermost packet
  * encapsulation level.
  */
-#define ETH_RSS_LEVEL_OUTERMOST         (1ULL << 50)
+#define RTE_ETH_RSS_LEVEL_OUTERMOST    (1ULL << 50)
+#define ETH_RSS_LEVEL_OUTERMOST        RTE_ETH_RSS_LEVEL_OUTERMOST
 
 /**
  * level 2, requests RSS to be performed on the specified inner packet
  * encapsulation level, from outermost to innermost (lower to higher values).
  */
-#define ETH_RSS_LEVEL_INNERMOST         (2ULL << 50)
-#define ETH_RSS_LEVEL_MASK              (3ULL << 50)
+#define RTE_ETH_RSS_LEVEL_INNERMOST    (2ULL << 50)
+#define ETH_RSS_LEVEL_INNERMOST        RTE_ETH_RSS_LEVEL_INNERMOST
+#define RTE_ETH_RSS_LEVEL_MASK         (3ULL << 50)
+#define ETH_RSS_LEVEL_MASK             RTE_ETH_RSS_LEVEL_MASK
 
-#define ETH_RSS_LEVEL(rss_hf) ((rss_hf & ETH_RSS_LEVEL_MASK) >> 50)
+#define RTE_ETH_RSS_LEVEL(rss_hf) ((rss_hf & RTE_ETH_RSS_LEVEL_MASK) >> 50)
+#define ETH_RSS_LEVEL(rss_hf)          RTE_ETH_RSS_LEVEL(rss_hf)
 
 /**
  * For input set change of hash filter, if SRC_ONLY and DST_ONLY of
@@ -633,219 +722,312 @@ struct rte_eth_rss_conf {
 static inline uint64_t
 rte_eth_rss_hf_refine(uint64_t rss_hf)
 {
-	if ((rss_hf & ETH_RSS_L3_SRC_ONLY) && (rss_hf & ETH_RSS_L3_DST_ONLY))
-		rss_hf &= ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+	if ((rss_hf & RTE_ETH_RSS_L3_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L3_DST_ONLY))
+		rss_hf &= ~(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
 
-	if ((rss_hf & ETH_RSS_L4_SRC_ONLY) && (rss_hf & ETH_RSS_L4_DST_ONLY))
-		rss_hf &= ~(ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+	if ((rss_hf & RTE_ETH_RSS_L4_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L4_DST_ONLY))
+		rss_hf &= ~(RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
 
 	return rss_hf;
 }
 
-#define ETH_RSS_IPV6_PRE32 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE32 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32	RTE_ETH_RSS_IPV6_PRE32
 
-#define ETH_RSS_IPV6_PRE40 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE40 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40	RTE_ETH_RSS_IPV6_PRE40
 
-#define ETH_RSS_IPV6_PRE48 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE48 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48	RTE_ETH_RSS_IPV6_PRE48
 
-#define ETH_RSS_IPV6_PRE56 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE56 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56	RTE_ETH_RSS_IPV6_PRE56
 
-#define ETH_RSS_IPV6_PRE64 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE64 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64	RTE_ETH_RSS_IPV6_PRE64
 
-#define ETH_RSS_IPV6_PRE96 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE96 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96	RTE_ETH_RSS_IPV6_PRE96
 
-#define ETH_RSS_IPV6_PRE32_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE32_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_UDP	RTE_ETH_RSS_IPV6_PRE32_UDP
 
-#define ETH_RSS_IPV6_PRE40_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE40_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_UDP	RTE_ETH_RSS_IPV6_PRE40_UDP
 
-#define ETH_RSS_IPV6_PRE48_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE48_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_UDP	RTE_ETH_RSS_IPV6_PRE48_UDP
 
-#define ETH_RSS_IPV6_PRE56_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE56_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_UDP	RTE_ETH_RSS_IPV6_PRE56_UDP
 
-#define ETH_RSS_IPV6_PRE64_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE64_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_UDP	RTE_ETH_RSS_IPV6_PRE64_UDP
 
-#define ETH_RSS_IPV6_PRE96_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE96_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_UDP	RTE_ETH_RSS_IPV6_PRE96_UDP
 
-#define ETH_RSS_IPV6_PRE32_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE32_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_TCP	RTE_ETH_RSS_IPV6_PRE32_TCP
 
-#define ETH_RSS_IPV6_PRE40_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE40_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_TCP	RTE_ETH_RSS_IPV6_PRE40_TCP
 
-#define ETH_RSS_IPV6_PRE48_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE48_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_TCP	RTE_ETH_RSS_IPV6_PRE48_TCP
 
-#define ETH_RSS_IPV6_PRE56_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE56_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_TCP	RTE_ETH_RSS_IPV6_PRE56_TCP
 
-#define ETH_RSS_IPV6_PRE64_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE64_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_TCP	RTE_ETH_RSS_IPV6_PRE64_TCP
 
-#define ETH_RSS_IPV6_PRE96_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE96_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_TCP	RTE_ETH_RSS_IPV6_PRE96_TCP
 
-#define ETH_RSS_IPV6_PRE32_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE32_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_SCTP	RTE_ETH_RSS_IPV6_PRE32_SCTP
 
-#define ETH_RSS_IPV6_PRE40_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE40_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_SCTP	RTE_ETH_RSS_IPV6_PRE40_SCTP
 
-#define ETH_RSS_IPV6_PRE48_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE48_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_SCTP	RTE_ETH_RSS_IPV6_PRE48_SCTP
 
-#define ETH_RSS_IPV6_PRE56_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE56_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_SCTP	RTE_ETH_RSS_IPV6_PRE56_SCTP
 
-#define ETH_RSS_IPV6_PRE64_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE64_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_SCTP	RTE_ETH_RSS_IPV6_PRE64_SCTP
 
-#define ETH_RSS_IPV6_PRE96_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE96_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE96)
-
-#define ETH_RSS_IP ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_IPV6_EX)
-
-#define ETH_RSS_UDP ( \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_UDP_EX)
-
-#define ETH_RSS_TCP ( \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_IPV6_TCP_EX)
-
-#define ETH_RSS_SCTP ( \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP)
-
-#define ETH_RSS_TUNNEL ( \
-	ETH_RSS_VXLAN  | \
-	ETH_RSS_GENEVE | \
-	ETH_RSS_NVGRE)
-
-#define ETH_RSS_VLAN ( \
-	ETH_RSS_S_VLAN  | \
-	ETH_RSS_C_VLAN)
+#define ETH_RSS_IPV6_PRE96_SCTP	RTE_ETH_RSS_IPV6_PRE96_SCTP
+
+#define RTE_ETH_RSS_IP ( \
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_IPV6_EX)
+#define ETH_RSS_IP	RTE_ETH_RSS_IP
+
+#define RTE_ETH_RSS_UDP ( \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
+#define ETH_RSS_UDP	RTE_ETH_RSS_UDP
+
+#define RTE_ETH_RSS_TCP ( \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_IPV6_TCP_EX)
+#define ETH_RSS_TCP	RTE_ETH_RSS_TCP
+
+#define RTE_ETH_RSS_SCTP ( \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+#define ETH_RSS_SCTP	RTE_ETH_RSS_SCTP
+
+#define RTE_ETH_RSS_TUNNEL ( \
+	RTE_ETH_RSS_VXLAN  | \
+	RTE_ETH_RSS_GENEVE | \
+	RTE_ETH_RSS_NVGRE)
+#define ETH_RSS_TUNNEL	RTE_ETH_RSS_TUNNEL
+
+#define RTE_ETH_RSS_VLAN ( \
+	RTE_ETH_RSS_S_VLAN  | \
+	RTE_ETH_RSS_C_VLAN)
+#define ETH_RSS_VLAN	RTE_ETH_RSS_VLAN
 
 /** Mask of valid RSS hash protocols */
-#define ETH_RSS_PROTO_MASK ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX | \
-	ETH_RSS_PORT  | \
-	ETH_RSS_VXLAN | \
-	ETH_RSS_GENEVE | \
-	ETH_RSS_NVGRE | \
-	ETH_RSS_MPLS)
+#define RTE_ETH_RSS_PROTO_MASK ( \
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L2_PAYLOAD | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX | \
+	RTE_ETH_RSS_PORT  | \
+	RTE_ETH_RSS_VXLAN | \
+	RTE_ETH_RSS_GENEVE | \
+	RTE_ETH_RSS_NVGRE | \
+	RTE_ETH_RSS_MPLS)
+#define ETH_RSS_PROTO_MASK	RTE_ETH_RSS_PROTO_MASK
 
 /*
  * Definitions used for redirection table entry size.
  * Some RSS RETA sizes may not be supported by some drivers, check the
  * documentation or the description of relevant functions for more details.
  */
-#define ETH_RSS_RETA_SIZE_64  64
-#define ETH_RSS_RETA_SIZE_128 128
-#define ETH_RSS_RETA_SIZE_256 256
-#define ETH_RSS_RETA_SIZE_512 512
-#define RTE_RETA_GROUP_SIZE   64
+#define RTE_ETH_RSS_RETA_SIZE_64  64
+#define ETH_RSS_RETA_SIZE_64      RTE_ETH_RSS_RETA_SIZE_64
+#define RTE_ETH_RSS_RETA_SIZE_128 128
+#define ETH_RSS_RETA_SIZE_128     RTE_ETH_RSS_RETA_SIZE_128
+#define RTE_ETH_RSS_RETA_SIZE_256 256
+#define ETH_RSS_RETA_SIZE_256     RTE_ETH_RSS_RETA_SIZE_256
+#define RTE_ETH_RSS_RETA_SIZE_512 512
+#define ETH_RSS_RETA_SIZE_512     RTE_ETH_RSS_RETA_SIZE_512
+#define RTE_ETH_RETA_GROUP_SIZE   64
+#define RTE_RETA_GROUP_SIZE       RTE_ETH_RETA_GROUP_SIZE
 
 /**@{@name VMDq and DCB maximums */
-#define ETH_VMDQ_MAX_VLAN_FILTERS   64 /**< Maximum nb. of VMDq VLAN filters. */
-#define ETH_DCB_NUM_USER_PRIORITIES 8  /**< Maximum nb. of DCB priorities. */
-#define ETH_VMDQ_DCB_NUM_QUEUES     128 /**< Maximum nb. of VMDq DCB queues. */
-#define ETH_DCB_NUM_QUEUES          128 /**< Maximum nb. of DCB queues. */
+#define RTE_ETH_VMDQ_MAX_VLAN_FILTERS   64 /**< Maximum nb. of VMDq VLAN filters. */
+#define ETH_VMDQ_MAX_VLAN_FILTERS       RTE_ETH_VMDQ_MAX_VLAN_FILTERS
+#define RTE_ETH_DCB_NUM_USER_PRIORITIES 8  /**< Maximum nb. of DCB priorities. */
+#define ETH_DCB_NUM_USER_PRIORITIES     RTE_ETH_DCB_NUM_USER_PRIORITIES
+#define RTE_ETH_VMDQ_DCB_NUM_QUEUES     128 /**< Maximum nb. of VMDq DCB queues. */
+#define ETH_VMDQ_DCB_NUM_QUEUES         RTE_ETH_VMDQ_DCB_NUM_QUEUES
+#define RTE_ETH_DCB_NUM_QUEUES          128 /**< Maximum nb. of DCB queues. */
+#define ETH_DCB_NUM_QUEUES              RTE_ETH_DCB_NUM_QUEUES
 /**@}*/
 
 /**@{@name DCB capabilities */
-#define ETH_DCB_PG_SUPPORT      0x00000001 /**< Priority Group(ETS) support. */
-#define ETH_DCB_PFC_SUPPORT     0x00000002 /**< Priority Flow Control support. */
+#define RTE_ETH_DCB_PG_SUPPORT      0x00000001 /**< Priority Group(ETS) support. */
+#define ETH_DCB_PG_SUPPORT          RTE_ETH_DCB_PG_SUPPORT
+#define RTE_ETH_DCB_PFC_SUPPORT     0x00000002 /**< Priority Flow Control support. */
+#define ETH_DCB_PFC_SUPPORT         RTE_ETH_DCB_PFC_SUPPORT
 /**@}*/
 
 /**@{@name VLAN offload bits */
-#define ETH_VLAN_STRIP_OFFLOAD   0x0001 /**< VLAN Strip  On/Off */
-#define ETH_VLAN_FILTER_OFFLOAD  0x0002 /**< VLAN Filter On/Off */
-#define ETH_VLAN_EXTEND_OFFLOAD  0x0004 /**< VLAN Extend On/Off */
-#define ETH_QINQ_STRIP_OFFLOAD   0x0008 /**< QINQ Strip On/Off */
-
-#define ETH_VLAN_STRIP_MASK   0x0001 /**< VLAN Strip  setting mask */
-#define ETH_VLAN_FILTER_MASK  0x0002 /**< VLAN Filter  setting mask*/
-#define ETH_VLAN_EXTEND_MASK  0x0004 /**< VLAN Extend  setting mask*/
-#define ETH_QINQ_STRIP_MASK   0x0008 /**< QINQ Strip  setting mask */
-#define ETH_VLAN_ID_MAX       0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define RTE_ETH_VLAN_STRIP_OFFLOAD   0x0001 /**< VLAN Strip  On/Off */
+#define ETH_VLAN_STRIP_OFFLOAD       RTE_ETH_VLAN_STRIP_OFFLOAD
+#define RTE_ETH_VLAN_FILTER_OFFLOAD  0x0002 /**< VLAN Filter On/Off */
+#define ETH_VLAN_FILTER_OFFLOAD      RTE_ETH_VLAN_FILTER_OFFLOAD
+#define RTE_ETH_VLAN_EXTEND_OFFLOAD  0x0004 /**< VLAN Extend On/Off */
+#define ETH_VLAN_EXTEND_OFFLOAD      RTE_ETH_VLAN_EXTEND_OFFLOAD
+#define RTE_ETH_QINQ_STRIP_OFFLOAD   0x0008 /**< QINQ Strip On/Off */
+#define ETH_QINQ_STRIP_OFFLOAD       RTE_ETH_QINQ_STRIP_OFFLOAD
+
+#define RTE_ETH_VLAN_STRIP_MASK      0x0001 /**< VLAN Strip  setting mask */
+#define ETH_VLAN_STRIP_MASK          RTE_ETH_VLAN_STRIP_MASK
+#define RTE_ETH_VLAN_FILTER_MASK     0x0002 /**< VLAN Filter  setting mask*/
+#define ETH_VLAN_FILTER_MASK         RTE_ETH_VLAN_FILTER_MASK
+#define RTE_ETH_VLAN_EXTEND_MASK     0x0004 /**< VLAN Extend  setting mask*/
+#define ETH_VLAN_EXTEND_MASK         RTE_ETH_VLAN_EXTEND_MASK
+#define RTE_ETH_QINQ_STRIP_MASK      0x0008 /**< QINQ Strip  setting mask */
+#define ETH_QINQ_STRIP_MASK          RTE_ETH_QINQ_STRIP_MASK
+#define RTE_ETH_VLAN_ID_MAX          0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define ETH_VLAN_ID_MAX              RTE_ETH_VLAN_ID_MAX
 /**@}*/
 
 /* Definitions used for receive MAC address   */
-#define ETH_NUM_RECEIVE_MAC_ADDR  128 /**< Maximum nb. of receive mac addr. */
+#define RTE_ETH_NUM_RECEIVE_MAC_ADDR   128 /**< Maximum nb. of receive mac addr. */
+#define ETH_NUM_RECEIVE_MAC_ADDR       RTE_ETH_NUM_RECEIVE_MAC_ADDR
 
 /* Definitions used for unicast hash  */
-#define ETH_VMDQ_NUM_UC_HASH_ARRAY  128 /**< Maximum nb. of UC hash array. */
+#define RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY 128 /**< Maximum nb. of UC hash array. */
+#define ETH_VMDQ_NUM_UC_HASH_ARRAY     RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY
 
 /**@{@name VMDq Rx mode
  * @see rte_eth_vmdq_rx_conf.rx_mode
  */
-#define ETH_VMDQ_ACCEPT_UNTAG   0x0001 /**< accept untagged packets. */
-#define ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table . */
-#define ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
-#define ETH_VMDQ_ACCEPT_BROADCAST   0x0008 /**< accept broadcast packets. */
-#define ETH_VMDQ_ACCEPT_MULTICAST   0x0010 /**< multicast promiscuous. */
+#define RTE_ETH_VMDQ_ACCEPT_UNTAG      0x0001 /**< accept untagged packets. */
+#define ETH_VMDQ_ACCEPT_UNTAG          RTE_ETH_VMDQ_ACCEPT_UNTAG
+#define RTE_ETH_VMDQ_ACCEPT_HASH_MC    0x0002 /**< accept packets in multicast table . */
+#define ETH_VMDQ_ACCEPT_HASH_MC        RTE_ETH_VMDQ_ACCEPT_HASH_MC
+#define RTE_ETH_VMDQ_ACCEPT_HASH_UC    0x0004 /**< accept packets in unicast table. */
+#define ETH_VMDQ_ACCEPT_HASH_UC        RTE_ETH_VMDQ_ACCEPT_HASH_UC
+#define RTE_ETH_VMDQ_ACCEPT_BROADCAST  0x0008 /**< accept broadcast packets. */
+#define ETH_VMDQ_ACCEPT_BROADCAST      RTE_ETH_VMDQ_ACCEPT_BROADCAST
+#define RTE_ETH_VMDQ_ACCEPT_MULTICAST  0x0010 /**< multicast promiscuous. */
+#define ETH_VMDQ_ACCEPT_MULTICAST      RTE_ETH_VMDQ_ACCEPT_MULTICAST
 /**@}*/
 
+/** Maximum nb. of vlan per mirror rule */
+#define RTE_ETH_MIRROR_MAX_VLANS       64
+#define ETH_MIRROR_MAX_VLANS           RTE_ETH_MIRROR_MAX_VLANS
+
+#define RTE_ETH_MIRROR_VIRTUAL_POOL_UP    0x01  /**< Virtual Pool uplink Mirroring. */
+#define ETH_MIRROR_VIRTUAL_POOL_UP        RTE_ETH_MIRROR_VIRTUAL_POOL_UP
+#define RTE_ETH_MIRROR_UPLINK_PORT        0x02  /**< Uplink Port Mirroring. */
+#define ETH_MIRROR_UPLINK_PORT            RTE_ETH_MIRROR_UPLINK_PORT
+#define RTE_ETH_MIRROR_DOWNLINK_PORT      0x04  /**< Downlink Port Mirroring. */
+#define ETH_MIRROR_DOWNLINK_PORT          RTE_ETH_MIRROR_DOWNLINK_PORT
+#define RTE_ETH_MIRROR_VLAN               0x08  /**< VLAN Mirroring. */
+#define ETH_MIRROR_VLAN                   RTE_ETH_MIRROR_VLAN
+#define RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN  0x10  /**< Virtual Pool downlink Mirroring. */
+#define ETH_MIRROR_VIRTUAL_POOL_DOWN      RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN
+
+/**
+ * A structure used to configure VLAN traffic mirror of an Ethernet port.
+ */
+struct rte_eth_vlan_mirror {
+	uint64_t vlan_mask; /**< mask for valid VLAN ID. */
+	/** VLAN ID list for vlan mirroring. */
+	uint16_t vlan_id[RTE_ETH_MIRROR_MAX_VLANS];
+};
+
+/**
+ * A structure used to configure traffic mirror of an Ethernet port.
+ */
+struct rte_eth_mirror_conf {
+	uint8_t rule_type;  /**< Mirroring rule type */
+	uint8_t dst_pool;   /**< Destination pool for this mirror rule. */
+	uint64_t pool_mask; /**< Bitmap of pool for pool mirroring */
+	/** VLAN ID setting for VLAN mirroring. */
+	struct rte_eth_vlan_mirror vlan;
+};
+
 /**
  * A structure used to configure 64 entries of Redirection Table of the
  * Receive Side Scaling (RSS) feature of an Ethernet port. To configure
@@ -856,7 +1038,7 @@ struct rte_eth_rss_reta_entry64 {
 	/** Mask bits indicate which entries need to be updated/queried. */
 	uint64_t mask;
 	/** Group of 64 redirection table entries. */
-	uint16_t reta[RTE_RETA_GROUP_SIZE];
+	uint16_t reta[RTE_ETH_RETA_GROUP_SIZE];
 };
 
 /**
@@ -864,38 +1046,44 @@ struct rte_eth_rss_reta_entry64 {
  * in DCB configurations
  */
 enum rte_eth_nb_tcs {
-	ETH_4_TCS = 4, /**< 4 TCs with DCB. */
-	ETH_8_TCS = 8  /**< 8 TCs with DCB. */
+	RTE_ETH_4_TCS = 4, /**< 4 TCs with DCB. */
+	RTE_ETH_8_TCS = 8  /**< 8 TCs with DCB. */
 };
+#define ETH_4_TCS RTE_ETH_4_TCS
+#define ETH_8_TCS RTE_ETH_8_TCS
 
 /**
  * This enum indicates the possible number of queue pools
  * in VMDq configurations.
  */
 enum rte_eth_nb_pools {
-	ETH_8_POOLS = 8,    /**< 8 VMDq pools. */
-	ETH_16_POOLS = 16,  /**< 16 VMDq pools. */
-	ETH_32_POOLS = 32,  /**< 32 VMDq pools. */
-	ETH_64_POOLS = 64   /**< 64 VMDq pools. */
+	RTE_ETH_8_POOLS = 8,    /**< 8 VMDq pools. */
+	RTE_ETH_16_POOLS = 16,  /**< 16 VMDq pools. */
+	RTE_ETH_32_POOLS = 32,  /**< 32 VMDq pools. */
+	RTE_ETH_64_POOLS = 64   /**< 64 VMDq pools. */
 };
+#define ETH_8_POOLS	RTE_ETH_8_POOLS
+#define ETH_16_POOLS	RTE_ETH_16_POOLS
+#define ETH_32_POOLS	RTE_ETH_32_POOLS
+#define ETH_64_POOLS	RTE_ETH_64_POOLS
 
 /* This structure may be extended in future. */
 struct rte_eth_dcb_rx_conf {
 	enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs */
 	/** Traffic class each UP mapped to. */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_vmdq_dcb_tx_conf {
 	enum rte_eth_nb_pools nb_queue_pools; /**< With DCB, 16 or 32 pools. */
 	/** Traffic class each UP mapped to. */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_dcb_tx_conf {
 	enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs. */
 	/** Traffic class each UP mapped to. */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_vmdq_tx_conf {
@@ -921,9 +1109,9 @@ struct rte_eth_vmdq_dcb_conf {
 	struct {
 		uint16_t vlan_id; /**< The VLAN ID of the received frame */
 		uint64_t pools;   /**< Bitmask of pools for packet Rx */
-	} pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */
+	} pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */
 	/** Selects a queue in a pool */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 /**
@@ -933,7 +1121,7 @@ struct rte_eth_vmdq_dcb_conf {
  * Using this feature, packets are routed to a pool of queues. By default,
  * the pool selection is based on the MAC address, the VLAN ID in the
  * VLAN tag as specified in the pool_map array.
- * Passing the ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool
+ * Passing the RTE_ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool
  * selection using only the MAC address. MAC address to pool mapping is done
  * using the rte_eth_dev_mac_addr_add function, with the pool parameter
  * corresponding to the pool ID.
@@ -954,7 +1142,7 @@ struct rte_eth_vmdq_rx_conf {
 	struct {
 		uint16_t vlan_id; /**< The VLAN ID of the received frame */
 		uint64_t pools;   /**< Bitmask of pools for packet Rx */
-	} pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */
+	} pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */
 };
 
 /**
@@ -963,7 +1151,7 @@ struct rte_eth_vmdq_rx_conf {
 struct rte_eth_txmode {
 	enum rte_eth_tx_mq_mode mq_mode; /**< Tx multi-queues mode. */
 	/**
-	 * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+	 * Per-port Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
 	 * Only offloads set on tx_offload_capa field on rte_eth_dev_info
 	 * structure are allowed to be set.
 	 */
@@ -1055,7 +1243,7 @@ struct rte_eth_rxconf {
 	uint16_t share_group;
 	uint16_t share_qid; /**< Shared Rx queue ID in group */
 	/**
-	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Per-queue Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
 	 * Only offloads set on rx_queue_offload_capa or rx_offload_capa
 	 * fields on rte_eth_dev_info structure are allowed to be set.
 	 */
@@ -1084,7 +1272,7 @@ struct rte_eth_txconf {
 
 	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
 	/**
-	 * Per-queue Tx offloads to be set  using DEV_TX_OFFLOAD_* flags.
+	 * Per-queue Tx offloads to be set  using RTE_ETH_TX_OFFLOAD_* flags.
 	 * Only offloads set on tx_queue_offload_capa or tx_offload_capa
 	 * fields on rte_eth_dev_info structure are allowed to be set.
 	 */
@@ -1195,12 +1383,17 @@ struct rte_eth_desc_lim {
  * This enum indicates the flow control mode
  */
 enum rte_eth_fc_mode {
-	RTE_FC_NONE = 0, /**< Disable flow control. */
-	RTE_FC_RX_PAUSE, /**< Rx pause frame, enable flowctrl on Tx side. */
-	RTE_FC_TX_PAUSE, /**< Tx pause frame, enable flowctrl on Rx side. */
-	RTE_FC_FULL      /**< Enable flow control on both side. */
+	RTE_ETH_FC_NONE = 0, /**< Disable flow control. */
+	RTE_ETH_FC_RX_PAUSE, /**< Rx pause frame, enable flowctrl on Tx side. */
+	RTE_ETH_FC_TX_PAUSE, /**< Tx pause frame, enable flowctrl on Rx side. */
+	RTE_ETH_FC_FULL      /**< Enable flow control on both side. */
 };
 
+#define RTE_FC_NONE	RTE_ETH_FC_NONE
+#define RTE_FC_RX_PAUSE	RTE_ETH_FC_RX_PAUSE
+#define RTE_FC_TX_PAUSE	RTE_ETH_FC_TX_PAUSE
+#define RTE_FC_FULL	RTE_ETH_FC_FULL
+
 /**
  * A structure used to configure Ethernet flow control parameter.
  * These parameters will be configured into the register of the NIC.
@@ -1231,18 +1424,29 @@ struct rte_eth_pfc_conf {
  * @see rte_eth_udp_tunnel
  */
 enum rte_eth_tunnel_type {
-	RTE_TUNNEL_TYPE_NONE = 0,
-	RTE_TUNNEL_TYPE_VXLAN,
-	RTE_TUNNEL_TYPE_GENEVE,
-	RTE_TUNNEL_TYPE_TEREDO,
-	RTE_TUNNEL_TYPE_NVGRE,
-	RTE_TUNNEL_TYPE_IP_IN_GRE,
-	RTE_L2_TUNNEL_TYPE_E_TAG,
-	RTE_TUNNEL_TYPE_VXLAN_GPE,
-	RTE_TUNNEL_TYPE_ECPRI,
-	RTE_TUNNEL_TYPE_MAX,
+	RTE_ETH_TUNNEL_TYPE_NONE = 0,
+	RTE_ETH_TUNNEL_TYPE_VXLAN,
+	RTE_ETH_TUNNEL_TYPE_GENEVE,
+	RTE_ETH_TUNNEL_TYPE_TEREDO,
+	RTE_ETH_TUNNEL_TYPE_NVGRE,
+	RTE_ETH_TUNNEL_TYPE_IP_IN_GRE,
+	RTE_ETH_L2_TUNNEL_TYPE_E_TAG,
+	RTE_ETH_TUNNEL_TYPE_VXLAN_GPE,
+	RTE_ETH_TUNNEL_TYPE_ECPRI,
+	RTE_ETH_TUNNEL_TYPE_MAX,
 };
 
+#define RTE_TUNNEL_TYPE_NONE		RTE_ETH_TUNNEL_TYPE_NONE
+#define RTE_TUNNEL_TYPE_VXLAN		RTE_ETH_TUNNEL_TYPE_VXLAN
+#define RTE_TUNNEL_TYPE_GENEVE		RTE_ETH_TUNNEL_TYPE_GENEVE
+#define RTE_TUNNEL_TYPE_TEREDO		RTE_ETH_TUNNEL_TYPE_TEREDO
+#define RTE_TUNNEL_TYPE_NVGRE		RTE_ETH_TUNNEL_TYPE_NVGRE
+#define RTE_TUNNEL_TYPE_IP_IN_GRE	RTE_ETH_TUNNEL_TYPE_IP_IN_GRE
+#define RTE_L2_TUNNEL_TYPE_E_TAG	RTE_ETH_L2_TUNNEL_TYPE_E_TAG
+#define RTE_TUNNEL_TYPE_VXLAN_GPE	RTE_ETH_TUNNEL_TYPE_VXLAN_GPE
+#define RTE_TUNNEL_TYPE_ECPRI		RTE_ETH_TUNNEL_TYPE_ECPRI
+#define RTE_TUNNEL_TYPE_MAX		RTE_ETH_TUNNEL_TYPE_MAX
+
 /* Deprecated API file for rte_eth_dev_filter_* functions */
 #include "rte_eth_ctrl.h"
 
@@ -1250,11 +1454,16 @@ enum rte_eth_tunnel_type {
  *  Memory space that can be configured to store Flow Director filters
  *  in the board memory.
  */
-enum rte_fdir_pballoc_type {
-	RTE_FDIR_PBALLOC_64K = 0,  /**< 64k. */
-	RTE_FDIR_PBALLOC_128K,     /**< 128k. */
-	RTE_FDIR_PBALLOC_256K,     /**< 256k. */
+enum rte_eth_fdir_pballoc_type {
+	RTE_ETH_FDIR_PBALLOC_64K = 0,  /**< 64k. */
+	RTE_ETH_FDIR_PBALLOC_128K,     /**< 128k. */
+	RTE_ETH_FDIR_PBALLOC_256K,     /**< 256k. */
 };
+#define rte_fdir_pballoc_type	rte_eth_fdir_pballoc_type
+
+#define RTE_FDIR_PBALLOC_64K	RTE_ETH_FDIR_PBALLOC_64K
+#define RTE_FDIR_PBALLOC_128K	RTE_ETH_FDIR_PBALLOC_128K
+#define RTE_FDIR_PBALLOC_256K	RTE_ETH_FDIR_PBALLOC_256K
 
 /**
  *  Select report mode of FDIR hash information in Rx descriptors.
@@ -1271,9 +1480,9 @@ enum rte_fdir_status_mode {
  *
  * If mode is RTE_FDIR_MODE_NONE, the pballoc value is ignored.
  */
-struct rte_fdir_conf {
+struct rte_eth_fdir_conf {
 	enum rte_fdir_mode mode; /**< Flow Director mode. */
-	enum rte_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
+	enum rte_eth_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
 	enum rte_fdir_status_mode status;  /**< How to report FDIR hash. */
 	/** Rx queue of packets matching a "drop" filter in perfect mode. */
 	uint8_t drop_queue;
@@ -1282,6 +1491,8 @@ struct rte_fdir_conf {
 	struct rte_eth_fdir_flex_conf flex_conf;
 };
 
+#define rte_fdir_conf rte_eth_fdir_conf
+
 /**
  * UDP tunneling configuration.
  *
@@ -1299,7 +1510,7 @@ struct rte_eth_udp_tunnel {
 /**
  * A structure used to enable/disable specific device interrupts.
  */
-struct rte_intr_conf {
+struct rte_eth_intr_conf {
 	/** enable/disable lsc interrupt. 0 (default) - disable, 1 enable */
 	uint32_t lsc:1;
 	/** enable/disable rxq interrupt. 0 (default) - disable, 1 enable */
@@ -1308,18 +1519,20 @@ struct rte_intr_conf {
 	uint32_t rmv:1;
 };
 
+#define rte_intr_conf rte_eth_intr_conf
+
 /**
  * A structure used to configure an Ethernet port.
  * Depending upon the Rx multi-queue mode, extra advanced
  * configuration settings may be needed.
  */
 struct rte_eth_conf {
-	uint32_t link_speeds; /**< bitmap of ETH_LINK_SPEED_XXX of speeds to be
-				used. ETH_LINK_SPEED_FIXED disables link
+	uint32_t link_speeds; /**< bitmap of RTE_ETH_LINK_SPEED_XXX of speeds to be
+				used. RTE_ETH_LINK_SPEED_FIXED disables link
 				autonegotiation, and a unique speed shall be
 				set. Otherwise, the bitmap defines the set of
 				speeds to be advertised. If the special value
-				ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
+				RTE_ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
 				supported are advertised. */
 	struct rte_eth_rxmode rxmode; /**< Port Rx configuration. */
 	struct rte_eth_txmode txmode; /**< Port Tx configuration. */
@@ -1346,47 +1559,67 @@ struct rte_eth_conf {
 		struct rte_eth_vmdq_tx_conf vmdq_tx_conf;
 	} tx_adv_conf; /**< Port Tx DCB configuration (union). */
 	/** Currently,Priority Flow Control(PFC) are supported,if DCB with PFC
-	    is needed,and the variable must be set ETH_DCB_PFC_SUPPORT. */
+	    is needed,and the variable must be set RTE_ETH_DCB_PFC_SUPPORT. */
 	uint32_t dcb_capability_en;
-	struct rte_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
-	struct rte_intr_conf intr_conf; /**< Interrupt mode configuration. */
+	struct rte_eth_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
+	struct rte_eth_intr_conf intr_conf; /**< Interrupt mode configuration. */
 };
 
 /**
  * Rx offload capabilities of a device.
  */
-#define DEV_RX_OFFLOAD_VLAN_STRIP  0x00000001
-#define DEV_RX_OFFLOAD_IPV4_CKSUM  0x00000002
-#define DEV_RX_OFFLOAD_UDP_CKSUM   0x00000004
-#define DEV_RX_OFFLOAD_TCP_CKSUM   0x00000008
-#define DEV_RX_OFFLOAD_TCP_LRO     0x00000010
-#define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000020
-#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
-#define DEV_RX_OFFLOAD_MACSEC_STRIP     0x00000080
-#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
-#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
-#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
-#define DEV_RX_OFFLOAD_SCATTER		0x00002000
+#define RTE_ETH_RX_OFFLOAD_VLAN_STRIP       0x00000001
+#define DEV_RX_OFFLOAD_VLAN_STRIP           RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+#define RTE_ETH_RX_OFFLOAD_IPV4_CKSUM       0x00000002
+#define DEV_RX_OFFLOAD_IPV4_CKSUM           RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_UDP_CKSUM        0x00000004
+#define DEV_RX_OFFLOAD_UDP_CKSUM            RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_CKSUM        0x00000008
+#define DEV_RX_OFFLOAD_TCP_CKSUM            RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_LRO          0x00000010
+#define DEV_RX_OFFLOAD_TCP_LRO              RTE_ETH_RX_OFFLOAD_TCP_LRO
+#define RTE_ETH_RX_OFFLOAD_QINQ_STRIP       0x00000020
+#define DEV_RX_OFFLOAD_QINQ_STRIP           RTE_ETH_RX_OFFLOAD_QINQ_STRIP
+#define RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
+#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_MACSEC_STRIP     0x00000080
+#define DEV_RX_OFFLOAD_MACSEC_STRIP         RTE_ETH_RX_OFFLOAD_MACSEC_STRIP
+#define RTE_ETH_RX_OFFLOAD_HEADER_SPLIT     0x00000100
+#define DEV_RX_OFFLOAD_HEADER_SPLIT         RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
+#define RTE_ETH_RX_OFFLOAD_VLAN_FILTER      0x00000200
+#define DEV_RX_OFFLOAD_VLAN_FILTER          RTE_ETH_RX_OFFLOAD_VLAN_FILTER
+#define RTE_ETH_RX_OFFLOAD_VLAN_EXTEND      0x00000400
+#define DEV_RX_OFFLOAD_VLAN_EXTEND          RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
+#define RTE_ETH_RX_OFFLOAD_SCATTER          0x00002000
+#define DEV_RX_OFFLOAD_SCATTER              RTE_ETH_RX_OFFLOAD_SCATTER
 /**
  * Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
  * and RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME is set in ol_flags.
  * The mbuf field and flag are registered when the offload is configured.
  */
-#define DEV_RX_OFFLOAD_TIMESTAMP	0x00004000
-#define DEV_RX_OFFLOAD_SECURITY         0x00008000
-#define DEV_RX_OFFLOAD_KEEP_CRC		0x00010000
-#define DEV_RX_OFFLOAD_SCTP_CKSUM	0x00020000
-#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
-#define DEV_RX_OFFLOAD_RSS_HASH		0x00080000
-#define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
-
-#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
-				 DEV_RX_OFFLOAD_UDP_CKSUM | \
-				 DEV_RX_OFFLOAD_TCP_CKSUM)
-#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
-			     DEV_RX_OFFLOAD_VLAN_FILTER | \
-			     DEV_RX_OFFLOAD_VLAN_EXTEND | \
-			     DEV_RX_OFFLOAD_QINQ_STRIP)
+#define RTE_ETH_RX_OFFLOAD_TIMESTAMP        0x00004000
+#define DEV_RX_OFFLOAD_TIMESTAMP            RTE_ETH_RX_OFFLOAD_TIMESTAMP
+#define RTE_ETH_RX_OFFLOAD_SECURITY         0x00008000
+#define DEV_RX_OFFLOAD_SECURITY             RTE_ETH_RX_OFFLOAD_SECURITY
+#define RTE_ETH_RX_OFFLOAD_KEEP_CRC         0x00010000
+#define DEV_RX_OFFLOAD_KEEP_CRC             RTE_ETH_RX_OFFLOAD_KEEP_CRC
+#define RTE_ETH_RX_OFFLOAD_SCTP_CKSUM       0x00020000
+#define DEV_RX_OFFLOAD_SCTP_CKSUM           RTE_ETH_RX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
+#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM      RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_RSS_HASH         0x00080000
+#define DEV_RX_OFFLOAD_RSS_HASH             RTE_ETH_RX_OFFLOAD_RSS_HASH
+#define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT     0x00100000
+
+#define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+				 RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+				 RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_CHECKSUM	RTE_ETH_RX_OFFLOAD_CHECKSUM
+#define RTE_ETH_RX_OFFLOAD_VLAN (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+			     RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+			     RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+			     RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+#define DEV_RX_OFFLOAD_VLAN	RTE_ETH_RX_OFFLOAD_VLAN
 
 /*
  * If new Rx offload capabilities are defined, they also must be
@@ -1396,54 +1629,76 @@ struct rte_eth_conf {
 /**
  * Tx offload capabilities of a device.
  */
-#define DEV_TX_OFFLOAD_VLAN_INSERT 0x00000001
-#define DEV_TX_OFFLOAD_IPV4_CKSUM  0x00000002
-#define DEV_TX_OFFLOAD_UDP_CKSUM   0x00000004
-#define DEV_TX_OFFLOAD_TCP_CKSUM   0x00000008
-#define DEV_TX_OFFLOAD_SCTP_CKSUM  0x00000010
-#define DEV_TX_OFFLOAD_TCP_TSO     0x00000020
-#define DEV_TX_OFFLOAD_UDP_TSO     0x00000040
-#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_QINQ_INSERT 0x00000100
-#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO    0x00000200    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GRE_TNL_TSO      0x00000400    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_IPIP_TNL_TSO     0x00000800    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO   0x00001000    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_MACSEC_INSERT    0x00002000
+#define RTE_ETH_TX_OFFLOAD_VLAN_INSERT      0x00000001
+#define DEV_TX_OFFLOAD_VLAN_INSERT          RTE_ETH_TX_OFFLOAD_VLAN_INSERT
+#define RTE_ETH_TX_OFFLOAD_IPV4_CKSUM       0x00000002
+#define DEV_TX_OFFLOAD_IPV4_CKSUM           RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_UDP_CKSUM        0x00000004
+#define DEV_TX_OFFLOAD_UDP_CKSUM            RTE_ETH_TX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_CKSUM        0x00000008
+#define DEV_TX_OFFLOAD_TCP_CKSUM            RTE_ETH_TX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_SCTP_CKSUM       0x00000010
+#define DEV_TX_OFFLOAD_SCTP_CKSUM           RTE_ETH_TX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_TSO          0x00000020
+#define DEV_TX_OFFLOAD_TCP_TSO              RTE_ETH_TX_OFFLOAD_TCP_TSO
+#define RTE_ETH_TX_OFFLOAD_UDP_TSO          0x00000040
+#define DEV_TX_OFFLOAD_UDP_TSO              RTE_ETH_TX_OFFLOAD_UDP_TSO
+#define RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_QINQ_INSERT      0x00000100
+#define DEV_TX_OFFLOAD_QINQ_INSERT          RTE_ETH_TX_OFFLOAD_QINQ_INSERT
+#define RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO    0x00000200    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO        RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO      0x00000400    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GRE_TNL_TSO          RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO     0x00000800    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_IPIP_TNL_TSO         RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO   0x00001000    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO       RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_MACSEC_INSERT    0x00002000
+#define DEV_TX_OFFLOAD_MACSEC_INSERT        RTE_ETH_TX_OFFLOAD_MACSEC_INSERT
 /**
  * Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
  * Tx queue without SW lock.
  */
-#define DEV_TX_OFFLOAD_MT_LOCKFREE      0x00004000
+#define RTE_ETH_TX_OFFLOAD_MT_LOCKFREE      0x00004000
+#define DEV_TX_OFFLOAD_MT_LOCKFREE          RTE_ETH_TX_OFFLOAD_MT_LOCKFREE
 /** Device supports multi segment send. */
-#define DEV_TX_OFFLOAD_MULTI_SEGS	0x00008000
+#define RTE_ETH_TX_OFFLOAD_MULTI_SEGS       0x00008000
+#define DEV_TX_OFFLOAD_MULTI_SEGS           RTE_ETH_TX_OFFLOAD_MULTI_SEGS
 /**
  * Device supports optimization for fast release of mbufs.
  * When set application must guarantee that per-queue all mbufs comes from
  * the same mempool and has refcnt = 1.
  */
-#define DEV_TX_OFFLOAD_MBUF_FAST_FREE	0x00010000
-#define DEV_TX_OFFLOAD_SECURITY         0x00020000
+#define RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE   0x00010000
+#define DEV_TX_OFFLOAD_MBUF_FAST_FREE       RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
+#define RTE_ETH_TX_OFFLOAD_SECURITY         0x00020000
+#define DEV_TX_OFFLOAD_SECURITY             RTE_ETH_TX_OFFLOAD_SECURITY
 /**
  * Device supports generic UDP tunneled packet TSO.
  * Application must set PKT_TX_TUNNEL_UDP and other mbuf fields required
  * for tunnel TSO.
  */
-#define DEV_TX_OFFLOAD_UDP_TNL_TSO      0x00040000
+#define RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO      0x00040000
+#define DEV_TX_OFFLOAD_UDP_TNL_TSO          RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO
 /**
  * Device supports generic IP tunneled packet TSO.
  * Application must set PKT_TX_TUNNEL_IP and other mbuf fields required
  * for tunnel TSO.
  */
-#define DEV_TX_OFFLOAD_IP_TNL_TSO       0x00080000
+#define RTE_ETH_TX_OFFLOAD_IP_TNL_TSO       0x00080000
+#define DEV_TX_OFFLOAD_IP_TNL_TSO           RTE_ETH_TX_OFFLOAD_IP_TNL_TSO
 /** Device supports outer UDP checksum */
-#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM  0x00100000
+#define RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM  0x00100000
+#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM      RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM
 /**
  * Device sends on time read from RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
  * if RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME is set in ol_flags.
  * The mbuf field and flag are registered when the offload is configured.
  */
-#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP     RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP
 /*
  * If new Tx offload capabilities are defined, they also must be
  * mentioned in rte_tx_offload_names in rte_ethdev.c file.
@@ -1493,7 +1748,7 @@ struct rte_eth_dev_portconf {
  * Default values for switch domain ID when ethdev does not support switch
  * domain definitions.
  */
-#define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID	(UINT16_MAX)
+#define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID   (UINT16_MAX)
 
 /**
  * Ethernet device associated switch information
@@ -1591,7 +1846,7 @@ struct rte_eth_dev_info {
 	uint16_t vmdq_pool_base;  /**< First ID of VMDq pools. */
 	struct rte_eth_desc_lim rx_desc_lim;  /**< Rx descriptors limits */
 	struct rte_eth_desc_lim tx_desc_lim;  /**< Tx descriptors limits */
-	uint32_t speed_capa;  /**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+	uint32_t speed_capa;  /**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
 	/** Configured number of Rx/Tx queues */
 	uint16_t nb_rx_queues; /**< Number of Rx queues. */
 	uint16_t nb_tx_queues; /**< Number of Tx queues. */
@@ -1695,8 +1950,10 @@ struct rte_eth_xstat_name {
 	char name[RTE_ETH_XSTATS_NAME_SIZE]; /**< The statistic name. */
 };
 
-#define ETH_DCB_NUM_TCS    8
-#define ETH_MAX_VMDQ_POOL  64
+#define RTE_ETH_DCB_NUM_TCS    8
+#define ETH_DCB_NUM_TCS        RTE_ETH_DCB_NUM_TCS
+#define RTE_ETH_MAX_VMDQ_POOL  64
+#define ETH_MAX_VMDQ_POOL      RTE_ETH_MAX_VMDQ_POOL
 
 /**
  * A structure used to get the information of queue and
@@ -1707,12 +1964,12 @@ struct rte_eth_dcb_tc_queue_mapping {
 	struct {
 		uint16_t base;
 		uint16_t nb_queue;
-	} tc_rxq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+	} tc_rxq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
 	/** Rx queues assigned to tc per Pool */
 	struct {
 		uint16_t base;
 		uint16_t nb_queue;
-	} tc_txq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+	} tc_txq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
 };
 
 /**
@@ -1721,8 +1978,8 @@ struct rte_eth_dcb_tc_queue_mapping {
  */
 struct rte_eth_dcb_info {
 	uint8_t nb_tcs;        /**< number of TCs */
-	uint8_t prio_tc[ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
-	uint8_t tc_bws[ETH_DCB_NUM_TCS]; /**< Tx BW percentage for each TC */
+	uint8_t prio_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
+	uint8_t tc_bws[RTE_ETH_DCB_NUM_TCS]; /**< Tx BW percentage for each TC */
 	/** Rx queues assigned to tc */
 	struct rte_eth_dcb_tc_queue_mapping tc_queue;
 };
@@ -1746,7 +2003,7 @@ enum rte_eth_fec_mode {
 
 /* A structure used to get capabilities per link speed */
 struct rte_eth_fec_capa {
-	uint32_t speed; /**< Link speed (see ETH_SPEED_NUM_*) */
+	uint32_t speed; /**< Link speed (see RTE_ETH_SPEED_NUM_*) */
 	uint32_t capa;  /**< FEC capabilities bitmask */
 };
 
@@ -1769,13 +2026,17 @@ struct rte_eth_fec_capa {
 
 /**@{@name L2 tunnel configuration */
 /** L2 tunnel enable mask */
-#define ETH_L2_TUNNEL_ENABLE_MASK       0x00000001
+#define RTE_ETH_L2_TUNNEL_ENABLE_MASK       0x00000001
+#define ETH_L2_TUNNEL_ENABLE_MASK           RTE_ETH_L2_TUNNEL_ENABLE_MASK
 /** L2 tunnel insertion mask */
-#define ETH_L2_TUNNEL_INSERTION_MASK    0x00000002
+#define RTE_ETH_L2_TUNNEL_INSERTION_MASK    0x00000002
+#define ETH_L2_TUNNEL_INSERTION_MASK        RTE_ETH_L2_TUNNEL_INSERTION_MASK
 /** L2 tunnel stripping mask */
-#define ETH_L2_TUNNEL_STRIPPING_MASK    0x00000004
+#define RTE_ETH_L2_TUNNEL_STRIPPING_MASK    0x00000004
+#define ETH_L2_TUNNEL_STRIPPING_MASK        RTE_ETH_L2_TUNNEL_STRIPPING_MASK
 /** L2 tunnel forwarding mask */
-#define ETH_L2_TUNNEL_FORWARDING_MASK   0x00000008
+#define RTE_ETH_L2_TUNNEL_FORWARDING_MASK   0x00000008
+#define ETH_L2_TUNNEL_FORWARDING_MASK       RTE_ETH_L2_TUNNEL_FORWARDING_MASK
 /**@}*/
 
 /**
@@ -2086,14 +2347,14 @@ uint16_t rte_eth_dev_count_total(void);
  * @param speed
  *   Numerical speed value in Mbps
  * @param duplex
- *   ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
+ *   RTE_ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
  * @return
  *   0 if the speed cannot be mapped
  */
 uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
 
 /**
- * Get DEV_RX_OFFLOAD_* flag name.
+ * Get RTE_ETH_RX_OFFLOAD_* flag name.
  *
  * @param offload
  *   Offload flag.
@@ -2103,7 +2364,7 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
 const char *rte_eth_dev_rx_offload_name(uint64_t offload);
 
 /**
- * Get DEV_TX_OFFLOAD_* flag name.
+ * Get RTE_ETH_TX_OFFLOAD_* flag name.
  *
  * @param offload
  *   Offload flag.
@@ -2211,7 +2472,7 @@ rte_eth_dev_is_removed(uint16_t port_id);
  *   of the Prefetch, Host, and Write-Back threshold registers of the receive
  *   ring.
  *   In addition it contains the hardware offloads features to activate using
- *   the DEV_RX_OFFLOAD_* flags.
+ *   the RTE_ETH_RX_OFFLOAD_* flags.
  *   If an offloading set in rx_conf->offloads
  *   hasn't been set in the input argument eth_conf->rxmode.offloads
  *   to rte_eth_dev_configure(), it is a new added offloading, it must be
@@ -2788,7 +3049,7 @@ const char *rte_eth_link_speed_to_str(uint32_t link_speed);
  *
  * @param str
  *   A pointer to a string to be filled with textual representation of
- *   device status. At least ETH_LINK_MAX_STR_LEN bytes should be allocated to
+ *   device status. At least RTE_ETH_LINK_MAX_STR_LEN bytes should be allocated to
  *   store default link status text.
  * @param len
  *   Length of available memory at 'str' string.
@@ -3334,10 +3595,10 @@ int rte_eth_dev_set_vlan_ether_type(uint16_t port_id,
  *   The port identifier of the Ethernet device.
  * @param offload_mask
  *   The VLAN Offload bit mask can be mixed use with "OR"
- *       ETH_VLAN_STRIP_OFFLOAD
- *       ETH_VLAN_FILTER_OFFLOAD
- *       ETH_VLAN_EXTEND_OFFLOAD
- *       ETH_QINQ_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_FILTER_OFFLOAD
+ *       RTE_ETH_VLAN_EXTEND_OFFLOAD
+ *       RTE_ETH_QINQ_STRIP_OFFLOAD
  * @return
  *   - (0) if successful.
  *   - (-ENOTSUP) if hardware-assisted VLAN filtering not configured.
@@ -3353,10 +3614,10 @@ int rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask);
  *   The port identifier of the Ethernet device.
  * @return
  *   - (>0) if successful. Bit mask to indicate
- *       ETH_VLAN_STRIP_OFFLOAD
- *       ETH_VLAN_FILTER_OFFLOAD
- *       ETH_VLAN_EXTEND_OFFLOAD
- *       ETH_QINQ_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_FILTER_OFFLOAD
+ *       RTE_ETH_VLAN_EXTEND_OFFLOAD
+ *       RTE_ETH_QINQ_STRIP_OFFLOAD
  *   - (-ENODEV) if *port_id* invalid.
  */
 int rte_eth_dev_get_vlan_offload(uint16_t port_id);
@@ -5382,7 +5643,7 @@ uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
  * rte_eth_tx_burst() function must [attempt to] free the *rte_mbuf*  buffers
  * of those packets whose transmission was effectively completed.
  *
- * If the PMD is DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+ * If the PMD is RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
  * invoke this function concurrently on the same Tx queue without SW lock.
  * @see rte_eth_dev_info_get, struct rte_eth_txconf::offloads
  *
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index db3392bf9759..59d9d9eeb63f 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2957,7 +2957,7 @@ struct rte_flow_action_rss {
 	 * through.
 	 */
 	uint32_t level;
-	uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+	uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
 	uint32_t key_len; /**< Hash key length in bytes. */
 	uint32_t queue_num; /**< Number of entries in @p queue. */
 	const uint8_t *key; /**< Hash key. */
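
To make the scale of the rename concrete, here is a minimal application-side
sketch using the new names (illustrative only, not part of the patch;
port_id is assumed to refer to an already initialized port):

	#include <rte_ethdev.h>

	struct rte_eth_rss_conf rss_conf = {
		.rss_key = NULL, /* keep the driver's default hash key */
		.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP, /* was ETH_RSS_IP | ETH_RSS_TCP */
	};

	/* The legacy ETH_RSS_* spellings still compile thanks to the
	 * compatibility aliases added above, so applications can migrate
	 * incrementally.
	 */
	int ret = rte_eth_dev_rss_hash_update(port_id, &rss_conf);
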
diff --git a/lib/gso/rte_gso.c b/lib/gso/rte_gso.c
index 0d02ec3cee05..119fdcac0b7f 100644
--- a/lib/gso/rte_gso.c
+++ b/lib/gso/rte_gso.c
@@ -15,13 +15,13 @@
 #include "gso_udp4.h"
 
 #define ILLEGAL_UDP_GSO_CTX(ctx) \
-	((((ctx)->gso_types & DEV_TX_OFFLOAD_UDP_TSO) == 0) || \
+	((((ctx)->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO) == 0) || \
 	 (ctx)->gso_size < RTE_GSO_UDP_SEG_SIZE_MIN)
 
 #define ILLEGAL_TCP_GSO_CTX(ctx) \
-	((((ctx)->gso_types & (DEV_TX_OFFLOAD_TCP_TSO | \
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-		DEV_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
+	((((ctx)->gso_types & (RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
 		(ctx)->gso_size < RTE_GSO_SEG_SIZE_MIN)
 
 int
@@ -54,28 +54,28 @@ rte_gso_segment(struct rte_mbuf *pkt,
 	ol_flags = pkt->ol_flags;
 
 	if ((IS_IPV4_VXLAN_TCP4(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
 			((IS_IPV4_GRE_TCP4(pkt->ol_flags) &&
-			 (gso_ctx->gso_types & DEV_TX_OFFLOAD_GRE_TNL_TSO)))) {
+			 (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))) {
 		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
 		ret = gso_tunnel_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_VXLAN_UDP4(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) &&
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
 		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
 		ret = gso_tunnel_udp4_segment(pkt, gso_size,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_TCP(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_TCP_TSO)) {
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
 		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
 		ret = gso_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_UDP(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
 		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
 		ret = gso_udp4_segment(pkt, gso_size, direct_pool,
 				indirect_pool, pkts_out, nb_pkts_out);
diff --git a/lib/gso/rte_gso.h b/lib/gso/rte_gso.h
index d93ee8e5b171..0a65afc11e64 100644
--- a/lib/gso/rte_gso.h
+++ b/lib/gso/rte_gso.h
@@ -52,11 +52,11 @@ struct rte_gso_ctx {
 	uint32_t gso_types;
 	/**< the bit mask of required GSO types. The GSO library
 	 * uses the same macros as that of describing device TX
-	 * offloading capabilities (i.e. DEV_TX_OFFLOAD_*_TSO) for
+	 * offloading capabilities (i.e. RTE_ETH_TX_OFFLOAD_*_TSO) for
 	 * gso_types.
 	 *
 	 * For example, if applications want to segment TCP/IPv4
-	 * packets, set DEV_TX_OFFLOAD_TCP_TSO in gso_types.
+	 * packets, set RTE_ETH_TX_OFFLOAD_TCP_TSO in gso_types.
 	 */
 	uint16_t gso_size;
 	/**< maximum size of an output GSO segment, including packet
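
Following the doc comment above, a rough sketch of a GSO context set up
with the renamed flags (mempool creation and error handling omitted;
direct_pool and indirect_pool are assumed to be pre-created mbuf pools):

	struct rte_gso_ctx gso_ctx = {
		.direct_pool = direct_pool,
		.indirect_pool = indirect_pool,
		.flag = 0,
		.gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO |
			     RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO,
		.gso_size = 1400, /* maximum size of each output segment */
	};

	/* Segments pkt into pkts_out[]; returns the number of segments. */
	int ret = rte_gso_segment(pkt, &gso_ctx, pkts_out, RTE_DIM(pkts_out));
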
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index fdaaaf67f2f3..57e871201816 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -185,7 +185,7 @@ extern "C" {
  * The detection of PKT_RX_OUTER_L4_CKSUM_GOOD shall be based on the given
  * HW capability, At minimum, the PMD should support
  * PKT_RX_OUTER_L4_CKSUM_UNKNOWN and PKT_RX_OUTER_L4_CKSUM_BAD states
- * if the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
+ * if the RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
  */
 #define PKT_RX_OUTER_L4_CKSUM_MASK	((1ULL << 21) | (1ULL << 22))
 
@@ -208,7 +208,7 @@ extern "C" {
  * a) Fill outer_l2_len and outer_l3_len in mbuf.
  * b) Set the PKT_TX_OUTER_UDP_CKSUM flag.
  * c) Set the PKT_TX_OUTER_IPV4 or PKT_TX_OUTER_IPV6 flag.
- * 2) Configure DEV_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
+ * 2) Configure RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
  */
 #define PKT_TX_OUTER_UDP_CKSUM     (1ULL << 41)
 
@@ -254,7 +254,7 @@ extern "C" {
  * It can be used for tunnels which are not standards or listed above.
  * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_GRE
  * or PKT_TX_TUNNEL_IPIP if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_IP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_IP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
  * PKT_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
@@ -267,7 +267,7 @@ extern "C" {
  * It can be used for tunnels which are not standards or listed above.
  * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_VXLAN
  * if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_UDP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
  * PKT_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
diff --git a/lib/mbuf/rte_mbuf_dyn.h b/lib/mbuf/rte_mbuf_dyn.h
index fb03cf1dcf90..29abe8da53cf 100644
--- a/lib/mbuf/rte_mbuf_dyn.h
+++ b/lib/mbuf/rte_mbuf_dyn.h
@@ -37,7 +37,7 @@
  *   of the dynamic field to be registered:
  *   const struct rte_mbuf_dynfield rte_dynfield_my_feature = { ... };
  * - The application initializes the PMD, and asks for this feature
- *   at port initialization by passing DEV_RX_OFFLOAD_MY_FEATURE in
+ *   at port initialization by passing RTE_ETH_RX_OFFLOAD_MY_FEATURE in
  *   rxconf. This will make the PMD to register the field by calling
  *   rte_mbuf_dynfield_register(&rte_dynfield_my_feature). The PMD
  *   stores the returned offset.
-- 
2.31.1
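
For reference, a minimal usage sketch for the renamed macros in the hunks above (the values are illustrative only; real code must also create and assign the direct/indirect mbuf pools):

	struct rte_gso_ctx gso_ctx = {
		.direct_pool = NULL,   /* assign a real mbuf pool */
		.indirect_pool = NULL, /* assign a real mbuf pool */
		.gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO |
			     RTE_ETH_TX_OFFLOAD_UDP_TSO,
		.gso_size = 1400,      /* max size of an output GSO segment */
	};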


^ permalink raw reply	[relevance 1%]

* Re: [dpdk-dev] [PATCH] lpm: fix buffer overflow
  2021-10-20 19:55  3% ` David Marchand
@ 2021-10-21 17:15  0%   ` Medvedkin, Vladimir
  0 siblings, 0 replies; 200+ results
From: Medvedkin, Vladimir @ 2021-10-21 17:15 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Bruce Richardson, alex, dpdk stable

Hi David,

On 20/10/2021 21:55, David Marchand wrote:
> Hello Vladimir,
> 
> On Fri, Oct 8, 2021 at 11:29 PM Vladimir Medvedkin
> <vladimir.medvedkin@intel.com> wrote:
>>
>> This patch fixes buffer overflow reported by ASAN,
>> please reference https://bugs.dpdk.org/show_bug.cgi?id=819
>>
>> The rte_lpm6 keeps routing information for control plane purposes
>> inside the rte_hash table which uses rte_jhash() as a hash function.
>>  From the rte_jhash() documentation: If input key is not aligned to
>> four byte boundaries or a multiple of four bytes in length,
>> the memory region just after may be read (but not used in the
>> computation).
>> rte_lpm6 uses 17 bytes keys consisting of IPv6 address (16 bytes) +
>> depth (1 byte).
>>
>> This patch increases the size of the depth field up to uint32_t
>> and sets the alignment to 4 bytes.
>>
>> Bugzilla ID: 819
>> Fixes: 86b3b21952a8 ("lpm6: store rules in hash table")
>> Cc: alex@therouter.net
>> Cc: stable@dpdk.org
> 
> This change should be internal, and not breaking ABI, but are we sure
> we want to backport it?
> 

I think yes, I don't see any reason why we should not backport it.
Do you think we should not?

> 
>>
>> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
>> ---
>>   lib/lpm/rte_lpm6.c | 4 ++--
>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/lib/lpm/rte_lpm6.c b/lib/lpm/rte_lpm6.c
>> index 37baabb..d5e0918 100644
>> --- a/lib/lpm/rte_lpm6.c
>> +++ b/lib/lpm/rte_lpm6.c
>> @@ -80,8 +80,8 @@ struct rte_lpm6_rule {
>>   /** Rules tbl entry key. */
>>   struct rte_lpm6_rule_key {
>>          uint8_t ip[RTE_LPM6_IPV6_ADDR_SIZE]; /**< Rule IP address. */
>> -       uint8_t depth; /**< Rule depth. */
>> -};
>> +       uint32_t depth; /**< Rule depth. */
>> +} __rte_aligned(sizeof(uint32_t));
> 
> I would recommend doing the same as for hash tests: keep growing
> depth to 32 bits, but no enforcement of alignment, and add a build check
> on the structure size being sizeof(uint32_t) aligned.
> 

Agree, will send v2
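
For illustration, a sketch of what such a v2 could look like (not the actual patch): keep the wider depth field, drop the explicit alignment attribute, and let a build-time check guarantee that the key size stays a multiple of 4 bytes, so rte_jhash() never reads past the key:

	struct rte_lpm6_rule_key {
		uint8_t ip[RTE_LPM6_IPV6_ADDR_SIZE]; /**< Rule IP address. */
		uint32_t depth; /**< Rule depth. */
	};

	/* placed inside a function, e.g. rte_lpm6_create() */
	RTE_BUILD_BUG_ON(sizeof(struct rte_lpm6_rule_key) % sizeof(uint32_t) != 0);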

> 
>>
>>   /* Header of tbl8 */
>>   struct rte_lpm_tbl8_hdr {
>> --
>> 2.7.4
>>
> 
> 

-- 
Regards,
Vladimir

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v15 06/12] pdump: support pcapng and filtering
  2021-10-20 21:42  1%   ` [dpdk-dev] [PATCH v15 06/12] pdump: support pcapng and filtering Stephen Hemminger
@ 2021-10-21 14:16  0%     ` Kinsella, Ray
  2021-10-27  6:34  0%     ` Wang, Yinan
  1 sibling, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-10-21 14:16 UTC (permalink / raw)
  To: Stephen Hemminger, dev; +Cc: Reshma Pattan, Anatoly Burakov



On 20/10/2021 22:42, Stephen Hemminger wrote:
> This enhances the DPDK pdump library to support the new
> pcapng format and filtering via BPF.
> 
> The internal client/server protocol is changed to support
> two versions: the original pdump basic version and a
> new pcapng version.
> 
> The internal version number (not part of the exposed API or ABI)
> is intentionally increased to cause any attempt to mix
> mismatched primary/secondary processes to fail.
> 
> Add new API to allow filtering of captured packets with a
> DPDK BPF (eBPF) filter program. It keeps statistics
> on packets captured, filtered, and missed (because the ring was full).
> 
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> Acked-by: Reshma Pattan <reshma.pattan@intel.com>
> ---
>   lib/meson.build       |   4 +-
>   lib/pdump/meson.build |   2 +-
>   lib/pdump/rte_pdump.c | 432 ++++++++++++++++++++++++++++++------------
>   lib/pdump/rte_pdump.h | 113 ++++++++++-
>   lib/pdump/version.map |   8 +
>   5 files changed, 433 insertions(+), 126 deletions(-)
> 

Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v7 0/2] Support IOMMU for DMA device
  @ 2021-10-21 12:33  0%   ` Maxime Coquelin
  0 siblings, 0 replies; 200+ results
From: Maxime Coquelin @ 2021-10-21 12:33 UTC (permalink / raw)
  To: Xuan Ding, dev, anatoly.burakov, chenbo.xia
  Cc: jiayu.hu, cheng1.jiang, bruce.richardson, sunil.pai.g,
	yinan.wang, yvonnex.yang



On 10/11/21 09:59, Xuan Ding wrote:
> This series supports DMA device to use vfio in async vhost.
> 
> The first patch extends the capability of current vfio dma mapping
> API to allow partial unmapping for adjacent memory if the platform
> does not support partial unmapping. The second patch involves the
> IOMMU programming for guest memory in async vhost.
> 
> v7:
> * Fix an operator error.
> 
> v6:
> * Fix a potential memory leak.
> 
> v5:
> * Fix issue of a pointer being freed early.
> 
> v4:
> * Fix a format issue.
> 
> v3:
> * Move the async_map_status flag to the virtio_net structure to avoid
> ABI breakage.
> 
> v2:
> * Add rte_errno filtering for some devices bound in the kernel driver.
> * Add a flag to check the status of region mapping.
> * Fix one typo.
> 
> Xuan Ding (2):
>    vfio: allow partially unmapping adjacent memory
>    vhost: enable IOMMU for async vhost
> 
>   lib/eal/linux/eal_vfio.c | 338 ++++++++++++++++++++++++++-------------
>   lib/vhost/vhost.h        |   4 +
>   lib/vhost/vhost_user.c   | 116 +++++++++++++-
>   3 files changed, 346 insertions(+), 112 deletions(-)
> 


Applied to dpdk-next-virtio/main.

Thanks,
Maxime


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [EXT] Re: [PATCH v3 2/7] eal/interrupts: implement get set APIs
  2021-10-21  9:16  0%           ` Harman Kalra
@ 2021-10-21 12:33  0%             ` Dmitry Kozlyuk
  0 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-21 12:33 UTC (permalink / raw)
  To: Harman Kalra
  Cc: Stephen Hemminger, Thomas Monjalon, david.marchand, dev, Ray Kinsella

2021-10-21 09:16 (UTC+0000), Harman Kalra:
> > -----Original Message-----
> > From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> > Sent: Wednesday, October 20, 2021 9:01 PM
> > To: Harman Kalra <hkalra@marvell.com>
> > Cc: Stephen Hemminger <stephen@networkplumber.org>; Thomas
> > Monjalon <thomas@monjalon.net>; david.marchand@redhat.com;
> > dev@dpdk.org; Ray Kinsella <mdr@ashroe.eu>
> > Subject: Re: [EXT] Re: [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement
> > get set APIs
> >   
> > > >  
> > > > > +	/* Detect if DPDK malloc APIs are ready to be used. */
> > > > > +	mem_allocator = rte_malloc_is_ready();
> > > > > +	if (mem_allocator)
> > > > > +		intr_handle = rte_zmalloc(NULL, sizeof(struct  
> > > > rte_intr_handle),  
> > > > > +					  0);
> > > > > +	else
> > > > > +		intr_handle = calloc(1, sizeof(struct rte_intr_handle));  
> > > >
> > > > This is a problematic way to do this.
> > > > The reason to use rte_malloc vs malloc should be determined by usage.
> > > >
> > > > If the pointer will be shared between primary/secondary process then
> > > > it has to be in hugepages (ie rte_malloc). If it is not shared
> > > > then use regular malloc.
> > > >
> > > > But what you have done is created a method which will be a latent
> > > > bug for anyone using primary/secondary process.
> > > >
> > > > Either:
> > > >     intr_handle is not allowed to be used in secondary.
> > > >       Then always use malloc().
> > > > Or.
> > > >     intr_handle can be used by both primary and secondary.
> > > >     Then always use rte_malloc().
> > > >     Any code path that allocates intr_handle before pool is
> > > >     ready is broken.  
> > >
> > > Hi Stephen,
> > >
> > > Till V2, I implemented this API in a way where the user of the API can
> > > choose if he wants the intr handle to be allocated using malloc or
> > > rte_malloc, by passing a flag arg to the rte_intr_instance_alloc API.
> > > The user of the API will best know if the intr handle is to be shared with
> > > a secondary or not.
> > >
> > > But after some discussions and suggestions from the community we
> > > decided to drop that flag argument and auto-detect whether the
> > > rte_malloc APIs are ready to be used, and thereafter make all further
> > > allocations via rte_malloc.
> > > Currently the alarm subsystem (or any driver doing allocation in a
> > > constructor) gets its interrupt instance allocated using glibc malloc, that
> > > too because rte_malloc* is not ready by rte_eal_alarm_init(), while
> > > all further consumers get their instances allocated via rte_malloc.
> > 
> > Just as a comment, bus scanning is the real issue, not the alarms.
> > Alarms could be initialized after the memory management (but it's irrelevant
> > because their handle is not accessed from the outside).
> > However, MM needs to know bus IOVA requirements to initialize, which is
> > usually determined by at least bus device requirements.
> >   
> > >  I think this should not cause any issue in the primary/secondary model as
> > > all interrupt instance pointers will be shared.
> > 
> > What do you mean? Aren't we discussing the issue that those allocated early
> > are not shared?
> >   
> > > In fact, to avoid any surprises of primary/secondary not working, we
> > > thought of making all allocations via rte_malloc.
> > 
> > I don't see why anyone would not make them shared.
> > In order to only use rte_malloc(), we need:
> > 1. In bus drivers, move handle allocation from scan to probe stage.
> > 2. In EAL, move alarm initialization to after the MM.
> > It all can be done later with v3 design---but there are out-of-tree drivers.
> > We need to force them to make step 1 at some point.
> > I see two options:
> > a) Right now have an external API that only works with rte_malloc()
> >    and internal API with autodetection. Fix DPDK and drop internal API.
> > b) Have external API with autodetection. Fix DPDK.
> >    At the next ABI breakage drop autodetection and libc-malloc.
> >   
> > > David, Thomas, Dmitry, please add if I missed anything.
> > >
> > > Can we please conclude on this series' APIs, as the API freeze deadline (rc1) is
> > > very near.
> > 
> > I support v3 design with no options and autodetection, because that's the
> > interface we want in the end.
> > Implementation can be improved later.  
> 
> Hi All,
> 
> I came across 2 issues introduced with the auto-detection mechanism.
> 1. In the primary/secondary model: a primary application is started which makes lots of allocations via
> rte_malloc*.
>
>     Secondary side:
>     a. The secondary starts; in its "rte_eal_init()" it makes some allocations via rte_*, and in one of them a
> request for heap expansion is made as the current memseg got exhausted. (malloc_heap_alloc_on_heap_id()->
>    alloc_more_mem_on_socket()->try_expand_heap())
>    b. A request to the primary for heap expansion is sent. Please note the secondary holds the spinlock while making
> the request. (malloc_heap_alloc_on_heap_id()->rte_spinlock_lock(&(heap->lock));)
>
>    Primary side:
>    a. The primary receives the request, installs a new hugepage and sets up the heap (handle_alloc_request())
>    b. To inform all the secondaries about the new memseg, the primary sends a sync notice where it sets up an
> alarm (rte_mp_request_async()->mp_request_async()).
>    c. Inside the alarm setup API, we register an interrupt callback.
>    d. Inside rte_intr_callback_register(), a new interrupt instance allocation is requested for "src->intr_handle"
>    e. Since memory management is detected as up, inside "rte_intr_instance_alloc()" a call to "rte_zmalloc" is made for
> allocating memory, and further inside "malloc_heap_alloc_on_heap_id()" the primary will experience a deadlock
> while taking the spinlock, because this spinlock is already held by the secondary.
>
>
> 2. "eal_flags_file_prefix_autotest" is failing because the processes spawned by this test are expected to clean up
> their hugepage traces from the respective directories (e.g. /dev/hugepage).
> a. Inside eal_cleanup, rte_free()->malloc_heap_free(), the element to be freed is added to the free list and it is
> checked whether nearby elements can be joined together to form a big free chunk (malloc_elem_free()).
> b. If this free chunk is at least as big as the hugepage size, the respective hugepage can be uninstalled after making
> sure no allocation from this hugepage exists. (malloc_heap_free()->malloc_heap_free_pages()->eal_memalloc_free_seg())
>
> But because the interrupt allocations made for PCI intr handles (used for VFIO) and other driver-specific interrupt
> handles are not cleaned up in "rte_eal_cleanup()", these hugepage files are not removed and the test fails.

Sad to hear. But it's a great and thorough analysis.

> There could be more such issues; I think we should first fix DPDK:
> 1. Memory management should be made independent and should be the first thing to come up in rte_eal_init().

As I have explained, buses must be able to report IOVA requirement
at this point (`get_iommu_class()` bus method).
Either `scan()` must complete before that
or `get_iommu_class()` must be able to work before `scan()` is called.

> 2. rte_eal_cleanup() should be the exact opposite of rte_eal_init(); just like bus_probe, we should have a bus_remove
> to clean up all the memory allocations.

Yes. For most buses it will be just "unplug each device".
In fact, EAL could do it with `unplug()`, but it is not mandatory.

> 
> Regarding this IRQ series, I would like to fall back to our original design, i.e. rte_intr_instance_alloc() should take
> an argument saying whether its memory should be allocated using glibc malloc or rte_malloc*.

Seems there's no other option to make it on time.

> The decision for the allocation
> (malloc or rte_malloc) can be made based on whether, in the existing code, the interrupt handle is shared.
> E.g.  a. In the case of alarms, the intr_handle was a global entry not confined to any structure, so it can be allocated from
> normal malloc.
> b. A PCI device had a static entry for intr_handle inside "struct rte_pci_device", and memory for struct rte_pci_device is
> via normal malloc, so its intr_handle can also be malloc'ed.
> c. Some driver keeps its intr_handle inside its priv structure, and this priv structure gets allocated via rte_malloc, so
> its intr_handle can also be rte_malloc'ed.
>
> Later, once DPDK is fixed up, this argument can be removed and all allocations can go via the rte_malloc family without
> any auto-detection.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3 6/8] cryptodev: rework session framework
  2021-10-21  6:53  0%       ` Akhil Goyal
@ 2021-10-21 10:38  0%         ` Ananyev, Konstantin
  0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-10-21 10:38 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas, david.marchand, hemant.agrawal, Anoob Joseph,
	De Lara Guarch,  Pablo, Trahe, Fiona, Doherty, Declan, matan,
	g.singh, Zhang, Roy Fan, jianjay.zhou, asomalap, ruifeng.wang,
	Nicolau, Radu, ajit.khaparde, Nagadheeraj Rottela, Ankur Dwivedi,
	Power, Ciara, Wang, Haiyue, jiawenwu, jianwang


> > > As per current design, rte_cryptodev_sym_session_create() and
> > > rte_cryptodev_sym_session_init() use separate mempool objects
> > > for a single session.
> > > And structure rte_cryptodev_sym_session is not directly used
> > > by the application, it may cause ABI breakage if the structure
> > > is modified in future.
> > >
> > > To address these two issues, the rte_cryptodev_sym_session_create
> > > will take one mempool object for both the session and session
> > > private data. The API rte_cryptodev_sym_session_init will now not
> > > take mempool object.
> > > rte_cryptodev_sym_session_create will now return an opaque session
> > > pointer which will be used by the app in rte_cryptodev_sym_session_init
> > > and other APIs.
> > >
> > > With this change, rte_cryptodev_sym_session_init will send
> > > pointer to session private data of corresponding driver to the PMD
> > > based on the driver_id for filling the PMD data.
> > >
> > > In data path, opaque session pointer is attached to rte_crypto_op
> > > and the PMD can call an internal library API to get the session
> > > private data pointer based on the driver id.
> > >
> > > Note: currently nb_drivers are getting updated in RTE_INIT which
> > > result in increasing the memory requirements for session.
> > > User can compile off drivers which are not in use to reduce the
> > > memory consumption of a session.
> > >
> > > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > > ---
> >
> > With that patch, the ipsec-secgw functional tests crash for AES_GCM test-cases.
> > To be more specific:
> > examples/ipsec-secgw/test/run_test.sh -4 tun_aesgcm
> >
> > [24126592.561071] traps: dpdk-ipsec-secg[3254860] general protection fault
> > ip:7f3ac2397027 sp:7ffeaade8848 error:0 in
> > libIPSec_MB.so.1.0.0[7f3ac238f000+2a20000]
> >
> > Looking a bit deeper, it fails at:
> > #0  0x00007ff9274f4027 in aes_keyexp_128_enc_avx512 ()
> >    from /lib/libIPSec_MB.so.1
> > #1  0x00007ff929f0ac97 in aes_gcm_pre_128_avx_gen4 ()
> >    from /lib/libIPSec_MB.so.1
> > #2  0x0000561757073753 in aesni_gcm_session_configure
> > (mb_mgr=0x56175c5fe400,
> >     session=0x17e3b72d8, xform=0x17e05d7c0)
> >     at ../drivers/crypto/ipsec_mb/pmd_aesni_gcm.c:132
> > #3  0x00005617570592af in ipsec_mb_sym_session_configure (
> >     dev=0x56175be0c940 <rte_crypto_devices>, xform=0x17e05d7c0,
> >     sess=0x17e3b72d8) at ../drivers/crypto/ipsec_mb/ipsec_mb_ops.c:330
> > #4  0x0000561753b4d6ae in rte_cryptodev_sym_session_init (dev_id=0
> > '\000',
> >     sess_opaque=0x17e3b4940, xforms=0x17e05d7c0)
> >     at ../lib/cryptodev/rte_cryptodev.c:1736
> > #5  0x0000561752ef99b7 in create_lookaside_session (
> >     ipsec_ctx=0x56175aa6a210 <lcore_conf+1105232>, sa=0x17e05d140,
> >     ips=0x17e05d140) at ../examples/ipsec-secgw/ipsec.c:145
> > #6  0x0000561752f0cf98 in fill_ipsec_session (ss=0x17e05d140,
> >     ctx=0x56175aa6a210 <lcore_conf+1105232>, sa=0x17e05d140)
> >     at ../examples/ipsec-secgw/ipsec_process.c:89
> > #7  0x0000561752f0d7dd in ipsec_process (
> >     ctx=0x56175aa6a210 <lcore_conf+1105232>, trf=0x7ffd192326a0)
> >     at ../examples/ipsec-secgw/ipsec_process.c:300
> > #8  0x0000561752f21027 in process_pkts_outbound (
> > --Type <RET> for more, q to quit, c to continue without paging--
> >     ipsec_ctx=0x56175aa6a210 <lcore_conf+1105232>,
> > traffic=0x7ffd192326a0)
> >     at ../examples/ipsec-secgw/ipsec-secgw.c:839
> > #9  0x0000561752f21b2e in process_pkts (
> >     qconf=0x56175aa57340 <lcore_conf+1027712>, pkts=0x7ffd19233c20,
> >     nb_pkts=1 '\001', portid=1) at ../examples/ipsec-secgw/ipsec-secgw.c:1072
> > #10 0x0000561752f224db in ipsec_poll_mode_worker ()
> >     at ../examples/ipsec-secgw/ipsec-secgw.c:1262
> > #11 0x0000561752f38adc in ipsec_launch_one_lcore (args=0x56175c549700)
> >     at ../examples/ipsec-secgw/ipsec_worker.c:654
> > #12 0x0000561753cbc523 in rte_eal_mp_remote_launch (
> >     f=0x561752f38ab5 <ipsec_launch_one_lcore>, arg=0x56175c549700,
> >     call_main=CALL_MAIN) at ../lib/eal/common/eal_common_launch.c:64
> > #13 0x0000561752f265ed in main (argc=12, argv=0x7ffd19234168)
> >     at ../examples/ipsec-secgw/ipsec-secgw.c:2978
> > (gdb) frame 2
> > #2  0x0000561757073753 in aesni_gcm_session_configure
> > (mb_mgr=0x56175c5fe400,
> >     session=0x17e3b72d8, xform=0x17e05d7c0)
> >     at ../drivers/crypto/ipsec_mb/pmd_aesni_gcm.c:132
> > 132                     mb_mgr->gcm128_pre(key, &sess->gdata_key);
> >
> > Because of an unexpected unaligned memory access:
> > (gdb) disas
> > Dump of assembler code for function aes_keyexp_128_enc_avx512:
> >    0x00007ff9274f400b <+0>:     endbr64
> >    0x00007ff9274f400f <+4>:     cmp    $0x0,%rdi
> >    0x00007ff9274f4013 <+8>:     je     0x7ff9274f41b4
> > <aes_keyexp_128_enc_avx512+425>
> >    0x00007ff9274f4019 <+14>:    cmp    $0x0,%rsi
> >    0x00007ff9274f401d <+18>:    je     0x7ff9274f41b4
> > <aes_keyexp_128_enc_avx512+425>
> >    0x00007ff9274f4023 <+24>:    vmovdqu (%rdi),%xmm1
> > => 0x00007ff9274f4027 <+28>:    vmovdqa %xmm1,(%rsi)
> >
> > (gdb) print/x $rsi
> > $12 = 0x17e3b72e8
> >
> > And this is caused because now the AES_GCM session private data is not 16B
> > aligned anymore:
> > (gdb) print ((struct aesni_gcm_session *)sess->sess_data[index].data)
> > $29 = (struct aesni_gcm_session *) 0x17e3b72d8
> >
> > print &((struct aesni_gcm_session *)sess->sess_data[index].data)-
> > >gdata_key
> > $31 = (struct gcm_key_data *) 0x17e3b72e8
> >
> > As I understand the reason for that is that we changed the way how
> > sess_data[index].data
> > is populated. Now it is just:
> > sess->sess_data[index].data = (void *)((uint8_t *)sess +
> >                                 rte_cryptodev_sym_get_header_session_size() +
> >                                 (index * sess->priv_sz));
> >
> > So, as I can see, there is no guarantee that PMD's private sess data will be
> > aligned on 16B
> > as expected.
> >
> Agreed that there is no guarantee that the sess_priv will be aligned.
> I believe this is a PMD-side requirement for a particular alignment.

Yes, it is a PMD-specific requirement.
The problem is that with the new approach you proposed there is no simple way for the PMD to
fulfil that requirement.
In the current version of DPDK:
- The PMD reports the size of its private data; note that it reports extra space needed
   to align its data properly inside the provided buffer.
- Then it is up to the higher layer to allocate a mempool with elements big enough to hold
   the PMD private data.
- At session init that mempool is passed to the PMD's sym_session_configure() and it is
  the PMD's responsibility to allocate a buffer (from the given mempool) for its private data,
  align it properly, and update sess->sess_data[].data.
With this patch:
 - The PMD still reports the size of its private data, but now it is the cryptodev layer that allocates
     memory for the PMD private data and updates sess->sess_data[].data.

So the PMD simply has no way to allocate/align its private data in the way it likes to.
Of course it can simply do the alignment on the fly for each operation, something like:

void *p = get_sym_session_private_data(sess, dev->driver_id);
sess_priv = RTE_PTR_ALIGN_FLOOR(p, PMD_SES_ALIGN);

But it is way too ugly and error-prone. 
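
For contrast, a minimal sketch of the old contract described above (struct pmd_session and PMD_SES_ALIGN are illustrative placeholders, not real PMD code):

	#define PMD_SES_ALIGN 16                 /* PMD's alignment requirement */
	struct pmd_session { uint8_t key[32]; }; /* stand-in private data */

	/* the PMD over-reports its size so it can align inside the buffer... */
	static unsigned int
	pmd_session_priv_size(void)
	{
		return sizeof(struct pmd_session) + PMD_SES_ALIGN;
	}

	/* ...and does the alignment itself on the buffer it takes from the
	 * application-provided mempool at session init time */
	static struct pmd_session *
	pmd_session_place(struct rte_mempool *mp)
	{
		void *buf;

		if (rte_mempool_get(mp, &buf) < 0)
			return NULL;
		return RTE_PTR_ALIGN_CEIL(buf, PMD_SES_ALIGN);
	}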

Another potential problem with that approach (when cryptodev allocates memory for
the PMD private session data and updates sess->sess_data[].data for it): it could happen
that private data for different PMDs ends up on the same cache line.
If we ever have a case with simultaneous session processing by multiple devices,
it can cause all sorts of performance problems.

All in all, these changes (removing the second mempool, changing the way we allocate/set up
session private data) seem premature to me.
So, I think that to go ahead with this series (hiding rte_cryptodev_sym_session) for 21.11
we need to drop the changes to sess_data[] allocation/management and keep only the changes
directly related to hiding sym_session.
My apologies for not reviewing/testing that series properly earlier.

> Is it possible for the PMD to use __rte_aligned for the fields which are required to

The data structure inside the PMD is properly aligned.
The problem is that now the cryptodev layer might provide the PMD memory that is not properly aligned.

> be aligned? For aesni_gcm it is a 16B alignment requirement; for some other PMD it may be
> 64B alignment.





^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [EXT] Re: [PATCH v3 2/7] eal/interrupts: implement get set APIs
  2021-10-20 15:30  3%         ` Dmitry Kozlyuk
@ 2021-10-21  9:16  0%           ` Harman Kalra
  2021-10-21 12:33  0%             ` Dmitry Kozlyuk
  0 siblings, 1 reply; 200+ results
From: Harman Kalra @ 2021-10-21  9:16 UTC (permalink / raw)
  To: Dmitry Kozlyuk
  Cc: Stephen Hemminger, Thomas Monjalon, david.marchand, dev, Ray Kinsella



> -----Original Message-----
> From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> Sent: Wednesday, October 20, 2021 9:01 PM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: Stephen Hemminger <stephen@networkplumber.org>; Thomas
> Monjalon <thomas@monjalon.net>; david.marchand@redhat.com;
> dev@dpdk.org; Ray Kinsella <mdr@ashroe.eu>
> Subject: Re: [EXT] Re: [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement
> get set APIs
> 
> > >
> > > > +	/* Detect if DPDK malloc APIs are ready to be used. */
> > > > +	mem_allocator = rte_malloc_is_ready();
> > > > +	if (mem_allocator)
> > > > +		intr_handle = rte_zmalloc(NULL, sizeof(struct
> > > rte_intr_handle),
> > > > +					  0);
> > > > +	else
> > > > +		intr_handle = calloc(1, sizeof(struct rte_intr_handle));
> > >
> > > This is a problematic way to do this.
> > > The reason to use rte_malloc vs malloc should be determined by usage.
> > >
> > > If the pointer will be shared between primary/secondary process then
> > > it has to be in hugepages (ie rte_malloc). If it is not shared
> > > then use regular malloc.
> > >
> > > But what you have done is created a method which will be a latent
> > > bug for anyone using primary/secondary process.
> > >
> > > Either:
> > >     intr_handle is not allowed to be used in secondary.
> > >       Then always use malloc().
> > > Or.
> > >     intr_handle can be used by both primary and secondary.
> > >     Then always use rte_malloc().
> > >     Any code path that allocates intr_handle before pool is
> > >     ready is broken.
> >
> > Hi Stephen,
> >
> > Till V2, I implemented this API in a way where the user of the API can
> > choose if he wants the intr handle to be allocated using malloc or
> > rte_malloc, by passing a flag arg to the rte_intr_instance_alloc API.
> > The user of the API will best know if the intr handle is to be shared with
> > a secondary or not.
> >
> > But after some discussions and suggestions from the community we
> > decided to drop that flag argument and auto-detect whether the
> > rte_malloc APIs are ready to be used, and thereafter make all further
> > allocations via rte_malloc.
> > Currently the alarm subsystem (or any driver doing allocation in a
> > constructor) gets its interrupt instance allocated using glibc malloc, that
> > too because rte_malloc* is not ready by rte_eal_alarm_init(), while
> > all further consumers get their instances allocated via rte_malloc.
> 
> Just as a comment, bus scanning is the real issue, not the alarms.
> Alarms could be initialized after the memory management (but it's irrelevant
> because their handle is not accessed from the outside).
> However, MM needs to know bus IOVA requirements to initialize, which is
> usually determined by at least bus device requirements.
> 
> >  I think this should not cause any issue in the primary/secondary model as
> > all interrupt instance pointers will be shared.
> 
> What do you mean? Aren't we discussing the issue that those allocated early
> are not shared?
> 
> > In fact, to avoid any surprises of primary/secondary not working, we
> > thought of making all allocations via rte_malloc.
> 
> I don't see why anyone would not make them shared.
> In order to only use rte_malloc(), we need:
> 1. In bus drivers, move handle allocation from scan to probe stage.
> 2. In EAL, move alarm initialization to after the MM.
> It all can be done later with v3 design---but there are out-of-tree drivers.
> We need to force them to make step 1 at some point.
> I see two options:
> a) Right now have an external API that only works with rte_malloc()
>    and internal API with autodetection. Fix DPDK and drop internal API.
> b) Have external API with autodetection. Fix DPDK.
>    At the next ABI breakage drop autodetection and libc-malloc.
> 
> > David, Thomas, Dmitry, please add if I missed anything.
> >
> > Can we please conclude on this series' APIs, as the API freeze deadline (rc1) is
> > very near.
> 
> I support v3 design with no options and autodetection, because that's the
> interface we want in the end.
> Implementation can be improved later.

Hi All,

I came across 2 issues introduced with the auto-detection mechanism.
1. In the primary/secondary model: a primary application is started which makes lots of allocations via
rte_malloc*.

    Secondary side:
    a. The secondary starts; in its "rte_eal_init()" it makes some allocations via rte_*, and in one of them a
request for heap expansion is made as the current memseg got exhausted. (malloc_heap_alloc_on_heap_id()->
   alloc_more_mem_on_socket()->try_expand_heap())
   b. A request to the primary for heap expansion is sent. Please note the secondary holds the spinlock while making
the request. (malloc_heap_alloc_on_heap_id()->rte_spinlock_lock(&(heap->lock));)

   Primary side:
   a. The primary receives the request, installs a new hugepage and sets up the heap (handle_alloc_request())
   b. To inform all the secondaries about the new memseg, the primary sends a sync notice where it sets up an
alarm (rte_mp_request_async()->mp_request_async()).
   c. Inside the alarm setup API, we register an interrupt callback.
   d. Inside rte_intr_callback_register(), a new interrupt instance allocation is requested for "src->intr_handle"
   e. Since memory management is detected as up, inside "rte_intr_instance_alloc()" a call to "rte_zmalloc" is made for
allocating memory, and further inside "malloc_heap_alloc_on_heap_id()" the primary will experience a deadlock
while taking the spinlock, because this spinlock is already held by the secondary.


2. "eal_flags_file_prefix_autotest" is failing because the processes spawned by this test are expected to clean up
their hugepage traces from the respective directories (e.g. /dev/hugepage).
a. Inside eal_cleanup, rte_free()->malloc_heap_free(), the element to be freed is added to the free list and it is
checked whether nearby elements can be joined together to form a big free chunk (malloc_elem_free()).
b. If this free chunk is at least as big as the hugepage size, the respective hugepage can be uninstalled after making
sure no allocation from this hugepage exists. (malloc_heap_free()->malloc_heap_free_pages()->eal_memalloc_free_seg())

But because the interrupt allocations made for PCI intr handles (used for VFIO) and other driver-specific interrupt
handles are not cleaned up in "rte_eal_cleanup()", these hugepage files are not removed and the test fails.

There could be more such issues; I think we should first fix DPDK:
1. Memory management should be made independent and should be the first thing to come up in rte_eal_init().
2. rte_eal_cleanup() should be the exact opposite of rte_eal_init(); just like bus_probe, we should have a bus_remove
to clean up all the memory allocations.

Regarding this IRQ series, I would like to fall back to our original design, i.e. rte_intr_instance_alloc() should take
an argument saying whether its memory should be allocated using glibc malloc or rte_malloc*. The decision for the allocation
(malloc or rte_malloc) can be made based on whether, in the existing code, the interrupt handle is shared.
E.g.  a. In the case of alarms, the intr_handle was a global entry not confined to any structure, so it can be allocated from
normal malloc.
b. A PCI device had a static entry for intr_handle inside "struct rte_pci_device", and memory for struct rte_pci_device is
via normal malloc, so its intr_handle can also be malloc'ed.
c. Some driver keeps its intr_handle inside its priv structure, and this priv structure gets allocated via rte_malloc, so
its intr_handle can also be rte_malloc'ed.

Later, once DPDK is fixed up, this argument can be removed and all allocations can go via the rte_malloc family without
any auto-detection.
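
For illustration, a sketch of what that flag-based variant could look like (the flag names and the helper name here are made up for the example, not a final API):

	#define INTR_INSTANCE_F_PRIVATE 0u        /* not shared: plain calloc() */
	#define INTR_INSTANCE_F_SHARED  (1u << 0) /* shared with secondary: rte_malloc */

	struct rte_intr_handle *
	intr_instance_alloc_sketch(uint32_t flags)
	{
		if (flags & INTR_INSTANCE_F_SHARED)
			return rte_zmalloc(NULL, sizeof(struct rte_intr_handle), 0);
		return calloc(1, sizeof(struct rte_intr_handle));
	}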


David, Dmitry, Thomas, Stephen, please share your views.

Thanks
Harman

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3 0/8] crypto/security session framework rework
  2021-10-20 18:04  0%         ` Akhil Goyal
@ 2021-10-21  8:43  0%           ` Zhang, Roy Fan
  0 siblings, 0 replies; 200+ results
From: Zhang, Roy Fan @ 2021-10-21  8:43 UTC (permalink / raw)
  To: Akhil Goyal, Power, Ciara, dev, Ananyev, Konstantin, thomas,
	De Lara Guarch, Pablo
  Cc: david.marchand, hemant.agrawal, Anoob Joseph, Trahe, Fiona,
	Doherty, Declan, matan, g.singh, jianjay.zhou, asomalap,
	ruifeng.wang, Nicolau, Radu, ajit.khaparde, Nagadheeraj Rottela,
	Ankur Dwivedi, Wang, Haiyue, jiawenwu, jianwang,
	Jerin Jacob Kollanukkaran, Nithin Kumar Dabilpuram

Hi Akhil,

> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Wednesday, October 20, 2021 7:05 PM
> To: Power, Ciara <ciara.power@intel.com>; dev@dpdk.org; Ananyev,
> Konstantin <konstantin.ananyev@intel.com>; thomas@monjalon.net; Zhang,
> Roy Fan <roy.fan.zhang@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>
> Cc: david.marchand@redhat.com; hemant.agrawal@nxp.com; Anoob Joseph
> <anoobj@marvell.com>; Trahe, Fiona <fiona.trahe@intel.com>; Doherty,
> Declan <declan.doherty@intel.com>; matan@nvidia.com; g.singh@nxp.com;
> jianjay.zhou@huawei.com; asomalap@amd.com; ruifeng.wang@arm.com;
> Nicolau, Radu <radu.nicolau@intel.com>; ajit.khaparde@broadcom.com;
> Nagadheeraj Rottela <rnagadheeraj@marvell.com>; Ankur Dwivedi
> <adwivedi@marvell.com>; Wang, Haiyue <haiyue.wang@intel.com>;
> jiawenwu@trustnetic.com; jianwang@trustnetic.com; Jerin Jacob
> Kollanukkaran <jerinj@marvell.com>; Nithin Kumar Dabilpuram
> <ndabilpuram@marvell.com>
> Subject: RE: [PATCH v3 0/8] crypto/security session framework rework
> 
> > > > I am seeing test failures for cryptodev_scheduler_autotest:
> > > > + Tests Total :       638
> > > >  + Tests Skipped :     280
> > > >  + Tests Executed :    638
> > > >  + Tests Unsupported:   0
> > > >  + Tests Passed :      18
> > > >  + Tests Failed :      340
> > > >
> > > > The error showing for each testcase:
> > > > scheduler_pmd_sym_session_configure() line 487: unable to config sym session
> > > > CRYPTODEV: rte_cryptodev_sym_session_init() line 1743: dev_id 2 failed to
> > > > configure session details
> > > >
> > > > I believe the problem happens in scheduler_pmd_sym_session_configure.
> > > > The full sess object is no longer accessible in here, but it is required to be
> > > > passed to rte_cryptodev_sym_session_init.
> > > > The init function expects access to sess rather than the private data, and now
> > > > fails as a result.
> > > >
> > > > static int
> > > > scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
> > > >         struct rte_crypto_sym_xform *xform, void *sess,
> > > >         rte_iova_t sess_iova __rte_unused)
> > > > {
> > > >         struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> > > >         uint32_t i;
> > > >         int ret;
> > > >         for (i = 0; i < sched_ctx->nb_workers; i++) {
> > > >                 struct scheduler_worker *worker = &sched_ctx->workers[i];
> > > >                 ret = rte_cryptodev_sym_session_init(worker->dev_id, sess,
> > > >                                         xform);
> > > >                 if (ret < 0) {
> > > >                         CR_SCHED_LOG(ERR, "unable to config sym session");
> > > >                         return ret;
> > > >                 }
> > > >         }
> > > >         return 0;
> > > > }
> > > >
> > > It looks like the scheduler PMD is managing the stuff on its own for other PMDs.
> > > The APIs are designed such that the app can call session_init multiple times
> > > with different dev_id on the same sess.
> > > But here the scheduler PMD internally wants to configure the other PMDs'
> > > sess_priv by calling session_init.
> > >
> > > I wonder, why do we have this 2-step session_create and session_init?
> > > Why can't we have it similar to the security session create and let the scheduler
> > > PMD have one big session private data which can hold the priv_data of as many
> > > PMDs as it wants to schedule.
> > >
> > > Konstantin/Fan/Pablo, what are your thoughts on this issue?
> > > Can we resolve this issue at priority in RC1 (or probably RC2) for this release,
> > > or else do we defer it to the next ABI break release?
> > >
> > > Thomas,
> > > Can we defer this for RC2? It does not seem fixable in 1 day.
> >
> > On another thought, this can be fixed with the current patch also by having a big
> > session private data for the scheduler PMD which is big enough to hold all other PMDs'
> > data which it wants to schedule, and then calling the sess_configure function pointer
> > of the dev directly.
> > What say? And this PMD change can be done in RC2, and this patchset can go as
> > is in RC1.
> Here is the diff in the scheduler PMD which should fix this issue in the current
> patchset.
> 
> diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c
> b/drivers/crypto/scheduler/scheduler_pmd_ops.c
> index b92ffd6026..0611ea2c6a 100644
> --- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
> +++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
> @@ -450,9 +450,8 @@ scheduler_pmd_qp_setup(struct rte_cryptodev *dev,
> uint16_t qp_id,
>  }
> 
>  static uint32_t
> -scheduler_pmd_sym_session_get_size(struct rte_cryptodev *dev
> __rte_unused)
> +get_max_session_priv_size(struct scheduler_ctx *sched_ctx)
>  {
> -       struct scheduler_ctx *sched_ctx = dev->data->dev_private;
>         uint8_t i = 0;
>         uint32_t max_priv_sess_size = 0;
> 
> @@ -469,20 +468,35 @@ scheduler_pmd_sym_session_get_size(struct
> rte_cryptodev *dev __rte_unused)
>         return max_priv_sess_size;
>  }
> 
> +static uint32_t
> +scheduler_pmd_sym_session_get_size(struct rte_cryptodev *dev)
> +{
> +       struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> +
> +       return get_max_session_priv_size(sched_ctx) * sched_ctx-
> >nb_workers;
> +}
> +
>  static int
>  scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
>         struct rte_crypto_sym_xform *xform, void *sess,
>         rte_iova_t sess_iova __rte_unused)
>  {
>         struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> +       uint32_t worker_sess_priv_sz = get_max_session_priv_size(sched_ctx);
>         uint32_t i;
>         int ret;
> 
>         for (i = 0; i < sched_ctx->nb_workers; i++) {
>                 struct scheduler_worker *worker = &sched_ctx->workers[i];
> +               struct rte_cryptodev *worker_dev =
> +                               rte_cryptodev_pmd_get_dev(worker->dev_id);
> +               uint8_t index = worker_dev->driver_id;
> 
> -               ret = rte_cryptodev_sym_session_init(worker->dev_id, sess,
> -                                       xform);
> +               ret = worker_dev->dev_ops->sym_session_configure(
> +                               worker_dev,
> +                               xform,
> +                               (uint8_t *)sess + (index * worker_sess_priv_sz),
> +                               sess_iova + (index * worker_sess_priv_sz));

This won't work. This will make the session configuration finish successfully,
but the private data the worker initialized is not the private data the worker
will use during enqueue/dequeue (a worker only uses the session private
data looked up by its own driver_id).

>                 if (ret < 0) {
>                         CR_SCHED_LOG(ERR, "unable to config sym session");
>                         return ret;

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] ring: fix size of name array in ring structure
  2021-10-20 23:06  0% ` Ananyev, Konstantin
@ 2021-10-21  7:35  0%   ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-10-21  7:35 UTC (permalink / raw)
  To: Ananyev, Konstantin, Honnappa Nagarahalli
  Cc: dev, andrew.rybchenko, nd, zoltan.kiss

On Thu, Oct 21, 2021 at 1:07 AM Ananyev, Konstantin
<konstantin.ananyev@intel.com> wrote:
> > Use correct define for the name array size. The change breaks ABI and
> > hence cannot be backported to stable branches.
> >
> > Fixes: 38c9817ee1d8 ("mempool: adjust name size in related data types")
> >
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

Applied, thanks.

-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3 6/8] cryptodev: rework session framework
  2021-10-20 19:27  0%     ` Ananyev, Konstantin
@ 2021-10-21  6:53  0%       ` Akhil Goyal
  2021-10-21 10:38  0%         ` Ananyev, Konstantin
  0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-21  6:53 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: thomas, david.marchand, hemant.agrawal, Anoob Joseph,
	De Lara Guarch, Pablo, Trahe, Fiona, Doherty, Declan, matan,
	g.singh, Zhang, Roy Fan, jianjay.zhou, asomalap, ruifeng.wang,
	Nicolau, Radu, ajit.khaparde, Nagadheeraj Rottela, Ankur Dwivedi,
	Power, Ciara, Wang, Haiyue, jiawenwu, jianwang

 
> > As per current design, rte_cryptodev_sym_session_create() and
> > rte_cryptodev_sym_session_init() use separate mempool objects
> > for a single session.
> > And structure rte_cryptodev_sym_session is not directly used
> > by the application, it may cause ABI breakage if the structure
> > is modified in future.
> >
> > To address these two issues, the rte_cryptodev_sym_session_create
> > will take one mempool object for both the session and session
> > private data. The API rte_cryptodev_sym_session_init will now not
> > take mempool object.
> > rte_cryptodev_sym_session_create will now return an opaque session
> > pointer which will be used by the app in rte_cryptodev_sym_session_init
> > and other APIs.
> >
> > With this change, rte_cryptodev_sym_session_init will send
> > pointer to session private data of corresponding driver to the PMD
> > based on the driver_id for filling the PMD data.
> >
> > In data path, opaque session pointer is attached to rte_crypto_op
> > and the PMD can call an internal library API to get the session
> > private data pointer based on the driver id.
> >
> > Note: currently nb_drivers are getting updated in RTE_INIT which
> > result in increasing the memory requirements for session.
> > User can compile off drivers which are not in use to reduce the
> > memory consumption of a session.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > ---
> 
> With that patch, the ipsec-secgw functional tests crash for AES_GCM test-cases.
> To be more specific:
> examples/ipsec-secgw/test/run_test.sh -4 tun_aesgcm
> 
> [24126592.561071] traps: dpdk-ipsec-secg[3254860] general protection fault
> ip:7f3ac2397027 sp:7ffeaade8848 error:0 in
> libIPSec_MB.so.1.0.0[7f3ac238f000+2a20000]
> 
> Looking a bit deeper, it fails at:
> #0  0x00007ff9274f4027 in aes_keyexp_128_enc_avx512 ()
>    from /lib/libIPSec_MB.so.1
> #1  0x00007ff929f0ac97 in aes_gcm_pre_128_avx_gen4 ()
>    from /lib/libIPSec_MB.so.1
> #2  0x0000561757073753 in aesni_gcm_session_configure
> (mb_mgr=0x56175c5fe400,
>     session=0x17e3b72d8, xform=0x17e05d7c0)
>     at ../drivers/crypto/ipsec_mb/pmd_aesni_gcm.c:132
> #3  0x00005617570592af in ipsec_mb_sym_session_configure (
>     dev=0x56175be0c940 <rte_crypto_devices>, xform=0x17e05d7c0,
>     sess=0x17e3b72d8) at ../drivers/crypto/ipsec_mb/ipsec_mb_ops.c:330
> #4  0x0000561753b4d6ae in rte_cryptodev_sym_session_init (dev_id=0
> '\000',
>     sess_opaque=0x17e3b4940, xforms=0x17e05d7c0)
>     at ../lib/cryptodev/rte_cryptodev.c:1736
> #5  0x0000561752ef99b7 in create_lookaside_session (
>     ipsec_ctx=0x56175aa6a210 <lcore_conf+1105232>, sa=0x17e05d140,
>     ips=0x17e05d140) at ../examples/ipsec-secgw/ipsec.c:145
> #6  0x0000561752f0cf98 in fill_ipsec_session (ss=0x17e05d140,
>     ctx=0x56175aa6a210 <lcore_conf+1105232>, sa=0x17e05d140)
>     at ../examples/ipsec-secgw/ipsec_process.c:89
> #7  0x0000561752f0d7dd in ipsec_process (
>     ctx=0x56175aa6a210 <lcore_conf+1105232>, trf=0x7ffd192326a0)
>     at ../examples/ipsec-secgw/ipsec_process.c:300
> #8  0x0000561752f21027 in process_pkts_outbound (
> --Type <RET> for more, q to quit, c to continue without paging--
>     ipsec_ctx=0x56175aa6a210 <lcore_conf+1105232>,
> traffic=0x7ffd192326a0)
>     at ../examples/ipsec-secgw/ipsec-secgw.c:839
> #9  0x0000561752f21b2e in process_pkts (
>     qconf=0x56175aa57340 <lcore_conf+1027712>, pkts=0x7ffd19233c20,
>     nb_pkts=1 '\001', portid=1) at ../examples/ipsec-secgw/ipsec-secgw.c:1072
> #10 0x0000561752f224db in ipsec_poll_mode_worker ()
>     at ../examples/ipsec-secgw/ipsec-secgw.c:1262
> #11 0x0000561752f38adc in ipsec_launch_one_lcore (args=0x56175c549700)
>     at ../examples/ipsec-secgw/ipsec_worker.c:654
> #12 0x0000561753cbc523 in rte_eal_mp_remote_launch (
>     f=0x561752f38ab5 <ipsec_launch_one_lcore>, arg=0x56175c549700,
>     call_main=CALL_MAIN) at ../lib/eal/common/eal_common_launch.c:64
> #13 0x0000561752f265ed in main (argc=12, argv=0x7ffd19234168)
>     at ../examples/ipsec-secgw/ipsec-secgw.c:2978
> (gdb) frame 2
> #2  0x0000561757073753 in aesni_gcm_session_configure
> (mb_mgr=0x56175c5fe400,
>     session=0x17e3b72d8, xform=0x17e05d7c0)
>     at ../drivers/crypto/ipsec_mb/pmd_aesni_gcm.c:132
> 132                     mb_mgr->gcm128_pre(key, &sess->gdata_key);
> 
> Because of an unexpected unaligned memory access:
> (gdb) disas
> Dump of assembler code for function aes_keyexp_128_enc_avx512:
>    0x00007ff9274f400b <+0>:     endbr64
>    0x00007ff9274f400f <+4>:     cmp    $0x0,%rdi
>    0x00007ff9274f4013 <+8>:     je     0x7ff9274f41b4
> <aes_keyexp_128_enc_avx512+425>
>    0x00007ff9274f4019 <+14>:    cmp    $0x0,%rsi
>    0x00007ff9274f401d <+18>:    je     0x7ff9274f41b4
> <aes_keyexp_128_enc_avx512+425>
>    0x00007ff9274f4023 <+24>:    vmovdqu (%rdi),%xmm1
> => 0x00007ff9274f4027 <+28>:    vmovdqa %xmm1,(%rsi)
> 
> (gdb) print/x $rsi
> $12 = 0x17e3b72e8
> 
> And this is caused because now the AES_GCM session private data is not 16B
> aligned anymore:
> (gdb) print ((struct aesni_gcm_session *)sess->sess_data[index].data)
> $29 = (struct aesni_gcm_session *) 0x17e3b72d8
> 
> print &((struct aesni_gcm_session *)sess->sess_data[index].data)-
> >gdata_key
> $31 = (struct gcm_key_data *) 0x17e3b72e8
> 
> As I understand the reason for that is that we changed the way how
> sess_data[index].data
> is populated. Now it is just:
> sess->sess_data[index].data = (void *)((uint8_t *)sess +
>                                 rte_cryptodev_sym_get_header_session_size() +
>                                 (index * sess->priv_sz));
> 
> So, as I can see, there is no guarantee that PMD's private sess data will be
> aligned on 16B
> as expected.
> 
Agreed that there is no guarantee that the sess_priv will be aligned.
I believe this is a PMD-side requirement for a particular alignment.
Is it possible for the PMD to use __rte_aligned for the fields which are required to
be aligned? For aesni_gcm it is a 16B alignment requirement; for some other PMD it may be
64B alignment.
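
For illustration, the field-level attribute would look like the sketch below (a hypothetical, trimmed-down struct). Note, though, that it only promises the compiler an aligned offset for the field within the struct; it cannot help when the buffer holding the struct is itself placed at an arbitrary, unaligned address:

	struct aesni_gcm_session_sketch {
		struct gcm_key_data gdata_key __rte_aligned(16); /* wants 16B */
		/* ... remaining session fields ... */
	};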

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] ring: fix size of name array in ring structure
    2021-10-18 14:54  0% ` Honnappa Nagarahalli
@ 2021-10-20 23:06  0% ` Ananyev, Konstantin
  2021-10-21  7:35  0%   ` David Marchand
  1 sibling, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-10-20 23:06 UTC (permalink / raw)
  To: Honnappa Nagarahalli, dev, andrew.rybchenko; +Cc: nd, zoltan.kiss


> 
> Use correct define for the name array size. The change breaks ABI and
> hence cannot be backported to stable branches.
> 
> Fixes: 38c9817ee1d8 ("mempool: adjust name size in related data types")
> Cc: zoltan.kiss@schaman.hu
> 
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> ---
>  lib/ring/rte_ring_core.h | 7 +------
>  1 file changed, 1 insertion(+), 6 deletions(-)
> 
> diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h
> index 31f7200fa9..46ad584f9c 100644
> --- a/lib/ring/rte_ring_core.h
> +++ b/lib/ring/rte_ring_core.h
> @@ -118,12 +118,7 @@ struct rte_ring_hts_headtail {
>   * a problem.
>   */
>  struct rte_ring {
> -	/*
> -	 * Note: this field kept the RTE_MEMZONE_NAMESIZE size due to ABI
> -	 * compatibility requirements, it could be changed to RTE_RING_NAMESIZE
> -	 * next time the ABI changes
> -	 */
> -	char name[RTE_MEMZONE_NAMESIZE] __rte_cache_aligned;
> +	char name[RTE_RING_NAMESIZE] __rte_cache_aligned;
>  	/**< Name of the ring. */
>  	int flags;               /**< Flags supplied at creation. */
>  	const struct rte_memzone *memzone;
> --

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> 2.25.1
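
For reference, the relation between the two defines in rte_ring_core.h (the ring name has to leave room for the memzone prefix, so RTE_RING_NAMESIZE is the smaller of the two):

	#define RTE_RING_MZ_PREFIX "RG_"
	/** The maximum length of a ring name. */
	#define RTE_RING_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
				   sizeof(RTE_RING_MZ_PREFIX) + 1)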


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v15 11/12] doc: changes for new pcapng and dumpcap utility
    2021-10-20 21:42  1%   ` [dpdk-dev] [PATCH v15 06/12] pdump: support pcapng and filtering Stephen Hemminger
@ 2021-10-20 21:42  1%   ` Stephen Hemminger
  1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-10-20 21:42 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger, Reshma Pattan

Describe the new packet capture library and utility.
Fix the title line on the pdump documentation.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
---
 doc/api/doxy-api-index.md                     |  1 +
 doc/api/doxy-api.conf.in                      |  1 +
 .../howto/img/packet_capture_framework.svg    | 96 +++++++++----------
 doc/guides/howto/packet_capture_framework.rst | 69 ++++++-------
 doc/guides/prog_guide/index.rst               |  1 +
 doc/guides/prog_guide/pcapng_lib.rst          | 46 +++++++++
 doc/guides/prog_guide/pdump_lib.rst           | 28 ++++--
 doc/guides/rel_notes/release_21_11.rst        | 10 ++
 doc/guides/tools/dumpcap.rst                  | 86 +++++++++++++++++
 doc/guides/tools/index.rst                    |  1 +
 10 files changed, 251 insertions(+), 88 deletions(-)
 create mode 100644 doc/guides/prog_guide/pcapng_lib.rst
 create mode 100644 doc/guides/tools/dumpcap.rst

diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 29390504318b..a447c1ab4ac0 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -224,3 +224,4 @@ The public API headers are grouped by topics:
   [experimental APIs]  (@ref rte_compat.h),
   [ABI versioning]     (@ref rte_function_versioning.h),
   [version]            (@ref rte_version.h)
+  [pcapng]             (@ref rte_pcapng.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 109ec1f6826b..096ebbaf0d1b 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -59,6 +59,7 @@ INPUT                   = @TOPDIR@/doc/api/doxy-api-index.md \
                           @TOPDIR@/lib/metrics \
                           @TOPDIR@/lib/node \
                           @TOPDIR@/lib/net \
+                          @TOPDIR@/lib/pcapng \
                           @TOPDIR@/lib/pci \
                           @TOPDIR@/lib/pdump \
                           @TOPDIR@/lib/pipeline \
diff --git a/doc/guides/howto/img/packet_capture_framework.svg b/doc/guides/howto/img/packet_capture_framework.svg
index a76baf71fdee..1c2646a81096 100644
--- a/doc/guides/howto/img/packet_capture_framework.svg
+++ b/doc/guides/howto/img/packet_capture_framework.svg
@@ -1,6 +1,4 @@
 <?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
 <svg
    xmlns:osb="http://www.openswatchbook.org/uri/2009/osb"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
@@ -16,8 +14,8 @@
    viewBox="0 0 425.19685 283.46457"
    id="svg2"
    version="1.1"
-   inkscape:version="0.91 r13725"
-   sodipodi:docname="drawing-pcap.svg">
+   inkscape:version="1.0.2 (e86c870879, 2021-01-15)"
+   sodipodi:docname="packet_capture_framework.svg">
   <defs
      id="defs4">
     <marker
@@ -228,7 +226,7 @@
        x2="487.64606"
        y2="258.38232"
        gradientUnits="userSpaceOnUse"
-       gradientTransform="translate(-84.916417,744.90779)" />
+       gradientTransform="matrix(1.1457977,0,0,0.99944907,-151.97019,745.05014)" />
     <linearGradient
        inkscape:collect="always"
        xlink:href="#linearGradient5784"
@@ -277,17 +275,18 @@
      borderopacity="1.0"
      inkscape:pageopacity="0.0"
      inkscape:pageshadow="2"
-     inkscape:zoom="0.57434918"
-     inkscape:cx="215.17857"
-     inkscape:cy="285.26445"
+     inkscape:zoom="1"
+     inkscape:cx="226.77165"
+     inkscape:cy="78.124511"
      inkscape:document-units="px"
      inkscape:current-layer="layer1"
      showgrid="false"
-     inkscape:window-width="1874"
-     inkscape:window-height="971"
-     inkscape:window-x="2"
-     inkscape:window-y="24"
-     inkscape:window-maximized="0" />
+     inkscape:window-width="2560"
+     inkscape:window-height="1414"
+     inkscape:window-x="0"
+     inkscape:window-y="0"
+     inkscape:window-maximized="1"
+     inkscape:document-rotation="0" />
   <metadata
      id="metadata7">
     <rdf:RDF>
@@ -296,7 +295,7 @@
         <dc:format>image/svg+xml</dc:format>
         <dc:type
            rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title></dc:title>
+        <dc:title />
       </cc:Work>
     </rdf:RDF>
   </metadata>
@@ -321,15 +320,15 @@
        y="790.82452" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="61.050636"
        y="807.3205"
-       id="text4152"
-       sodipodi:linespacing="125%"><tspan
+       id="text4152"><tspan
          sodipodi:role="line"
          id="tspan4154"
          x="61.050636"
-         y="807.3205">DPDK Primary Application</tspan></text>
+         y="807.3205"
+         style="font-size:12.5px;line-height:1.25">DPDK Primary Application</tspan></text>
     <rect
        style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6"
@@ -339,19 +338,20 @@
        y="827.01843" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="350.68585"
        y="841.16058"
-       id="text4189"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189"><tspan
          sodipodi:role="line"
          id="tspan4191"
          x="350.68585"
-         y="841.16058">dpdk-pdump</tspan><tspan
+         y="841.16058"
+         style="font-size:12.5px;line-height:1.25">dpdk-dumpcap</tspan><tspan
          sodipodi:role="line"
          x="350.68585"
          y="856.78558"
-         id="tspan4193">tool</tspan></text>
+         id="tspan4193"
+         style="font-size:12.5px;line-height:1.25">tool</tspan></text>
     <rect
        style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-4"
@@ -361,15 +361,15 @@
        y="891.16315" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="352.70612"
        y="905.3053"
-       id="text4189-1"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-1"><tspan
          sodipodi:role="line"
          x="352.70612"
          y="905.3053"
-         id="tspan4193-3">PCAP PMD</tspan></text>
+         id="tspan4193-3"
+         style="font-size:12.5px;line-height:1.25">librte_pcapng</tspan></text>
     <rect
        style="fill:url(#linearGradient5745);fill-opacity:1;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-6"
@@ -379,15 +379,15 @@
        y="923.9931" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="136.02846"
        y="938.13525"
-       id="text4189-0"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-0"><tspan
          sodipodi:role="line"
          x="136.02846"
          y="938.13525"
-         id="tspan4193-6">dpdk_port0</tspan></text>
+         id="tspan4193-6"
+         style="font-size:12.5px;line-height:1.25">dpdk_port0</tspan></text>
     <rect
        style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-5"
@@ -397,33 +397,33 @@
        y="824.99817" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="137.54369"
        y="839.14026"
-       id="text4189-4"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-4"><tspan
          sodipodi:role="line"
          x="137.54369"
          y="839.14026"
-         id="tspan4193-2">librte_pdump</tspan></text>
+         id="tspan4193-2"
+         style="font-size:12.5px;line-height:1.25">librte_pdump</tspan></text>
     <rect
-       style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+       style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1.07013;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-4-5"
-       width="94.449265"
-       height="35.355339"
-       x="307.7804"
-       y="985.61243" />
+       width="108.21974"
+       height="35.335861"
+       x="297.9809"
+       y="985.62219" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="352.70618"
        y="999.75458"
-       id="text4189-1-8"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-1-8"><tspan
          sodipodi:role="line"
          x="352.70618"
          y="999.75458"
-         id="tspan4193-3-2">capture.pcap</tspan></text>
+         id="tspan4193-3-2"
+         style="font-size:12.5px;line-height:1.25">capture.pcapng</tspan></text>
     <rect
        style="fill:url(#linearGradient5788-1);fill-opacity:1;stroke:#257cdc;stroke-width:1.12555885;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-4-5-1"
@@ -433,15 +433,15 @@
        y="983.14984" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="136.53352"
        y="1002.785"
-       id="text4189-1-8-4"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-1-8-4"><tspan
          sodipodi:role="line"
          x="136.53352"
          y="1002.785"
-         id="tspan4193-3-2-7">Traffic Generator</tspan></text>
+         id="tspan4193-3-2-7"
+         style="font-size:12.5px;line-height:1.25">Traffic Generator</tspan></text>
     <path
        style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker7331)"
        d="m 351.46948,927.02357 c 0,57.5787 0,57.5787 0,57.5787"
diff --git a/doc/guides/howto/packet_capture_framework.rst b/doc/guides/howto/packet_capture_framework.rst
index c31bac52340e..f933cc7e9311 100644
--- a/doc/guides/howto/packet_capture_framework.rst
+++ b/doc/guides/howto/packet_capture_framework.rst
@@ -1,18 +1,19 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright(c) 2017 Intel Corporation.
+    Copyright(c) 2017-2021 Intel Corporation.
 
-DPDK pdump Library and pdump Tool
-=================================
+DPDK packet capture libraries and tools
+=======================================
 
 This document describes how the Data Plane Development Kit (DPDK) Packet
 Capture Framework is used for capturing packets on DPDK ports. It is intended
 for users of DPDK who want to know more about the Packet Capture feature and
 for those who want to monitor traffic on DPDK-controlled devices.
 
-The DPDK packet capture framework was introduced in DPDK v16.07. The DPDK
-packet capture framework consists of the DPDK pdump library and DPDK pdump
-tool.
-
+The DPDK packet capture framework was introduced in DPDK v16.07 and
+enhanced in 21.11. The DPDK packet capture framework consists of the
+``librte_pdump`` library for collecting packets and the ``librte_pcapng``
+library for writing packets to a file. There are two sample applications:
+``dpdk-dumpcap`` and the older ``dpdk-pdump``.
 
 Introduction
 ------------
@@ -22,43 +23,46 @@ allow users to initialize the packet capture framework and to enable or
 disable packet capture. The library works on a multi process communication model and its
 usage is recommended for debugging purposes.
 
-The :ref:`dpdk-pdump <pdump_tool>` tool is developed based on the
-``librte_pdump`` library.  It runs as a DPDK secondary process and is capable
-of enabling or disabling packet capture on DPDK ports. The ``dpdk-pdump`` tool
-provides command-line options with which users can request enabling or
-disabling of the packet capture on DPDK ports.
+The :ref:`librte_pcapng <pcapng_library>` library provides the APIs to format
+packets and write them to a file in Pcapng format.
+
+
+The :ref:`dpdk-dumpcap <dumpcap_tool>` tool captures packets much like
+the Wireshark dumpcap does on Linux. It runs as a DPDK secondary process and
+captures packets from one or more interfaces and writes them to a file
+in Pcapng format. The ``dpdk-dumpcap`` tool is designed to take
+most of the same options as the Wireshark ``dumpcap`` command.
 
-The application which initializes the packet capture framework will be a primary process
-and the application that enables or disables the packet capture will
-be a secondary process. The primary process sends the Rx and Tx packets from the DPDK ports
-to the secondary process.
+Without any options it will use the packet capture framework to
+capture traffic from the first available DPDK port.
 
 In DPDK the ``testpmd`` application can be used to initialize the packet
-capture framework and acts as a server, and the ``dpdk-pdump`` tool acts as a
+capture framework and acts as a server, and the ``dpdk-dumpcap`` tool acts as a
 client. To view Rx or Tx packets of ``testpmd``, the application should be
-launched first, and then the ``dpdk-pdump`` tool. Packets from ``testpmd``
-will be sent to the tool, which then sends them on to the Pcap PMD device and
-that device writes them to the Pcap file or to an external interface depending
-on the command-line option used.
+launched first, and then the ``dpdk-dumpcap`` tool. Packets from ``testpmd``
+will be sent to the tool, and then to the Pcapng file.
 
 Some things to note:
 
-* The ``dpdk-pdump`` tool can only be used in conjunction with a primary
+* All tools using ``librte_pdump`` can only be used in conjunction with a primary
   application which has the packet capture framework initialized already. In
   dpdk, only ``testpmd`` is modified to initialize packet capture framework,
-  other applications remain untouched. So, if the ``dpdk-pdump`` tool has to
+  other applications remain untouched. So, if the ``dpdk-dumpcap`` tool has to
   be used with any application other than the testpmd, the user needs to
   explicitly modify that application to call the packet capture framework
   initialization code. Refer to the ``app/test-pmd/testpmd.c`` code and look
  for the ``pdump`` keyword to see how this is done; a minimal sketch
  follows this list.
 
-* The ``dpdk-pdump`` tool depends on the libpcap based PMD.
+* The ``dpdk-pdump`` tool is an older tool created as a demonstration of the
+  ``librte_pdump`` library. The ``dpdk-pdump`` tool provides more limited
+  functionality and depends on the Pcap PMD. It is retained only for
+  compatibility reasons; users should use ``dpdk-dumpcap`` instead.
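+
+For reference, initializing the framework in a primary application is a
+single call after ``rte_eal_init()``, as sketched here:
+
+.. code-block:: c
+
+   #include <rte_pdump.h>
+
+   /* In the primary process, once EAL is up. */
+   if (rte_pdump_init() < 0)
+           rte_exit(EXIT_FAILURE, "cannot initialize packet capture\n");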
 
 
 Test Environment
 ----------------
 
-The overview of using the Packet Capture Framework and the ``dpdk-pdump`` tool
+An overview of using the Packet Capture Framework and the ``dpdk-dumpcap`` utility
 for packet capturing on a DPDK port is shown in
 :numref:`figure_packet_capture_framework`.
 
@@ -66,13 +70,13 @@ for packet capturing on the DPDK port in
 
 .. figure:: img/packet_capture_framework.*
 
-   Packet capturing on a DPDK port using the dpdk-pdump tool.
+   Packet capturing on a DPDK port using the dpdk-dumpcap utility.
 
 
 Running the Application
 -----------------------
 
-The following steps demonstrate how to run the ``dpdk-pdump`` tool to capture
+The following steps demonstrate how to run the ``dpdk-dumpcap`` tool to capture
 Rx side packets on dpdk_port0 in :numref:`figure_packet_capture_framework` and
 inspect them using ``tcpdump``.
 
@@ -80,16 +84,15 @@ inspect them using ``tcpdump``.
 
      sudo <build_dir>/app/dpdk-testpmd -c 0xf0 -n 4 -- -i --port-topology=chained
 
-#. Launch the pdump tool as follows::
+#. Launch the dpdk-dumpcap as follows::
 
-     sudo <build_dir>/app/dpdk-pdump -- \
-          --pdump 'port=0,queue=*,rx-dev=/tmp/capture.pcap'
+     sudo <build_dir>/app/dpdk-dumpcap -w /tmp/capture.pcapng
 
 #. Send traffic to dpdk_port0 from traffic generator.
-   Inspect packets captured in the file capture.pcap using a tool
-   that can interpret Pcap files, for example tcpdump::
+   Inspect packets captured in the file capture.pcapng using a tool such as
+   tcpdump or tshark that can interpret Pcapng files::
 
-     $tcpdump -nr /tmp/capture.pcap
+     $ tcpdump -nr /tmp/capture.pcapng
      reading from file /tmp/capture.pcapng, link-type EN10MB (Ethernet)
      11:11:36.891404 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
      11:11:36.891442 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 89af28dacb72..a8e8e759ecf2 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -44,6 +44,7 @@ Programmer's Guide
     ip_fragment_reassembly_lib
     generic_receive_offload_lib
     generic_segmentation_offload_lib
+    pcapng_lib
     pdump_lib
     multi_proc_support
     kernel_nic_interface
diff --git a/doc/guides/prog_guide/pcapng_lib.rst b/doc/guides/prog_guide/pcapng_lib.rst
new file mode 100644
index 000000000000..fa1994c96f4d
--- /dev/null
+++ b/doc/guides/prog_guide/pcapng_lib.rst
@@ -0,0 +1,46 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2021 Microsoft Corporation
+
+.. _pcapng_library:
+
+Packet Capture Next Generation Library
+======================================
+
+Exchanging packet traces becomes more and more critical every day.
+The de facto standard for this is the format defined by libpcap;
+but that format is rather old and lacking in functionality
+for more modern applications. The `Pcapng file format`_
+is the default capture file format for modern network capture
+processing tools such as `wireshark`_ (it can also be read by `tcpdump`_).
+
+The Pcapng library is an API for formatting packet data
+into a Pcapng file.
+The format conforms to the current `Pcapng RFC`_ standard.
+It is designed to be integrated with the packet capture library.
+
+Usage
+-----
+
+Before the library can be used, the function ``rte_pcapng_init``
+should be called once to initialize timestamp computation.
+
+The output stream is created with ``rte_pcapng_fdopen``,
+and should be closed with ``rte_pcapng_close``.
+
+The library requires a DPDK mempool to allocate mbufs. The mbufs
+need to be able to accommodate additional space for the pcapng packet
+format header and trailer information; the function ``rte_pcapng_mbuf_size``
+should be used to determine the lower bound based on MTU.
+
+Collecting packets is done in two parts. The function ``rte_pcapng_copy``
+is used to format and copy mbuf data and ``rte_pcapng_write_packets``
+writes a burst of packets to the output file.
+
+The function ``rte_pcapng_write_stats`` can be used to write
+statistics information into the output file. The summary statistics
+information is automatically added by ``rte_pcapng_close``.
+
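+A minimal sketch of the flow described above, with surrounding
+declarations and error handling omitted (the exact prototypes should be
+checked in ``rte_pcapng.h``):
+
+.. code-block:: c
+
+   #include <rte_pcapng.h>
+
+   /* Once at startup: set up timestamp computation. */
+   rte_pcapng_init();
+
+   /* Wrap an already open file descriptor for writing. */
+   rte_pcapng_t *out = rte_pcapng_fdopen(fd, NULL, NULL, "example", NULL);
+
+   /* Copy each mbuf of a received burst, then write the burst. */
+   for (i = 0; i < nb_rx; i++)
+           copies[i] = rte_pcapng_copy(port_id, queue, pkts[i], mp,
+                                       UINT32_MAX, rte_get_tsc_cycles(),
+                                       RTE_PCAPNG_DIRECTION_IN);
+   rte_pcapng_write_packets(out, copies, nb_rx);
+
+   /* Also writes the summary statistics block. */
+   rte_pcapng_close(out);
+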
+.. _Tcpdump: https://tcpdump.org/
+.. _Wireshark: https://wireshark.org/
+.. _Pcapng file format: https://github.com/pcapng/pcapng/
+.. _Pcapng RFC: https://datatracker.ietf.org/doc/html/draft-tuexen-opsawg-pcapng
diff --git a/doc/guides/prog_guide/pdump_lib.rst b/doc/guides/prog_guide/pdump_lib.rst
index 62c0b015b2fe..f3ff8fd828dc 100644
--- a/doc/guides/prog_guide/pdump_lib.rst
+++ b/doc/guides/prog_guide/pdump_lib.rst
@@ -3,10 +3,10 @@
 
 .. _pdump_library:
 
-The librte_pdump Library
-========================
+Packet Capture Library
+======================
 
-The ``librte_pdump`` library provides a framework for packet capturing in DPDK.
+The DPDK ``pdump`` library provides a framework for capturing packets.
 The library makes a complete copy of the Rx and Tx mbufs into a new mempool
 and hence slows down the performance of the application, so it is recommended
 to use this library only for debugging purposes.
@@ -23,11 +23,19 @@ or disable the packet capture, and to uninitialize it.
 
 * ``rte_pdump_enable()``:
   This API enables the packet capture on a given port and queue.
-  Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf()``:
+  This API enables the packet capture on a given port and queue.
+  It also allows setting an optional filter using the DPDK BPF interpreter
+  and setting the captured packet length.
 
 * ``rte_pdump_enable_by_deviceid()``:
   This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
-  Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf_by_deviceid()``:
+  This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
+  It also allows setting an optional filter using the DPDK BPF interpreter
+  and setting the captured packet length.
 
 * ``rte_pdump_disable()``:
   This API disables the packet capture on a given port and queue.
@@ -61,6 +69,12 @@ and enables the packet capture by registering the Ethernet RX and TX callbacks f
 and queue combinations. Then the primary process will mirror the packets to the new mempool and enqueue them to
 the rte_ring that secondary process have passed to these APIs.
 
+The packet ring supports one of two formats. The default format enqueues copies of the original packets
+into the rte_ring. If the ``RTE_PDUMP_FLAG_PCAPNG`` flag is set, the mbuf data is extended with a header and trailer
+to match the format of the Pcapng enhanced packet block. The enhanced packet block carries meta-data such as the
+timestamp, port and queue the packet was captured on. It is up to the application consuming the
+packets from the ring to select the desired format.
+
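+As an illustration, a secondary process could request a filtered capture in
+Pcapng format roughly as follows (a sketch only: the ring, mempool and
+optional BPF program ``prm`` are assumed to have been created beforehand,
+and ``prm`` may be NULL):
+
+.. code-block:: c
+
+   #include <rte_pdump.h>
+
+   ret = rte_pdump_enable_bpf(port_id, RTE_PDUMP_ALL_QUEUES,
+                              RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG,
+                              UINT32_MAX, ring, mp, prm);
+   if (ret < 0)
+           rte_exit(EXIT_FAILURE, "packet capture enable failed\n");
+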
 The library APIs ``rte_pdump_disable()`` and ``rte_pdump_disable_by_deviceid()`` disables the packet capture.
 For the calls to these APIs from secondary process, the library creates the "pdump disable" request and sends
 the request to the primary process over the multi process channel. The primary process takes this request and
@@ -74,5 +88,5 @@ function.
 Use Case: Packet Capturing
 --------------------------
 
-The DPDK ``app/pdump`` tool is developed based on this library to capture packets in DPDK.
-Users can use this as an example to develop their own packet capturing tools.
+The DPDK ``app/dpdk-dumpcap`` utility uses this library
+to capture packets in DPDK.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 30175246c74a..c91f36500a7c 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -189,6 +189,16 @@ New Features
   * Added tests to verify tunnel header verification in IPsec inbound.
   * Added tests to verify inner checksum.
 
+* **Revised packet capture framework.**
+
+  * New dpdk-dumpcap program that has most of the features of the
+    Wireshark dumpcap utility, including capture of multiple interfaces,
+    filtering, and stopping after a number of bytes or packets.
+  * New library for writing pcapng packet capture files.
+  * Enhancements to the pdump library to support:
+
+    * Packet filtering with BPF.
+    * Pcapng format with timestamps and meta-data.
+    * Fixed packet capture with stripped VLAN tags.
 
 Removed Items
 -------------
diff --git a/doc/guides/tools/dumpcap.rst b/doc/guides/tools/dumpcap.rst
new file mode 100644
index 000000000000..664ea0c79802
--- /dev/null
+++ b/doc/guides/tools/dumpcap.rst
@@ -0,0 +1,86 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2020 Microsoft Corporation.
+
+.. _dumpcap_tool:
+
+dpdk-dumpcap Application
+========================
+
+The ``dpdk-dumpcap`` tool is a Data Plane Development Kit (DPDK)
+network traffic dump tool. Its interface is similar to the dumpcap tool in Wireshark.
+It runs as a secondary DPDK process and lets you capture packets that are
+coming into and out of a DPDK primary process.
+The ``dpdk-dumpcap`` tool writes files in the Pcapng capture file format.
+
+Without any options set it will use DPDK to capture traffic from the first
+available DPDK interface and write the received raw packet data, along
+with timestamps, into a pcapng file.
+
+If the ``-w`` option is not specified, ``dpdk-dumpcap`` writes to a newly
+created file with a name chosen based on interface name and timestamp.
+If the ``-w`` option is specified, then that file is used.
+
+   .. Note::
+      * The ``dpdk-dumpcap`` tool can only be used in conjunction with a primary
+        application which has the packet capture framework initialized already.
+        In dpdk, only the ``testpmd`` application is modified to initialize
+        the packet capture framework; other applications remain untouched.
+        So, if the ``dpdk-dumpcap`` tool has to be used with any application
+        other than testpmd, the user needs to explicitly modify that
+        application to call the packet capture framework initialization code.
+        Refer to the ``app/test-pmd/testpmd.c`` code to see how this is done.
+
+      * The ``dpdk-dumpcap`` tool runs as a DPDK secondary process. It exits when
+        the primary application exits.
+
+
+Running the Application
+-----------------------
+
+To list interfaces available for capture use ``--list-interfaces``.
+
+To filter packets in style of *tshark* use the ``-f`` flag.
+
+To capture on multiple interfaces at once, use multiple ``-I`` flags.
+
+Example
+-------
+
+.. code-block:: console
+
+   # ./<build_dir>/app/dpdk-dumpcap --list-interfaces
+   0. 0000:00:03.0
+   1. 0000:00:03.1
+
+   # ./<build_dir>/app/dpdk-dumpcap -I 0000:00:03.0 -c 6 -w /tmp/sample.pcapng
+   Packets captured: 6
+   Packets received/dropped on interface '0000:00:03.0' 6/0
+
+   # ./<build_dir>/app/dpdk-dumpcap -f 'tcp port 80'
+   Packets captured: 6
+   Packets received/dropped on interface '0000:00:03.0' 10/8
+
+
+Limitations
+-----------
+The following option of Wireshark ``dumpcap`` is not yet implemented:
+
+   * ``-b|--ring-buffer`` -- more complex file management.
+
+The following options do not make sense in the context of DPDK:
+
+   * ``-C <byte_limit>`` -- it's a kernel feature
+
+   * ``-t`` -- use a thread per interface
+
+   * Timestamp type.
+
+   * Link data types. Only EN10MB (Ethernet) is supported.
+
+   * Wireless related options:  ``-I|--monitor-mode`` and  ``-k <freq>``
+
+
+.. Note::
+   * The options to ``dpdk-dumpcap`` follow the Wireshark dumpcap program and
+     are not the same as those of ``dpdk-pdump`` and other DPDK applications.
diff --git a/doc/guides/tools/index.rst b/doc/guides/tools/index.rst
index 93dde4148e90..b71c12b8f2dd 100644
--- a/doc/guides/tools/index.rst
+++ b/doc/guides/tools/index.rst
@@ -8,6 +8,7 @@ DPDK Tools User Guides
     :maxdepth: 2
     :numbered:
 
+    dumpcap
     proc_info
     pdump
     pmdinfo
-- 
2.30.2


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v15 06/12] pdump: support pcapng and filtering
  @ 2021-10-20 21:42  1%   ` Stephen Hemminger
  2021-10-21 14:16  0%     ` Kinsella, Ray
  2021-10-27  6:34  0%     ` Wang, Yinan
  2021-10-20 21:42  1%   ` [dpdk-dev] [PATCH v15 11/12] doc: changes for new pcapng and dumpcap utility Stephen Hemminger
  1 sibling, 2 replies; 200+ results
From: Stephen Hemminger @ 2021-10-20 21:42 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger, Reshma Pattan, Ray Kinsella, Anatoly Burakov

This enhances the DPDK pdump library to support new
pcapng format and filtering via BPF.

The internal client/server protocol is changed to support
two versions: the original pdump basic version and a
new pcapng version.

The internal version number (not part of exposed API or ABI)
is intentionally increased to cause any attempt to try
mismatched primary/secondary process to fail.

Add new API to do allow filtering of captured packets with
DPDK BPF (eBPF) filter program. It keeps statistics
on packets captured, filtered, and missed (because ring was full).

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
---
 lib/meson.build       |   4 +-
 lib/pdump/meson.build |   2 +-
 lib/pdump/rte_pdump.c | 432 ++++++++++++++++++++++++++++++------------
 lib/pdump/rte_pdump.h | 113 ++++++++++-
 lib/pdump/version.map |   8 +
 5 files changed, 433 insertions(+), 126 deletions(-)

diff --git a/lib/meson.build b/lib/meson.build
index 484b1da2b88d..1a8ac30c4da6 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -27,6 +27,7 @@ libraries = [
         'acl',
         'bbdev',
         'bitratestats',
+        'bpf',
         'cfgfile',
         'compressdev',
         'cryptodev',
@@ -43,7 +44,6 @@ libraries = [
         'member',
         'pcapng',
         'power',
-        'pdump',
         'rawdev',
         'regexdev',
         'dmadev',
@@ -56,10 +56,10 @@ libraries = [
         'ipsec', # ipsec lib depends on net, crypto and security
         'fib', #fib lib depends on rib
         'port', # pkt framework libs which use other libs from above
+        'pdump', # pdump lib depends on bpf
         'table',
         'pipeline',
         'flow_classify', # flow_classify lib depends on pkt framework table lib
-        'bpf',
         'graph',
         'node',
 ]
diff --git a/lib/pdump/meson.build b/lib/pdump/meson.build
index 3a95eabde6a6..51ceb2afdec5 100644
--- a/lib/pdump/meson.build
+++ b/lib/pdump/meson.build
@@ -3,4 +3,4 @@
 
 sources = files('rte_pdump.c')
 headers = files('rte_pdump.h')
-deps += ['ethdev']
+deps += ['ethdev', 'bpf', 'pcapng']
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 46a87e233904..71602685d544 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -7,8 +7,10 @@
 #include <rte_ethdev.h>
 #include <rte_lcore.h>
 #include <rte_log.h>
+#include <rte_memzone.h>
 #include <rte_errno.h>
 #include <rte_string_fns.h>
+#include <rte_pcapng.h>
 
 #include "rte_pdump.h"
 
@@ -27,30 +29,23 @@ enum pdump_operation {
 	ENABLE = 2
 };
 
+/* Internal version number in request */
 enum pdump_version {
-	V1 = 1
+	V1 = 1,		    /* no filtering or snap */
+	V2 = 2,
 };
 
 struct pdump_request {
 	uint16_t ver;
 	uint16_t op;
 	uint32_t flags;
-	union pdump_data {
-		struct enable_v1 {
-			char device[RTE_DEV_NAME_MAX_LEN];
-			uint16_t queue;
-			struct rte_ring *ring;
-			struct rte_mempool *mp;
-			void *filter;
-		} en_v1;
-		struct disable_v1 {
-			char device[RTE_DEV_NAME_MAX_LEN];
-			uint16_t queue;
-			struct rte_ring *ring;
-			struct rte_mempool *mp;
-			void *filter;
-		} dis_v1;
-	} data;
+	char device[RTE_DEV_NAME_MAX_LEN];
+	uint16_t queue;
+	struct rte_ring *ring;
+	struct rte_mempool *mp;
+
+	const struct rte_bpf_prm *prm;
+	uint32_t snaplen;
 };
 
 struct pdump_response {
@@ -63,80 +58,140 @@ static struct pdump_rxtx_cbs {
 	struct rte_ring *ring;
 	struct rte_mempool *mp;
 	const struct rte_eth_rxtx_callback *cb;
-	void *filter;
+	const struct rte_bpf *filter;
+	enum pdump_version ver;
+	uint32_t snaplen;
 } rx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
 tx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
 
 
-static inline void
-pdump_copy(struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
+/*
+ * The packet capture statistics keep track of packets
+ * accepted, filtered and dropped. These are per-queue
+ * and kept in shared memory between primary and secondary processes.
+ */
+static const char MZ_RTE_PDUMP_STATS[] = "rte_pdump_stats";
+static struct {
+	struct rte_pdump_stats rx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+	struct rte_pdump_stats tx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+} *pdump_stats;
+
+/* Create a clone of mbuf to be placed into ring. */
+static void
+pdump_copy(uint16_t port_id, uint16_t queue,
+	   enum rte_pcapng_direction direction,
+	   struct rte_mbuf **pkts, uint16_t nb_pkts,
+	   const struct pdump_rxtx_cbs *cbs,
+	   struct rte_pdump_stats *stats)
 {
 	unsigned int i;
 	int ring_enq;
 	uint16_t d_pkts = 0;
 	struct rte_mbuf *dup_bufs[nb_pkts];
-	struct pdump_rxtx_cbs *cbs;
+	uint64_t ts;
 	struct rte_ring *ring;
 	struct rte_mempool *mp;
 	struct rte_mbuf *p;
+	uint64_t rcs[nb_pkts];
+
+	if (cbs->filter)
+		rte_bpf_exec_burst(cbs->filter, (void **)pkts, rcs, nb_pkts);
 
-	cbs  = user_params;
+	ts = rte_get_tsc_cycles();
 	ring = cbs->ring;
 	mp = cbs->mp;
 	for (i = 0; i < nb_pkts; i++) {
-		p = rte_pktmbuf_copy(pkts[i], mp, 0, UINT32_MAX);
-		if (p)
+		/*
+		 * This uses same BPF return value convention as socket filter
+		 * and pcap_offline_filter.
+		 * if program returns zero
+		 * then packet doesn't match the filter (will be ignored).
+		 */
+		if (cbs->filter && rcs[i] == 0) {
+			__atomic_fetch_add(&stats->filtered,
+					   1, __ATOMIC_RELAXED);
+			continue;
+		}
+
+		/*
+		 * If using pcapng then want to wrap packets
+		 * otherwise a simple copy.
+		 */
+		if (cbs->ver == V2)
+			p = rte_pcapng_copy(port_id, queue,
+					    pkts[i], mp, cbs->snaplen,
+					    ts, direction);
+		else
+			p = rte_pktmbuf_copy(pkts[i], mp, 0, cbs->snaplen);
+
+		if (unlikely(p == NULL))
+			__atomic_fetch_add(&stats->nombuf, 1, __ATOMIC_RELAXED);
+		else
 			dup_bufs[d_pkts++] = p;
 	}
 
+	__atomic_fetch_add(&stats->accepted, d_pkts, __ATOMIC_RELAXED);
+
 	ring_enq = rte_ring_enqueue_burst(ring, (void *)dup_bufs, d_pkts, NULL);
 	if (unlikely(ring_enq < d_pkts)) {
 		unsigned int drops = d_pkts - ring_enq;
 
-		PDUMP_LOG(DEBUG,
-			"only %d of packets enqueued to ring\n", ring_enq);
+		__atomic_fetch_add(&stats->ringfull, drops, __ATOMIC_RELAXED);
 		rte_pktmbuf_free_bulk(&dup_bufs[ring_enq], drops);
 	}
 }
 
 static uint16_t
-pdump_rx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_rx(uint16_t port, uint16_t queue,
 	struct rte_mbuf **pkts, uint16_t nb_pkts,
-	uint16_t max_pkts __rte_unused,
-	void *user_params)
+	uint16_t max_pkts __rte_unused, void *user_params)
 {
-	pdump_copy(pkts, nb_pkts, user_params);
+	const struct pdump_rxtx_cbs *cbs = user_params;
+	struct rte_pdump_stats *stats = &pdump_stats->rx[port][queue];
+
+	pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_IN,
+		   pkts, nb_pkts, cbs, stats);
 	return nb_pkts;
 }
 
 static uint16_t
-pdump_tx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_tx(uint16_t port, uint16_t queue,
 		struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
 {
-	pdump_copy(pkts, nb_pkts, user_params);
+	const struct pdump_rxtx_cbs *cbs = user_params;
+	struct rte_pdump_stats *stats = &pdump_stats->tx[port][queue];
+
+	pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_OUT,
+		   pkts, nb_pkts, cbs, stats);
 	return nb_pkts;
 }
 
 static int
-pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
-				struct rte_ring *ring, struct rte_mempool *mp,
-				uint16_t operation)
+pdump_register_rx_callbacks(enum pdump_version ver,
+			    uint16_t end_q, uint16_t port, uint16_t queue,
+			    struct rte_ring *ring, struct rte_mempool *mp,
+			    struct rte_bpf *filter,
+			    uint16_t operation, uint32_t snaplen)
 {
 	uint16_t qid;
-	struct pdump_rxtx_cbs *cbs = NULL;
 
 	qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
 	for (; qid < end_q; qid++) {
-		cbs = &rx_cbs[port][qid];
-		if (cbs && operation == ENABLE) {
+		struct pdump_rxtx_cbs *cbs = &rx_cbs[port][qid];
+
+		if (operation == ENABLE) {
 			if (cbs->cb) {
 				PDUMP_LOG(ERR,
 					"rx callback for port=%d queue=%d, already exists\n",
 					port, qid);
 				return -EEXIST;
 			}
+			cbs->ver = ver;
 			cbs->ring = ring;
 			cbs->mp = mp;
+			cbs->snaplen = snaplen;
+			cbs->filter = filter;
+
 			cbs->cb = rte_eth_add_first_rx_callback(port, qid,
 								pdump_rx, cbs);
 			if (cbs->cb == NULL) {
@@ -145,8 +200,7 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
 					rte_errno);
 				return rte_errno;
 			}
-		}
-		if (cbs && operation == DISABLE) {
+		} else if (operation == DISABLE) {
 			int ret;
 
 			if (cbs->cb == NULL) {
@@ -170,26 +224,32 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
 }
 
 static int
-pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
-				struct rte_ring *ring, struct rte_mempool *mp,
-				uint16_t operation)
+pdump_register_tx_callbacks(enum pdump_version ver,
+			    uint16_t end_q, uint16_t port, uint16_t queue,
+			    struct rte_ring *ring, struct rte_mempool *mp,
+			    struct rte_bpf *filter,
+			    uint16_t operation, uint32_t snaplen)
 {
 
 	uint16_t qid;
-	struct pdump_rxtx_cbs *cbs = NULL;
 
 	qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
 	for (; qid < end_q; qid++) {
-		cbs = &tx_cbs[port][qid];
-		if (cbs && operation == ENABLE) {
+		struct pdump_rxtx_cbs *cbs = &tx_cbs[port][qid];
+
+		if (operation == ENABLE) {
 			if (cbs->cb) {
 				PDUMP_LOG(ERR,
 					"tx callback for port=%d queue=%d, already exists\n",
 					port, qid);
 				return -EEXIST;
 			}
+			cbs->ver = ver;
 			cbs->ring = ring;
 			cbs->mp = mp;
+			cbs->snaplen = snaplen;
+			cbs->filter = filter;
+
 			cbs->cb = rte_eth_add_tx_callback(port, qid, pdump_tx,
 								cbs);
 			if (cbs->cb == NULL) {
@@ -198,8 +258,7 @@ pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
 					rte_errno);
 				return rte_errno;
 			}
-		}
-		if (cbs && operation == DISABLE) {
+		} else if (operation == DISABLE) {
 			int ret;
 
 			if (cbs->cb == NULL) {
@@ -228,37 +287,47 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
 	uint16_t nb_rx_q = 0, nb_tx_q = 0, end_q, queue;
 	uint16_t port;
 	int ret = 0;
+	struct rte_bpf *filter = NULL;
 	uint32_t flags;
 	uint16_t operation;
 	struct rte_ring *ring;
 	struct rte_mempool *mp;
 
-	flags = p->flags;
-	operation = p->op;
-	if (operation == ENABLE) {
-		ret = rte_eth_dev_get_port_by_name(p->data.en_v1.device,
-				&port);
-		if (ret < 0) {
+	/* Check for possible DPDK version mismatch */
+	if (!(p->ver == V1 || p->ver == V2)) {
+		PDUMP_LOG(ERR,
+			  "incorrect client version %u\n", p->ver);
+		return -EINVAL;
+	}
+
+	if (p->prm) {
+		if (p->prm->prog_arg.type != RTE_BPF_ARG_PTR_MBUF) {
 			PDUMP_LOG(ERR,
-				"failed to get port id for device id=%s\n",
-				p->data.en_v1.device);
+				  "invalid BPF program type: %u\n",
+				  p->prm->prog_arg.type);
 			return -EINVAL;
 		}
-		queue = p->data.en_v1.queue;
-		ring = p->data.en_v1.ring;
-		mp = p->data.en_v1.mp;
-	} else {
-		ret = rte_eth_dev_get_port_by_name(p->data.dis_v1.device,
-				&port);
-		if (ret < 0) {
-			PDUMP_LOG(ERR,
-				"failed to get port id for device id=%s\n",
-				p->data.dis_v1.device);
-			return -EINVAL;
+
+		filter = rte_bpf_load(p->prm);
+		if (filter == NULL) {
+			PDUMP_LOG(ERR, "cannot load BPF filter: %s\n",
+				  rte_strerror(rte_errno));
+			return -rte_errno;
 		}
-		queue = p->data.dis_v1.queue;
-		ring = p->data.dis_v1.ring;
-		mp = p->data.dis_v1.mp;
+	}
+
+	flags = p->flags;
+	operation = p->op;
+	queue = p->queue;
+	ring = p->ring;
+	mp = p->mp;
+
+	ret = rte_eth_dev_get_port_by_name(p->device, &port);
+	if (ret < 0) {
+		PDUMP_LOG(ERR,
+			  "failed to get port id for device id=%s\n",
+			  p->device);
+		return -EINVAL;
 	}
 
 	/* validation if packet capture is for all queues */
@@ -296,8 +365,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
 	/* register RX callback */
 	if (flags & RTE_PDUMP_FLAG_RX) {
 		end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_rx_q : queue + 1;
-		ret = pdump_register_rx_callbacks(end_q, port, queue, ring, mp,
-							operation);
+		ret = pdump_register_rx_callbacks(p->ver, end_q, port, queue,
+						  ring, mp, filter,
+						  operation, p->snaplen);
 		if (ret < 0)
 			return ret;
 	}
@@ -305,8 +375,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
 	/* register TX callback */
 	if (flags & RTE_PDUMP_FLAG_TX) {
 		end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_tx_q : queue + 1;
-		ret = pdump_register_tx_callbacks(end_q, port, queue, ring, mp,
-							operation);
+		ret = pdump_register_tx_callbacks(p->ver, end_q, port, queue,
+						  ring, mp, filter,
+						  operation, p->snaplen);
 		if (ret < 0)
 			return ret;
 	}
@@ -332,7 +403,7 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
 		resp->err_value = set_pdump_rxtx_cbs(cli_req);
 	}
 
-	strlcpy(mp_resp.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
+	rte_strscpy(mp_resp.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
 	mp_resp.len_param = sizeof(*resp);
 	mp_resp.num_fds = 0;
 	if (rte_mp_reply(&mp_resp, peer) < 0) {
@@ -347,8 +418,18 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
 int
 rte_pdump_init(void)
 {
+	const struct rte_memzone *mz;
 	int ret;
 
+	mz = rte_memzone_reserve(MZ_RTE_PDUMP_STATS, sizeof(*pdump_stats),
+				 rte_socket_id(), 0);
+	if (mz == NULL) {
+		PDUMP_LOG(ERR, "cannot allocate pdump statistics\n");
+		rte_errno = ENOMEM;
+		return -1;
+	}
+	pdump_stats = mz->addr;
+
 	ret = rte_mp_action_register(PDUMP_MP, pdump_server);
 	if (ret && rte_errno != ENOTSUP)
 		return -1;
@@ -393,14 +474,21 @@ pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp)
 static int
 pdump_validate_flags(uint32_t flags)
 {
-	if (flags != RTE_PDUMP_FLAG_RX && flags != RTE_PDUMP_FLAG_TX &&
-		flags != RTE_PDUMP_FLAG_RXTX) {
+	if ((flags & RTE_PDUMP_FLAG_RXTX) == 0) {
 		PDUMP_LOG(ERR,
 			"invalid flags, should be either rx/tx/rxtx\n");
 		rte_errno = EINVAL;
 		return -1;
 	}
 
+	/* mask off the flags we know about */
+	if (flags & ~(RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG)) {
+		PDUMP_LOG(ERR,
+			  "unknown flags: %#x\n", flags);
+		rte_errno = ENOTSUP;
+		return -1;
+	}
+
 	return 0;
 }
 
@@ -427,12 +515,12 @@ pdump_validate_port(uint16_t port, char *name)
 }
 
 static int
-pdump_prepare_client_request(char *device, uint16_t queue,
-				uint32_t flags,
-				uint16_t operation,
-				struct rte_ring *ring,
-				struct rte_mempool *mp,
-				void *filter)
+pdump_prepare_client_request(const char *device, uint16_t queue,
+			     uint32_t flags, uint32_t snaplen,
+			     uint16_t operation,
+			     struct rte_ring *ring,
+			     struct rte_mempool *mp,
+			     const struct rte_bpf_prm *prm)
 {
 	int ret = -1;
 	struct rte_mp_msg mp_req, *mp_rep;
@@ -441,26 +529,22 @@ pdump_prepare_client_request(char *device, uint16_t queue,
 	struct pdump_request *req = (struct pdump_request *)mp_req.param;
 	struct pdump_response *resp;
 
-	req->ver = 1;
-	req->flags = flags;
+	memset(req, 0, sizeof(*req));
+
+	req->ver = (flags & RTE_PDUMP_FLAG_PCAPNG) ? V2 : V1;
+	req->flags = flags & RTE_PDUMP_FLAG_RXTX;
 	req->op = operation;
+	req->queue = queue;
+	rte_strscpy(req->device, device, sizeof(req->device));
+
 	if ((operation & ENABLE) != 0) {
-		strlcpy(req->data.en_v1.device, device,
-			sizeof(req->data.en_v1.device));
-		req->data.en_v1.queue = queue;
-		req->data.en_v1.ring = ring;
-		req->data.en_v1.mp = mp;
-		req->data.en_v1.filter = filter;
-	} else {
-		strlcpy(req->data.dis_v1.device, device,
-			sizeof(req->data.dis_v1.device));
-		req->data.dis_v1.queue = queue;
-		req->data.dis_v1.ring = NULL;
-		req->data.dis_v1.mp = NULL;
-		req->data.dis_v1.filter = NULL;
+		req->ring = ring;
+		req->mp = mp;
+		req->prm = prm;
+		req->snaplen = snaplen;
 	}
 
-	strlcpy(mp_req.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
+	rte_strscpy(mp_req.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
 	mp_req.len_param = sizeof(*req);
 	mp_req.num_fds = 0;
 	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0) {
@@ -478,11 +562,17 @@ pdump_prepare_client_request(char *device, uint16_t queue,
 	return ret;
 }
 
-int
-rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
-			struct rte_ring *ring,
-			struct rte_mempool *mp,
-			void *filter)
+/*
+ * There are two versions of this function, because although original API
+ * left place holder for future filter, it never checked the value.
+ * Therefore the API can't depend on application passing a non
+ * bogus value.
+ */
+static int
+pdump_enable(uint16_t port, uint16_t queue,
+	     uint32_t flags, uint32_t snaplen,
+	     struct rte_ring *ring, struct rte_mempool *mp,
+	     const struct rte_bpf_prm *prm)
 {
 	int ret;
 	char name[RTE_DEV_NAME_MAX_LEN];
@@ -497,20 +587,42 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(name, queue, flags,
-						ENABLE, ring, mp, filter);
+	if (snaplen == 0)
+		snaplen = UINT32_MAX;
 
-	return ret;
+	return pdump_prepare_client_request(name, queue, flags, snaplen,
+					    ENABLE, ring, mp, prm);
 }
 
 int
-rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
-				uint32_t flags,
-				struct rte_ring *ring,
-				struct rte_mempool *mp,
-				void *filter)
+rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
+		 struct rte_ring *ring,
+		 struct rte_mempool *mp,
+		 void *filter __rte_unused)
 {
-	int ret = 0;
+	return pdump_enable(port, queue, flags, 0,
+			    ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf(uint16_t port, uint16_t queue,
+		     uint32_t flags, uint32_t snaplen,
+		     struct rte_ring *ring,
+		     struct rte_mempool *mp,
+		     const struct rte_bpf_prm *prm)
+{
+	return pdump_enable(port, queue, flags, snaplen,
+			    ring, mp, prm);
+}
+
+static int
+pdump_enable_by_deviceid(const char *device_id, uint16_t queue,
+			 uint32_t flags, uint32_t snaplen,
+			 struct rte_ring *ring,
+			 struct rte_mempool *mp,
+			 const struct rte_bpf_prm *prm)
+{
+	int ret;
 
 	ret = pdump_validate_ring_mp(ring, mp);
 	if (ret < 0)
@@ -519,10 +631,30 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(device_id, queue, flags,
-						ENABLE, ring, mp, filter);
+	return pdump_prepare_client_request(device_id, queue, flags, snaplen,
+					    ENABLE, ring, mp, prm);
+}
 
-	return ret;
+int
+rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
+			     uint32_t flags,
+			     struct rte_ring *ring,
+			     struct rte_mempool *mp,
+			     void *filter __rte_unused)
+{
+	return pdump_enable_by_deviceid(device_id, queue, flags, 0,
+					ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+				 uint32_t flags, uint32_t snaplen,
+				 struct rte_ring *ring,
+				 struct rte_mempool *mp,
+				 const struct rte_bpf_prm *prm)
+{
+	return pdump_enable_by_deviceid(device_id, queue, flags, snaplen,
+					ring, mp, prm);
 }
 
 int
@@ -538,8 +670,8 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags)
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(name, queue, flags,
-						DISABLE, NULL, NULL, NULL);
+	ret = pdump_prepare_client_request(name, queue, flags, 0,
+					   DISABLE, NULL, NULL, NULL);
 
 	return ret;
 }
@@ -554,8 +686,68 @@ rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(device_id, queue, flags,
-						DISABLE, NULL, NULL, NULL);
+	ret = pdump_prepare_client_request(device_id, queue, flags, 0,
+					   DISABLE, NULL, NULL, NULL);
 
 	return ret;
 }
+
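+/*
+ * Sum the per-queue counters for one port into *total.
+ * The statistics structure is read as a flat array of uint64_t
+ * counters, using one relaxed atomic load per counter.
+ */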
+static void
+pdump_sum_stats(uint16_t port, uint16_t nq,
+		struct rte_pdump_stats stats[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
+		struct rte_pdump_stats *total)
+{
+	uint64_t *sum = (uint64_t *)total;
+	unsigned int i;
+	uint64_t val;
+	uint16_t qid;
+
+	for (qid = 0; qid < nq; qid++) {
+		const uint64_t *perq = (const uint64_t *)&stats[port][qid];
+
+		for (i = 0; i < sizeof(*total) / sizeof(uint64_t); i++) {
+			val = __atomic_load_n(&perq[i], __ATOMIC_RELAXED);
+			sum[i] += val;
+		}
+	}
+}
+
+int
+rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats)
+{
+	struct rte_eth_dev_info dev_info;
+	const struct rte_memzone *mz;
+	int ret;
+
+	memset(stats, 0, sizeof(*stats));
+	ret = rte_eth_dev_info_get(port, &dev_info);
+	if (ret != 0) {
+		PDUMP_LOG(ERR,
+			  "Error during getting device (port %u) info: %s\n",
+			  port, strerror(-ret));
+		return ret;
+	}
+
+	if (pdump_stats == NULL) {
+		if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+			/* rte_pdump_init was not called */
+			PDUMP_LOG(ERR, "pdump stats not initialized\n");
+			rte_errno = EINVAL;
+			return -1;
+		}
+
+		/* secondary process looks up the memzone */
+		mz = rte_memzone_lookup(MZ_RTE_PDUMP_STATS);
+		if (mz == NULL) {
+			/* rte_pdump_init was not called in primary process?? */
+			PDUMP_LOG(ERR, "can not find pdump stats\n");
+			rte_errno = EINVAL;
+			return -1;
+		}
+		pdump_stats = mz->addr;
+	}
+
+	pdump_sum_stats(port, dev_info.nb_rx_queues, pdump_stats->rx, stats);
+	pdump_sum_stats(port, dev_info.nb_tx_queues, pdump_stats->tx, stats);
+	return 0;
+}
diff --git a/lib/pdump/rte_pdump.h b/lib/pdump/rte_pdump.h
index 6b00fc17aeb2..6efa0274f2ce 100644
--- a/lib/pdump/rte_pdump.h
+++ b/lib/pdump/rte_pdump.h
@@ -15,6 +15,7 @@
 #include <stdint.h>
 #include <rte_mempool.h>
 #include <rte_ring.h>
+#include <rte_bpf.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -26,7 +27,9 @@ enum {
 	RTE_PDUMP_FLAG_RX = 1,  /* receive direction */
 	RTE_PDUMP_FLAG_TX = 2,  /* transmit direction */
 	/* both receive and transmit directions */
-	RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX)
+	RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX),
+
+	RTE_PDUMP_FLAG_PCAPNG = 4, /* format for pcapng */
 };
 
 /**
@@ -68,7 +71,7 @@ rte_pdump_uninit(void);
  * @param mp
  *  mempool on to which original packets will be mirrored or duplicated.
  * @param filter
- *  place holder for packet filtering.
+ *  Unused; should be NULL.
  *
  * @return
  *    0 on success, -1 on error, rte_errno is set accordingly.
@@ -80,6 +83,41 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
 		struct rte_mempool *mp,
 		void *filter);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given port and queue with filtering.
+ *
+ * @param port_id
+ *  The Ethernet port on which packet capturing should be enabled.
+ * @param queue
+ *  The queue on the Ethernet port which packet capturing
+ *  should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ *  queues of a given port.
+ * @param flags
+ *  Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ *  The upper limit on bytes to copy.
+ *  Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ *  The ring on which captured packets will be enqueued for user.
+ * @param mp
+ *  The mempool on to which original packets will be mirrored or duplicated.
+ * @param prm
+ *  BPF program to run to filter packets (can be NULL)
+ *
+ * @return
+ *    0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf(uint16_t port_id, uint16_t queue,
+		     uint32_t flags, uint32_t snaplen,
+		     struct rte_ring *ring,
+		     struct rte_mempool *mp,
+		     const struct rte_bpf_prm *prm);
+
 /**
  * Disables packet capturing on given port and queue.
  *
@@ -118,7 +156,7 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags);
  * @param mp
  *  mempool on to which original packets will be mirrored or duplicated.
  * @param filter
- *  place holder for packet filtering.
+ *  Unused; should be NULL.
  *
  * @return
  *    0 on success, -1 on error, rte_errno is set accordingly.
@@ -131,6 +169,43 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
 				struct rte_mempool *mp,
 				void *filter);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given device id and queue with filtering.
+ * device_id can be name or pci address of device.
+ *
+ * @param device_id
+ *  device id on which packet capturing should be enabled.
+ * @param queue
+ *  The queue on the Ethernet port which packet capturing
+ *  should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ *  queues of a given port.
+ * @param flags
+ *  Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ *  The upper limit on bytes to copy.
+ *  Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ *  The ring on which captured packets will be enqueued for user.
+ * @param mp
+ *  The mempool on to which original packets will be mirrored or duplicated.
+ * @param filter
+ *  BPF program to run to filter packets (can be NULL)
+ *
+ * @return
+ *    0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+				 uint32_t flags, uint32_t snaplen,
+				 struct rte_ring *ring,
+				 struct rte_mempool *mp,
+				 const struct rte_bpf_prm *filter);
+
+
 /**
  * Disables packet capturing on given device_id and queue.
  * device_id can be name or pci address of device.
@@ -153,6 +228,38 @@ int
 rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
 				uint32_t flags);
 
+
+/**
+ * A structure used to retrieve statistics from packet capture.
+ * The statistics are sum of both receive and transmit queues.
+ */
+struct rte_pdump_stats {
+	uint64_t accepted; /**< Number of packets accepted by filter. */
+	uint64_t filtered; /**< Number of packets rejected by filter. */
+	uint64_t nombuf;   /**< Number of mbuf allocation failures. */
+	uint64_t ringfull; /**< Number of missed packets due to ring full. */
+
+	uint64_t reserved[4]; /**< Reserved and pad to cache line */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Retrieve the packet capture statistics for a queue.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param stats
+ *   A pointer to structure of type *rte_pdump_stats* to be filled in.
+ * @return
+ *   Zero if successful. -1 on error and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_pdump_stats(uint16_t port_id, struct rte_pdump_stats *stats);
+
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/pdump/version.map b/lib/pdump/version.map
index f0a9d12c9a9e..ce5502d9cdf4 100644
--- a/lib/pdump/version.map
+++ b/lib/pdump/version.map
@@ -10,3 +10,11 @@ DPDK_22 {
 
 	local: *;
 };
+
+EXPERIMENTAL {
+	global:
+
+	rte_pdump_enable_bpf;
+	rte_pdump_enable_bpf_by_deviceid;
+	rte_pdump_stats;
+};
-- 
2.30.2


^ permalink raw reply	[relevance 1%]

* Re: [dpdk-dev] [PATCH v5 11/14] eventdev: move timer adapters memory to hugepage
  2021-10-18 23:36  4%     ` [dpdk-dev] [PATCH v5 11/14] eventdev: move timer adapters memory to hugepage pbhagavatula
@ 2021-10-20 20:24  0%       ` Carrillo, Erik G
  0 siblings, 0 replies; 200+ results
From: Carrillo, Erik G @ 2021-10-20 20:24 UTC (permalink / raw)
  To: pbhagavatula, jerinj; +Cc: dev

Hi Pavan and Jerin,

> -----Original Message-----
> From: pbhagavatula@marvell.com <pbhagavatula@marvell.com>
> Sent: Monday, October 18, 2021 6:36 PM
> To: jerinj@marvell.com; Carrillo, Erik G <erik.g.carrillo@intel.com>
> Cc: dev@dpdk.org; Pavan Nikhilesh <pbhagavatula@marvell.com>
> Subject: [dpdk-dev] [PATCH v5 11/14] eventdev: move timer adapters
> memory to hugepage
> 
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> 
> Move memory used by timer adapters to hugepage.
> Allocate memory on the first adapter create or lookup to address both
> primary and secondary process usecases.
> This will prevent TLB misses if any and aligns to memory structure of other
> subsystems.
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
>  doc/guides/rel_notes/release_21_11.rst |  2 ++
> lib/eventdev/rte_event_timer_adapter.c | 36
> ++++++++++++++++++++++++--
>  2 files changed, 36 insertions(+), 2 deletions(-)
> 
> diff --git a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> index 6442c79977..9694b32002 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -226,6 +226,8 @@ API Changes
>    the crypto/security operation. This field will be used to communicate
>    events such as soft expiry with IPsec in lookaside mode.
> 
> +* eventdev: Move memory used by timer adapters to hugepage. This will
> +prevent
> +  TLB misses if any and aligns to memory structure of other subsystems.
> 
>  ABI Changes
>  -----------
> diff --git a/lib/eventdev/rte_event_timer_adapter.c
> b/lib/eventdev/rte_event_timer_adapter.c
> index ae55407042..894f532ef0 100644
> --- a/lib/eventdev/rte_event_timer_adapter.c
> +++ b/lib/eventdev/rte_event_timer_adapter.c
> @@ -33,7 +33,7 @@ RTE_LOG_REGISTER_SUFFIX(evtim_logtype,
> adapter.timer, NOTICE);
> RTE_LOG_REGISTER_SUFFIX(evtim_buffer_logtype, adapter.timer, NOTICE);
> RTE_LOG_REGISTER_SUFFIX(evtim_svc_logtype, adapter.timer.svc,
> NOTICE);
> 
> -static struct rte_event_timer_adapter
> adapters[RTE_EVENT_TIMER_ADAPTER_NUM_MAX];
> +static struct rte_event_timer_adapter *adapters;
> 
>  static const struct event_timer_adapter_ops swtim_ops;
> 
> @@ -138,6 +138,17 @@ rte_event_timer_adapter_create_ext(
>  	int n, ret;
>  	struct rte_eventdev *dev;
> 
> +	if (adapters == NULL) {
> +		adapters = rte_zmalloc("Eventdev",
> +				       sizeof(struct rte_event_timer_adapter) *
> +
> RTE_EVENT_TIMER_ADAPTER_NUM_MAX,
> +				       RTE_CACHE_LINE_SIZE);
> +		if (adapters == NULL) {
> +			rte_errno = ENOMEM;
> +			return NULL;
> +		}
> +	}
> +
>  	if (conf == NULL) {
>  		rte_errno = EINVAL;
>  		return NULL;
> @@ -312,6 +323,17 @@ rte_event_timer_adapter_lookup(uint16_t
> adapter_id)
>  	int ret;
>  	struct rte_eventdev *dev;
> 
> +	if (adapters == NULL) {
> +		adapters = rte_zmalloc("Eventdev",
> +				       sizeof(struct rte_event_timer_adapter) *
> +
> RTE_EVENT_TIMER_ADAPTER_NUM_MAX,
> +				       RTE_CACHE_LINE_SIZE);
> +		if (adapters == NULL) {
> +			rte_errno = ENOMEM;
> +			return NULL;
> +		}
> +	}
> +
>  	if (adapters[adapter_id].allocated)
>  		return &adapters[adapter_id]; /* Adapter is already loaded
> */
> 
> @@ -358,7 +380,7 @@ rte_event_timer_adapter_lookup(uint16_t
> adapter_id)  int  rte_event_timer_adapter_free(struct
> rte_event_timer_adapter *adapter)  {
> -	int ret;
> +	int i, ret;
> 
>  	ADAPTER_VALID_OR_ERR_RET(adapter, -EINVAL);
>  	FUNC_PTR_OR_ERR_RET(adapter->ops->uninit, -EINVAL); @@ -
> 382,6 +404,16 @@ rte_event_timer_adapter_free(struct
> rte_event_timer_adapter *adapter)
>  	adapter->data = NULL;
>  	adapter->allocated = 0;
> 
> +	ret = 0;
> +	for (i = 0; i < RTE_EVENT_TIMER_ADAPTER_NUM_MAX; i++)
> +		if (adapters[i].allocated)
> +			ret = adapter[i].allocated;
> +

I found a typo here, but it looks like this series has already been accepted, so I submitted the following patch for the issue:

http://patchwork.dpdk.org/project/dpdk/patch/20211020202021.1205135-1-erik.g.carrillo@intel.com/

Besides that, this patch and the others I was copied on look good to me.

Thanks,
Erik

> +	if (!ret) {
> +		rte_free(adapters);
> +		adapters = NULL;
> +	}
> +
>  	rte_eventdev_trace_timer_adapter_free(adapter);
>  	return 0;
>  }
> --
> 2.17.1


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] lpm: fix buffer overflow
  @ 2021-10-20 19:55  3% ` David Marchand
  2021-10-21 17:15  0%   ` Medvedkin, Vladimir
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-10-20 19:55 UTC (permalink / raw)
  To: Vladimir Medvedkin; +Cc: dev, Bruce Richardson, alex, dpdk stable

Hello Vladimir,

On Fri, Oct 8, 2021 at 11:29 PM Vladimir Medvedkin
<vladimir.medvedkin@intel.com> wrote:
>
> This patch fixes buffer overflow reported by ASAN,
> please reference https://bugs.dpdk.org/show_bug.cgi?id=819
>
> The rte_lpm6 keeps routing information for control plane purpose
> inside the rte_hash table which uses rte_jhash() as a hash function.
> From the rte_jhash() documentation: If input key is not aligned to
> four byte boundaries or a multiple of four bytes in length,
> the memory region just after may be read (but not used in the
> computation).
> rte_lpm6 uses 17 bytes keys consisting of IPv6 address (16 bytes) +
> depth (1 byte).
>
> This patch increases the size of the depth field up to uint32_t
> and sets the alignment to 4 bytes.
>
> Bugzilla ID: 819
> Fixes: 86b3b21952a8 ("lpm6: store rules in hash table")
> Cc: alex@therouter.net
> Cc: stable@dpdk.org

This change should be internal and should not break the ABI, but are we
sure we want to backport it?


>
> Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> ---
>  lib/lpm/rte_lpm6.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/lib/lpm/rte_lpm6.c b/lib/lpm/rte_lpm6.c
> index 37baabb..d5e0918 100644
> --- a/lib/lpm/rte_lpm6.c
> +++ b/lib/lpm/rte_lpm6.c
> @@ -80,8 +80,8 @@ struct rte_lpm6_rule {
>  /** Rules tbl entry key. */
>  struct rte_lpm6_rule_key {
>         uint8_t ip[RTE_LPM6_IPV6_ADDR_SIZE]; /**< Rule IP address. */
> -       uint8_t depth; /**< Rule depth. */
> -};
> +       uint32_t depth; /**< Rule depth. */
> +} __rte_aligned(sizeof(uint32_t));

I would recommend doing the same as for the hash tests: keep growing
depth to 32 bits, but with no enforcement of alignment, and add a build
check that the structure size is a multiple of sizeof(uint32_t).
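
Something like this (an untested sketch; the check could live in an
init path such as rte_lpm6_create()):

	struct rte_lpm6_rule_key {
		uint8_t ip[RTE_LPM6_IPV6_ADDR_SIZE]; /**< Rule IP address. */
		uint32_t depth; /**< Rule depth. */
	};

	RTE_BUILD_BUG_ON(sizeof(struct rte_lpm6_rule_key) % sizeof(uint32_t) != 0);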


>
>  /* Header of tbl8 */
>  struct rte_lpm_tbl8_hdr {
> --
> 2.7.4
>


-- 
David Marchand


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v3 6/8] cryptodev: rework session framework
  2021-10-18 21:34  1%   ` [dpdk-dev] [PATCH v3 6/8] cryptodev: " Akhil Goyal
@ 2021-10-20 19:27  0%     ` Ananyev, Konstantin
  2021-10-21  6:53  0%       ` Akhil Goyal
  0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-10-20 19:27 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj, De Lara Guarch,
	Pablo, Trahe, Fiona, Doherty, Declan, matan, g.singh, Zhang,
	Roy Fan, jianjay.zhou, asomalap, ruifeng.wang, Nicolau, Radu,
	ajit.khaparde, rnagadheeraj, adwivedi, Power, Ciara, Wang,
	Haiyue, jiawenwu, jianwang


Hi Akhil,


> As per current design, rte_cryptodev_sym_session_create() and
> rte_cryptodev_sym_session_init() use separate mempool objects
> for a single session.
> And structure rte_cryptodev_sym_session is not directly used
> by the application, it may cause ABI breakage if the structure
> is modified in future.
> 
> To address these two issues, the rte_cryptodev_sym_session_create
> will take one mempool object for both the session and session
> private data. The API rte_cryptodev_sym_session_init will now not
> take mempool object.
> rte_cryptodev_sym_session_create will now return an opaque session
> pointer which will be used by the app in rte_cryptodev_sym_session_init
> and other APIs.
> 
> With this change, rte_cryptodev_sym_session_init will send
> pointer to session private data of corresponding driver to the PMD
> based on the driver_id for filling the PMD data.
> 
> In data path, opaque session pointer is attached to rte_crypto_op
> and the PMD can call an internal library API to get the session
> private data pointer based on the driver id.
> 
> Note: currently nb_drivers is updated in RTE_INIT, which results in
> increased memory requirements for a session.
> Users can compile out drivers that are not in use to reduce the
> memory consumption of a session.
> 
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
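
For context, a sketch of the reworked calls as I read the description
and the backtrace below (one mempool at create time, no mempool argument
for init):

	/* Create allocates session + private data from a single mempool
	 * and returns an opaque pointer... */
	void *sess = rte_cryptodev_sym_session_create(mp);
	/* ...which init then fills in for the given device's driver. */
	ret = rte_cryptodev_sym_session_init(dev_id, sess, xforms);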

With that patch, the ipsec-secgw functional tests crash for AES_GCM test cases.
To be more specific:
examples/ipsec-secgw/test/run_test.sh -4 tun_aesgcm

[24126592.561071] traps: dpdk-ipsec-secg[3254860] general protection fault ip:7f3ac2397027 sp:7ffeaade8848 error:0 in libIPSec_MB.so.1.0.0[7f3ac238f000+2a20000]

Looking a bit deeper, it fails at:
#0  0x00007ff9274f4027 in aes_keyexp_128_enc_avx512 ()
   from /lib/libIPSec_MB.so.1
#1  0x00007ff929f0ac97 in aes_gcm_pre_128_avx_gen4 ()
   from /lib/libIPSec_MB.so.1
#2  0x0000561757073753 in aesni_gcm_session_configure (mb_mgr=0x56175c5fe400,
    session=0x17e3b72d8, xform=0x17e05d7c0)
    at ../drivers/crypto/ipsec_mb/pmd_aesni_gcm.c:132
#3  0x00005617570592af in ipsec_mb_sym_session_configure (
    dev=0x56175be0c940 <rte_crypto_devices>, xform=0x17e05d7c0,
    sess=0x17e3b72d8) at ../drivers/crypto/ipsec_mb/ipsec_mb_ops.c:330
#4  0x0000561753b4d6ae in rte_cryptodev_sym_session_init (dev_id=0 '\000',
    sess_opaque=0x17e3b4940, xforms=0x17e05d7c0)
    at ../lib/cryptodev/rte_cryptodev.c:1736
#5  0x0000561752ef99b7 in create_lookaside_session (
    ipsec_ctx=0x56175aa6a210 <lcore_conf+1105232>, sa=0x17e05d140,
    ips=0x17e05d140) at ../examples/ipsec-secgw/ipsec.c:145
#6  0x0000561752f0cf98 in fill_ipsec_session (ss=0x17e05d140,
    ctx=0x56175aa6a210 <lcore_conf+1105232>, sa=0x17e05d140)
    at ../examples/ipsec-secgw/ipsec_process.c:89
#7  0x0000561752f0d7dd in ipsec_process (
    ctx=0x56175aa6a210 <lcore_conf+1105232>, trf=0x7ffd192326a0)
    at ../examples/ipsec-secgw/ipsec_process.c:300
#8  0x0000561752f21027 in process_pkts_outbound (
    ipsec_ctx=0x56175aa6a210 <lcore_conf+1105232>, traffic=0x7ffd192326a0)
    at ../examples/ipsec-secgw/ipsec-secgw.c:839
#9  0x0000561752f21b2e in process_pkts (
    qconf=0x56175aa57340 <lcore_conf+1027712>, pkts=0x7ffd19233c20,
    nb_pkts=1 '\001', portid=1) at ../examples/ipsec-secgw/ipsec-secgw.c:1072
#10 0x0000561752f224db in ipsec_poll_mode_worker ()
    at ../examples/ipsec-secgw/ipsec-secgw.c:1262
#11 0x0000561752f38adc in ipsec_launch_one_lcore (args=0x56175c549700)
    at ../examples/ipsec-secgw/ipsec_worker.c:654
#12 0x0000561753cbc523 in rte_eal_mp_remote_launch (
    f=0x561752f38ab5 <ipsec_launch_one_lcore>, arg=0x56175c549700,
    call_main=CALL_MAIN) at ../lib/eal/common/eal_common_launch.c:64
#13 0x0000561752f265ed in main (argc=12, argv=0x7ffd19234168)
    at ../examples/ipsec-secgw/ipsec-secgw.c:2978
(gdb) frame 2
#2  0x0000561757073753 in aesni_gcm_session_configure (mb_mgr=0x56175c5fe400,
    session=0x17e3b72d8, xform=0x17e05d7c0)
    at ../drivers/crypto/ipsec_mb/pmd_aesni_gcm.c:132
132                     mb_mgr->gcm128_pre(key, &sess->gdata_key);

Because of an unexpected unaligned memory access:
(gdb) disas
Dump of assembler code for function aes_keyexp_128_enc_avx512:
   0x00007ff9274f400b <+0>:     endbr64
   0x00007ff9274f400f <+4>:     cmp    $0x0,%rdi
   0x00007ff9274f4013 <+8>:     je     0x7ff9274f41b4 <aes_keyexp_128_enc_avx512+425>
   0x00007ff9274f4019 <+14>:    cmp    $0x0,%rsi
   0x00007ff9274f401d <+18>:    je     0x7ff9274f41b4 <aes_keyexp_128_enc_avx512+425>
   0x00007ff9274f4023 <+24>:    vmovdqu (%rdi),%xmm1
=> 0x00007ff9274f4027 <+28>:    vmovdqa %xmm1,(%rsi)

(gdb) print/x $rsi
$12 = 0x17e3b72e8

And this is caused by the AES_GCM session private data no longer being
16-byte aligned:
(gdb) print ((struct aesni_gcm_session *)sess->sess_data[index].data)
$29 = (struct aesni_gcm_session *) 0x17e3b72d8

print &((struct aesni_gcm_session *)sess->sess_data[index].data)->gdata_key
$31 = (struct gcm_key_data *) 0x17e3b72e8

As I understand it, the reason is that we changed how
sess_data[index].data is populated. Now it is just:
sess->sess_data[index].data = (void *)((uint8_t *)sess +
                                rte_cryptodev_sym_get_header_session_size() +
                                (index * sess->priv_sz));

So, as far as I can see, there is no guarantee that the PMD's private
session data will be 16-byte aligned as expected.
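
For illustration only (not a proposed fix), one way the offset
computation could preserve a 16-byte alignment, assuming RTE_ALIGN_CEIL
is applied to both the header size and the per-driver stride:

	/* Hypothetical sketch: round both the session header size and each
	 * driver's private-data stride up to 16 bytes so that every
	 * sess_data[index].data pointer stays 16-byte aligned. */
	uintptr_t base = (uintptr_t)sess +
		RTE_ALIGN_CEIL(rte_cryptodev_sym_get_header_session_size(), 16);

	sess->sess_data[index].data =
		(void *)(base + index * RTE_ALIGN_CEIL(sess->priv_sz, 16));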





^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v5] ethdev: add namespace
  2021-10-18 15:43  1% ` [dpdk-dev] [PATCH v4] " Ferruh Yigit
@ 2021-10-20 19:23  1%   ` Ferruh Yigit
  2021-10-22  2:02  1%     ` [dpdk-dev] [PATCH v6] " Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-10-20 19:23 UTC (permalink / raw)
  To: Maryam Tahhan, Reshma Pattan, Jerin Jacob, Wisam Jaddo,
	Cristian Dumitrescu, Xiaoyun Li, Thomas Monjalon,
	Andrew Rybchenko, Jay Jayatheerthan, Chas Williams,
	Min Hu (Connor),
	Pavan Nikhilesh, Shijith Thotton, Ajit Khaparde, Somnath Kotur,
	John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, Haiyue Wang,
	Beilei Xing, Matan Azrad, Viacheslav Ovsiienko, Keith Wiles,
	Jiayu Hu, Olivier Matz, Ori Kam, Akhil Goyal, Declan Doherty,
	Ray Kinsella, Radu Nicolau, Hemant Agrawal, Sachin Saxena,
	Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	John W. Linville, Ciara Loftus, Shepard Siegel, Ed Czeck,
	John Miller, Igor Russkikh, Steven Webster, Matt Peters,
	Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh,
	Bruce Richardson, Konstantin Ananyev, Ruifeng Wang,
	Rahul Lakkireddy, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
	Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, Gaetan Rivet,
	Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou,
	Jingjing Wu, Qiming Yang, Andrew Boyer, Rosen Xu,
	Srisivasubramanian Srinivasan, Jakub Grajciar, Zyta Szpak,
	Liron Himi, Stephen Hemminger, Long Li, Martin Spinler,
	Heinrich Kuhn, Jiawen Wu, Tetsuya Mukawa, Harman Kalra,
	Anoob Joseph, Nalla Pradeep, Radha Mohan Chintakuntla,
	Veerasenareddy Burru, Devendra Singh Rawat, Jasvinder Singh,
	Maciej Czekaj, Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang,
	Nicolas Chautru, David Hunt, Harry van Haaren, Bernard Iremonger,
	Anatoly Burakov, John McNamara, Kirill Rybalchenko, Byron Marohn,
	Yipeng Wang
  Cc: Ferruh Yigit, dev, Tyler Retzlaff, David Marchand

Add 'RTE_ETH' namespace to all enums & macros in a backward compatible
way. The macros for backward compatibility can be removed in the next LTS.
Also updated some struct names to have 'rte_eth' prefix.

All internal components switched to using new names.

Syntax fixed on lines that this patch touches.
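
For illustration, one of the renames from the diff below, using names
taken verbatim from app/test-eventdev (port_conf is any struct
rte_eth_conf):

	/* Old names, still accepted via compatibility macros until next LTS: */
	port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
	port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;

	/* New namespaced names introduced by this patch: */
	port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
	port_conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;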

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
Cc: David Marchand <david.marchand@redhat.com>
Cc: Thomas Monjalon <thomas@monjalon.net>

v2:
* Updated internal components
* Removed deprecation notice

v3:
* Updated missing macros / structs that David highlighted
* Added release notes update

v4:
* rebased on latest next-net
* depends on https://patches.dpdk.org/user/todo/dpdk/?series=19744
* Not able to complete the scripts to update user code, although Aman
  shared some:
  https://patches.dpdk.org/project/dpdk/patch/20211008102949.70716-1-aman.deep.singh@intel.com/
  Sending a new version as a possible option to get this patch into -rc1,
  with the scripts to follow later, before the release.

v5:
* rebased on latest next-net
---
 app/proc-info/main.c                          |    8 +-
 app/test-eventdev/test_perf_common.c          |    4 +-
 app/test-eventdev/test_pipeline_common.c      |   10 +-
 app/test-flow-perf/config.h                   |    2 +-
 app/test-pipeline/init.c                      |    8 +-
 app/test-pmd/cmdline.c                        |  286 ++---
 app/test-pmd/config.c                         |  200 ++--
 app/test-pmd/csumonly.c                       |   28 +-
 app/test-pmd/flowgen.c                        |    6 +-
 app/test-pmd/macfwd.c                         |    6 +-
 app/test-pmd/macswap_common.h                 |    6 +-
 app/test-pmd/parameters.c                     |   54 +-
 app/test-pmd/testpmd.c                        |   52 +-
 app/test-pmd/testpmd.h                        |    2 +-
 app/test-pmd/txonly.c                         |    6 +-
 app/test/test_ethdev_link.c                   |   68 +-
 app/test/test_event_eth_rx_adapter.c          |    4 +-
 app/test/test_kni.c                           |    2 +-
 app/test/test_link_bonding.c                  |    4 +-
 app/test/test_link_bonding_mode4.c            |    4 +-
 app/test/test_link_bonding_rssconf.c          |   28 +-
 app/test/test_pmd_perf.c                      |   12 +-
 app/test/virtual_pmd.c                        |   10 +-
 doc/guides/eventdevs/cnxk.rst                 |    2 +-
 doc/guides/eventdevs/octeontx2.rst            |    2 +-
 doc/guides/nics/af_packet.rst                 |    2 +-
 doc/guides/nics/bnxt.rst                      |   24 +-
 doc/guides/nics/enic.rst                      |    2 +-
 doc/guides/nics/features.rst                  |  114 +-
 doc/guides/nics/fm10k.rst                     |    6 +-
 doc/guides/nics/intel_vf.rst                  |   10 +-
 doc/guides/nics/ixgbe.rst                     |   12 +-
 doc/guides/nics/mlx5.rst                      |    4 +-
 doc/guides/nics/tap.rst                       |    2 +-
 .../generic_segmentation_offload_lib.rst      |    8 +-
 doc/guides/prog_guide/mbuf_lib.rst            |   18 +-
 doc/guides/prog_guide/poll_mode_drv.rst       |    8 +-
 doc/guides/prog_guide/rte_flow.rst            |   34 +-
 doc/guides/prog_guide/rte_security.rst        |    2 +-
 doc/guides/rel_notes/deprecation.rst          |   10 +-
 doc/guides/rel_notes/release_21_11.rst        |    3 +
 doc/guides/sample_app_ug/ipsec_secgw.rst      |    4 +-
 doc/guides/testpmd_app_ug/run_app.rst         |    2 +-
 drivers/bus/dpaa/include/process.h            |   16 +-
 drivers/common/cnxk/roc_npc.h                 |    2 +-
 drivers/net/af_packet/rte_eth_af_packet.c     |   20 +-
 drivers/net/af_xdp/rte_eth_af_xdp.c           |   12 +-
 drivers/net/ark/ark_ethdev.c                  |   16 +-
 drivers/net/atlantic/atl_ethdev.c             |   88 +-
 drivers/net/atlantic/atl_ethdev.h             |   18 +-
 drivers/net/atlantic/atl_rxtx.c               |    6 +-
 drivers/net/avp/avp_ethdev.c                  |   26 +-
 drivers/net/axgbe/axgbe_dev.c                 |    6 +-
 drivers/net/axgbe/axgbe_ethdev.c              |  104 +-
 drivers/net/axgbe/axgbe_ethdev.h              |   12 +-
 drivers/net/axgbe/axgbe_mdio.c                |    2 +-
 drivers/net/axgbe/axgbe_rxtx.c                |    6 +-
 drivers/net/bnx2x/bnx2x_ethdev.c              |   12 +-
 drivers/net/bnxt/bnxt.h                       |   62 +-
 drivers/net/bnxt/bnxt_ethdev.c                |  172 +--
 drivers/net/bnxt/bnxt_flow.c                  |    6 +-
 drivers/net/bnxt/bnxt_hwrm.c                  |  112 +-
 drivers/net/bnxt/bnxt_reps.c                  |    2 +-
 drivers/net/bnxt/bnxt_ring.c                  |    4 +-
 drivers/net/bnxt/bnxt_rxq.c                   |   28 +-
 drivers/net/bnxt/bnxt_rxr.c                   |    4 +-
 drivers/net/bnxt/bnxt_rxtx_vec_avx2.c         |    2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_common.h       |    2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_neon.c         |    2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_sse.c          |    2 +-
 drivers/net/bnxt/bnxt_txr.c                   |    4 +-
 drivers/net/bnxt/bnxt_vnic.c                  |   30 +-
 drivers/net/bnxt/rte_pmd_bnxt.c               |    8 +-
 drivers/net/bonding/eth_bond_private.h        |    4 +-
 drivers/net/bonding/rte_eth_bond_8023ad.c     |   16 +-
 drivers/net/bonding/rte_eth_bond_api.c        |    6 +-
 drivers/net/bonding/rte_eth_bond_pmd.c        |   50 +-
 drivers/net/cnxk/cn10k_ethdev.c               |   42 +-
 drivers/net/cnxk/cn10k_rte_flow.c             |    2 +-
 drivers/net/cnxk/cn10k_rx.c                   |    4 +-
 drivers/net/cnxk/cn10k_tx.c                   |    4 +-
 drivers/net/cnxk/cn9k_ethdev.c                |   60 +-
 drivers/net/cnxk/cn9k_rx.c                    |    4 +-
 drivers/net/cnxk/cn9k_tx.c                    |    4 +-
 drivers/net/cnxk/cnxk_ethdev.c                |  112 +-
 drivers/net/cnxk/cnxk_ethdev.h                |   49 +-
 drivers/net/cnxk/cnxk_ethdev_devargs.c        |    6 +-
 drivers/net/cnxk/cnxk_ethdev_ops.c            |  106 +-
 drivers/net/cnxk/cnxk_link.c                  |   14 +-
 drivers/net/cnxk/cnxk_ptp.c                   |    4 +-
 drivers/net/cnxk/cnxk_rte_flow.c              |    2 +-
 drivers/net/cxgbe/cxgbe.h                     |   46 +-
 drivers/net/cxgbe/cxgbe_ethdev.c              |   42 +-
 drivers/net/cxgbe/cxgbe_main.c                |   12 +-
 drivers/net/dpaa/dpaa_ethdev.c                |  180 +--
 drivers/net/dpaa/dpaa_ethdev.h                |   10 +-
 drivers/net/dpaa/dpaa_flow.c                  |   32 +-
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c        |   47 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  138 +--
 drivers/net/dpaa2/dpaa2_ethdev.h              |   22 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |    8 +-
 drivers/net/e1000/e1000_ethdev.h              |   18 +-
 drivers/net/e1000/em_ethdev.c                 |   64 +-
 drivers/net/e1000/em_rxtx.c                   |   38 +-
 drivers/net/e1000/igb_ethdev.c                |  158 +--
 drivers/net/e1000/igb_pf.c                    |    2 +-
 drivers/net/e1000/igb_rxtx.c                  |  116 +-
 drivers/net/ena/ena_ethdev.c                  |   70 +-
 drivers/net/ena/ena_ethdev.h                  |    4 +-
 drivers/net/ena/ena_rss.c                     |   74 +-
 drivers/net/enetc/enetc_ethdev.c              |   30 +-
 drivers/net/enic/enic.h                       |    2 +-
 drivers/net/enic/enic_ethdev.c                |   88 +-
 drivers/net/enic/enic_main.c                  |   40 +-
 drivers/net/enic/enic_res.c                   |   50 +-
 drivers/net/failsafe/failsafe.c               |    8 +-
 drivers/net/failsafe/failsafe_intr.c          |    4 +-
 drivers/net/failsafe/failsafe_ops.c           |   78 +-
 drivers/net/fm10k/fm10k.h                     |    4 +-
 drivers/net/fm10k/fm10k_ethdev.c              |  146 +--
 drivers/net/fm10k/fm10k_rxtx_vec.c            |    6 +-
 drivers/net/hinic/base/hinic_pmd_hwdev.c      |   22 +-
 drivers/net/hinic/hinic_pmd_ethdev.c          |  136 +--
 drivers/net/hinic/hinic_pmd_rx.c              |   36 +-
 drivers/net/hinic/hinic_pmd_rx.h              |   22 +-
 drivers/net/hns3/hns3_dcb.c                   |   14 +-
 drivers/net/hns3/hns3_ethdev.c                |  352 +++---
 drivers/net/hns3/hns3_ethdev.h                |   12 +-
 drivers/net/hns3/hns3_ethdev_vf.c             |  100 +-
 drivers/net/hns3/hns3_flow.c                  |    6 +-
 drivers/net/hns3/hns3_ptp.c                   |    2 +-
 drivers/net/hns3/hns3_rss.c                   |  108 +-
 drivers/net/hns3/hns3_rss.h                   |   28 +-
 drivers/net/hns3/hns3_rxtx.c                  |   30 +-
 drivers/net/hns3/hns3_rxtx.h                  |    2 +-
 drivers/net/hns3/hns3_rxtx_vec.c              |   10 +-
 drivers/net/i40e/i40e_ethdev.c                |  272 ++---
 drivers/net/i40e/i40e_ethdev.h                |   24 +-
 drivers/net/i40e/i40e_flow.c                  |   32 +-
 drivers/net/i40e/i40e_hash.c                  |  158 +--
 drivers/net/i40e/i40e_pf.c                    |   14 +-
 drivers/net/i40e/i40e_rxtx.c                  |    8 +-
 drivers/net/i40e/i40e_rxtx.h                  |    4 +-
 drivers/net/i40e/i40e_rxtx_vec_avx512.c       |    2 +-
 drivers/net/i40e/i40e_rxtx_vec_common.h       |    8 +-
 drivers/net/i40e/i40e_vf_representor.c        |   48 +-
 drivers/net/iavf/iavf.h                       |   24 +-
 drivers/net/iavf/iavf_ethdev.c                |  178 +--
 drivers/net/iavf/iavf_hash.c                  |  320 +++---
 drivers/net/iavf/iavf_rxtx.c                  |    2 +-
 drivers/net/iavf/iavf_rxtx.h                  |   24 +-
 drivers/net/iavf/iavf_rxtx_vec_avx2.c         |    4 +-
 drivers/net/iavf/iavf_rxtx_vec_avx512.c       |    6 +-
 drivers/net/iavf/iavf_rxtx_vec_sse.c          |    2 +-
 drivers/net/ice/ice_dcf.c                     |    2 +-
 drivers/net/ice/ice_dcf_ethdev.c              |   86 +-
 drivers/net/ice/ice_dcf_vf_representor.c      |   56 +-
 drivers/net/ice/ice_ethdev.c                  |  180 +--
 drivers/net/ice/ice_ethdev.h                  |   26 +-
 drivers/net/ice/ice_hash.c                    |  290 ++---
 drivers/net/ice/ice_rxtx.c                    |   16 +-
 drivers/net/ice/ice_rxtx_vec_avx2.c           |    2 +-
 drivers/net/ice/ice_rxtx_vec_avx512.c         |    4 +-
 drivers/net/ice/ice_rxtx_vec_common.h         |   28 +-
 drivers/net/ice/ice_rxtx_vec_sse.c            |    2 +-
 drivers/net/igc/igc_ethdev.c                  |  138 +--
 drivers/net/igc/igc_ethdev.h                  |   54 +-
 drivers/net/igc/igc_txrx.c                    |   48 +-
 drivers/net/ionic/ionic_ethdev.c              |  138 +--
 drivers/net/ionic/ionic_ethdev.h              |   12 +-
 drivers/net/ionic/ionic_lif.c                 |   36 +-
 drivers/net/ionic/ionic_rxtx.c                |   10 +-
 drivers/net/ipn3ke/ipn3ke_representor.c       |   64 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |  285 +++--
 drivers/net/ixgbe/ixgbe_ethdev.h              |   18 +-
 drivers/net/ixgbe/ixgbe_fdir.c                |   24 +-
 drivers/net/ixgbe/ixgbe_flow.c                |    2 +-
 drivers/net/ixgbe/ixgbe_ipsec.c               |   12 +-
 drivers/net/ixgbe/ixgbe_pf.c                  |   34 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                |  249 ++--
 drivers/net/ixgbe/ixgbe_rxtx.h                |    4 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h     |    2 +-
 drivers/net/ixgbe/ixgbe_tm.c                  |   16 +-
 drivers/net/ixgbe/ixgbe_vf_representor.c      |   16 +-
 drivers/net/ixgbe/rte_pmd_ixgbe.c             |   14 +-
 drivers/net/ixgbe/rte_pmd_ixgbe.h             |    4 +-
 drivers/net/kni/rte_eth_kni.c                 |    8 +-
 drivers/net/liquidio/lio_ethdev.c             |  114 +-
 drivers/net/memif/memif_socket.c              |    2 +-
 drivers/net/memif/rte_eth_memif.c             |   16 +-
 drivers/net/mlx4/mlx4_ethdev.c                |   32 +-
 drivers/net/mlx4/mlx4_flow.c                  |   30 +-
 drivers/net/mlx4/mlx4_intr.c                  |    8 +-
 drivers/net/mlx4/mlx4_rxq.c                   |   18 +-
 drivers/net/mlx4/mlx4_txq.c                   |   24 +-
 drivers/net/mlx5/linux/mlx5_ethdev_os.c       |   54 +-
 drivers/net/mlx5/linux/mlx5_os.c              |    6 +-
 drivers/net/mlx5/mlx5.c                       |    4 +-
 drivers/net/mlx5/mlx5.h                       |    2 +-
 drivers/net/mlx5/mlx5_defs.h                  |    6 +-
 drivers/net/mlx5/mlx5_ethdev.c                |    6 +-
 drivers/net/mlx5/mlx5_flow.c                  |   54 +-
 drivers/net/mlx5/mlx5_flow.h                  |   12 +-
 drivers/net/mlx5/mlx5_flow_dv.c               |   44 +-
 drivers/net/mlx5/mlx5_flow_verbs.c            |    4 +-
 drivers/net/mlx5/mlx5_rss.c                   |   10 +-
 drivers/net/mlx5/mlx5_rxq.c                   |   40 +-
 drivers/net/mlx5/mlx5_rxtx_vec.h              |    8 +-
 drivers/net/mlx5/mlx5_tx.c                    |   30 +-
 drivers/net/mlx5/mlx5_txq.c                   |   58 +-
 drivers/net/mlx5/mlx5_vlan.c                  |    4 +-
 drivers/net/mlx5/windows/mlx5_os.c            |    4 +-
 drivers/net/mvneta/mvneta_ethdev.c            |   32 +-
 drivers/net/mvneta/mvneta_ethdev.h            |   10 +-
 drivers/net/mvneta/mvneta_rxtx.c              |    2 +-
 drivers/net/mvpp2/mrvl_ethdev.c               |  112 +-
 drivers/net/netvsc/hn_ethdev.c                |   70 +-
 drivers/net/netvsc/hn_rndis.c                 |   50 +-
 drivers/net/nfb/nfb_ethdev.c                  |   20 +-
 drivers/net/nfb/nfb_rx.c                      |    2 +-
 drivers/net/nfp/nfp_common.c                  |  122 +-
 drivers/net/nfp/nfp_ethdev.c                  |    2 +-
 drivers/net/nfp/nfp_ethdev_vf.c               |    2 +-
 drivers/net/ngbe/ngbe_ethdev.c                |   50 +-
 drivers/net/null/rte_eth_null.c               |   28 +-
 drivers/net/octeontx/octeontx_ethdev.c        |   74 +-
 drivers/net/octeontx/octeontx_ethdev.h        |   30 +-
 drivers/net/octeontx/octeontx_ethdev_ops.c    |   26 +-
 drivers/net/octeontx2/otx2_ethdev.c           |   96 +-
 drivers/net/octeontx2/otx2_ethdev.h           |   64 +-
 drivers/net/octeontx2/otx2_ethdev_devargs.c   |   12 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c       |   14 +-
 drivers/net/octeontx2/otx2_ethdev_sec.c       |    8 +-
 drivers/net/octeontx2/otx2_flow.c             |    2 +-
 drivers/net/octeontx2/otx2_flow_ctrl.c        |   36 +-
 drivers/net/octeontx2/otx2_flow_parse.c       |    4 +-
 drivers/net/octeontx2/otx2_link.c             |   40 +-
 drivers/net/octeontx2/otx2_mcast.c            |    2 +-
 drivers/net/octeontx2/otx2_ptp.c              |    4 +-
 drivers/net/octeontx2/otx2_rss.c              |   70 +-
 drivers/net/octeontx2/otx2_rx.c               |    4 +-
 drivers/net/octeontx2/otx2_tx.c               |    2 +-
 drivers/net/octeontx2/otx2_vlan.c             |   42 +-
 drivers/net/octeontx_ep/otx_ep_ethdev.c       |    6 +-
 drivers/net/octeontx_ep/otx_ep_rxtx.c         |    6 +-
 drivers/net/pcap/pcap_ethdev.c                |   12 +-
 drivers/net/pfe/pfe_ethdev.c                  |   18 +-
 drivers/net/qede/base/mcp_public.h            |    4 +-
 drivers/net/qede/qede_ethdev.c                |  156 +--
 drivers/net/qede/qede_filter.c                |   42 +-
 drivers/net/qede/qede_rxtx.c                  |    2 +-
 drivers/net/qede/qede_rxtx.h                  |   16 +-
 drivers/net/ring/rte_eth_ring.c               |   20 +-
 drivers/net/sfc/sfc.c                         |   30 +-
 drivers/net/sfc/sfc_ef100_rx.c                |   10 +-
 drivers/net/sfc/sfc_ef100_tx.c                |   20 +-
 drivers/net/sfc/sfc_ef10_essb_rx.c            |    4 +-
 drivers/net/sfc/sfc_ef10_rx.c                 |    8 +-
 drivers/net/sfc/sfc_ef10_tx.c                 |   32 +-
 drivers/net/sfc/sfc_ethdev.c                  |   50 +-
 drivers/net/sfc/sfc_flow.c                    |    2 +-
 drivers/net/sfc/sfc_port.c                    |   52 +-
 drivers/net/sfc/sfc_repr.c                    |   10 +-
 drivers/net/sfc/sfc_rx.c                      |   50 +-
 drivers/net/sfc/sfc_tx.c                      |   50 +-
 drivers/net/softnic/rte_eth_softnic.c         |   12 +-
 drivers/net/szedata2/rte_eth_szedata2.c       |   14 +-
 drivers/net/tap/rte_eth_tap.c                 |  104 +-
 drivers/net/tap/tap_rss.h                     |    2 +-
 drivers/net/thunderx/nicvf_ethdev.c           |  102 +-
 drivers/net/thunderx/nicvf_ethdev.h           |   40 +-
 drivers/net/txgbe/txgbe_ethdev.c              |  242 ++--
 drivers/net/txgbe/txgbe_ethdev.h              |   18 +-
 drivers/net/txgbe/txgbe_ethdev_vf.c           |   24 +-
 drivers/net/txgbe/txgbe_fdir.c                |   20 +-
 drivers/net/txgbe/txgbe_flow.c                |    2 +-
 drivers/net/txgbe/txgbe_ipsec.c               |   12 +-
 drivers/net/txgbe/txgbe_pf.c                  |   34 +-
 drivers/net/txgbe/txgbe_rxtx.c                |  308 ++---
 drivers/net/txgbe/txgbe_rxtx.h                |    4 +-
 drivers/net/txgbe/txgbe_tm.c                  |   16 +-
 drivers/net/vhost/rte_eth_vhost.c             |   16 +-
 drivers/net/virtio/virtio_ethdev.c            |  124 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.c          |   72 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.h          |   16 +-
 drivers/net/vmxnet3/vmxnet3_rxtx.c            |   16 +-
 examples/bbdev_app/main.c                     |    6 +-
 examples/bond/main.c                          |   14 +-
 examples/distributor/main.c                   |   12 +-
 examples/ethtool/ethtool-app/main.c           |    2 +-
 examples/ethtool/lib/rte_ethtool.c            |   18 +-
 .../pipeline_worker_generic.c                 |   16 +-
 .../eventdev_pipeline/pipeline_worker_tx.c    |   12 +-
 examples/flow_classify/flow_classify.c        |    4 +-
 examples/flow_filtering/main.c                |   16 +-
 examples/ioat/ioatfwd.c                       |    8 +-
 examples/ip_fragmentation/main.c              |   12 +-
 examples/ip_pipeline/link.c                   |   20 +-
 examples/ip_reassembly/main.c                 |   18 +-
 examples/ipsec-secgw/ipsec-secgw.c            |   32 +-
 examples/ipsec-secgw/sa.c                     |    8 +-
 examples/ipv4_multicast/main.c                |    6 +-
 examples/kni/main.c                           |    8 +-
 examples/l2fwd-crypto/main.c                  |   10 +-
 examples/l2fwd-event/l2fwd_common.c           |   10 +-
 examples/l2fwd-event/main.c                   |    2 +-
 examples/l2fwd-jobstats/main.c                |    8 +-
 examples/l2fwd-keepalive/main.c               |    8 +-
 examples/l2fwd/main.c                         |    8 +-
 examples/l3fwd-acl/main.c                     |   18 +-
 examples/l3fwd-graph/main.c                   |   14 +-
 examples/l3fwd-power/main.c                   |   16 +-
 examples/l3fwd/l3fwd_event.c                  |    4 +-
 examples/l3fwd/main.c                         |   18 +-
 examples/link_status_interrupt/main.c         |   10 +-
 .../client_server_mp/mp_server/init.c         |    4 +-
 examples/multi_process/symmetric_mp/main.c    |   14 +-
 examples/ntb/ntb_fwd.c                        |    6 +-
 examples/packet_ordering/main.c               |    4 +-
 .../performance-thread/l3fwd-thread/main.c    |   16 +-
 examples/pipeline/obj.c                       |   20 +-
 examples/ptpclient/ptpclient.c                |   10 +-
 examples/qos_meter/main.c                     |   16 +-
 examples/qos_sched/init.c                     |    6 +-
 examples/rxtx_callbacks/main.c                |    8 +-
 examples/server_node_efd/server/init.c        |    8 +-
 examples/skeleton/basicfwd.c                  |    4 +-
 examples/vhost/main.c                         |   26 +-
 examples/vm_power_manager/main.c              |    6 +-
 examples/vmdq/main.c                          |   20 +-
 examples/vmdq_dcb/main.c                      |   40 +-
 lib/ethdev/ethdev_driver.h                    |   36 +-
 lib/ethdev/rte_ethdev.c                       |  181 ++-
 lib/ethdev/rte_ethdev.h                       | 1021 +++++++++++------
 lib/ethdev/rte_flow.h                         |    2 +-
 lib/gso/rte_gso.c                             |   20 +-
 lib/gso/rte_gso.h                             |    4 +-
 lib/mbuf/rte_mbuf_core.h                      |    8 +-
 lib/mbuf/rte_mbuf_dyn.h                       |    2 +-
 339 files changed, 6639 insertions(+), 6382 deletions(-)

diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index a8e928fa9ff3..963b6aa5c589 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -757,11 +757,11 @@ show_port(void)
 		}
 
 		ret = rte_eth_dev_flow_ctrl_get(i, &fc_conf);
-		if (ret == 0 && fc_conf.mode != RTE_FC_NONE)  {
+		if (ret == 0 && fc_conf.mode != RTE_ETH_FC_NONE)  {
 			printf("\t  -- flow control mode %s%s high %u low %u pause %u%s%s\n",
-			       fc_conf.mode == RTE_FC_RX_PAUSE ? "rx " :
-			       fc_conf.mode == RTE_FC_TX_PAUSE ? "tx " :
-			       fc_conf.mode == RTE_FC_FULL ? "full" : "???",
+			       fc_conf.mode == RTE_ETH_FC_RX_PAUSE ? "rx " :
+			       fc_conf.mode == RTE_ETH_FC_TX_PAUSE ? "tx " :
+			       fc_conf.mode == RTE_ETH_FC_FULL ? "full" : "???",
 			       fc_conf.autoneg ? " auto" : "",
 			       fc_conf.high_water,
 			       fc_conf.low_water,
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index 660d5a0364b6..31d1b0e14653 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -668,13 +668,13 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 	struct test_perf *t = evt_test_priv(test);
 	struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 			.split_hdr_size = 0,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 			},
 		},
 	};
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 2775e72c580d..d202091077a6 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -176,12 +176,12 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 	struct rte_eth_rxconf rx_conf;
 	struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 			},
 		},
 	};
@@ -223,7 +223,7 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 
 		if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT))
 			local_port_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_RSS_HASH;
+				RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 		ret = rte_eth_dev_info_get(i, &dev_info);
 		if (ret != 0) {
@@ -233,9 +233,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 		}
 
 		/* Enable mbuf fast free if PMD has the capability. */
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		rx_conf = dev_info.default_rxconf;
 		rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h
index a14d4e05e185..4249b6175b82 100644
--- a/app/test-flow-perf/config.h
+++ b/app/test-flow-perf/config.h
@@ -5,7 +5,7 @@
 #define FLOW_ITEM_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ACTION_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ATTR_MASK(_x) (UINT64_C(1) << _x)
-#define GET_RSS_HF() (ETH_RSS_IP)
+#define GET_RSS_HF() (RTE_ETH_RSS_IP)
 
 /* Configuration */
 #define RXQ_NUM 4
diff --git a/app/test-pipeline/init.c b/app/test-pipeline/init.c
index fe37d63730c6..c73801904103 100644
--- a/app/test-pipeline/init.c
+++ b/app/test-pipeline/init.c
@@ -70,16 +70,16 @@ struct app_params app = {
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -178,7 +178,7 @@ app_ports_check_link(void)
 		RTE_LOG(INFO, USER1, "Port %u %s\n",
 			port,
 			link_status_text);
-		if (link.link_status == ETH_LINK_DOWN)
+		if (link.link_status == RTE_ETH_LINK_DOWN)
 			all_ports_up = 0;
 	}
 
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 3221f6e1aa40..ebea13f86ab0 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1478,51 +1478,51 @@ parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
 	int duplex;
 
 	if (!strcmp(duplexstr, "half")) {
-		duplex = ETH_LINK_HALF_DUPLEX;
+		duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	} else if (!strcmp(duplexstr, "full")) {
-		duplex = ETH_LINK_FULL_DUPLEX;
+		duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	} else if (!strcmp(duplexstr, "auto")) {
-		duplex = ETH_LINK_FULL_DUPLEX;
+		duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	} else {
 		fprintf(stderr, "Unknown duplex parameter\n");
 		return -1;
 	}
 
 	if (!strcmp(speedstr, "10")) {
-		*speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
-				ETH_LINK_SPEED_10M_HD : ETH_LINK_SPEED_10M;
+		*speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+				RTE_ETH_LINK_SPEED_10M_HD : RTE_ETH_LINK_SPEED_10M;
 	} else if (!strcmp(speedstr, "100")) {
-		*speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
-				ETH_LINK_SPEED_100M_HD : ETH_LINK_SPEED_100M;
+		*speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+				RTE_ETH_LINK_SPEED_100M_HD : RTE_ETH_LINK_SPEED_100M;
 	} else {
-		if (duplex != ETH_LINK_FULL_DUPLEX) {
+		if (duplex != RTE_ETH_LINK_FULL_DUPLEX) {
 			fprintf(stderr, "Invalid speed/duplex parameters\n");
 			return -1;
 		}
 		if (!strcmp(speedstr, "1000")) {
-			*speed = ETH_LINK_SPEED_1G;
+			*speed = RTE_ETH_LINK_SPEED_1G;
 		} else if (!strcmp(speedstr, "10000")) {
-			*speed = ETH_LINK_SPEED_10G;
+			*speed = RTE_ETH_LINK_SPEED_10G;
 		} else if (!strcmp(speedstr, "25000")) {
-			*speed = ETH_LINK_SPEED_25G;
+			*speed = RTE_ETH_LINK_SPEED_25G;
 		} else if (!strcmp(speedstr, "40000")) {
-			*speed = ETH_LINK_SPEED_40G;
+			*speed = RTE_ETH_LINK_SPEED_40G;
 		} else if (!strcmp(speedstr, "50000")) {
-			*speed = ETH_LINK_SPEED_50G;
+			*speed = RTE_ETH_LINK_SPEED_50G;
 		} else if (!strcmp(speedstr, "100000")) {
-			*speed = ETH_LINK_SPEED_100G;
+			*speed = RTE_ETH_LINK_SPEED_100G;
 		} else if (!strcmp(speedstr, "200000")) {
-			*speed = ETH_LINK_SPEED_200G;
+			*speed = RTE_ETH_LINK_SPEED_200G;
 		} else if (!strcmp(speedstr, "auto")) {
-			*speed = ETH_LINK_SPEED_AUTONEG;
+			*speed = RTE_ETH_LINK_SPEED_AUTONEG;
 		} else {
 			fprintf(stderr, "Unknown speed parameter\n");
 			return -1;
 		}
 	}
 
-	if (*speed != ETH_LINK_SPEED_AUTONEG)
-		*speed |= ETH_LINK_SPEED_FIXED;
+	if (*speed != RTE_ETH_LINK_SPEED_AUTONEG)
+		*speed |= RTE_ETH_LINK_SPEED_FIXED;
 
 	return 0;
 }
@@ -2166,33 +2166,33 @@ cmd_config_rss_parsed(void *parsed_result,
 	int ret;
 
 	if (!strcmp(res->value, "all"))
-		rss_conf.rss_hf = ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP |
-			ETH_RSS_TCP | ETH_RSS_UDP | ETH_RSS_SCTP |
-			ETH_RSS_L2_PAYLOAD | ETH_RSS_L2TPV3 | ETH_RSS_ESP |
-			ETH_RSS_AH | ETH_RSS_PFCP | ETH_RSS_GTPU |
-			ETH_RSS_ECPRI;
+		rss_conf.rss_hf = RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP |
+			RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP |
+			RTE_ETH_RSS_L2_PAYLOAD | RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP |
+			RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP | RTE_ETH_RSS_GTPU |
+			RTE_ETH_RSS_ECPRI;
 	else if (!strcmp(res->value, "eth"))
-		rss_conf.rss_hf = ETH_RSS_ETH;
+		rss_conf.rss_hf = RTE_ETH_RSS_ETH;
 	else if (!strcmp(res->value, "vlan"))
-		rss_conf.rss_hf = ETH_RSS_VLAN;
+		rss_conf.rss_hf = RTE_ETH_RSS_VLAN;
 	else if (!strcmp(res->value, "ip"))
-		rss_conf.rss_hf = ETH_RSS_IP;
+		rss_conf.rss_hf = RTE_ETH_RSS_IP;
 	else if (!strcmp(res->value, "udp"))
-		rss_conf.rss_hf = ETH_RSS_UDP;
+		rss_conf.rss_hf = RTE_ETH_RSS_UDP;
 	else if (!strcmp(res->value, "tcp"))
-		rss_conf.rss_hf = ETH_RSS_TCP;
+		rss_conf.rss_hf = RTE_ETH_RSS_TCP;
 	else if (!strcmp(res->value, "sctp"))
-		rss_conf.rss_hf = ETH_RSS_SCTP;
+		rss_conf.rss_hf = RTE_ETH_RSS_SCTP;
 	else if (!strcmp(res->value, "ether"))
-		rss_conf.rss_hf = ETH_RSS_L2_PAYLOAD;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2_PAYLOAD;
 	else if (!strcmp(res->value, "port"))
-		rss_conf.rss_hf = ETH_RSS_PORT;
+		rss_conf.rss_hf = RTE_ETH_RSS_PORT;
 	else if (!strcmp(res->value, "vxlan"))
-		rss_conf.rss_hf = ETH_RSS_VXLAN;
+		rss_conf.rss_hf = RTE_ETH_RSS_VXLAN;
 	else if (!strcmp(res->value, "geneve"))
-		rss_conf.rss_hf = ETH_RSS_GENEVE;
+		rss_conf.rss_hf = RTE_ETH_RSS_GENEVE;
 	else if (!strcmp(res->value, "nvgre"))
-		rss_conf.rss_hf = ETH_RSS_NVGRE;
+		rss_conf.rss_hf = RTE_ETH_RSS_NVGRE;
 	else if (!strcmp(res->value, "l3-pre32"))
 		rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE32;
 	else if (!strcmp(res->value, "l3-pre40"))
@@ -2206,46 +2206,46 @@ cmd_config_rss_parsed(void *parsed_result,
 	else if (!strcmp(res->value, "l3-pre96"))
 		rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE96;
 	else if (!strcmp(res->value, "l3-src-only"))
-		rss_conf.rss_hf = ETH_RSS_L3_SRC_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L3_SRC_ONLY;
 	else if (!strcmp(res->value, "l3-dst-only"))
-		rss_conf.rss_hf = ETH_RSS_L3_DST_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L3_DST_ONLY;
 	else if (!strcmp(res->value, "l4-src-only"))
-		rss_conf.rss_hf = ETH_RSS_L4_SRC_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L4_SRC_ONLY;
 	else if (!strcmp(res->value, "l4-dst-only"))
-		rss_conf.rss_hf = ETH_RSS_L4_DST_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L4_DST_ONLY;
 	else if (!strcmp(res->value, "l2-src-only"))
-		rss_conf.rss_hf = ETH_RSS_L2_SRC_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2_SRC_ONLY;
 	else if (!strcmp(res->value, "l2-dst-only"))
-		rss_conf.rss_hf = ETH_RSS_L2_DST_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2_DST_ONLY;
 	else if (!strcmp(res->value, "l2tpv3"))
-		rss_conf.rss_hf = ETH_RSS_L2TPV3;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2TPV3;
 	else if (!strcmp(res->value, "esp"))
-		rss_conf.rss_hf = ETH_RSS_ESP;
+		rss_conf.rss_hf = RTE_ETH_RSS_ESP;
 	else if (!strcmp(res->value, "ah"))
-		rss_conf.rss_hf = ETH_RSS_AH;
+		rss_conf.rss_hf = RTE_ETH_RSS_AH;
 	else if (!strcmp(res->value, "pfcp"))
-		rss_conf.rss_hf = ETH_RSS_PFCP;
+		rss_conf.rss_hf = RTE_ETH_RSS_PFCP;
 	else if (!strcmp(res->value, "pppoe"))
-		rss_conf.rss_hf = ETH_RSS_PPPOE;
+		rss_conf.rss_hf = RTE_ETH_RSS_PPPOE;
 	else if (!strcmp(res->value, "gtpu"))
-		rss_conf.rss_hf = ETH_RSS_GTPU;
+		rss_conf.rss_hf = RTE_ETH_RSS_GTPU;
 	else if (!strcmp(res->value, "ecpri"))
-		rss_conf.rss_hf = ETH_RSS_ECPRI;
+		rss_conf.rss_hf = RTE_ETH_RSS_ECPRI;
 	else if (!strcmp(res->value, "mpls"))
-		rss_conf.rss_hf = ETH_RSS_MPLS;
+		rss_conf.rss_hf = RTE_ETH_RSS_MPLS;
 	else if (!strcmp(res->value, "ipv4-chksum"))
-		rss_conf.rss_hf = ETH_RSS_IPV4_CHKSUM;
+		rss_conf.rss_hf = RTE_ETH_RSS_IPV4_CHKSUM;
 	else if (!strcmp(res->value, "none"))
 		rss_conf.rss_hf = 0;
 	else if (!strcmp(res->value, "level-default")) {
-		rss_hf &= (~ETH_RSS_LEVEL_MASK);
-		rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_PMD_DEFAULT);
+		rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+		rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_PMD_DEFAULT);
 	} else if (!strcmp(res->value, "level-outer")) {
-		rss_hf &= (~ETH_RSS_LEVEL_MASK);
-		rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_OUTERMOST);
+		rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+		rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_OUTERMOST);
 	} else if (!strcmp(res->value, "level-inner")) {
-		rss_hf &= (~ETH_RSS_LEVEL_MASK);
-		rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_INNERMOST);
+		rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+		rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_INNERMOST);
 	} else if (!strcmp(res->value, "default"))
 		use_default = 1;
 	else if (isdigit(res->value[0]) && atoi(res->value) > 0 &&
@@ -2982,8 +2982,8 @@ parse_reta_config(const char *str,
 			return -1;
 		}
 
-		idx = hash_index / RTE_RETA_GROUP_SIZE;
-		shift = hash_index % RTE_RETA_GROUP_SIZE;
+		idx = hash_index / RTE_ETH_RETA_GROUP_SIZE;
+		shift = hash_index % RTE_ETH_RETA_GROUP_SIZE;
 		reta_conf[idx].mask |= (1ULL << shift);
 		reta_conf[idx].reta[shift] = nb_queue;
 	}
@@ -3012,10 +3012,10 @@ cmd_set_rss_reta_parsed(void *parsed_result,
 	} else
 		printf("The reta size of port %d is %u\n",
 			res->port_id, dev_info.reta_size);
-	if (dev_info.reta_size > ETH_RSS_RETA_SIZE_512) {
+	if (dev_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
 		fprintf(stderr,
 			"Currently do not support more than %u entries of redirection table\n",
-			ETH_RSS_RETA_SIZE_512);
+			RTE_ETH_RSS_RETA_SIZE_512);
 		return;
 	}
 
@@ -3086,8 +3086,8 @@ showport_parse_reta_config(struct rte_eth_rss_reta_entry64 *conf,
 	char *end;
 	char *str_fld[8];
 	uint16_t i;
-	uint16_t num = (nb_entries + RTE_RETA_GROUP_SIZE - 1) /
-			RTE_RETA_GROUP_SIZE;
+	uint16_t num = (nb_entries + RTE_ETH_RETA_GROUP_SIZE - 1) /
+			RTE_ETH_RETA_GROUP_SIZE;
 	int ret;
 
 	p = strchr(p0, '(');
@@ -3132,7 +3132,7 @@ cmd_showport_reta_parsed(void *parsed_result,
 	if (ret != 0)
 		return;
 
-	max_reta_size = RTE_MIN(dev_info.reta_size, ETH_RSS_RETA_SIZE_512);
+	max_reta_size = RTE_MIN(dev_info.reta_size, RTE_ETH_RSS_RETA_SIZE_512);
 	if (res->size == 0 || res->size > max_reta_size) {
 		fprintf(stderr, "Invalid redirection table size: %u (1-%u)\n",
 			res->size, max_reta_size);
@@ -3272,7 +3272,7 @@ cmd_config_dcb_parsed(void *parsed_result,
 		return;
 	}
 
-	if ((res->num_tcs != ETH_4_TCS) && (res->num_tcs != ETH_8_TCS)) {
+	if ((res->num_tcs != RTE_ETH_4_TCS) && (res->num_tcs != RTE_ETH_8_TCS)) {
 		fprintf(stderr,
 			"The invalid number of traffic class, only 4 or 8 allowed.\n");
 		return;
@@ -4276,9 +4276,9 @@ cmd_vlan_tpid_parsed(void *parsed_result,
 	enum rte_vlan_type vlan_type;
 
 	if (!strcmp(res->vlan_type, "inner"))
-		vlan_type = ETH_VLAN_TYPE_INNER;
+		vlan_type = RTE_ETH_VLAN_TYPE_INNER;
 	else if (!strcmp(res->vlan_type, "outer"))
-		vlan_type = ETH_VLAN_TYPE_OUTER;
+		vlan_type = RTE_ETH_VLAN_TYPE_OUTER;
 	else {
 		fprintf(stderr, "Unknown vlan type\n");
 		return;
@@ -4615,55 +4615,55 @@ csum_show(int port_id)
 	printf("Parse tunnel is %s\n",
 		(ports[port_id].parse_tunnel) ? "on" : "off");
 	printf("IP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
 	printf("UDP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
 	printf("TCP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
 	printf("SCTP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
 	printf("Outer-Ip checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
 	printf("Outer-Udp checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
 
 	/* display warnings if configuration is not supported by the NIC */
 	ret = eth_dev_info_get_print_err(port_id, &dev_info);
 	if (ret != 0)
 		return;
 
-	if ((tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware IP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware UDP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware TCP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_SCTP_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware SCTP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware outer IP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 			== 0) {
 		fprintf(stderr,
 			"Warning: hardware outer UDP checksum enabled but not supported by port %d\n",
@@ -4713,8 +4713,8 @@ cmd_csum_parsed(void *parsed_result,
 
 		if (!strcmp(res->proto, "ip")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_IPV4_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+						RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 			} else {
 				fprintf(stderr,
 					"IP checksum offload is not supported by port %u\n",
@@ -4722,8 +4722,8 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "udp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_UDP_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"UDP checksum offload is not supported by port %u\n",
@@ -4731,8 +4731,8 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "tcp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_TCP_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"TCP checksum offload is not supported by port %u\n",
@@ -4740,8 +4740,8 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "sctp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_SCTP_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_SCTP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"SCTP checksum offload is not supported by port %u\n",
@@ -4749,9 +4749,9 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "outer-ip")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-					DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+					RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 				csum_offloads |=
-						DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+						RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 			} else {
 				fprintf(stderr,
 					"Outer IP checksum offload is not supported by port %u\n",
@@ -4759,9 +4759,9 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "outer-udp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-					DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+					RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
 				csum_offloads |=
-						DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"Outer UDP checksum offload is not supported by port %u\n",
@@ -4916,7 +4916,7 @@ cmd_tso_set_parsed(void *parsed_result,
 		return;
 
 	if ((ports[res->port_id].tso_segsz != 0) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
 		fprintf(stderr, "Error: TSO is not supported by port %d\n",
 			res->port_id);
 		return;
@@ -4924,11 +4924,11 @@ cmd_tso_set_parsed(void *parsed_result,
 
 	if (ports[res->port_id].tso_segsz == 0) {
 		ports[res->port_id].dev_conf.txmode.offloads &=
-						~DEV_TX_OFFLOAD_TCP_TSO;
+						~RTE_ETH_TX_OFFLOAD_TCP_TSO;
 		printf("TSO for non-tunneled packets is disabled\n");
 	} else {
 		ports[res->port_id].dev_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_TCP_TSO;
+						RTE_ETH_TX_OFFLOAD_TCP_TSO;
 		printf("TSO segment size for non-tunneled packets is %d\n",
 			ports[res->port_id].tso_segsz);
 	}
@@ -4940,7 +4940,7 @@ cmd_tso_set_parsed(void *parsed_result,
 		return;
 
 	if ((ports[res->port_id].tso_segsz != 0) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
 		fprintf(stderr,
 			"Warning: TSO enabled but not supported by port %d\n",
 			res->port_id);
@@ -5011,27 +5011,27 @@ check_tunnel_tso_nic_support(portid_t port_id)
 	if (eth_dev_info_get_print_err(port_id, &dev_info) != 0)
 		return dev_info;
 
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VXLAN_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO))
 		fprintf(stderr,
 			"Warning: VXLAN TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		fprintf(stderr,
 			"Warning: GRE TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPIP_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO))
 		fprintf(stderr,
 			"Warning: IPIP TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
 		fprintf(stderr,
 			"Warning: GENEVE TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IP_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IP_TNL_TSO))
 		fprintf(stderr,
 			"Warning: IP TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO))
 		fprintf(stderr,
 			"Warning: UDP TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
@@ -5059,20 +5059,20 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
 	dev_info = check_tunnel_tso_nic_support(res->port_id);
 	if (ports[res->port_id].tunnel_tso_segsz == 0) {
 		ports[res->port_id].dev_conf.txmode.offloads &=
-			~(DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			  DEV_TX_OFFLOAD_GRE_TNL_TSO |
-			  DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-			  DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-			  DEV_TX_OFFLOAD_IP_TNL_TSO |
-			  DEV_TX_OFFLOAD_UDP_TNL_TSO);
+			~(RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 		printf("TSO for tunneled packets is disabled\n");
 	} else {
-		uint64_t tso_offloads = (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-					 DEV_TX_OFFLOAD_GRE_TNL_TSO |
-					 DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-					 DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-					 DEV_TX_OFFLOAD_IP_TNL_TSO |
-					 DEV_TX_OFFLOAD_UDP_TNL_TSO);
+		uint64_t tso_offloads = (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 
 		ports[res->port_id].dev_conf.txmode.offloads |=
 			(tso_offloads & dev_info.tx_offload_capa);
@@ -5095,7 +5095,7 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
 			fprintf(stderr,
 				"Warning: csum parse_tunnel must be set so that tunneled packets are recognized\n");
 		if (!(ports[res->port_id].dev_conf.txmode.offloads &
-		      DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+		      RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
 			fprintf(stderr,
 				"Warning: csum set outer-ip must be set to hw if outer L3 is IPv4; not necessary for IPv6\n");
 	}
@@ -7227,9 +7227,9 @@ cmd_link_flow_ctrl_show_parsed(void *parsed_result,
 		return;
 	}
 
-	if (fc_conf.mode == RTE_FC_RX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+	if (fc_conf.mode == RTE_ETH_FC_RX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
 		rx_fc_en = true;
-	if (fc_conf.mode == RTE_FC_TX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+	if (fc_conf.mode == RTE_ETH_FC_TX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
 		tx_fc_en = true;
 
 	printf("\n%s Flow control infos for port %-2d %s\n",
@@ -7507,12 +7507,12 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
 
 	/*
 	 * Rx on/off, flow control is enabled/disabled on RX side. This can indicate
-	 * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+	 * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
 	 * Tx on/off, flow control is enabled/disabled on TX side. This can indicate
-	 * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+	 * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
 	 */
 	static enum rte_eth_fc_mode rx_tx_onoff_2_lfc_mode[2][2] = {
-			{RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+			{RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
 	};
 
 	/* Partial command line, retrieve current configuration */
@@ -7525,11 +7525,11 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
 			return;
 		}
 
-		if ((fc_conf.mode == RTE_FC_RX_PAUSE) ||
-		    (fc_conf.mode == RTE_FC_FULL))
+		if ((fc_conf.mode == RTE_ETH_FC_RX_PAUSE) ||
+		    (fc_conf.mode == RTE_ETH_FC_FULL))
 			rx_fc_en = 1;
-		if ((fc_conf.mode == RTE_FC_TX_PAUSE) ||
-		    (fc_conf.mode == RTE_FC_FULL))
+		if ((fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ||
+		    (fc_conf.mode == RTE_ETH_FC_FULL))
 			tx_fc_en = 1;
 	}
 
@@ -7597,12 +7597,12 @@ cmd_priority_flow_ctrl_set_parsed(void *parsed_result,
 
 	/*
 	 * Rx on/off, flow control is enabled/disabled on RX side. This can indicate
-	 * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+	 * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
 	 * Tx on/off, flow control is enabled/disabled on TX side. This can indicate
-	 * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+	 * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
 	 */
 	static enum rte_eth_fc_mode rx_tx_onoff_2_pfc_mode[2][2] = {
-		{RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+		{RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
 	};
 
 	memset(&pfc_conf, 0, sizeof(struct rte_eth_pfc_conf));
@@ -9250,13 +9250,13 @@ cmd_set_vf_rxmode_parsed(void *parsed_result,
 	int is_on = (strcmp(res->on, "on") == 0) ? 1 : 0;
 	if (!strcmp(res->what,"rxmode")) {
 		if (!strcmp(res->mode, "AUPE"))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_UNTAG;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_UNTAG;
 		else if (!strcmp(res->mode, "ROPE"))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_HASH_UC;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_HASH_UC;
 		else if (!strcmp(res->mode, "BAM"))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_BROADCAST;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_BROADCAST;
 		else if (!strncmp(res->mode, "MPE",3))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_MULTICAST;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_MULTICAST;
 	}
 
 	RTE_SET_USED(is_on);
@@ -9656,7 +9656,7 @@ cmd_tunnel_udp_config_parsed(void *parsed_result,
 	int ret;
 
 	tunnel_udp.udp_port = res->udp_port;
-	tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+	tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
 
 	if (!strcmp(res->what, "add"))
 		ret = rte_eth_dev_udp_tunnel_port_add(res->port_id,
@@ -9722,13 +9722,13 @@ cmd_cfg_tunnel_udp_port_parsed(void *parsed_result,
 	tunnel_udp.udp_port = res->udp_port;
 
 	if (!strcmp(res->tunnel_type, "vxlan")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
 	} else if (!strcmp(res->tunnel_type, "geneve")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_GENEVE;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_GENEVE;
 	} else if (!strcmp(res->tunnel_type, "vxlan-gpe")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN_GPE;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN_GPE;
 	} else if (!strcmp(res->tunnel_type, "ecpri")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_ECPRI;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_ECPRI;
 	} else {
 		fprintf(stderr, "Invalid tunnel type\n");
 		return;
@@ -11859,7 +11859,7 @@ cmd_set_macsec_offload_on_parsed(
 	if (ret != 0)
 		return;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
 #ifdef RTE_NET_IXGBE
 		ret = rte_pmd_ixgbe_macsec_enable(port_id, en, rp);
 #endif
@@ -11870,7 +11870,7 @@ cmd_set_macsec_offload_on_parsed(
 	switch (ret) {
 	case 0:
 		ports[port_id].dev_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_MACSEC_INSERT;
+						RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 		cmd_reconfig_device_queue(port_id, 1, 1);
 		break;
 	case -ENODEV:
@@ -11956,7 +11956,7 @@ cmd_set_macsec_offload_off_parsed(
 	if (ret != 0)
 		return;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
 #ifdef RTE_NET_IXGBE
 		ret = rte_pmd_ixgbe_macsec_disable(port_id);
 #endif
@@ -11964,7 +11964,7 @@ cmd_set_macsec_offload_off_parsed(
 	switch (ret) {
 	case 0:
 		ports[port_id].dev_conf.txmode.offloads &=
-						~DEV_TX_OFFLOAD_MACSEC_INSERT;
+						~RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 		cmd_reconfig_device_queue(port_id, 1, 1);
 		break;
 	case -ENODEV:
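
[Reviewer aside, not part of the patch: since this series only renames the
macros without changing their values, out-of-tree code that must build against
both pre- and post-rename releases can alias the missing names. A minimal,
hypothetical sketch for the flow-control constants follows; the version
cut-over is an assumption and should match whichever release actually carries
the rename.]

	#include <rte_version.h>

	#if RTE_VERSION < RTE_VERSION_NUM(21, 11, 0, 0)
	/* Pre-rename DPDK: map the new names onto the old enum values. */
	#define RTE_ETH_FC_NONE      RTE_FC_NONE
	#define RTE_ETH_FC_RX_PAUSE  RTE_FC_RX_PAUSE
	#define RTE_ETH_FC_TX_PAUSE  RTE_FC_TX_PAUSE
	#define RTE_ETH_FC_FULL      RTE_FC_FULL
	#endif
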
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 23aa334cda0f..f8ddfe60cd58 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -86,62 +86,62 @@ static const struct {
 };
 
 const struct rss_type_info rss_type_table[] = {
-	{ "all", ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP | ETH_RSS_TCP |
-		ETH_RSS_UDP | ETH_RSS_SCTP | ETH_RSS_L2_PAYLOAD |
-		ETH_RSS_L2TPV3 | ETH_RSS_ESP | ETH_RSS_AH | ETH_RSS_PFCP |
-		ETH_RSS_GTPU | ETH_RSS_ECPRI | ETH_RSS_MPLS},
+	{ "all", RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP |
+		RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_L2_PAYLOAD |
+		RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP | RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP |
+		RTE_ETH_RSS_GTPU | RTE_ETH_RSS_ECPRI | RTE_ETH_RSS_MPLS},
 	{ "none", 0 },
-	{ "eth", ETH_RSS_ETH },
-	{ "l2-src-only", ETH_RSS_L2_SRC_ONLY },
-	{ "l2-dst-only", ETH_RSS_L2_DST_ONLY },
-	{ "vlan", ETH_RSS_VLAN },
-	{ "s-vlan", ETH_RSS_S_VLAN },
-	{ "c-vlan", ETH_RSS_C_VLAN },
-	{ "ipv4", ETH_RSS_IPV4 },
-	{ "ipv4-frag", ETH_RSS_FRAG_IPV4 },
-	{ "ipv4-tcp", ETH_RSS_NONFRAG_IPV4_TCP },
-	{ "ipv4-udp", ETH_RSS_NONFRAG_IPV4_UDP },
-	{ "ipv4-sctp", ETH_RSS_NONFRAG_IPV4_SCTP },
-	{ "ipv4-other", ETH_RSS_NONFRAG_IPV4_OTHER },
-	{ "ipv6", ETH_RSS_IPV6 },
-	{ "ipv6-frag", ETH_RSS_FRAG_IPV6 },
-	{ "ipv6-tcp", ETH_RSS_NONFRAG_IPV6_TCP },
-	{ "ipv6-udp", ETH_RSS_NONFRAG_IPV6_UDP },
-	{ "ipv6-sctp", ETH_RSS_NONFRAG_IPV6_SCTP },
-	{ "ipv6-other", ETH_RSS_NONFRAG_IPV6_OTHER },
-	{ "l2-payload", ETH_RSS_L2_PAYLOAD },
-	{ "ipv6-ex", ETH_RSS_IPV6_EX },
-	{ "ipv6-tcp-ex", ETH_RSS_IPV6_TCP_EX },
-	{ "ipv6-udp-ex", ETH_RSS_IPV6_UDP_EX },
-	{ "port", ETH_RSS_PORT },
-	{ "vxlan", ETH_RSS_VXLAN },
-	{ "geneve", ETH_RSS_GENEVE },
-	{ "nvgre", ETH_RSS_NVGRE },
-	{ "ip", ETH_RSS_IP },
-	{ "udp", ETH_RSS_UDP },
-	{ "tcp", ETH_RSS_TCP },
-	{ "sctp", ETH_RSS_SCTP },
-	{ "tunnel", ETH_RSS_TUNNEL },
+	{ "eth", RTE_ETH_RSS_ETH },
+	{ "l2-src-only", RTE_ETH_RSS_L2_SRC_ONLY },
+	{ "l2-dst-only", RTE_ETH_RSS_L2_DST_ONLY },
+	{ "vlan", RTE_ETH_RSS_VLAN },
+	{ "s-vlan", RTE_ETH_RSS_S_VLAN },
+	{ "c-vlan", RTE_ETH_RSS_C_VLAN },
+	{ "ipv4", RTE_ETH_RSS_IPV4 },
+	{ "ipv4-frag", RTE_ETH_RSS_FRAG_IPV4 },
+	{ "ipv4-tcp", RTE_ETH_RSS_NONFRAG_IPV4_TCP },
+	{ "ipv4-udp", RTE_ETH_RSS_NONFRAG_IPV4_UDP },
+	{ "ipv4-sctp", RTE_ETH_RSS_NONFRAG_IPV4_SCTP },
+	{ "ipv4-other", RTE_ETH_RSS_NONFRAG_IPV4_OTHER },
+	{ "ipv6", RTE_ETH_RSS_IPV6 },
+	{ "ipv6-frag", RTE_ETH_RSS_FRAG_IPV6 },
+	{ "ipv6-tcp", RTE_ETH_RSS_NONFRAG_IPV6_TCP },
+	{ "ipv6-udp", RTE_ETH_RSS_NONFRAG_IPV6_UDP },
+	{ "ipv6-sctp", RTE_ETH_RSS_NONFRAG_IPV6_SCTP },
+	{ "ipv6-other", RTE_ETH_RSS_NONFRAG_IPV6_OTHER },
+	{ "l2-payload", RTE_ETH_RSS_L2_PAYLOAD },
+	{ "ipv6-ex", RTE_ETH_RSS_IPV6_EX },
+	{ "ipv6-tcp-ex", RTE_ETH_RSS_IPV6_TCP_EX },
+	{ "ipv6-udp-ex", RTE_ETH_RSS_IPV6_UDP_EX },
+	{ "port", RTE_ETH_RSS_PORT },
+	{ "vxlan", RTE_ETH_RSS_VXLAN },
+	{ "geneve", RTE_ETH_RSS_GENEVE },
+	{ "nvgre", RTE_ETH_RSS_NVGRE },
+	{ "ip", RTE_ETH_RSS_IP },
+	{ "udp", RTE_ETH_RSS_UDP },
+	{ "tcp", RTE_ETH_RSS_TCP },
+	{ "sctp", RTE_ETH_RSS_SCTP },
+	{ "tunnel", RTE_ETH_RSS_TUNNEL },
 	{ "l3-pre32", RTE_ETH_RSS_L3_PRE32 },
 	{ "l3-pre40", RTE_ETH_RSS_L3_PRE40 },
 	{ "l3-pre48", RTE_ETH_RSS_L3_PRE48 },
 	{ "l3-pre56", RTE_ETH_RSS_L3_PRE56 },
 	{ "l3-pre64", RTE_ETH_RSS_L3_PRE64 },
 	{ "l3-pre96", RTE_ETH_RSS_L3_PRE96 },
-	{ "l3-src-only", ETH_RSS_L3_SRC_ONLY },
-	{ "l3-dst-only", ETH_RSS_L3_DST_ONLY },
-	{ "l4-src-only", ETH_RSS_L4_SRC_ONLY },
-	{ "l4-dst-only", ETH_RSS_L4_DST_ONLY },
-	{ "esp", ETH_RSS_ESP },
-	{ "ah", ETH_RSS_AH },
-	{ "l2tpv3", ETH_RSS_L2TPV3 },
-	{ "pfcp", ETH_RSS_PFCP },
-	{ "pppoe", ETH_RSS_PPPOE },
-	{ "gtpu", ETH_RSS_GTPU },
-	{ "ecpri", ETH_RSS_ECPRI },
-	{ "mpls", ETH_RSS_MPLS },
-	{ "ipv4-chksum", ETH_RSS_IPV4_CHKSUM },
-	{ "l4-chksum", ETH_RSS_L4_CHKSUM },
+	{ "l3-src-only", RTE_ETH_RSS_L3_SRC_ONLY },
+	{ "l3-dst-only", RTE_ETH_RSS_L3_DST_ONLY },
+	{ "l4-src-only", RTE_ETH_RSS_L4_SRC_ONLY },
+	{ "l4-dst-only", RTE_ETH_RSS_L4_DST_ONLY },
+	{ "esp", RTE_ETH_RSS_ESP },
+	{ "ah", RTE_ETH_RSS_AH },
+	{ "l2tpv3", RTE_ETH_RSS_L2TPV3 },
+	{ "pfcp", RTE_ETH_RSS_PFCP },
+	{ "pppoe", RTE_ETH_RSS_PPPOE },
+	{ "gtpu", RTE_ETH_RSS_GTPU },
+	{ "ecpri", RTE_ETH_RSS_ECPRI },
+	{ "mpls", RTE_ETH_RSS_MPLS },
+	{ "ipv4-chksum", RTE_ETH_RSS_IPV4_CHKSUM },
+	{ "l4-chksum", RTE_ETH_RSS_L4_CHKSUM },
 	{ NULL, 0 },
 };
 
@@ -538,39 +538,39 @@ static void
 device_infos_display_speeds(uint32_t speed_capa)
 {
 	printf("\n\tDevice speed capability:");
-	if (speed_capa == ETH_LINK_SPEED_AUTONEG)
+	if (speed_capa == RTE_ETH_LINK_SPEED_AUTONEG)
 		printf(" Autonegotiate (all speeds)");
-	if (speed_capa & ETH_LINK_SPEED_FIXED)
+	if (speed_capa & RTE_ETH_LINK_SPEED_FIXED)
 		printf(" Disable autonegotiate (fixed speed)  ");
-	if (speed_capa & ETH_LINK_SPEED_10M_HD)
+	if (speed_capa & RTE_ETH_LINK_SPEED_10M_HD)
 		printf(" 10 Mbps half-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_10M)
+	if (speed_capa & RTE_ETH_LINK_SPEED_10M)
 		printf(" 10 Mbps full-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_100M_HD)
+	if (speed_capa & RTE_ETH_LINK_SPEED_100M_HD)
 		printf(" 100 Mbps half-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_100M)
+	if (speed_capa & RTE_ETH_LINK_SPEED_100M)
 		printf(" 100 Mbps full-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_1G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_1G)
 		printf(" 1 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_2_5G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_2_5G)
 		printf(" 2.5 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_5G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_5G)
 		printf(" 5 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_10G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_10G)
 		printf(" 10 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_20G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_20G)
 		printf(" 20 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_25G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_25G)
 		printf(" 25 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_40G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_40G)
 		printf(" 40 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_50G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_50G)
 		printf(" 50 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_56G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_56G)
 		printf(" 56 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_100G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_100G)
 		printf(" 100 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_200G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_200G)
 		printf(" 200 Gbps  ");
 }
 
@@ -700,9 +700,9 @@ port_infos_display(portid_t port_id)
 
 	printf("\nLink status: %s\n", (link.link_status) ? ("up") : ("down"));
 	printf("Link speed: %s\n", rte_eth_link_speed_to_str(link.link_speed));
-	printf("Link duplex: %s\n", (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+	printf("Link duplex: %s\n", (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 	       ("full-duplex") : ("half-duplex"));
-	printf("Autoneg status: %s\n", (link.link_autoneg == ETH_LINK_AUTONEG) ?
+	printf("Autoneg status: %s\n", (link.link_autoneg == RTE_ETH_LINK_AUTONEG) ?
 	       ("On") : ("Off"));
 
 	if (!rte_eth_dev_get_mtu(port_id, &mtu))
@@ -720,22 +720,22 @@ port_infos_display(portid_t port_id)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 	if (vlan_offload >= 0){
 		printf("VLAN offload: \n");
-		if (vlan_offload & ETH_VLAN_STRIP_OFFLOAD)
+		if (vlan_offload & RTE_ETH_VLAN_STRIP_OFFLOAD)
 			printf("  strip on, ");
 		else
 			printf("  strip off, ");
 
-		if (vlan_offload & ETH_VLAN_FILTER_OFFLOAD)
+		if (vlan_offload & RTE_ETH_VLAN_FILTER_OFFLOAD)
 			printf("filter on, ");
 		else
 			printf("filter off, ");
 
-		if (vlan_offload & ETH_VLAN_EXTEND_OFFLOAD)
+		if (vlan_offload & RTE_ETH_VLAN_EXTEND_OFFLOAD)
 			printf("extend on, ");
 		else
 			printf("extend off, ");
 
-		if (vlan_offload & ETH_QINQ_STRIP_OFFLOAD)
+		if (vlan_offload & RTE_ETH_QINQ_STRIP_OFFLOAD)
 			printf("qinq strip on\n");
 		else
 			printf("qinq strip off\n");
@@ -2919,8 +2919,8 @@ port_rss_reta_info(portid_t port_id,
 	}
 
 	for (i = 0; i < nb_entries; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (!(reta_conf[idx].mask & (1ULL << shift)))
 			continue;
 		printf("RSS RETA configuration: hash index=%u, queue=%u\n",
@@ -3288,7 +3288,7 @@ dcb_fwd_config_setup(void)
 	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
 		fwd_lcores[lc_id]->stream_nb = 0;
 		fwd_lcores[lc_id]->stream_idx = sm_id;
-		for (i = 0; i < ETH_MAX_VMDQ_POOL; i++) {
+		for (i = 0; i < RTE_ETH_MAX_VMDQ_POOL; i++) {
 			/* if the nb_queue is zero, means this tc is
 			 * not enabled on the POOL
 			 */
@@ -4351,11 +4351,11 @@ vlan_extend_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_VLAN_EXTEND_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+		vlan_offload |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	} else {
-		vlan_offload &= ~ETH_VLAN_EXTEND_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
+		vlan_offload &= ~RTE_ETH_VLAN_EXTEND_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4381,11 +4381,11 @@ rx_vlan_strip_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
-		vlan_offload &= ~ETH_VLAN_STRIP_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		vlan_offload &= ~RTE_ETH_VLAN_STRIP_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4426,11 +4426,11 @@ rx_vlan_filter_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_VLAN_FILTER_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+		vlan_offload |= RTE_ETH_VLAN_FILTER_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	} else {
-		vlan_offload &= ~ETH_VLAN_FILTER_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+		vlan_offload &= ~RTE_ETH_VLAN_FILTER_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4456,11 +4456,11 @@ rx_vlan_qinq_strip_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_QINQ_STRIP_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+		vlan_offload |= RTE_ETH_QINQ_STRIP_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 	} else {
-		vlan_offload &= ~ETH_QINQ_STRIP_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+		vlan_offload &= ~RTE_ETH_QINQ_STRIP_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4530,7 +4530,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
 		return;
 
 	if (ports[port_id].dev_conf.txmode.offloads &
-	    DEV_TX_OFFLOAD_QINQ_INSERT) {
+	    RTE_ETH_TX_OFFLOAD_QINQ_INSERT) {
 		fprintf(stderr, "Error, as QinQ has been enabled.\n");
 		return;
 	}
@@ -4539,7 +4539,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
 	if (ret != 0)
 		return;
 
-	if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VLAN_INSERT) == 0) {
+	if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) == 0) {
 		fprintf(stderr,
 			"Error: vlan insert is not supported by port %d\n",
 			port_id);
@@ -4547,7 +4547,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
 	}
 
 	tx_vlan_reset(port_id);
-	ports[port_id].dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
+	ports[port_id].dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	ports[port_id].tx_vlan_id = vlan_id;
 }
 
@@ -4566,7 +4566,7 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
 	if (ret != 0)
 		return;
 
-	if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_QINQ_INSERT) == 0) {
+	if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) == 0) {
 		fprintf(stderr,
 			"Error: qinq insert not supported by port %d\n",
 			port_id);
@@ -4574,8 +4574,8 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
 	}
 
 	tx_vlan_reset(port_id);
-	ports[port_id].dev_conf.txmode.offloads |= (DEV_TX_OFFLOAD_VLAN_INSERT |
-						    DEV_TX_OFFLOAD_QINQ_INSERT);
+	ports[port_id].dev_conf.txmode.offloads |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+						    RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
 	ports[port_id].tx_vlan_id = vlan_id;
 	ports[port_id].tx_vlan_id_outer = vlan_id_outer;
 }
@@ -4584,8 +4584,8 @@ void
 tx_vlan_reset(portid_t port_id)
 {
 	ports[port_id].dev_conf.txmode.offloads &=
-				~(DEV_TX_OFFLOAD_VLAN_INSERT |
-				  DEV_TX_OFFLOAD_QINQ_INSERT);
+				~(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				  RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
 	ports[port_id].tx_vlan_id = 0;
 	ports[port_id].tx_vlan_id_outer = 0;
 }
@@ -4991,7 +4991,7 @@ set_queue_rate_limit(portid_t port_id, uint16_t queue_idx, uint16_t rate)
 	ret = eth_link_get_nowait_print_err(port_id, &link);
 	if (ret < 0)
 		return 1;
-	if (link.link_speed != ETH_SPEED_NUM_UNKNOWN &&
+	if (link.link_speed != RTE_ETH_SPEED_NUM_UNKNOWN &&
 	    rate > link.link_speed) {
 		fprintf(stderr,
 			"Invalid rate value:%u bigger than link speed: %u\n",
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 090797318a35..75b24487e72e 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -485,7 +485,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		if (info->l4_proto == IPPROTO_TCP && tso_segsz) {
 			ol_flags |= PKT_TX_IP_CKSUM;
 		} else {
-			if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+			if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
 				ol_flags |= PKT_TX_IP_CKSUM;
 			} else {
 				ipv4_hdr->hdr_checksum = 0;
@@ -502,7 +502,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		udp_hdr = (struct rte_udp_hdr *)((char *)l3_hdr + info->l3_len);
 		/* do not recalculate udp cksum if it was 0 */
 		if (udp_hdr->dgram_cksum != 0) {
-			if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+			if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
 				ol_flags |= PKT_TX_UDP_CKSUM;
 			} else {
 				udp_hdr->dgram_cksum = 0;
@@ -517,7 +517,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		tcp_hdr = (struct rte_tcp_hdr *)((char *)l3_hdr + info->l3_len);
 		if (tso_segsz)
 			ol_flags |= PKT_TX_TCP_SEG;
-		else if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+		else if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
 			ol_flags |= PKT_TX_TCP_CKSUM;
 		} else {
 			tcp_hdr->cksum = 0;
@@ -532,7 +532,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 			((char *)l3_hdr + info->l3_len);
 		/* sctp payload must be a multiple of 4 to be
 		 * offloaded */
-		if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
+		if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
 			((ipv4_hdr->total_length & 0x3) == 0)) {
 			ol_flags |= PKT_TX_SCTP_CKSUM;
 		} else {
@@ -559,7 +559,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 		ipv4_hdr->hdr_checksum = 0;
 		ol_flags |= PKT_TX_OUTER_IPV4;
 
-		if (tx_offloads	& DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+		if (tx_offloads	& RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 			ol_flags |= PKT_TX_OUTER_IP_CKSUM;
 		else
 			ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
@@ -576,7 +576,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 		ol_flags |= PKT_TX_TCP_SEG;
 
 	/* Skip SW outer UDP checksum generation if HW supports it */
-	if (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) {
 		if (info->outer_ethertype == _htons(RTE_ETHER_TYPE_IPV4))
 			udp_hdr->dgram_cksum
 				= rte_ipv4_phdr_cksum(ipv4_hdr, ol_flags);
@@ -959,9 +959,9 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		if (info.is_tunnel == 1) {
 			if (info.tunnel_tso_segsz ||
 			    (tx_offloads &
-			     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+			     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
 			    (tx_offloads &
-			     DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+			     RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
 				m->outer_l2_len = info.outer_l2_len;
 				m->outer_l3_len = info.outer_l3_len;
 				m->l2_len = info.l2_len;
@@ -1022,19 +1022,19 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 					rte_be_to_cpu_16(info.outer_ethertype),
 					info.outer_l3_len);
 			/* dump tx packet info */
-			if ((tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-					    DEV_TX_OFFLOAD_UDP_CKSUM |
-					    DEV_TX_OFFLOAD_TCP_CKSUM |
-					    DEV_TX_OFFLOAD_SCTP_CKSUM)) ||
+			if ((tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+					    RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+					    RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+					    RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) ||
 				info.tso_segsz != 0)
 				printf("tx: m->l2_len=%d m->l3_len=%d "
 					"m->l4_len=%d\n",
 					m->l2_len, m->l3_len, m->l4_len);
 			if (info.is_tunnel == 1) {
 				if ((tx_offloads &
-				    DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+				    RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
 				    (tx_offloads &
-				    DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
+				    RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
 				    (tx_ol_flags & PKT_TX_OUTER_IPV6))
 					printf("tx: m->outer_l2_len=%d "
 						"m->outer_l3_len=%d\n",
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 7ebed9fed334..03d026dec169 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -99,11 +99,11 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 	vlan_tci_outer = ports[fs->tx_port].tx_vlan_id_outer;
 
 	tx_offloads = ports[fs->tx_port].dev_conf.txmode.offloads;
-	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ol_flags |= PKT_TX_VLAN_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		ol_flags |= PKT_TX_QINQ_PKT;
-	if (tx_offloads	& DEV_TX_OFFLOAD_MACSEC_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 
 	for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index ee76df7f0323..57e00bca20e7 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -72,11 +72,11 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 	fs->rx_packets += nb_rx;
 	txp = &ports[fs->tx_port];
 	tx_offloads = txp->dev_conf.txmode.offloads;
-	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ol_flags = PKT_TX_VLAN_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		ol_flags |= PKT_TX_QINQ_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 	for (i = 0; i < nb_rx; i++) {
 		if (likely(i < nb_rx - 1))
diff --git a/app/test-pmd/macswap_common.h b/app/test-pmd/macswap_common.h
index 7e9a3590a436..7ade9a686b7c 100644
--- a/app/test-pmd/macswap_common.h
+++ b/app/test-pmd/macswap_common.h
@@ -10,11 +10,11 @@ ol_flags_init(uint64_t tx_offload)
 {
 	uint64_t ol_flags = 0;
 
-	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_VLAN_INSERT) ?
+	ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) ?
 			PKT_TX_VLAN : 0;
-	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_QINQ_INSERT) ?
+	ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) ?
 			PKT_TX_QINQ : 0;
-	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_MACSEC_INSERT) ?
+	ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) ?
 			PKT_TX_MACSEC : 0;
 
 	return ol_flags;
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index ab8e8f7e694a..693e77eff2c0 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -546,29 +546,29 @@ parse_xstats_list(const char *in_str, struct rte_eth_xstat_name **xstats,
 static int
 parse_link_speed(int n)
 {
-	uint32_t speed = ETH_LINK_SPEED_FIXED;
+	uint32_t speed = RTE_ETH_LINK_SPEED_FIXED;
 
 	switch (n) {
 	case 1000:
-		speed |= ETH_LINK_SPEED_1G;
+		speed |= RTE_ETH_LINK_SPEED_1G;
 		break;
 	case 10000:
-		speed |= ETH_LINK_SPEED_10G;
+		speed |= RTE_ETH_LINK_SPEED_10G;
 		break;
 	case 25000:
-		speed |= ETH_LINK_SPEED_25G;
+		speed |= RTE_ETH_LINK_SPEED_25G;
 		break;
 	case 40000:
-		speed |= ETH_LINK_SPEED_40G;
+		speed |= RTE_ETH_LINK_SPEED_40G;
 		break;
 	case 50000:
-		speed |= ETH_LINK_SPEED_50G;
+		speed |= RTE_ETH_LINK_SPEED_50G;
 		break;
 	case 100000:
-		speed |= ETH_LINK_SPEED_100G;
+		speed |= RTE_ETH_LINK_SPEED_100G;
 		break;
 	case 200000:
-		speed |= ETH_LINK_SPEED_200G;
+		speed |= RTE_ETH_LINK_SPEED_200G;
 		break;
 	case 100:
 	case 10:
@@ -1000,13 +1000,13 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "pkt-filter-size")) {
 				if (!strcmp(optarg, "64K"))
 					fdir_conf.pballoc =
-						RTE_FDIR_PBALLOC_64K;
+						RTE_ETH_FDIR_PBALLOC_64K;
 				else if (!strcmp(optarg, "128K"))
 					fdir_conf.pballoc =
-						RTE_FDIR_PBALLOC_128K;
+						RTE_ETH_FDIR_PBALLOC_128K;
 				else if (!strcmp(optarg, "256K"))
 					fdir_conf.pballoc =
-						RTE_FDIR_PBALLOC_256K;
+						RTE_ETH_FDIR_PBALLOC_256K;
 				else
 					rte_exit(EXIT_FAILURE, "pkt-filter-size %s invalid -"
 						 " must be: 64K or 128K or 256K\n",
@@ -1048,34 +1048,34 @@ launch_args_parse(int argc, char** argv)
 			}
 #endif
 			if (!strcmp(lgopts[opt_idx].name, "disable-crc-strip"))
-				rx_offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 			if (!strcmp(lgopts[opt_idx].name, "enable-lro"))
-				rx_offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 			if (!strcmp(lgopts[opt_idx].name, "enable-scatter"))
-				rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 			if (!strcmp(lgopts[opt_idx].name, "enable-rx-cksum"))
-				rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-rx-timestamp"))
-				rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 			if (!strcmp(lgopts[opt_idx].name, "enable-hw-vlan"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-vlan-filter"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-vlan-strip"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-vlan-extend"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-qinq-strip"))
-				rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 
 			if (!strcmp(lgopts[opt_idx].name, "enable-drop-en"))
 				rx_drop_en = 1;
@@ -1097,13 +1097,13 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "forward-mode"))
 				set_pkt_forwarding_mode(optarg);
 			if (!strcmp(lgopts[opt_idx].name, "rss-ip"))
-				rss_hf = ETH_RSS_IP;
+				rss_hf = RTE_ETH_RSS_IP;
 			if (!strcmp(lgopts[opt_idx].name, "rss-udp"))
-				rss_hf = ETH_RSS_UDP;
+				rss_hf = RTE_ETH_RSS_UDP;
 			if (!strcmp(lgopts[opt_idx].name, "rss-level-inner"))
-				rss_hf |= ETH_RSS_LEVEL_INNERMOST;
+				rss_hf |= RTE_ETH_RSS_LEVEL_INNERMOST;
 			if (!strcmp(lgopts[opt_idx].name, "rss-level-outer"))
-				rss_hf |= ETH_RSS_LEVEL_OUTERMOST;
+				rss_hf |= RTE_ETH_RSS_LEVEL_OUTERMOST;
 			if (!strcmp(lgopts[opt_idx].name, "rxq")) {
 				n = atoi(optarg);
 				if (n >= 0 && check_nb_rxq((queueid_t)n) == 0)
@@ -1482,12 +1482,12 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "rx-mq-mode")) {
 				char *end = NULL;
 				n = strtoul(optarg, &end, 16);
-				if (n >= 0 && n <= ETH_MQ_RX_VMDQ_DCB_RSS)
+				if (n >= 0 && n <= RTE_ETH_MQ_RX_VMDQ_DCB_RSS)
 					rx_mq_mode = (enum rte_eth_rx_mq_mode)n;
 				else
 					rte_exit(EXIT_FAILURE,
 						 "rx-mq-mode must be >= 0 and <= %d\n",
-						 ETH_MQ_RX_VMDQ_DCB_RSS);
+						 RTE_ETH_MQ_RX_VMDQ_DCB_RSS);
 			}
 			if (!strcmp(lgopts[opt_idx].name, "record-core-cycles"))
 				record_core_cycles = 1;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index af0e79fe6d51..bf2420db0da6 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -348,7 +348,7 @@ uint64_t noisy_lkup_num_reads_writes;
 /*
  * Receive Side Scaling (RSS) configuration.
  */
-uint64_t rss_hf = ETH_RSS_IP; /* RSS IP by default. */
+uint64_t rss_hf = RTE_ETH_RSS_IP; /* RSS IP by default. */
 
 /*
  * Port topology configuration
@@ -459,12 +459,12 @@ lcoreid_t latencystats_lcore_id = -1;
 struct rte_eth_rxmode rx_mode;
 
 struct rte_eth_txmode tx_mode = {
-	.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
+	.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
 };
 
-struct rte_fdir_conf fdir_conf = {
+struct rte_eth_fdir_conf fdir_conf = {
 	.mode = RTE_FDIR_MODE_NONE,
-	.pballoc = RTE_FDIR_PBALLOC_64K,
+	.pballoc = RTE_ETH_FDIR_PBALLOC_64K,
 	.status = RTE_FDIR_REPORT_STATUS,
 	.mask = {
 		.vlan_tci_mask = 0xFFEF,
@@ -518,7 +518,7 @@ uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
 /*
  * hexadecimal bitmask of RX mq mode can be enabled.
  */
-enum rte_eth_rx_mq_mode rx_mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
+enum rte_eth_rx_mq_mode rx_mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
 
 /*
  * Used to set forced link speed
@@ -1572,9 +1572,9 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
 	if (ret != 0)
 		rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
 
-	if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(port->dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		port->dev_conf.txmode.offloads &=
-			~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Apply Rx offloads configuration */
 	for (i = 0; i < port->dev_info.max_rx_queues; i++)
@@ -1711,8 +1711,8 @@ init_config(void)
 
 	init_port_config();
 
-	gso_types = DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_UDP_TSO;
+	gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO;
 	/*
 	 * Records which Mbuf pool to use by each logical core, if needed.
 	 */
@@ -3457,7 +3457,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -3751,17 +3751,17 @@ init_port_config(void)
 			if (port->dev_conf.rx_adv_conf.rss_conf.rss_hf != 0) {
 				port->dev_conf.rxmode.mq_mode =
 					(enum rte_eth_rx_mq_mode)
-						(rx_mq_mode & ETH_MQ_RX_RSS);
+						(rx_mq_mode & RTE_ETH_MQ_RX_RSS);
 			} else {
-				port->dev_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+				port->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
 				port->dev_conf.rxmode.offloads &=
-						~DEV_RX_OFFLOAD_RSS_HASH;
+						~RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 				for (i = 0;
 				     i < port->dev_info.nb_rx_queues;
 				     i++)
 					port->rx_conf[i].offloads &=
-						~DEV_RX_OFFLOAD_RSS_HASH;
+						~RTE_ETH_RX_OFFLOAD_RSS_HASH;
 			}
 		}
 
@@ -3849,9 +3849,9 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 		vmdq_rx_conf->enable_default_pool = 0;
 		vmdq_rx_conf->default_pool = 0;
 		vmdq_rx_conf->nb_queue_pools =
-			(num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+			(num_tcs ==  RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
 		vmdq_tx_conf->nb_queue_pools =
-			(num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+			(num_tcs ==  RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
 
 		vmdq_rx_conf->nb_pool_maps = vmdq_rx_conf->nb_queue_pools;
 		for (i = 0; i < vmdq_rx_conf->nb_pool_maps; i++) {
@@ -3859,7 +3859,7 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 			vmdq_rx_conf->pool_map[i].pools =
 				1 << (i % vmdq_rx_conf->nb_queue_pools);
 		}
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			vmdq_rx_conf->dcb_tc[i] = i % num_tcs;
 			vmdq_tx_conf->dcb_tc[i] = i % num_tcs;
 		}
@@ -3867,8 +3867,8 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 		/* set DCB mode of RX and TX of multiple queues */
 		eth_conf->rxmode.mq_mode =
 				(enum rte_eth_rx_mq_mode)
-					(rx_mq_mode & ETH_MQ_RX_VMDQ_DCB);
-		eth_conf->txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+					(rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB);
+		eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
 	} else {
 		struct rte_eth_dcb_rx_conf *rx_conf =
 				&eth_conf->rx_adv_conf.dcb_rx_conf;
@@ -3884,23 +3884,23 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 		rx_conf->nb_tcs = num_tcs;
 		tx_conf->nb_tcs = num_tcs;
 
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			rx_conf->dcb_tc[i] = i % num_tcs;
 			tx_conf->dcb_tc[i] = i % num_tcs;
 		}
 
 		eth_conf->rxmode.mq_mode =
 				(enum rte_eth_rx_mq_mode)
-					(rx_mq_mode & ETH_MQ_RX_DCB_RSS);
+					(rx_mq_mode & RTE_ETH_MQ_RX_DCB_RSS);
 		eth_conf->rx_adv_conf.rss_conf = rss_conf;
-		eth_conf->txmode.mq_mode = ETH_MQ_TX_DCB;
+		eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_DCB;
 	}
 
 	if (pfc_en)
 		eth_conf->dcb_capability_en =
-				ETH_DCB_PG_SUPPORT | ETH_DCB_PFC_SUPPORT;
+				RTE_ETH_DCB_PG_SUPPORT | RTE_ETH_DCB_PFC_SUPPORT;
 	else
-		eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
+		eth_conf->dcb_capability_en = RTE_ETH_DCB_PG_SUPPORT;
 
 	return 0;
 }
@@ -3929,7 +3929,7 @@ init_port_dcb_config(portid_t pid,
 	retval = get_eth_dcb_conf(pid, &port_conf, dcb_mode, num_tcs, pfc_en);
 	if (retval < 0)
 		return retval;
-	port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	/* re-configure the device . */
 	retval = rte_eth_dev_configure(pid, nb_rxq, nb_rxq, &port_conf);
@@ -3979,7 +3979,7 @@ init_port_dcb_config(portid_t pid,
 
 	rxtx_port_config(rte_port);
 	/* VLAN filter */
-	rte_port->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	rte_port->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	for (i = 0; i < RTE_DIM(vlan_tags); i++)
 		rx_vft_set(pid, vlan_tags[i], 1);
 
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index e3995d24ab53..ccd025d5e0f5 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -491,7 +491,7 @@ extern lcoreid_t bitrate_lcore_id;
 extern uint8_t bitrate_enabled;
 #endif
 
-extern struct rte_fdir_conf fdir_conf;
+extern struct rte_eth_fdir_conf fdir_conf;
 
 extern uint32_t max_rx_pkt_len;
 
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index e45f8840c91c..9eb7992815e8 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -354,11 +354,11 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	tx_offloads = txp->dev_conf.txmode.offloads;
 	vlan_tci = txp->tx_vlan_id;
 	vlan_tci_outer = txp->tx_vlan_id_outer;
-	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ol_flags = PKT_TX_VLAN_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		ol_flags |= PKT_TX_QINQ_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 
 	/*
diff --git a/app/test/test_ethdev_link.c b/app/test/test_ethdev_link.c
index ee11987bae28..6248aea49abd 100644
--- a/app/test/test_ethdev_link.c
+++ b/app/test/test_ethdev_link.c
@@ -14,10 +14,10 @@ test_link_status_up_default(void)
 {
 	int ret = 0;
 	struct rte_eth_link link_status = {
-		.link_speed = ETH_SPEED_NUM_2_5G,
-		.link_status = ETH_LINK_UP,
-		.link_autoneg = ETH_LINK_AUTONEG,
-		.link_duplex = ETH_LINK_FULL_DUPLEX
+		.link_speed = RTE_ETH_SPEED_NUM_2_5G,
+		.link_status = RTE_ETH_LINK_UP,
+		.link_autoneg = RTE_ETH_LINK_AUTONEG,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -27,9 +27,9 @@ test_link_status_up_default(void)
 	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg",
 		text, strlen(text), "Invalid default link status string");
 
-	link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link_status.link_autoneg = ETH_LINK_FIXED;
-	link_status.link_speed = ETH_SPEED_NUM_10M,
+	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link_status.link_autoneg = RTE_ETH_LINK_FIXED;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_10M;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #2: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -37,7 +37,7 @@ test_link_status_up_default(void)
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
-	link_status.link_speed = ETH_SPEED_NUM_UNKNOWN;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -45,7 +45,7 @@ test_link_status_up_default(void)
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
-	link_status.link_speed = ETH_SPEED_NUM_NONE;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -54,9 +54,9 @@ test_link_status_up_default(void)
 		"string with HDX");
 
 	/* test max str len */
-	link_status.link_speed = ETH_SPEED_NUM_200G;
-	link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link_status.link_autoneg = ETH_LINK_AUTONEG;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_200G;
+	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link_status.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #4:len = %d, %s\n", ret, text);
 	RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
@@ -69,10 +69,10 @@ test_link_status_down_default(void)
 {
 	int ret = 0;
 	struct rte_eth_link link_status = {
-		.link_speed = ETH_SPEED_NUM_2_5G,
-		.link_status = ETH_LINK_DOWN,
-		.link_autoneg = ETH_LINK_AUTONEG,
-		.link_duplex = ETH_LINK_FULL_DUPLEX
+		.link_speed = RTE_ETH_SPEED_NUM_2_5G,
+		.link_status = RTE_ETH_LINK_DOWN,
+		.link_autoneg = RTE_ETH_LINK_AUTONEG,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -90,9 +90,9 @@ test_link_status_invalid(void)
 	int ret = 0;
 	struct rte_eth_link link_status = {
 		.link_speed = 55555,
-		.link_status = ETH_LINK_UP,
-		.link_autoneg = ETH_LINK_AUTONEG,
-		.link_duplex = ETH_LINK_FULL_DUPLEX
+		.link_status = RTE_ETH_LINK_UP,
+		.link_autoneg = RTE_ETH_LINK_AUTONEG,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -116,21 +116,21 @@ test_link_speed_all_values(void)
 		const char *value;
 		uint32_t link_speed;
 	} speed_str_map[] = {
-		{ "None",   ETH_SPEED_NUM_NONE },
-		{ "10 Mbps",  ETH_SPEED_NUM_10M },
-		{ "100 Mbps", ETH_SPEED_NUM_100M },
-		{ "1 Gbps",   ETH_SPEED_NUM_1G },
-		{ "2.5 Gbps", ETH_SPEED_NUM_2_5G },
-		{ "5 Gbps",   ETH_SPEED_NUM_5G },
-		{ "10 Gbps",  ETH_SPEED_NUM_10G },
-		{ "20 Gbps",  ETH_SPEED_NUM_20G },
-		{ "25 Gbps",  ETH_SPEED_NUM_25G },
-		{ "40 Gbps",  ETH_SPEED_NUM_40G },
-		{ "50 Gbps",  ETH_SPEED_NUM_50G },
-		{ "56 Gbps",  ETH_SPEED_NUM_56G },
-		{ "100 Gbps", ETH_SPEED_NUM_100G },
-		{ "200 Gbps", ETH_SPEED_NUM_200G },
-		{ "Unknown",  ETH_SPEED_NUM_UNKNOWN },
+		{ "None",   RTE_ETH_SPEED_NUM_NONE },
+		{ "10 Mbps",  RTE_ETH_SPEED_NUM_10M },
+		{ "100 Mbps", RTE_ETH_SPEED_NUM_100M },
+		{ "1 Gbps",   RTE_ETH_SPEED_NUM_1G },
+		{ "2.5 Gbps", RTE_ETH_SPEED_NUM_2_5G },
+		{ "5 Gbps",   RTE_ETH_SPEED_NUM_5G },
+		{ "10 Gbps",  RTE_ETH_SPEED_NUM_10G },
+		{ "20 Gbps",  RTE_ETH_SPEED_NUM_20G },
+		{ "25 Gbps",  RTE_ETH_SPEED_NUM_25G },
+		{ "40 Gbps",  RTE_ETH_SPEED_NUM_40G },
+		{ "50 Gbps",  RTE_ETH_SPEED_NUM_50G },
+		{ "56 Gbps",  RTE_ETH_SPEED_NUM_56G },
+		{ "100 Gbps", RTE_ETH_SPEED_NUM_100G },
+		{ "200 Gbps", RTE_ETH_SPEED_NUM_200G },
+		{ "Unknown",  RTE_ETH_SPEED_NUM_UNKNOWN },
 		{ "Invalid",   50505 }
 	};
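
[Reviewer aside, not part of the patch: the API exercised by
test_ethdev_link.c above, called with the renamed constants.]

	struct rte_eth_link link = {
		.link_speed = RTE_ETH_SPEED_NUM_25G,
		.link_status = RTE_ETH_LINK_UP,
		.link_autoneg = RTE_ETH_LINK_AUTONEG,
		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
	};
	char text[RTE_ETH_LINK_MAX_STR_LEN];

	/* Expected output: "Link up at 25 Gbps FDX Autoneg" */
	rte_eth_link_to_str(text, sizeof(text), &link);
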
 
diff --git a/app/test/test_event_eth_rx_adapter.c b/app/test/test_event_eth_rx_adapter.c
index add4d8a67821..a09253e91814 100644
--- a/app/test/test_event_eth_rx_adapter.c
+++ b/app/test/test_event_eth_rx_adapter.c
@@ -103,7 +103,7 @@ port_init_rx_intr(uint16_t port, struct rte_mempool *mp)
 {
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_NONE,
+			.mq_mode = RTE_ETH_MQ_RX_NONE,
 		},
 		.intr_conf = {
 			.rxq = 1,
@@ -118,7 +118,7 @@ port_init(uint16_t port, struct rte_mempool *mp)
 {
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_NONE,
+			.mq_mode = RTE_ETH_MQ_RX_NONE,
 		},
 	};
 
diff --git a/app/test/test_kni.c b/app/test/test_kni.c
index 96733554b6c4..40ab0d5c4ca4 100644
--- a/app/test/test_kni.c
+++ b/app/test/test_kni.c
@@ -74,7 +74,7 @@ static const struct rte_eth_txconf tx_conf = {
 
 static const struct rte_eth_conf port_conf = {
 	.txmode = {
-		.mq_mode = ETH_DCB_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 5388d18125a6..8a9ef851789f 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -134,11 +134,11 @@ static uint16_t vlan_id = 0x100;
 
 static struct rte_eth_conf default_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 189d2430f27e..351129de2f9b 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -107,11 +107,11 @@ static struct link_bonding_unittest_params test_params  = {
 
 static struct rte_eth_conf default_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index e7bb0497b663..f9eae9397386 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -52,7 +52,7 @@ struct slave_conf {
 
 	struct rte_eth_rss_conf rss_conf;
 	uint8_t rss_key[40];
-	struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
 
 	uint8_t is_slave;
 	struct rte_ring *rxtx_queue[RXTX_QUEUE_COUNT];
@@ -61,7 +61,7 @@ struct slave_conf {
 struct link_bonding_rssconf_unittest_params {
 	uint8_t bond_port_id;
 	struct rte_eth_dev_info bond_dev_info;
-	struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
 	struct slave_conf slave_ports[SLAVE_COUNT];
 
 	struct rte_mempool *mbuf_pool;
@@ -80,27 +80,27 @@ static struct link_bonding_rssconf_unittest_params test_params  = {
  */
 static struct rte_eth_conf default_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
 
 static struct rte_eth_conf rss_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IPV6,
+			.rss_hf = RTE_ETH_RSS_IPV6,
 		},
 	},
 	.lpbk_mode = 0,
@@ -207,13 +207,13 @@ bond_slaves(void)
 static int
 reta_set(uint16_t port_id, uint8_t value, int reta_size)
 {
-	struct rte_eth_rss_reta_entry64 reta_conf[512/RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[512/RTE_ETH_RETA_GROUP_SIZE];
 	int i, j;
 
-	for (i = 0; i < reta_size / RTE_RETA_GROUP_SIZE; i++) {
+	for (i = 0; i < reta_size / RTE_ETH_RETA_GROUP_SIZE; i++) {
 		/* select all fields to set */
 		reta_conf[i].mask = ~0LL;
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			reta_conf[i].reta[j] = value;
 	}
 
@@ -232,8 +232,8 @@ reta_check_synced(struct slave_conf *port)
 	for (i = 0; i < test_params.bond_dev_info.reta_size;
 			i++) {
 
-		int index = i / RTE_RETA_GROUP_SIZE;
-		int shift = i % RTE_RETA_GROUP_SIZE;
+		int index = i / RTE_ETH_RETA_GROUP_SIZE;
+		int shift = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (port->reta_conf[index].reta[shift] !=
 				test_params.bond_reta_conf[index].reta[shift])
@@ -251,7 +251,7 @@ static int
 bond_reta_fetch(void) {
 	unsigned j;
 
-	for (j = 0; j < test_params.bond_dev_info.reta_size / RTE_RETA_GROUP_SIZE;
+	for (j = 0; j < test_params.bond_dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE;
 			j++)
 		test_params.bond_reta_conf[j].mask = ~0LL;
 
@@ -268,7 +268,7 @@ static int
 slave_reta_fetch(struct slave_conf *port) {
 	unsigned j;
 
-	for (j = 0; j < port->dev_info.reta_size / RTE_RETA_GROUP_SIZE; j++)
+	for (j = 0; j < port->dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE; j++)
 		port->reta_conf[j].mask = ~0LL;
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_rss_reta_query(port->port_id,
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index a3b4f52c65e6..1df86ce080e5 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -62,11 +62,11 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 1,  /* enable loopback */
 };
@@ -155,7 +155,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -822,7 +822,7 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
 		/* bulk alloc rx, full-featured tx */
 		tx_conf.tx_rs_thresh = 32;
 		tx_conf.tx_free_thresh = 32;
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 		return 0;
 	} else if (!strcmp(mode, "hybrid")) {
 		/* bulk alloc rx, vector tx
@@ -831,13 +831,13 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
 		 */
 		tx_conf.tx_rs_thresh = 32;
 		tx_conf.tx_free_thresh = 32;
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 		return 0;
 	} else if (!strcmp(mode, "full")) {
 		/* full feature rx,tx pair */
 		tx_conf.tx_rs_thresh = 32;
 		tx_conf.tx_free_thresh = 32;
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 		return 0;
 	}
 
diff --git a/app/test/virtual_pmd.c b/app/test/virtual_pmd.c
index 7e15b47eb0fb..d9f2e4f66bde 100644
--- a/app/test/virtual_pmd.c
+++ b/app/test/virtual_pmd.c
@@ -53,7 +53,7 @@ static int  virtual_ethdev_stop(struct rte_eth_dev *eth_dev __rte_unused)
 	void *pkt = NULL;
 	struct virtual_ethdev_private *prv = eth_dev->data->dev_private;
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 0;
 	while (rte_ring_dequeue(prv->rx_queue, &pkt) != -ENOENT)
 		rte_pktmbuf_free(pkt);
@@ -168,7 +168,7 @@ virtual_ethdev_link_update_success(struct rte_eth_dev *bonded_eth_dev,
 		int wait_to_complete __rte_unused)
 {
 	if (!bonded_eth_dev->data->dev_started)
-		bonded_eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		bonded_eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -562,9 +562,9 @@ virtual_ethdev_create(const char *name, struct rte_ether_addr *mac_addr,
 	eth_dev->data->nb_rx_queues = (uint16_t)1;
 	eth_dev->data->nb_tx_queues = (uint16_t)1;
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
-	eth_dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
-	eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	eth_dev->data->mac_addrs = rte_zmalloc(name, RTE_ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL)
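
[Reviewer aside, not part of the patch: a minimal sketch probing for the
lock-free Tx capability named in the eventdev doc updates below; port_id is
assumed valid.]

	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) == 0 &&
	    (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MT_LOCKFREE))
		/* Concurrent rte_eth_tx_burst() on one queue is allowed. */
		printf("port %u: lock-free Tx supported\n", port_id);
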
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 53560d3830d7..1c0ea988f239 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -42,7 +42,7 @@ Features of the OCTEON cnxk SSO PMD are:
 - HW managed packets enqueued from ethdev to eventdev exposed through event eth
   RX adapter.
 - N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
   capability while maintaining receive packet order.
 - Full Rx/Tx offload support defined through ethdev queue configuration.
 - HW managed event vectorization on CN10K for packets enqueued from ethdev to
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 11fbebfcd243..0fa57abfa3e0 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -35,7 +35,7 @@ Features of the OCTEON TX2 SSO PMD are:
 - HW managed packets enqueued from ethdev to eventdev exposed through event eth
   RX adapter.
 - N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
   capability while maintaining receive packet order.
 - Full Rx/Tx offload support defined through ethdev queue config.
 
diff --git a/doc/guides/nics/af_packet.rst b/doc/guides/nics/af_packet.rst
index bdd6e7263c85..54feffdef4bd 100644
--- a/doc/guides/nics/af_packet.rst
+++ b/doc/guides/nics/af_packet.rst
@@ -70,5 +70,5 @@ Features and Limitations
 ------------------------
 
 The PMD will re-insert the VLAN tag transparently to the packet if the kernel
-strips it, as long as the ``DEV_RX_OFFLOAD_VLAN_STRIP`` is not enabled by the
+strips it, as long as the ``RTE_ETH_RX_OFFLOAD_VLAN_STRIP`` is not enabled by the
 application.
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index aa6032889a55..b3d10f30dc77 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -877,21 +877,21 @@ processing. This improved performance is derived from a number of optimizations:
     * TX: only the following reduced set of transmit offloads is supported in
       vector mode::
 
-       DEV_TX_OFFLOAD_MBUF_FAST_FREE
+       RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
 
     * RX: only the following reduced set of receive offloads is supported in
       vector mode (note that jumbo MTU is allowed only when the MTU setting
-      does not require `DEV_RX_OFFLOAD_SCATTER` to be enabled)::
-
-       DEV_RX_OFFLOAD_VLAN_STRIP
-       DEV_RX_OFFLOAD_KEEP_CRC
-       DEV_RX_OFFLOAD_IPV4_CKSUM
-       DEV_RX_OFFLOAD_UDP_CKSUM
-       DEV_RX_OFFLOAD_TCP_CKSUM
-       DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
-       DEV_RX_OFFLOAD_OUTER_UDP_CKSUM
-       DEV_RX_OFFLOAD_RSS_HASH
-       DEV_RX_OFFLOAD_VLAN_FILTER
+      does not require `RTE_ETH_RX_OFFLOAD_SCATTER` to be enabled)::
+
+       RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+       RTE_ETH_RX_OFFLOAD_KEEP_CRC
+       RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+       RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+       RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+       RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+       RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+       RTE_ETH_RX_OFFLOAD_RSS_HASH
+       RTE_ETH_RX_OFFLOAD_VLAN_FILTER
 
 The BNXT Vector PMD is enabled in DPDK builds by default. The decision to enable
 vector processing is made at run-time when the port is started; if no transmit
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index 91bdcd065a95..0209730b904a 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -432,7 +432,7 @@ Limitations
 .. code-block:: console
 
      vlan_offload = rte_eth_dev_get_vlan_offload(port);
-     vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
+     vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
      rte_eth_dev_set_vlan_offload(port, vlan_offload);
 
 Another alternative is modify the adapter's ingress VLAN rewrite mode so that
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 8dd421ca013b..b48d9dcb9591 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -30,7 +30,7 @@ Speed capabilities
 
 Supports getting the speed capabilities that the current device is capable of.
 
-* **[provides] rte_eth_dev_info**: ``speed_capa:ETH_LINK_SPEED_*``.
+* **[provides] rte_eth_dev_info**: ``speed_capa:RTE_ETH_LINK_SPEED_*``.
 * **[related]  API**: ``rte_eth_dev_info_get()``.
 
 
@@ -101,11 +101,11 @@ Supports Rx interrupts.
 Lock-free Tx queue
 ------------------
 
-If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+If a PMD advertises RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
 invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
 
-* **[uses]    rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
+* **[uses]    rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
 * **[related]  API**: ``rte_eth_tx_burst()``.
 
 
@@ -117,8 +117,8 @@ Fast mbuf free
 Supports optimization for fast release of mbufs following successful Tx.
 Requires that per queue, all mbufs come from the same mempool and has refcnt = 1.
 
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
-* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
 
 
 .. _nic_features_free_tx_mbuf_on_demand:
@@ -177,7 +177,7 @@ Scattered Rx
 
 Supports receiving segmented mbufs.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SCATTER``.
 * **[implements] datapath**: ``Scattered Rx function``.
 * **[implements] rte_eth_dev_data**: ``scattered_rx``.
 * **[provides]   eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -205,12 +205,12 @@ LRO
 
 Supports Large Receive Offload.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
   ``dev_conf.rxmode.max_lro_pkt_size``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
 * **[provides]   rte_eth_dev_info**: ``max_lro_pkt_size``.
 
 
@@ -221,12 +221,12 @@ TSO
 
 Supports TCP Segmentation Offloading.
 
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_TCP_TSO``.
 * **[uses]       rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:`` ``PKT_TX_TCP_SEG``, ``PKT_TX_IPV4``, ``PKT_TX_IPV6``, ``PKT_TX_IP_CKSUM``.
 * **[uses]       mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
 * **[implements] datapath**: ``TSO functionality``.
-* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_TCP_TSO,RTE_ETH_TX_OFFLOAD_UDP_TSO``.
 
 
 .. _nic_features_promiscuous_mode:
@@ -287,9 +287,9 @@ RSS hash
 
 Supports RSS hashing on RX.
 
-* **[uses]     user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_RSS_FLAG``.
+* **[uses]     user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_RSS_FLAG``.
 * **[uses]     user config**: ``dev_conf.rx_adv_conf.rss_conf``.
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
 * **[provides] rte_eth_dev_info**: ``flow_type_rss_offloads``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
 
@@ -302,7 +302,7 @@ Inner RSS
 Supports RX RSS hashing on Inner headers.
 
 * **[uses]    rte_flow_action_rss**: ``level``.
-* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
 
 
@@ -339,7 +339,7 @@ VMDq
 
 Supports Virtual Machine Device Queues (VMDq).
 
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_VMDQ_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_VMDQ_FLAG``.
 * **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
 * **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_rx_conf``.
 * **[uses] user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -362,7 +362,7 @@ DCB
 
 Supports Data Center Bridging (DCB).
 
-* **[uses]       user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_DCB_FLAG``.
+* **[uses]       user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_DCB_FLAG``.
 * **[uses]       user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
 * **[uses]       user config**: ``dev_conf.rx_adv_conf.dcb_rx_conf``.
 * **[uses]       user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -378,7 +378,7 @@ VLAN filter
 
 Supports filtering of a VLAN Tag identifier.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_FILTER``.
 * **[implements] eth_dev_ops**: ``vlan_filter_set``.
 * **[related]    API**: ``rte_eth_dev_vlan_filter()``.
 
@@ -416,13 +416,13 @@ Supports inline crypto processing defined by rte_security library to perform cry
 operations of the security protocol while the packet is received in the NIC. The NIC is
 not aware of the protocol operations. See the Security library and PMD documentation for more details.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[uses]       mbuf**: ``mbuf.l2_len``.
 * **[implements] rte_security_ops**: ``session_create``, ``session_update``,
   ``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
   ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO``
@@ -438,14 +438,14 @@ protocol processing for the security protocol (e.g. IPsec, MACSEC) while the
 packet is received at the NIC. The NIC is capable of understanding the security
 protocol operations. See the security library and PMD documentation for more details.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[uses]       mbuf**: ``mbuf.l2_len``.
 * **[implements] rte_security_ops**: ``session_create``, ``session_update``,
   ``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``get_userdata``,
   ``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
   ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL``
@@ -459,7 +459,7 @@ CRC offload
 Supports CRC stripping by hardware.
 A PMD is assumed to support CRC stripping by default. A PMD should advertise whether it supports keeping the CRC.
 
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_KEEP_CRC``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_KEEP_CRC``.
 
 
 .. _nic_features_vlan_offload:
@@ -469,13 +469,13 @@ VLAN offload
 
 Supports VLAN offload to hardware.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_STRIP,RTE_ETH_RX_OFFLOAD_VLAN_FILTER,RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:PKT_TX_VLAN``, ``mbuf.vlan_tci``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN`` ``mbuf.vlan_tci``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_VLAN_STRIP``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
   ``rte_eth_dev_get_vlan_offload()``.
 
@@ -487,14 +487,14 @@ QinQ offload
 
 Supports QinQ (queue in queue) offload.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ``, ``mbuf.vlan_tci_outer``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.ol_flags:PKT_RX_QINQ``,
   ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN``
   ``mbuf.vlan_tci``, ``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
 
 
 .. _nic_features_fec:
@@ -508,7 +508,7 @@ information to correct the bit errors generated during data packet transmission
 improves signal quality but also brings a delay to signals. This function can be enabled or disabled as required.
 
 * **[implements] eth_dev_ops**: ``fec_get_capability``, ``fec_get``, ``fec_set``.
-* **[provides]   rte_eth_fec_capa**: ``speed:ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
+* **[provides]   rte_eth_fec_capa**: ``speed:RTE_ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
 * **[related]    API**: ``rte_eth_fec_get_capability()``, ``rte_eth_fec_get()``, ``rte_eth_fec_set()``.
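
A sketch of the related API usage, assuming the port supports RS FEC and
``port_id`` is defined by the application::

    struct rte_eth_fec_capa fec_capa[8];
    uint32_t fec_mode;
    int num;

    /* Query per-speed FEC capabilities, then request a specific mode. */
    num = rte_eth_fec_get_capability(port_id, fec_capa, RTE_DIM(fec_capa));
    if (num > 0 &&
        (fec_capa[0].capa & RTE_ETH_FEC_MODE_TO_CAPA(RTE_ETH_FEC_RS)))
        rte_eth_fec_set(port_id, RTE_ETH_FEC_MODE_TO_CAPA(RTE_ETH_FEC_RS));
    rte_eth_fec_get(port_id, &fec_mode);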
 
 
@@ -519,16 +519,16 @@ L3 checksum offload
 
 Supports L3 checksum offload.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[uses]     mbuf**: ``mbuf.l2_len``, ``mbuf.l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
   ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
   ``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
 
 
 .. _nic_features_l4_checksum_offload:
@@ -538,8 +538,8 @@ L4 checksum offload
 
 Supports L4 checksum offload.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -547,8 +547,8 @@ Supports L4 checksum offload.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
   ``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
 
 .. _nic_features_hw_timestamp:
 
@@ -557,10 +557,10 @@ Timestamp offload
 
 Supports Timestamp.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_TIMESTAMP``.
 * **[provides] mbuf**: ``mbuf.timestamp``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
 * **[related] eth_dev_ops**: ``read_clock``.
 
 .. _nic_features_macsec_offload:
@@ -570,11 +570,11 @@ MACsec offload
 
 Supports MACsec.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
 
 
 .. _nic_features_inner_l3_checksum:
@@ -584,16 +584,16 @@ Inner L3 checksum
 
 Supports inner packet L3 checksum.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_IP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
 
 .. _nic_features_inner_l4_checksum:
@@ -603,15 +603,15 @@ Inner L4 checksum
 
 Supports inner packet L4 checksum.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_OUTER_L4_CKSUM_BAD`` | ``PKT_RX_OUTER_L4_CKSUM_GOOD`` | ``PKT_RX_OUTER_L4_CKSUM_INVALID``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
   ``mbuf.ol_flags:PKT_TX_OUTER_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
 
 
 .. _nic_features_packet_type_parsing:
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index ed6afd62703d..bba53f5a64ee 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -78,11 +78,11 @@ To enable via ``RX_OLFLAGS`` use ``RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y``.
 To guarantee the constraint, the following capabilities in ``dev_conf.rxmode.offloads``
 will be checked:
 
-*   ``DEV_RX_OFFLOAD_VLAN_EXTEND``
+*   ``RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``
 
-*   ``DEV_RX_OFFLOAD_CHECKSUM``
+*   ``RTE_ETH_RX_OFFLOAD_CHECKSUM``
 
-*   ``DEV_RX_OFFLOAD_HEADER_SPLIT``
+*   ``RTE_ETH_RX_OFFLOAD_HEADER_SPLIT``
 
 *   ``fdir_conf->mode``
 
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 2efdd1a41bb4..a1e236ad75e5 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -216,21 +216,21 @@ For example,
     *   If the max number of VFs (max_vfs) is set in the range of 1 to 32:
 
         If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then there are a total of 32
-        pools (ETH_32_POOLS), and each VF could have 4 Rx queues;
+        pools (RTE_ETH_32_POOLS), and each VF could have 4 Rx queues;
 
         If the number of Rx queues is specified as 2 (``--rxq=2`` in testpmd), then there are a total of 32
-        pools (ETH_32_POOLS), and each VF could have 2 Rx queues;
+        pools (RTE_ETH_32_POOLS), and each VF could have 2 Rx queues;
 
     *   If the max number of VFs (max_vfs) is in the range of 33 to 64:
 
         If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then an error message is expected
         as ``rxq`` is not correct in this case;
 
-        If the number of rxq is 2 (``--rxq=2`` in testpmd), then there is totally 64 pools (ETH_64_POOLS),
+        If the number of rxq is 2 (``--rxq=2`` in testpmd), then there are a total of 64 pools (RTE_ETH_64_POOLS),
         and each VF has 2 Rx queues;
 
-    On host, to enable VF RSS functionality, rx mq mode should be set as ETH_MQ_RX_VMDQ_RSS
-    or ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
+    On host, to enable VF RSS functionality, rx mq mode should be set as RTE_ETH_MQ_RX_VMDQ_RSS
+    or RTE_ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
     It also requires configuring VF RSS information such as the hash function, RSS key, and RSS key length.
 
 .. note::
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index 20a74b9b5bcd..148d2f5fc2be 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -89,13 +89,13 @@ Other features are supported using optional MACRO configuration. They include:
 
 To guarantee the constraint, capabilities in dev_conf.rxmode.offloads will be checked:
 
-*   DEV_RX_OFFLOAD_VLAN_STRIP
+*   RTE_ETH_RX_OFFLOAD_VLAN_STRIP
 
-*   DEV_RX_OFFLOAD_VLAN_EXTEND
+*   RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
 
-*   DEV_RX_OFFLOAD_CHECKSUM
+*   RTE_ETH_RX_OFFLOAD_CHECKSUM
 
-*   DEV_RX_OFFLOAD_HEADER_SPLIT
+*   RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
 
 *   dev_conf
 
@@ -163,13 +163,13 @@ l3fwd
 ~~~~~
 
 When running l3fwd with vPMD, there is one thing to note.
-In the configuration, ensure that DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
+In the configuration, ensure that RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
 Otherwise, by default, RX vPMD is disabled.
 
 load_balancer
 ~~~~~~~~~~~~~
 
-As in the case of l3fwd, to enable vPMD, do NOT set DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
+As in the case of l3fwd, to enable vPMD, do NOT set RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
 In addition, for improved performance, use -bsz "(32,32),(64,64),(32,32)" in load_balancer to avoid using the default burst size of 144.
 
 
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index e4f58c899031..cc1726207f6c 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -371,7 +371,7 @@ Limitations
 
 - CRC:
 
-  - ``DEV_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
+  - ``RTE_ETH_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
     for some NICs (such as ConnectX-6 Dx, ConnectX-6 Lx, and BlueField-2).
     The capability bit ``scatter_fcs_w_decap_disable`` shows NIC support.
 
@@ -607,7 +607,7 @@ Driver options
   small-packet traffic.
 
   When MPRQ is enabled, MTU can be larger than the size of
-  user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
+  user-provided mbuf even if RTE_ETH_RX_OFFLOAD_SCATTER isn't enabled. PMD will
   configure a stride size large enough to accommodate the MTU as long as the
   device allows. Note that this can waste system memory compared to enabling Rx
   scatter and multi-segment packets.
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 3ce696b605d1..681010d9ed7d 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -275,7 +275,7 @@ An example utility for eBPF instruction generation in the format of C arrays wil
 be added in a future release.
 
 TAP reports on supported RSS functions as part of dev_infos_get callback:
-``ETH_RSS_IP``, ``ETH_RSS_UDP`` and ``ETH_RSS_TCP``.
+``RTE_ETH_RSS_IP``, ``RTE_ETH_RSS_UDP`` and ``RTE_ETH_RSS_TCP``.
 **Known limitation:** TAP supports all of the above hash functions together
 and not in partial combinations.
 
diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
index 7bff0aef0b74..9b2c31a2f0bc 100644
--- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
@@ -194,11 +194,11 @@ To segment an outgoing packet, an application must:
 
    - the bit mask of required GSO types. The GSO library uses the same macros as
      those that describe a physical device's TX offloading capabilities (i.e.
-     ``DEV_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
+     ``RTE_ETH_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
      wants to segment TCP/IPv4 packets, it should set gso_types to
-     ``DEV_TX_OFFLOAD_TCP_TSO``. The only other supported values currently
-     supported for gso_types are ``DEV_TX_OFFLOAD_VXLAN_TNL_TSO``, and
-     ``DEV_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
+     ``RTE_ETH_TX_OFFLOAD_TCP_TSO``. The only other values currently
+     supported for gso_types are ``RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO`` and
+     ``RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
      allowed.
 
    - a flag that indicates whether the IPv4 headers of output segments should
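
A minimal sketch of a GSO context using these flags (``pool`` and ``pkt`` are
assumed to be created elsewhere; the segment size is illustrative)::

    struct rte_gso_ctx gso_ctx = {
        .direct_pool = pool,
        .indirect_pool = pool,
        .gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO |
                     RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO,
        .gso_size = 1400,
        .flag = 0, /* see rte_gso.h for the IPv4 ID flag */
    };
    struct rte_mbuf *segs[64];
    int nb_segs = rte_gso_segment(pkt, &gso_ctx, segs, RTE_DIM(segs));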
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 2f190b40e43a..dc6186a44ae2 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -137,7 +137,7 @@ a vxlan-encapsulated tcp packet:
     mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM
     set out_ip checksum to 0 in the packet
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
 
 - calculate checksum of out_ip and out_udp::
 
@@ -147,8 +147,8 @@ a vxlan-encapsulated tcp packet:
     set out_ip checksum to 0 in the packet
     set out_udp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM
-  and DEV_TX_OFFLOAD_UDP_CKSUM.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+  and RTE_ETH_TX_OFFLOAD_UDP_CKSUM.
 
 - calculate checksum of in_ip::
 
@@ -158,7 +158,7 @@ a vxlan-encapsulated tcp packet:
     set in_ip checksum to 0 in the packet
 
   This is similar to case 1), but l2_len is different. It is supported
-  on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+  on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
   Note that it can only work if outer L4 checksum is 0.
 
 - calculate checksum of in_ip and in_tcp::
@@ -170,8 +170,8 @@ a vxlan-encapsulated tcp packet:
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
   This is similar to case 2), but l2_len is different. It is supported
-  on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM and
-  DEV_TX_OFFLOAD_TCP_CKSUM.
+  on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM and
+  RTE_ETH_TX_OFFLOAD_TCP_CKSUM.
   Note that it can only work if outer L4 checksum is 0.
 
 - segment inner TCP::
@@ -185,7 +185,7 @@ a vxlan-encapsulated tcp packet:
     set in_tcp checksum to pseudo header without including the IP
       payload length using rte_ipv4_phdr_cksum()
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_TCP_TSO.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_TCP_TSO.
   Note that it can only work if outer L4 checksum is 0.
 
 - calculate checksum of out_ip, in_ip, in_tcp::
@@ -200,8 +200,8 @@ a vxlan-encapsulated tcp packet:
     set in_ip checksum to 0 in the packet
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM,
-  DEV_TX_OFFLOAD_UDP_CKSUM and DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
+  RTE_ETH_TX_OFFLOAD_UDP_CKSUM and RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM.
 
 The list of flags and their precise meaning is described in the mbuf API
 documentation (rte_mbuf.h). Also refer to the testpmd source code
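
For reference, the last case above condenses to roughly the following
(``out_ip``, ``in_ip`` and ``in_tcp`` point into the packet data, and the
length variables are assumed to match the actual headers)::

    mb->outer_l2_len = out_eth_len;
    mb->outer_l3_len = out_ip_len;
    mb->l2_len = out_udp_len + vxlan_len + in_eth_len;
    mb->l3_len = in_ip_len;
    mb->ol_flags |= PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM |
                    PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
    out_ip->hdr_checksum = 0;
    in_ip->hdr_checksum = 0;
    in_tcp->cksum = rte_ipv4_phdr_cksum(in_ip, mb->ol_flags);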
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 0d4ac77a7ccf..68312898448c 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -57,7 +57,7 @@ Whenever needed and appropriate, asynchronous communication should be introduced
 
 Avoiding lock contention is a key issue in a multi-core environment.
 To address this issue, PMDs are designed to work with per-core private resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable.
+For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
 In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
 
 To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
@@ -119,7 +119,7 @@ This is also true for the pipe-line model provided all logical cores used are lo
 
 Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
 
-If the PMD is ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
+If the PMD is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
 concurrently on the same tx queue without a SW lock. This PMD feature is found in some NICs and is useful in the following use cases:
 
 *  Remove explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
@@ -127,7 +127,7 @@ concurrently on the same tx queue without SW lock. This PMD feature found in som
 *  In the eventdev use case, avoid dedicating a separate TX core for transmitting and thus
    enable more scaling as all workers can send the packets.
 
-See `Hardware Offload`_ for ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
+See `Hardware Offload`_ for ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
 
 Device Identification, Ownership and Configuration
 --------------------------------------------------
@@ -311,7 +311,7 @@ The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get(
 The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offloading capabilities.
 Supported offloads can be either per-port or per-queue.
 
-Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
+Offloads are enabled using the existing ``RTE_ETH_TX_OFFLOAD_*`` or ``RTE_ETH_RX_OFFLOAD_*`` flags.
 Any requested offloading by an application must be within the device capabilities.
 Any offloading is disabled by default if it is not set in the parameter
 ``dev_conf->[rt]xmode.offloads`` to ``rte_eth_dev_configure()`` and
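
A minimal sketch of requesting offloads within the advertised capabilities
(``port_id`` and the queue and descriptor counts are illustrative)::

    struct rte_eth_dev_info dev_info;
    struct rte_eth_conf port_conf = { 0 };
    struct rte_eth_txconf txconf;

    rte_eth_dev_info_get(port_id, &dev_info);
    if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MT_LOCKFREE)
        port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MT_LOCKFREE;
    rte_eth_dev_configure(port_id, 1, 1, &port_conf);

    /* Per-queue offloads start from the device defaults. */
    txconf = dev_info.default_txconf;
    txconf.offloads = port_conf.txmode.offloads;
    rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(), &txconf);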
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index aeba3741825e..063ff388476a 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1968,23 +1968,23 @@ only matching traffic goes through.
 
 .. table:: RSS
 
-   +---------------+---------------------------------------------+
-   | Field         | Value                                       |
-   +===============+=============================================+
-   | ``func``      | RSS hash function to apply                  |
-   +---------------+---------------------------------------------+
-   | ``level``     | encapsulation level for ``types``           |
-   +---------------+---------------------------------------------+
-   | ``types``     | specific RSS hash types (see ``ETH_RSS_*``) |
-   +---------------+---------------------------------------------+
-   | ``key_len``   | hash key length in bytes                    |
-   +---------------+---------------------------------------------+
-   | ``queue_num`` | number of entries in ``queue``              |
-   +---------------+---------------------------------------------+
-   | ``key``       | hash key                                    |
-   +---------------+---------------------------------------------+
-   | ``queue``     | queue indices to use                        |
-   +---------------+---------------------------------------------+
+   +---------------+-------------------------------------------------+
+   | Field         | Value                                           |
+   +===============+=================================================+
+   | ``func``      | RSS hash function to apply                      |
+   +---------------+-------------------------------------------------+
+   | ``level``     | encapsulation level for ``types``               |
+   +---------------+-------------------------------------------------+
+   | ``types``     | specific RSS hash types (see ``RTE_ETH_RSS_*``) |
+   +---------------+-------------------------------------------------+
+   | ``key_len``   | hash key length in bytes                        |
+   +---------------+-------------------------------------------------+
+   | ``queue_num`` | number of entries in ``queue``                  |
+   +---------------+-------------------------------------------------+
+   | ``key``       | hash key                                        |
+   +---------------+-------------------------------------------------+
+   | ``queue``     | queue indices to use                            |
+   +---------------+-------------------------------------------------+
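
A minimal sketch of filling this action (the queue indices and hash types are
illustrative)::

    uint16_t queues[2] = { 0, 1 };
    struct rte_flow_action_rss rss = {
        .func = RTE_ETH_HASH_FUNCTION_DEFAULT,
        .level = 0, /* outermost encapsulation */
        .types = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP,
        .key_len = 0, /* use the PMD default key */
        .key = NULL,
        .queue_num = RTE_DIM(queues),
        .queue = queues,
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };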
 
 Action: ``PF``
 ^^^^^^^^^^^^^^
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index ad92c16868c1..46c9b51d1bf9 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -569,7 +569,7 @@ created by the application is attached to the security session by the API
 
 For Inline Crypto and Inline protocol offload, device-specific metadata is
 updated in the mbuf using ``rte_security_set_pkt_metadata()`` if
-``DEV_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
+``RTE_ETH_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
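
A minimal sketch of attaching that metadata before Tx (``sess`` is assumed to
come from ``rte_security_session_create()`` and ``dev_info`` from
``rte_eth_dev_info_get()``)::

    struct rte_security_ctx *sec_ctx = rte_eth_dev_get_sec_ctx(port_id);

    if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SEC_NEED_MDATA)
        rte_security_set_pkt_metadata(sec_ctx, sess, mb, NULL);
    rte_eth_tx_burst(port_id, 0, &mb, 1);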
 
 For inline protocol offloaded ingress traffic, the application can register a
 pointer, ``userdata``, in the security session. When the packet is received,
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 0b4d03fb961f..199c3fa0bd70 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -58,22 +58,16 @@ Deprecation Notices
   ``RTE_ETH_FLOW_MAX`` is one sample of the mentioned case, adding a new flow
   type will break the ABI because of ``flex_mask[RTE_ETH_FLOW_MAX]`` array
   usage in following public struct hierarchy:
-  ``rte_eth_fdir_flex_conf -> rte_fdir_conf -> rte_eth_conf (in the middle)``.
+  ``rte_eth_fdir_flex_conf -> rte_eth_fdir_conf -> rte_eth_conf (in the middle)``.
   Need to identify this kind of usages and fix in 20.11, otherwise this blocks
   us extending existing enum/define.
   One solution can be using a fixed size array instead of ``.*MAX.*`` value.
 
-* ethdev: Will add ``RTE_ETH_`` prefix to all ethdev macros/enums in v21.11.
-  Macros will be added for backward compatibility.
-  Backward compatibility macros will be removed on v22.11.
-  A few old backward compatibility macros from 2013 that does not have
-  proper prefix will be removed on v21.11.
-
 * ethdev: The flow director API, including ``rte_eth_conf.fdir_conf`` field,
   and the related structures (``rte_fdir_*`` and ``rte_eth_fdir_*``),
   will be removed in DPDK 20.11.
 
-* ethdev: New offload flags ``DEV_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
+* ethdev: New offload flags ``RTE_ETH_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
   This will allow application to enable or disable PMDs from updating
   ``rte_mbuf::hash::fdir``.
   This scheme will allow PMDs to avoid writes to ``rte_mbuf`` fields on Rx and
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 041383ee2a73..707352099b13 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -368,6 +368,9 @@ ABI Changes
   to be transparent for both users (no changes in the user app are required) and
   PMD developers (no changes in the PMD are required).
 
+* ethdev: All enums & macros updated to have ``RTE_ETH`` prefix and structures
+  updated to have ``rte_eth`` prefix. DPDK components updated to use new names.
+
 
 Known Issues
 ------------
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 78171b25f96e..782574dd39d5 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -209,12 +209,12 @@ Where:
     device will ensure the ordering. Ordering will be lost when tried in PARALLEL.
 
 *   ``--rxoffload MASK``: RX HW offload capabilities to enable/use on this port
-    (bitmask of DEV_RX_OFFLOAD_* values). It is an optional parameter and
+    (bitmask of RTE_ETH_RX_OFFLOAD_* values). It is an optional parameter and
     allows the user to disable some of the RX HW offload capabilities.
     By default all HW RX offloads are enabled.
 
 *   ``--txoffload MASK``: TX HW offload capabilities to enable/use on this port
-    (bitmask of DEV_TX_OFFLOAD_* values). It is an optional parameter and
+    (bitmask of RTE_ETH_TX_OFFLOAD_* values). It is an optional parameter and
     allows the user to disable some of the TX HW offload capabilities.
     By default all HW TX offloads are enabled.
 
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 8ff7ab85369c..2e1446ee461b 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -537,7 +537,7 @@ The command line options are:
     Set the hexadecimal bitmask of the RX multi-queue modes that can be enabled.
     The default value is 0x7::
 
-       ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG | ETH_MQ_RX_VMDQ_FLAG
+       RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG
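
These three flags are defined as single bits in ``rte_ethdev.h`` (assuming
the standard definitions), so the default mask is simply their bitwise OR::

    /* 0x1 | 0x2 | 0x4 == 0x7 */
    uint32_t def_mq_mask = RTE_ETH_MQ_RX_RSS_FLAG |
                           RTE_ETH_MQ_RX_DCB_FLAG |
                           RTE_ETH_MQ_RX_VMDQ_FLAG;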
 
 *   ``--record-core-cycles``
 
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index be52e6f72dab..a922988607ef 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -90,20 +90,20 @@ int dpaa_intr_disable(char *if_name);
 struct usdpaa_ioctl_link_status_args_old {
 	/* network device node name */
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link status(ETH_LINK_UP/DOWN) */
+	/* link status(RTE_ETH_LINK_UP/DOWN) */
 	int     link_status;
 };
 
 struct usdpaa_ioctl_link_status_args {
 	/* network device node name */
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link status(ETH_LINK_UP/DOWN) */
+	/* link status(RTE_ETH_LINK_UP/DOWN) */
 	int     link_status;
-	/* link speed (ETH_SPEED_NUM_)*/
+	/* link speed (RTE_ETH_SPEED_NUM_)*/
 	int     link_speed;
-	/* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+	/* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
 	int     link_duplex;
-	/* link autoneg (ETH_LINK_AUTONEG/FIXED)*/
+	/* link autoneg (RTE_ETH_LINK_AUTONEG/FIXED)*/
 	int     link_autoneg;
 
 };
@@ -111,16 +111,16 @@ struct usdpaa_ioctl_link_status_args {
 struct usdpaa_ioctl_update_link_status_args {
 	/* network device node name */
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link status(ETH_LINK_UP/DOWN) */
+	/* link status(RTE_ETH_LINK_UP/DOWN) */
 	int     link_status;
 };
 
 struct usdpaa_ioctl_update_link_speed {
 	/* network device node name*/
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link speed (ETH_SPEED_NUM_)*/
+	/* link speed (RTE_ETH_SPEED_NUM_)*/
 	int     link_speed;
-	/* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+	/* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
 	int     link_duplex;
 };
 
diff --git a/drivers/common/cnxk/roc_npc.h b/drivers/common/cnxk/roc_npc.h
index 10d1ac82a4bd..21883f6b3f66 100644
--- a/drivers/common/cnxk/roc_npc.h
+++ b/drivers/common/cnxk/roc_npc.h
@@ -160,7 +160,7 @@ enum roc_npc_rss_hash_function {
 struct roc_npc_action_rss {
 	enum roc_npc_rss_hash_function func;
 	uint32_t level;
-	uint64_t types;	       /**< Specific RSS hash types (see ETH_RSS_*). */
+	uint64_t types;	       /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
 	uint32_t key_len;      /**< Hash key length in bytes. */
 	uint32_t queue_num;    /**< Number of entries in @p queue. */
 	const uint8_t *key;    /**< Hash key. */
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index a077376dc0fb..8f778f0c2419 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -93,10 +93,10 @@ static const char *valid_arguments[] = {
 };
 
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(af_packet_logtype, NOTICE);
@@ -290,7 +290,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 static int
 eth_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -320,7 +320,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
 		internals->tx_queue[i].sockfd = -1;
 	}
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
@@ -331,7 +331,7 @@ eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 	const struct rte_eth_rxmode *rxmode = &dev_conf->rxmode;
 	struct pmd_internals *internals = dev->data->dev_private;
 
-	internals->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	internals->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 	return 0;
 }
 
@@ -346,9 +346,9 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_queues = (uint16_t)internals->nb_queues;
 	dev_info->max_tx_queues = (uint16_t)internals->nb_queues;
 	dev_info->min_rx_bufsize = 0;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_VLAN_INSERT;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	return 0;
 }
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index b362ccdcd38c..e156246f24df 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -163,10 +163,10 @@ static const char * const valid_arguments[] = {
 };
 
 static const struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_AUTONEG
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_AUTONEG
 };
 
 /* List which tracks PMDs to facilitate sharing UMEMs across them. */
@@ -652,7 +652,7 @@ eth_af_xdp_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 static int
 eth_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -661,7 +661,7 @@ eth_dev_start(struct rte_eth_dev *dev)
 static int
 eth_dev_stop(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index 377299b14c7a..b618cba3f023 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -736,14 +736,14 @@ eth_ark_dev_info_get(struct rte_eth_dev *dev,
 		.nb_align = ARK_TX_MIN_QUEUE}; /* power of 2 */
 
 	/* ARK PMD supports all line rates, how do we indicate that here ?? */
-	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
-				ETH_LINK_SPEED_10G |
-				ETH_LINK_SPEED_25G |
-				ETH_LINK_SPEED_40G |
-				ETH_LINK_SPEED_50G |
-				ETH_LINK_SPEED_100G);
-
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_TIMESTAMP;
+	dev_info->speed_capa = (RTE_ETH_LINK_SPEED_1G |
+				RTE_ETH_LINK_SPEED_10G |
+				RTE_ETH_LINK_SPEED_25G |
+				RTE_ETH_LINK_SPEED_40G |
+				RTE_ETH_LINK_SPEED_50G |
+				RTE_ETH_LINK_SPEED_100G);
+
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	return 0;
 }
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 5a198f53fce7..f7bfac796c07 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -154,20 +154,20 @@ static struct rte_pci_driver rte_atl_pmd = {
 	.remove = eth_atl_pci_remove,
 };
 
-#define ATL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP \
-			| DEV_RX_OFFLOAD_IPV4_CKSUM \
-			| DEV_RX_OFFLOAD_UDP_CKSUM \
-			| DEV_RX_OFFLOAD_TCP_CKSUM \
-			| DEV_RX_OFFLOAD_MACSEC_STRIP \
-			| DEV_RX_OFFLOAD_VLAN_FILTER)
-
-#define ATL_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT \
-			| DEV_TX_OFFLOAD_IPV4_CKSUM \
-			| DEV_TX_OFFLOAD_UDP_CKSUM \
-			| DEV_TX_OFFLOAD_TCP_CKSUM \
-			| DEV_TX_OFFLOAD_TCP_TSO \
-			| DEV_TX_OFFLOAD_MACSEC_INSERT \
-			| DEV_TX_OFFLOAD_MULTI_SEGS)
+#define ATL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP \
+			| RTE_ETH_RX_OFFLOAD_IPV4_CKSUM \
+			| RTE_ETH_RX_OFFLOAD_UDP_CKSUM \
+			| RTE_ETH_RX_OFFLOAD_TCP_CKSUM \
+			| RTE_ETH_RX_OFFLOAD_MACSEC_STRIP \
+			| RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+
+#define ATL_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT \
+			| RTE_ETH_TX_OFFLOAD_IPV4_CKSUM \
+			| RTE_ETH_TX_OFFLOAD_UDP_CKSUM \
+			| RTE_ETH_TX_OFFLOAD_TCP_CKSUM \
+			| RTE_ETH_TX_OFFLOAD_TCP_TSO \
+			| RTE_ETH_TX_OFFLOAD_MACSEC_INSERT \
+			| RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define SFP_EEPROM_SIZE 0x100
 
@@ -488,7 +488,7 @@ atl_dev_start(struct rte_eth_dev *dev)
 	/* set adapter started */
 	hw->adapter_stopped = 0;
 
-	if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(ERR,
 		"Invalid link_speeds for port %u, fix speed not supported",
 				dev->data->port_id);
@@ -655,18 +655,18 @@ atl_dev_set_link_up(struct rte_eth_dev *dev)
 	uint32_t link_speeds = dev->data->dev_conf.link_speeds;
 	uint32_t speed_mask = 0;
 
-	if (link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		speed_mask = hw->aq_nic_cfg->link_speed_msk;
 	} else {
-		if (link_speeds & ETH_LINK_SPEED_10G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 			speed_mask |= AQ_NIC_RATE_10G;
-		if (link_speeds & ETH_LINK_SPEED_5G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_5G)
 			speed_mask |= AQ_NIC_RATE_5G;
-		if (link_speeds & ETH_LINK_SPEED_1G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed_mask |= AQ_NIC_RATE_1G;
-		if (link_speeds & ETH_LINK_SPEED_2_5G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_2_5G)
 			speed_mask |=  AQ_NIC_RATE_2G5;
-		if (link_speeds & ETH_LINK_SPEED_100M)
+		if (link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed_mask |= AQ_NIC_RATE_100M;
 	}
 
@@ -1127,10 +1127,10 @@ atl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->reta_size = HW_ATL_B0_RSS_REDIRECTION_MAX;
 	dev_info->flow_type_rss_offloads = ATL_RSS_OFFLOAD_ALL;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
-	dev_info->speed_capa |= ETH_LINK_SPEED_100M;
-	dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
-	dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
 
 	return 0;
 }
@@ -1175,10 +1175,10 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
 	u32 fc = AQ_NIC_FC_OFF;
 	int err = 0;
 
-	link.link_status = ETH_LINK_DOWN;
+	link.link_status = RTE_ETH_LINK_DOWN;
 	link.link_speed = 0;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_autoneg = hw->is_autoneg ? ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_autoneg = hw->is_autoneg ? RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
 	memset(&old, 0, sizeof(old));
 
 	/* load old link status */
@@ -1198,8 +1198,8 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
 		return 0;
 	}
 
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_speed = hw->aq_link_status.mbps;
 
 	rte_eth_linkstatus_set(dev, &link);
@@ -1333,7 +1333,7 @@ atl_dev_link_status_print(struct rte_eth_dev *dev)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned int)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -1532,13 +1532,13 @@ atl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	hw->aq_fw_ops->get_flow_control(hw, &fc);
 
 	if (fc == AQ_NIC_FC_OFF)
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	else if ((fc & AQ_NIC_FC_RX) && (fc & AQ_NIC_FC_TX))
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (fc & AQ_NIC_FC_RX)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (fc & AQ_NIC_FC_TX)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 
 	return 0;
 }
@@ -1553,13 +1553,13 @@ atl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	if (hw->aq_fw_ops->set_flow_control == NULL)
 		return -ENOTSUP;
 
-	if (fc_conf->mode == RTE_FC_NONE)
+	if (fc_conf->mode == RTE_ETH_FC_NONE)
 		hw->aq_nic_cfg->flow_control = AQ_NIC_FC_OFF;
-	else if (fc_conf->mode == RTE_FC_RX_PAUSE)
+	else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
 		hw->aq_nic_cfg->flow_control = AQ_NIC_FC_RX;
-	else if (fc_conf->mode == RTE_FC_TX_PAUSE)
+	else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
 		hw->aq_nic_cfg->flow_control = AQ_NIC_FC_TX;
-	else if (fc_conf->mode == RTE_FC_FULL)
+	else if (fc_conf->mode == RTE_ETH_FC_FULL)
 		hw->aq_nic_cfg->flow_control = (AQ_NIC_FC_RX | AQ_NIC_FC_TX);
 
 	if (old_flow_control != hw->aq_nic_cfg->flow_control)
@@ -1727,14 +1727,14 @@ atl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	PMD_INIT_FUNC_TRACE();
 
-	ret = atl_enable_vlan_filter(dev, mask & ETH_VLAN_FILTER_MASK);
+	ret = atl_enable_vlan_filter(dev, mask & RTE_ETH_VLAN_FILTER_MASK);
 
-	cfg->vlan_strip = !!(mask & ETH_VLAN_STRIP_MASK);
+	cfg->vlan_strip = !!(mask & RTE_ETH_VLAN_STRIP_MASK);
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++)
 		hw_atl_rpo_rx_desc_vlan_stripping_set(hw, cfg->vlan_strip, i);
 
-	if (mask & ETH_VLAN_EXTEND_MASK)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK)
 		ret = -ENOTSUP;
 
 	return ret;
@@ -1750,10 +1750,10 @@ atl_vlan_tpid_set(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 	PMD_INIT_FUNC_TRACE();
 
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
+	case RTE_ETH_VLAN_TYPE_INNER:
 		hw_atl_rpf_vlan_inner_etht_set(hw, tpid);
 		break;
-	case ETH_VLAN_TYPE_OUTER:
+	case RTE_ETH_VLAN_TYPE_OUTER:
 		hw_atl_rpf_vlan_outer_etht_set(hw, tpid);
 		break;
 	default:
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index fbc9917ed30d..ed9ef9f0cc52 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -11,15 +11,15 @@
 #include "hw_atl/hw_atl_utils.h"
 
 #define ATL_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define ATL_DEV_PRIVATE_TO_HW(adapter) \
 	(&((struct atl_adapter *)adapter)->hw)
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 0d3460383a50..2ff426892df2 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -145,10 +145,10 @@ atl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
 
 	rxq->l3_csum_enabled = dev->data->dev_conf.rxmode.offloads &
-		DEV_RX_OFFLOAD_IPV4_CKSUM;
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 	rxq->l4_csum_enabled = dev->data->dev_conf.rxmode.offloads &
-		(DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM);
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		(RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		PMD_DRV_LOG(ERR, "PMD does not support KEEP_CRC offload");
 
 	/* allocate memory for the software ring */
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 932ec90265cf..5d94db02c506 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1998,9 +1998,9 @@ avp_dev_configure(struct rte_eth_dev *eth_dev)
 	/* Setup required number of queues */
 	_avp_set_queue_counts(eth_dev);
 
-	mask = (ETH_VLAN_STRIP_MASK |
-		ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK);
+	mask = (RTE_ETH_VLAN_STRIP_MASK |
+		RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK);
 	ret = avp_vlan_offload_set(eth_dev, mask);
 	if (ret < 0) {
 		PMD_DRV_LOG(ERR, "VLAN offload set failed by host, ret=%d\n",
@@ -2140,8 +2140,8 @@ avp_dev_link_update(struct rte_eth_dev *eth_dev,
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	struct rte_eth_link *link = &eth_dev->data->dev_link;
 
-	link->link_speed = ETH_SPEED_NUM_10G;
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_speed = RTE_ETH_SPEED_NUM_10G;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link->link_status = !!(avp->flags & AVP_F_LINKUP);
 
 	return -1;
@@ -2191,8 +2191,8 @@ avp_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->max_rx_pktlen = avp->max_rx_pkt_len;
 	dev_info->max_mac_addrs = AVP_MAX_MAC_ADDRS;
 	if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
-		dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
-		dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+		dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	}
 
 	return 0;
@@ -2205,9 +2205,9 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 	struct rte_eth_conf *dev_conf = &eth_dev->data->dev_conf;
 	uint64_t offloads = dev_conf->rxmode.offloads;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
-			if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 				avp->features |= RTE_AVP_FEATURE_VLAN_OFFLOAD;
 			else
 				avp->features &= ~RTE_AVP_FEATURE_VLAN_OFFLOAD;
@@ -2216,13 +2216,13 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 		}
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			PMD_DRV_LOG(ERR, "VLAN filter offload not supported\n");
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			PMD_DRV_LOG(ERR, "VLAN extend offload not supported\n");
 	}
 
diff --git a/drivers/net/axgbe/axgbe_dev.c b/drivers/net/axgbe/axgbe_dev.c
index ca32ad641873..3aaa2193272f 100644
--- a/drivers/net/axgbe/axgbe_dev.c
+++ b/drivers/net/axgbe/axgbe_dev.c
@@ -840,11 +840,11 @@ static void axgbe_rss_options(struct axgbe_port *pdata)
 	pdata->rss_hf = rss_conf->rss_hf;
 	rss_hf = rss_conf->rss_hf;
 
-	if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+	if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
-	if (rss_hf & (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+	if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
-	if (rss_hf & (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+	if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
 }
 
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 0250256830ac..dab0c6775d1d 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -326,7 +326,7 @@ axgbe_dev_configure(struct rte_eth_dev *dev)
 	struct axgbe_port *pdata =  dev->data->dev_private;
 	/* Checksum offload to hardware */
 	pdata->rx_csum_enable = dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_CHECKSUM;
+				RTE_ETH_RX_OFFLOAD_CHECKSUM;
 	return 0;
 }
 
@@ -335,9 +335,9 @@ axgbe_dev_rx_mq_config(struct rte_eth_dev *dev)
 {
 	struct axgbe_port *pdata = dev->data->dev_private;
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 		pdata->rss_enable = 1;
-	else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+	else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
 		pdata->rss_enable = 0;
 	else
 		return  -1;
@@ -385,7 +385,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
 	rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
 
 	max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 				max_pkt_len > pdata->rx_buf_size)
 		dev_data->scattered_rx = 1;
 
@@ -521,8 +521,8 @@ axgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & (1ULL << shift)) == 0)
 			continue;
 		pdata->rss_table[i] = reta_conf[idx].reta[shift];
@@ -552,8 +552,8 @@ axgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & (1ULL << shift)) == 0)
 			continue;
 		reta_conf[idx].reta[shift] = pdata->rss_table[i];
@@ -590,13 +590,13 @@ axgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 
 	pdata->rss_hf = rss_conf->rss_hf & AXGBE_RSS_OFFLOAD;
 
-	if (pdata->rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+	if (pdata->rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
 	if (pdata->rss_hf &
-	    (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+	    (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
 	if (pdata->rss_hf &
-	    (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+	    (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
 
 	/* Set the RSS options */
@@ -765,7 +765,7 @@ axgbe_dev_link_update(struct rte_eth_dev *dev,
 	link.link_status = pdata->phy_link;
 	link.link_speed = pdata->phy_speed;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			      ETH_LINK_SPEED_FIXED);
+			      RTE_ETH_LINK_SPEED_FIXED);
 	ret = rte_eth_linkstatus_set(dev, &link);
 	if (ret == -1)
 		PMD_DRV_LOG(ERR, "No change in link status\n");
@@ -1208,24 +1208,24 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_pktlen = AXGBE_RX_MAX_BUF_SIZE;
 	dev_info->max_mac_addrs = pdata->hw_feat.addn_mac + 1;
 	dev_info->max_hash_mac_addrs = pdata->hw_feat.hash_table_size;
-	dev_info->speed_capa =  ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM  |
-		DEV_RX_OFFLOAD_TCP_CKSUM  |
-		DEV_RX_OFFLOAD_SCATTER	  |
-		DEV_RX_OFFLOAD_KEEP_CRC;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_SCATTER	  |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if (pdata->hw_feat.rss) {
 		dev_info->flow_type_rss_offloads = AXGBE_RSS_OFFLOAD;
@@ -1262,13 +1262,13 @@ axgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	fc.autoneg = pdata->pause_autoneg;
 
 	if (pdata->rx_pause && pdata->tx_pause)
-		fc.mode = RTE_FC_FULL;
+		fc.mode = RTE_ETH_FC_FULL;
 	else if (pdata->rx_pause)
-		fc.mode = RTE_FC_RX_PAUSE;
+		fc.mode = RTE_ETH_FC_RX_PAUSE;
 	else if (pdata->tx_pause)
-		fc.mode = RTE_FC_TX_PAUSE;
+		fc.mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc.mode = RTE_FC_NONE;
+		fc.mode = RTE_ETH_FC_NONE;
 
 	fc_conf->high_water =  (1024 + (fc.low_water[0] << 9)) / 1024;
 	fc_conf->low_water =  (1024 + (fc.high_water[0] << 9)) / 1024;
@@ -1298,13 +1298,13 @@ axgbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	AXGMAC_IOWRITE(pdata, reg, reg_val);
 	fc.mode = fc_conf->mode;
 
-	if (fc.mode == RTE_FC_FULL) {
+	if (fc.mode == RTE_ETH_FC_FULL) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 1;
-	} else if (fc.mode == RTE_FC_RX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
 		pdata->tx_pause = 0;
 		pdata->rx_pause = 1;
-	} else if (fc.mode == RTE_FC_TX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 0;
 	} else {
@@ -1386,15 +1386,15 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
 
 	fc.mode = pfc_conf->fc.mode;
 
-	if (fc.mode == RTE_FC_FULL) {
+	if (fc.mode == RTE_ETH_FC_FULL) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 1;
 		AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
-	} else if (fc.mode == RTE_FC_RX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
 		pdata->tx_pause = 0;
 		pdata->rx_pause = 1;
 		AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
-	} else if (fc.mode == RTE_FC_TX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 0;
 		AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 0);
@@ -1830,8 +1830,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 	PMD_DRV_LOG(DEBUG, "EDVLP: qinq = 0x%x\n", qinq);
 
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
-		PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_INNER\n");
+	case RTE_ETH_VLAN_TYPE_INNER:
+		PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_INNER\n");
 		if (qinq) {
 			if (tpid != 0x8100 && tpid != 0x88a8)
 				PMD_DRV_LOG(ERR,
@@ -1848,8 +1848,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 				    "Inner type not supported in single tag\n");
 		}
 		break;
-	case ETH_VLAN_TYPE_OUTER:
-		PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_OUTER\n");
+	case RTE_ETH_VLAN_TYPE_OUTER:
+		PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_OUTER\n");
 		if (qinq) {
 			PMD_DRV_LOG(DEBUG, "double tagging is enabled\n");
 			/*Enable outer VLAN tag*/
@@ -1866,11 +1866,11 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 					    "tag supported 0x8100/0x88A8\n");
 		}
 		break;
-	case ETH_VLAN_TYPE_MAX:
-		PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_MAX\n");
+	case RTE_ETH_VLAN_TYPE_MAX:
+		PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_MAX\n");
 		break;
-	case ETH_VLAN_TYPE_UNKNOWN:
-		PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_UNKNOWN\n");
+	case RTE_ETH_VLAN_TYPE_UNKNOWN:
+		PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_UNKNOWN\n");
 		break;
 	}
 	return 0;
@@ -1904,8 +1904,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, CSVL, 0);
 	AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, VLTI, 1);
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 			PMD_DRV_LOG(DEBUG, "Strip ON for device = %s\n",
 				    pdata->eth_dev->device->name);
 			pdata->hw_if.enable_rx_vlan_stripping(pdata);
@@ -1915,8 +1915,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			pdata->hw_if.disable_rx_vlan_stripping(pdata);
 		}
 	}
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 			PMD_DRV_LOG(DEBUG, "Filter ON for device = %s\n",
 				    pdata->eth_dev->device->name);
 			pdata->hw_if.enable_rx_vlan_filtering(pdata);
@@ -1926,14 +1926,14 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			pdata->hw_if.disable_rx_vlan_filtering(pdata);
 		}
 	}
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
 			PMD_DRV_LOG(DEBUG, "enabling vlan extended mode\n");
 			axgbe_vlan_extend_enable(pdata);
 			/* Set global registers with default ethertype*/
-			axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+			axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					    RTE_ETHER_TYPE_VLAN);
-			axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+			axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
 					    RTE_ETHER_TYPE_VLAN);
 		} else {
 			PMD_DRV_LOG(DEBUG, "disabling vlan extended mode\n");
diff --git a/drivers/net/axgbe/axgbe_ethdev.h b/drivers/net/axgbe/axgbe_ethdev.h
index a6226729fe4d..0a3e1c59df1a 100644
--- a/drivers/net/axgbe/axgbe_ethdev.h
+++ b/drivers/net/axgbe/axgbe_ethdev.h
@@ -97,12 +97,12 @@
 
 /* Receive Side Scaling */
 #define AXGBE_RSS_OFFLOAD  ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define AXGBE_RSS_HASH_KEY_SIZE		40
 #define AXGBE_RSS_MAX_TABLE_SIZE	256
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 4f98e695ae74..59fa9175aded 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -597,7 +597,7 @@ static void axgbe_an73_state_machine(struct axgbe_port *pdata)
 		pdata->an_int = 0;
 		axgbe_an73_clear_interrupts(pdata);
 		pdata->eth_dev->data->dev_link.link_status =
-			ETH_LINK_DOWN;
+			RTE_ETH_LINK_DOWN;
 	} else if (pdata->an_state == AXGBE_AN_ERROR) {
 		PMD_DRV_LOG(ERR, "error during auto-negotiation, state=%u\n",
 			    cur_state);
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index c8618d2d6daa..aa2c27ebaa49 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -75,7 +75,7 @@ int axgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		(DMA_CH_INC * rxq->queue_id));
 	rxq->dma_tail_reg = (volatile uint32_t *)((uint8_t *)rxq->dma_regs +
 						  DMA_CH_RDTR_LO);
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -286,7 +286,7 @@ axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
-				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+				if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
 				else
 					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
@@ -430,7 +430,7 @@ uint16_t eth_axgbe_recv_scattered_pkts(void *rx_queue,
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
-				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+				if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
 				else
 					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 567ea2382864..78fc717ec44a 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -94,14 +94,14 @@ bnx2x_link_update(struct rte_eth_dev *dev)
 	link.link_speed = sc->link_vars.line_speed;
 	switch (sc->link_vars.duplex) {
 		case DUPLEX_FULL:
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			break;
 		case DUPLEX_HALF:
-			link.link_duplex = ETH_LINK_HALF_DUPLEX;
+			link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 			break;
 	}
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 	link.link_status = sc->link_vars.link_up;
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -408,7 +408,7 @@ bnx2xvf_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_comple
 	if (sc->old_bulletin.valid_bitmap & (1 << CHANNEL_DOWN)) {
 		PMD_DRV_LOG(ERR, sc, "PF indicated channel is down."
 				"VF device is no longer operational");
-		dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	}
 
 	return ret;
@@ -534,7 +534,7 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_rx_bufsize = BNX2X_MIN_RX_BUF_SIZE;
 	dev_info->max_rx_pktlen  = BNX2X_MAX_RX_PKT_LEN;
 	dev_info->max_mac_addrs  = BNX2X_MAX_MAC_ADDRS;
-	dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G;
 
 	dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
 	dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
@@ -669,7 +669,7 @@ bnx2x_common_dev_init(struct rte_eth_dev *eth_dev, int is_vf)
 	bnx2x_load_firmware(sc);
 	assert(sc->firmware);
 
-	if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		sc->udp_rss = 1;
 
 	sc->rx_budget = BNX2X_RX_BUDGET;
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 6743cf92b0e6..39bd739c7bc9 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -569,37 +569,37 @@ struct bnxt_rep_info {
 #define BNXT_FW_STATUS_SHUTDOWN		0x100000
 
 #define BNXT_ETH_RSS_SUPPORT (	\
-	ETH_RSS_IPV4 |		\
-	ETH_RSS_NONFRAG_IPV4_TCP |	\
-	ETH_RSS_NONFRAG_IPV4_UDP |	\
-	ETH_RSS_IPV6 |		\
-	ETH_RSS_NONFRAG_IPV6_TCP |	\
-	ETH_RSS_NONFRAG_IPV6_UDP |	\
-	ETH_RSS_LEVEL_MASK)
-
-#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_CKSUM | \
-				     DEV_TX_OFFLOAD_UDP_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_TSO | \
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GRE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_QINQ_INSERT | \
-				     DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
-				     DEV_RX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_UDP_CKSUM | \
-				     DEV_RX_OFFLOAD_TCP_CKSUM | \
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
-				     DEV_RX_OFFLOAD_KEEP_CRC | \
-				     DEV_RX_OFFLOAD_VLAN_EXTEND | \
-				     DEV_RX_OFFLOAD_TCP_LRO | \
-				     DEV_RX_OFFLOAD_SCATTER | \
-				     DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RSS_IPV4 |		\
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP |	\
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP |	\
+	RTE_ETH_RSS_IPV6 |		\
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP |	\
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP |	\
+	RTE_ETH_RSS_LEVEL_MASK)
+
+#define BNXT_DEV_TX_OFFLOAD_SUPPORT (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+				     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+				     RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define BNXT_DEV_RX_OFFLOAD_SUPPORT (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+				     RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_KEEP_CRC | \
+				     RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+				     RTE_ETH_RX_OFFLOAD_TCP_LRO | \
+				     RTE_ETH_RX_OFFLOAD_SCATTER | \
+				     RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index f385723a9f65..2791a5c62db1 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -426,7 +426,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 		goto err_out;
 
 	/* Alloc RSS context only if RSS mode is enabled */
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
 		int j, nr_ctxs = bnxt_rss_ctxts(bp);
 
 		/* RSS table size in Thor is 512.
@@ -458,7 +458,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 	 * setting is not available at this time, it will not be
 	 * configured correctly in the CFA.
 	 */
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 		vnic->vlan_strip = true;
 	else
 		vnic->vlan_strip = false;
@@ -493,7 +493,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 	bnxt_hwrm_vnic_plcmode_cfg(bp, vnic);
 
 	rc = bnxt_hwrm_vnic_tpa_cfg(bp, vnic,
-				    (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) ?
+				    (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) ?
 				    true : false);
 	if (rc)
 		goto err_out;
@@ -923,35 +923,35 @@ uint32_t bnxt_get_speed_capabilities(struct bnxt *bp)
 		link_speed = bp->link_info->support_pam4_speeds;
 
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB)
-		speed_capa |= ETH_LINK_SPEED_100M;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100MBHD)
-		speed_capa |= ETH_LINK_SPEED_100M_HD;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_1GB)
-		speed_capa |= ETH_LINK_SPEED_1G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_2_5GB)
-		speed_capa |= ETH_LINK_SPEED_2_5G;
+		speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_10GB)
-		speed_capa |= ETH_LINK_SPEED_10G;
+		speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_20GB)
-		speed_capa |= ETH_LINK_SPEED_20G;
+		speed_capa |= RTE_ETH_LINK_SPEED_20G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_25GB)
-		speed_capa |= ETH_LINK_SPEED_25G;
+		speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_40GB)
-		speed_capa |= ETH_LINK_SPEED_40G;
+		speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_50GB)
-		speed_capa |= ETH_LINK_SPEED_50G;
+		speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100GB)
-		speed_capa |= ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_50G)
-		speed_capa |= ETH_LINK_SPEED_50G;
+		speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_100G)
-		speed_capa |= ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_200G)
-		speed_capa |= ETH_LINK_SPEED_200G;
+		speed_capa |= RTE_ETH_LINK_SPEED_200G;
 
 	if (bp->link_info->auto_mode ==
 	    HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE)
-		speed_capa |= ETH_LINK_SPEED_FIXED;
+		speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
 
 	return speed_capa;
 }
@@ -995,14 +995,14 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 
 	dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
 	if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	if (bp->vnic_cap_flags & BNXT_VNIC_CAP_VLAN_RX_STRIP)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_STRIP;
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT |
 				    dev_info->tx_queue_offload_capa;
 	if (bp->fw_cap & BNXT_FW_CAP_VLAN_TX_INSERT)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_VLAN_INSERT;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
 
 	dev_info->speed_capa = bnxt_get_speed_capabilities(bp);
@@ -1049,8 +1049,8 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	 */
 
 	/* VMDq resources */
-	vpool = 64; /* ETH_64_POOLS */
-	vrxq = 128; /* ETH_VMDQ_DCB_NUM_QUEUES */
+	vpool = 64; /* RTE_ETH_64_POOLS */
+	vrxq = 128; /* RTE_ETH_VMDQ_DCB_NUM_QUEUES */
 	for (i = 0; i < 4; vpool >>= 1, i++) {
 		if (max_vnics > vpool) {
 			for (j = 0; j < 5; vrxq >>= 1, j++) {
@@ -1145,15 +1145,15 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
 	    (uint32_t)(eth_dev->data->nb_rx_queues) > bp->max_ring_grps)
 		goto resource_error;
 
-	if (!(eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) &&
+	if (!(eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) &&
 	    bp->max_vnics < eth_dev->data->nb_rx_queues)
 		goto resource_error;
 
 	bp->rx_cp_nr_rings = bp->rx_nr_rings;
 	bp->tx_cp_nr_rings = bp->tx_nr_rings;
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rx_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
 
 	bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
@@ -1182,7 +1182,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
 		PMD_DRV_LOG(INFO, "Port %d Link Up - speed %u Mbps - %s\n",
 			eth_dev->data->port_id,
 			(uint32_t)link->link_speed,
-			(link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+			(link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 			("full-duplex") : ("half-duplex\n"));
 	else
 		PMD_DRV_LOG(INFO, "Port %d Link Down\n",
@@ -1199,10 +1199,10 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
 	uint16_t buf_size;
 	int i;
 
-	if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		return 1;
 
-	if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO)
+	if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		return 1;
 
 	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1247,15 +1247,15 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
 	 * a limited subset have been enabled.
 	 */
 	if (eth_dev->data->dev_conf.rxmode.offloads &
-		~(DEV_RX_OFFLOAD_VLAN_STRIP |
-		  DEV_RX_OFFLOAD_KEEP_CRC |
-		  DEV_RX_OFFLOAD_IPV4_CKSUM |
-		  DEV_RX_OFFLOAD_UDP_CKSUM |
-		  DEV_RX_OFFLOAD_TCP_CKSUM |
-		  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		  DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-		  DEV_RX_OFFLOAD_RSS_HASH |
-		  DEV_RX_OFFLOAD_VLAN_FILTER))
+		~(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		  RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		  RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_RSS_HASH |
+		  RTE_ETH_RX_OFFLOAD_VLAN_FILTER))
 		goto use_scalar_rx;
 
 #if defined(RTE_ARCH_X86) && defined(CC_AVX2_SUPPORT)
@@ -1307,7 +1307,7 @@ bnxt_transmit_function(struct rte_eth_dev *eth_dev)
 	 * or tx offloads.
 	 */
 	if (eth_dev->data->scattered_rx ||
-	    (offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) ||
+	    (offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) ||
 	    BNXT_TRUFLOW_EN(bp))
 		goto use_scalar_tx;
 
@@ -1608,10 +1608,10 @@ static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
 
 	bnxt_link_update_op(eth_dev, 1);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
-		vlan_mask |= ETH_VLAN_FILTER_MASK;
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-		vlan_mask |= ETH_VLAN_STRIP_MASK;
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+		vlan_mask |= RTE_ETH_VLAN_FILTER_MASK;
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+		vlan_mask |= RTE_ETH_VLAN_STRIP_MASK;
 	rc = bnxt_vlan_offload_set_op(eth_dev, vlan_mask);
 	if (rc)
 		goto error;
@@ -1833,8 +1833,8 @@ int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete)
 		/* Retrieve link info from hardware */
 		rc = bnxt_get_hwrm_link_config(bp, &new);
 		if (rc) {
-			new.link_speed = ETH_LINK_SPEED_100M;
-			new.link_duplex = ETH_LINK_FULL_DUPLEX;
+			new.link_speed = RTE_ETH_LINK_SPEED_100M;
+			new.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR,
 				"Failed to retrieve link rc = 0x%x!\n", rc);
 			goto out;
@@ -2028,7 +2028,7 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
 	if (!vnic->rss_table)
 		return -EINVAL;
 
-	if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+	if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 		return -EINVAL;
 
 	if (reta_size != tbl_size) {
@@ -2041,8 +2041,8 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
 	for (i = 0; i < reta_size; i++) {
 		struct bnxt_rx_queue *rxq;
 
-		idx = i / RTE_RETA_GROUP_SIZE;
-		sft = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		sft = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (!(reta_conf[idx].mask & (1ULL << sft)))
 			continue;
@@ -2095,8 +2095,8 @@ static int bnxt_reta_query_op(struct rte_eth_dev *eth_dev,
 	}
 
 	for (idx = 0, i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		sft = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		sft = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (reta_conf[idx].mask & (1ULL << sft)) {
 			uint16_t qid;
@@ -2134,7 +2134,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
 	 * If RSS enablement were different than dev_configure,
 	 * then return -EINVAL
 	 */
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (!rss_conf->rss_hf)
 			PMD_DRV_LOG(ERR, "Hash type NONE\n");
 	} else {
@@ -2152,7 +2152,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
 	vnic->hash_type = bnxt_rte_to_hwrm_hash_types(rss_conf->rss_hf);
 	vnic->hash_mode =
 		bnxt_rte_to_hwrm_hash_level(bp, rss_conf->rss_hf,
-					    ETH_RSS_LEVEL(rss_conf->rss_hf));
+					    RTE_ETH_RSS_LEVEL(rss_conf->rss_hf));
 
 	/*
 	 * If hashkey is not specified, use the previously configured
@@ -2197,30 +2197,30 @@ static int bnxt_rss_hash_conf_get_op(struct rte_eth_dev *eth_dev,
 		hash_types = vnic->hash_type;
 		rss_conf->rss_hf = 0;
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4) {
-			rss_conf->rss_hf |= ETH_RSS_IPV4;
+			rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
 			hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6) {
-			rss_conf->rss_hf |= ETH_RSS_IPV6;
+			rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
 			hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
 		}
@@ -2260,17 +2260,17 @@ static int bnxt_flow_ctrl_get_op(struct rte_eth_dev *dev,
 		fc_conf->autoneg = 1;
 	switch (bp->link_info->pause) {
 	case 0:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case (HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX |
 			HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX):
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	}
 	return 0;
@@ -2293,11 +2293,11 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		bp->link_info->auto_pause = 0;
 		bp->link_info->force_pause = 0;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		if (fc_conf->autoneg) {
 			bp->link_info->auto_pause =
 					HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_RX;
@@ -2308,7 +2308,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
 					HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_RX;
 		}
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		if (fc_conf->autoneg) {
 			bp->link_info->auto_pause =
 					HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX;
@@ -2319,7 +2319,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
 					HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_TX;
 		}
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		if (fc_conf->autoneg) {
 			bp->link_info->auto_pause =
 					HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX |
@@ -2350,7 +2350,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
 		return rc;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (bp->vxlan_port_cnt) {
 			PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
 				udp_tunnel->udp_port);
@@ -2364,7 +2364,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
 		tunnel_type =
 			HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_VXLAN;
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (bp->geneve_port_cnt) {
 			PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
 				udp_tunnel->udp_port);
@@ -2413,7 +2413,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
 		return rc;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (!bp->vxlan_port_cnt) {
 			PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
 			return -EINVAL;
@@ -2430,7 +2430,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
 			HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_VXLAN;
 		port = bp->vxlan_fw_dst_port_id;
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (!bp->geneve_port_cnt) {
 			PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
 			return -EINVAL;
@@ -2608,7 +2608,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
 	int rc;
 
 	vnic = BNXT_GET_DEFAULT_VNIC(bp);
-	if (!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)) {
+	if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
 		/* Remove any VLAN filters programmed */
 		for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
 			bnxt_del_vlan_filter(bp, i);
@@ -2628,7 +2628,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
 		bnxt_add_vlan_filter(bp, 0);
 	}
 	PMD_DRV_LOG(DEBUG, "VLAN Filtering: %d\n",
-		    !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER));
+		    !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER));
 
 	return 0;
 }
@@ -2641,7 +2641,7 @@ static int bnxt_free_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 
 	/* Destroy vnic filters and vnic */
 	if (bp->eth_dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_VLAN_FILTER) {
+	    RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
 			bnxt_del_vlan_filter(bp, i);
 	}
@@ -2680,7 +2680,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
 		return rc;
 
 	if (bp->eth_dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_VLAN_FILTER) {
+	    RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		rc = bnxt_add_vlan_filter(bp, 0);
 		if (rc)
 			return rc;
@@ -2698,7 +2698,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
 		return rc;
 
 	PMD_DRV_LOG(DEBUG, "VLAN Strip Offload: %d\n",
-		    !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP));
+		    !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP));
 
 	return rc;
 }
@@ -2718,22 +2718,22 @@ bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask)
 	if (!dev->data->dev_started)
 		return 0;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* Enable or disable VLAN filtering */
 		rc = bnxt_config_vlan_hw_filter(bp, rx_offloads);
 		if (rc)
 			return rc;
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
 		rc = bnxt_config_vlan_hw_stripping(bp, rx_offloads);
 		if (rc)
 			return rc;
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			PMD_DRV_LOG(DEBUG, "Extend VLAN supported\n");
 		else
 			PMD_DRV_LOG(INFO, "Extend VLAN unsupported\n");
@@ -2748,10 +2748,10 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 {
 	struct bnxt *bp = dev->data->dev_private;
 	int qinq = dev->data->dev_conf.rxmode.offloads &
-		   DEV_RX_OFFLOAD_VLAN_EXTEND;
+		   RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
-	if (vlan_type != ETH_VLAN_TYPE_INNER &&
-	    vlan_type != ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+	    vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
 		PMD_DRV_LOG(ERR,
 			    "Unsupported vlan type.");
 		return -EINVAL;
@@ -2763,7 +2763,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 		return -EINVAL;
 	}
 
-	if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		switch (tpid) {
 		case RTE_ETHER_TYPE_QINQ:
 			bp->outer_tpid_bd =
@@ -2791,7 +2791,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 		}
 		bp->outer_tpid_bd |= tpid;
 		PMD_DRV_LOG(INFO, "outer_tpid_bd = %x\n", bp->outer_tpid_bd);
-	} else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+	} else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
 		PMD_DRV_LOG(ERR,
 			    "Can accelerate only outer vlan in QinQ\n");
 		return -EINVAL;
@@ -2831,7 +2831,7 @@ bnxt_set_default_mac_addr_op(struct rte_eth_dev *dev,
 	bnxt_del_dflt_mac_filter(bp, vnic);
 
 	memcpy(bp->mac_addr, addr, RTE_ETHER_ADDR_LEN);
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		/* This filter will allow only untagged packets */
 		rc = bnxt_add_vlan_filter(bp, 0);
 	} else {
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index b2ebb5634e3a..ced697a73980 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -978,7 +978,7 @@ static int bnxt_vnic_prep(struct bnxt *bp, struct bnxt_vnic_info *vnic,
 		}
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 		vnic->vlan_strip = true;
 	else
 		vnic->vlan_strip = false;
@@ -1177,7 +1177,7 @@ bnxt_vnic_rss_cfg_update(struct bnxt *bp,
 	}
 
 	/* If RSS types is 0, use a best effort configuration */
-	types = rss->types ? rss->types : ETH_RSS_IPV4;
+	types = rss->types ? rss->types : RTE_ETH_RSS_IPV4;
 
 	hash_type = bnxt_rte_to_hwrm_hash_types(types);
 
@@ -1322,7 +1322,7 @@ bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
 
 		rxq = bp->rx_queues[act_q->index];
 
-		if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) && rxq &&
+		if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) && rxq &&
 		    vnic->fw_vnic_id != INVALID_HW_RING_ID)
 			goto use_vnic;
 
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 181e607d7bf8..82e89b7c8af7 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -628,7 +628,7 @@ int bnxt_hwrm_set_l2_filter(struct bnxt *bp,
 	uint16_t j = dst_id - 1;
 
 	//TODO: Is there a better way to add VLANs to each VNIC in case of VMDQ
-	if ((dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) &&
+	if ((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) &&
 	    conf->pool_map[j].pools & (1UL << j)) {
 		PMD_DRV_LOG(DEBUG,
 			"Add vlan %u to vmdq pool %u\n",
@@ -2979,12 +2979,12 @@ static uint16_t bnxt_parse_eth_link_duplex(uint32_t conf_link_speed)
 {
 	uint8_t hw_link_duplex = HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
 
-	if ((conf_link_speed & ETH_LINK_SPEED_FIXED) == ETH_LINK_SPEED_AUTONEG)
+	if ((conf_link_speed & RTE_ETH_LINK_SPEED_FIXED) == RTE_ETH_LINK_SPEED_AUTONEG)
 		return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
 
 	switch (conf_link_speed) {
-	case ETH_LINK_SPEED_10M_HD:
-	case ETH_LINK_SPEED_100M_HD:
+	case RTE_ETH_LINK_SPEED_10M_HD:
+	case RTE_ETH_LINK_SPEED_100M_HD:
 		/* FALLTHROUGH */
 		return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF;
 	}
@@ -3001,51 +3001,51 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
 {
 	uint16_t eth_link_speed = 0;
 
-	if (conf_link_speed == ETH_LINK_SPEED_AUTONEG)
-		return ETH_LINK_SPEED_AUTONEG;
+	if (conf_link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
+		return RTE_ETH_LINK_SPEED_AUTONEG;
 
-	switch (conf_link_speed & ~ETH_LINK_SPEED_FIXED) {
-	case ETH_LINK_SPEED_100M:
-	case ETH_LINK_SPEED_100M_HD:
+	switch (conf_link_speed & ~RTE_ETH_LINK_SPEED_FIXED) {
+	case RTE_ETH_LINK_SPEED_100M:
+	case RTE_ETH_LINK_SPEED_100M_HD:
 		/* FALLTHROUGH */
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_100MB;
 		break;
-	case ETH_LINK_SPEED_1G:
+	case RTE_ETH_LINK_SPEED_1G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_1GB;
 		break;
-	case ETH_LINK_SPEED_2_5G:
+	case RTE_ETH_LINK_SPEED_2_5G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_2_5GB;
 		break;
-	case ETH_LINK_SPEED_10G:
+	case RTE_ETH_LINK_SPEED_10G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_10GB;
 		break;
-	case ETH_LINK_SPEED_20G:
+	case RTE_ETH_LINK_SPEED_20G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_20GB;
 		break;
-	case ETH_LINK_SPEED_25G:
+	case RTE_ETH_LINK_SPEED_25G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_25GB;
 		break;
-	case ETH_LINK_SPEED_40G:
+	case RTE_ETH_LINK_SPEED_40G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_40GB;
 		break;
-	case ETH_LINK_SPEED_50G:
+	case RTE_ETH_LINK_SPEED_50G:
 		eth_link_speed = pam4_link ?
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_50GB :
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_50GB;
 		break;
-	case ETH_LINK_SPEED_100G:
+	case RTE_ETH_LINK_SPEED_100G:
 		eth_link_speed = pam4_link ?
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_100GB :
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_100GB;
 		break;
-	case ETH_LINK_SPEED_200G:
+	case RTE_ETH_LINK_SPEED_200G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
 		break;
@@ -3058,11 +3058,11 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
 	return eth_link_speed;
 }
 
-#define BNXT_SUPPORTED_SPEEDS (ETH_LINK_SPEED_100M | ETH_LINK_SPEED_100M_HD | \
-		ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G | \
-		ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G | ETH_LINK_SPEED_25G | \
-		ETH_LINK_SPEED_40G | ETH_LINK_SPEED_50G | \
-		ETH_LINK_SPEED_100G | ETH_LINK_SPEED_200G)
+#define BNXT_SUPPORTED_SPEEDS (RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_100M_HD | \
+		RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G | \
+		RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G | RTE_ETH_LINK_SPEED_25G | \
+		RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_50G | \
+		RTE_ETH_LINK_SPEED_100G | RTE_ETH_LINK_SPEED_200G)
 
 static int bnxt_validate_link_speed(struct bnxt *bp)
 {
@@ -3071,13 +3071,13 @@ static int bnxt_validate_link_speed(struct bnxt *bp)
 	uint32_t link_speed_capa;
 	uint32_t one_speed;
 
-	if (link_speed == ETH_LINK_SPEED_AUTONEG)
+	if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
 		return 0;
 
 	link_speed_capa = bnxt_get_speed_capabilities(bp);
 
-	if (link_speed & ETH_LINK_SPEED_FIXED) {
-		one_speed = link_speed & ~ETH_LINK_SPEED_FIXED;
+	if (link_speed & RTE_ETH_LINK_SPEED_FIXED) {
+		one_speed = link_speed & ~RTE_ETH_LINK_SPEED_FIXED;
 
 		if (one_speed & (one_speed - 1)) {
 			PMD_DRV_LOG(ERR,
@@ -3107,71 +3107,71 @@ bnxt_parse_eth_link_speed_mask(struct bnxt *bp, uint32_t link_speed)
 {
 	uint16_t ret = 0;
 
-	if (link_speed == ETH_LINK_SPEED_AUTONEG) {
+	if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG) {
 		if (bp->link_info->support_speeds)
 			return bp->link_info->support_speeds;
 		link_speed = BNXT_SUPPORTED_SPEEDS;
 	}
 
-	if (link_speed & ETH_LINK_SPEED_100M)
+	if (link_speed & RTE_ETH_LINK_SPEED_100M)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
-	if (link_speed & ETH_LINK_SPEED_100M_HD)
+	if (link_speed & RTE_ETH_LINK_SPEED_100M_HD)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
-	if (link_speed & ETH_LINK_SPEED_1G)
+	if (link_speed & RTE_ETH_LINK_SPEED_1G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_1GB;
-	if (link_speed & ETH_LINK_SPEED_2_5G)
+	if (link_speed & RTE_ETH_LINK_SPEED_2_5G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_2_5GB;
-	if (link_speed & ETH_LINK_SPEED_10G)
+	if (link_speed & RTE_ETH_LINK_SPEED_10G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_10GB;
-	if (link_speed & ETH_LINK_SPEED_20G)
+	if (link_speed & RTE_ETH_LINK_SPEED_20G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_20GB;
-	if (link_speed & ETH_LINK_SPEED_25G)
+	if (link_speed & RTE_ETH_LINK_SPEED_25G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_25GB;
-	if (link_speed & ETH_LINK_SPEED_40G)
+	if (link_speed & RTE_ETH_LINK_SPEED_40G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_40GB;
-	if (link_speed & ETH_LINK_SPEED_50G)
+	if (link_speed & RTE_ETH_LINK_SPEED_50G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_50GB;
-	if (link_speed & ETH_LINK_SPEED_100G)
+	if (link_speed & RTE_ETH_LINK_SPEED_100G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100GB;
-	if (link_speed & ETH_LINK_SPEED_200G)
+	if (link_speed & RTE_ETH_LINK_SPEED_200G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
 	return ret;
 }
 
 static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
 {
-	uint32_t eth_link_speed = ETH_SPEED_NUM_NONE;
+	uint32_t eth_link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	switch (hw_link_speed) {
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB:
-		eth_link_speed = ETH_SPEED_NUM_100M;
+		eth_link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_1GB:
-		eth_link_speed = ETH_SPEED_NUM_1G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2_5GB:
-		eth_link_speed = ETH_SPEED_NUM_2_5G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_10GB:
-		eth_link_speed = ETH_SPEED_NUM_10G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_20GB:
-		eth_link_speed = ETH_SPEED_NUM_20G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_25GB:
-		eth_link_speed = ETH_SPEED_NUM_25G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_40GB:
-		eth_link_speed = ETH_SPEED_NUM_40G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_50GB:
-		eth_link_speed = ETH_SPEED_NUM_50G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100GB:
-		eth_link_speed = ETH_SPEED_NUM_100G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_200GB:
-		eth_link_speed = ETH_SPEED_NUM_200G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_200G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2GB:
 	default:
@@ -3184,16 +3184,16 @@ static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
 
 static uint16_t bnxt_parse_hw_link_duplex(uint16_t hw_link_duplex)
 {
-	uint16_t eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+	uint16_t eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (hw_link_duplex) {
 	case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH:
 	case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_FULL:
 		/* FALLTHROUGH */
-		eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+		eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF:
-		eth_link_duplex = ETH_LINK_HALF_DUPLEX;
+		eth_link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "HWRM link duplex %d not defined\n",
@@ -3222,12 +3222,12 @@ int bnxt_get_hwrm_link_config(struct bnxt *bp, struct rte_eth_link *link)
 		link->link_speed =
 			bnxt_parse_hw_link_speed(link_info->link_speed);
 	else
-		link->link_speed = ETH_SPEED_NUM_NONE;
+		link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 	link->link_duplex = bnxt_parse_hw_link_duplex(link_info->duplex);
 	link->link_status = link_info->link_up;
 	link->link_autoneg = link_info->auto_mode ==
 		HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE ?
-		ETH_LINK_FIXED : ETH_LINK_AUTONEG;
+		RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
 exit:
 	return rc;
 }
@@ -3253,7 +3253,7 @@ int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up)
 
 	autoneg = bnxt_check_eth_link_autoneg(dev_conf->link_speeds);
 	if (BNXT_CHIP_P5(bp) &&
-	    dev_conf->link_speeds == ETH_LINK_SPEED_40G) {
+	    dev_conf->link_speeds == RTE_ETH_LINK_SPEED_40G) {
 		/* 40G is not supported as part of media auto detect.
 		 * The speed should be forced and autoneg disabled
 		 * to configure 40G speed.
@@ -3344,7 +3344,7 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 
 	HWRM_CHECK_RESULT();
 
-	bp->vlan = rte_le_to_cpu_16(resp->vlan) & ETH_VLAN_ID_MAX;
+	bp->vlan = rte_le_to_cpu_16(resp->vlan) & RTE_ETH_VLAN_ID_MAX;
 
 	svif_info = rte_le_to_cpu_16(resp->svif_info);
 	if (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID)
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index b7e88e013a84..1c07db3ca9c5 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -537,7 +537,7 @@ int bnxt_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
 
 	dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
 	if (parent_bp->flags & BNXT_FLAG_PTP_SUPPORTED)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT;
 	dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
 
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 08cefa1baaef..7940d489a102 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -187,7 +187,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 			rx_ring_info->rx_ring_struct->ring_size *
 			AGG_RING_SIZE_FACTOR)) : 0;
 
-		if (rx_ring_info && (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+		if (rx_ring_info && (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 			int tpa_max = BNXT_TPA_MAX_AGGS(bp);
 
 			tpa_info_len = tpa_max * sizeof(struct bnxt_tpa_info);
@@ -283,7 +283,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 					    ag_bitmap_start, ag_bitmap_len);
 
 			/* TPA info */
-			if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+			if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 				rx_ring_info->tpa_info =
 					((struct bnxt_tpa_info *)
 					 ((char *)mz->addr + tpa_info_start));
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 38ec4aa14b77..1456f8b54ffa 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -52,13 +52,13 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 	bp->nr_vnics = 0;
 
 	/* Multi-queue mode */
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB_RSS) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
 		/* VMDq ONLY, VMDq+RSS, VMDq+DCB, VMDq+DCB+RSS */
 
 		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_RSS:
-		case ETH_MQ_RX_VMDQ_ONLY:
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
 			/* FALLTHROUGH */
 			/* ETH_8/64_POOLs */
 			pools = conf->nb_queue_pools;
@@ -66,14 +66,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 			max_pools = RTE_MIN(bp->max_vnics,
 					    RTE_MIN(bp->max_l2_ctx,
 					    RTE_MIN(bp->max_rsscos_ctx,
-						    ETH_64_POOLS)));
+						    RTE_ETH_64_POOLS)));
 			PMD_DRV_LOG(DEBUG,
 				    "pools = %u max_pools = %u\n",
 				    pools, max_pools);
 			if (pools > max_pools)
 				pools = max_pools;
 			break;
-		case ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_RSS:
 			pools = bp->rx_cosq_cnt ? bp->rx_cosq_cnt : 1;
 			break;
 		default:
@@ -111,7 +111,7 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 				    ring_idx, rxq, i, vnic);
 		}
 		if (i == 0) {
-			if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB) {
+			if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB) {
 				bp->eth_dev->data->promiscuous = 1;
 				vnic->flags |= BNXT_VNIC_INFO_PROMISC;
 			}
@@ -121,8 +121,8 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 		vnic->end_grp_id = end_grp_id;
 
 		if (i) {
-			if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB ||
-			    !(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS))
+			if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB ||
+			    !(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS))
 				vnic->rss_dflt_cr = true;
 			goto skip_filter_allocation;
 		}
@@ -147,14 +147,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 
 	bp->rx_num_qs_per_vnic = nb_q_per_grp;
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		struct rte_eth_rss_conf *rss = &dev_conf->rx_adv_conf.rss_conf;
 
 		if (bp->flags & BNXT_FLAG_UPDATE_HASH)
 			bp->flags &= ~BNXT_FLAG_UPDATE_HASH;
 
 		for (i = 0; i < bp->nr_vnics; i++) {
-			uint32_t lvl = ETH_RSS_LEVEL(rss->rss_hf);
+			uint32_t lvl = RTE_ETH_RSS_LEVEL(rss->rss_hf);
 
 			vnic = &bp->vnic_info[i];
 			vnic->hash_type =
@@ -363,7 +363,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 	PMD_DRV_LOG(DEBUG, "RX Buf size is %d\n", rxq->rx_buf_size);
 	rxq->queue_id = queue_idx;
 	rxq->port_id = eth_dev->data->port_id;
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -478,7 +478,7 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 	PMD_DRV_LOG(INFO, "Rx queue started %d\n", rx_queue_id);
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		vnic = rxq->vnic;
 
 		if (BNXT_HAS_RING_GRPS(bp)) {
@@ -549,7 +549,7 @@ int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	rxq->rx_started = false;
 	PMD_DRV_LOG(DEBUG, "Rx queue stopped\n");
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (BNXT_HAS_RING_GRPS(bp))
 			vnic->fw_grp_ids[rx_queue_id] = INVALID_HW_RING_ID;
 
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index aeacc60a0127..eb555c4545e6 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -566,8 +566,8 @@ bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
 	dev_conf = &rxq->bp->eth_dev->data->dev_conf;
 	offloads = dev_conf->rxmode.offloads;
 
-	outer_cksum_enabled = !!(offloads & (DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-					     DEV_RX_OFFLOAD_OUTER_UDP_CKSUM));
+	outer_cksum_enabled = !!(offloads & (RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+					     RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM));
 
 	/* Initialize ol_flags table. */
 	pt = rxr->ol_flags_table;
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
index d08854ff61e2..e4905b4fd169 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
@@ -416,7 +416,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_common.h b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
index 9b9489a695a2..0627fd212d0a 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_common.h
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
@@ -96,7 +96,7 @@ bnxt_rxq_rearm(struct bnxt_rx_queue *rxq, struct bnxt_rx_ring_info *rxr)
 }
 
 /*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
  * is enabled.
  */
 static inline void
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
index 13211060cf0e..f15e2d3b4ed4 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
@@ -352,7 +352,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
index 6e563053260a..ffd560166cac 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
@@ -333,7 +333,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 9e45ddd7a82e..f2fcaf53021c 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -353,7 +353,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 }
 
 /*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
  * is enabled.
  */
 static void bnxt_tx_cmp_fast(struct bnxt_tx_queue *txq, int nr_pkts)
@@ -479,7 +479,7 @@ static int bnxt_handle_tx_cp(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index 26253a7e17f2..c63cf4b943fa 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -239,17 +239,17 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
 {
 	uint16_t hwrm_type = 0;
 
-	if (rte_type & ETH_RSS_IPV4)
+	if (rte_type & RTE_ETH_RSS_IPV4)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
-	if (rte_type & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
-	if (rte_type & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
-	if (rte_type & ETH_RSS_IPV6)
+	if (rte_type & RTE_ETH_RSS_IPV6)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
-	if (rte_type & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
-	if (rte_type & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
 
 	return hwrm_type;
@@ -258,11 +258,11 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
 int bnxt_rte_to_hwrm_hash_level(struct bnxt *bp, uint64_t hash_f, uint32_t lvl)
 {
 	uint32_t mode = HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_DEFAULT;
-	bool l3 = (hash_f & (ETH_RSS_IPV4 | ETH_RSS_IPV6));
-	bool l4 = (hash_f & (ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV6_UDP |
-			     ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV6_TCP));
+	bool l3 = (hash_f & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6));
+	bool l4 = (hash_f & (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_TCP));
 	bool l3_only = l3 && !l4;
 	bool l3_and_l4 = l3 && l4;
 
@@ -307,16 +307,16 @@ uint64_t bnxt_hwrm_to_rte_rss_level(struct bnxt *bp, uint32_t mode)
 	 * return default hash mode.
 	 */
 	if (!(bp->vnic_cap_flags & BNXT_VNIC_CAP_OUTER_RSS))
-		return ETH_RSS_LEVEL_PMD_DEFAULT;
+		return RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
 
 	if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_2 ||
 	    mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_4)
-		rss_level |= ETH_RSS_LEVEL_OUTERMOST;
+		rss_level |= RTE_ETH_RSS_LEVEL_OUTERMOST;
 	else if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_2 ||
 		 mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_4)
-		rss_level |= ETH_RSS_LEVEL_INNERMOST;
+		rss_level |= RTE_ETH_RSS_LEVEL_INNERMOST;
 	else
-		rss_level |= ETH_RSS_LEVEL_PMD_DEFAULT;
+		rss_level |= RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
 
 	return rss_level;
 }
diff --git a/drivers/net/bnxt/rte_pmd_bnxt.c b/drivers/net/bnxt/rte_pmd_bnxt.c
index f71543810970..77ecbef04c3d 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt.c
+++ b/drivers/net/bnxt/rte_pmd_bnxt.c
@@ -421,18 +421,18 @@ int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf,
 	if (vf >= bp->pdev->max_vfs)
 		return -EINVAL;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG) {
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG) {
 		PMD_DRV_LOG(ERR, "Currently cannot toggle this setting\n");
 		return -ENOTSUP;
 	}
 
 	/* Is this really the correct mapping?  VFd seems to think it is. */
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 		flag |= BNXT_VNIC_INFO_PROMISC;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 		flag |= BNXT_VNIC_INFO_BCAST;
-	if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 		flag |= BNXT_VNIC_INFO_ALLMULTI | BNXT_VNIC_INFO_MCAST;
 
 	if (on)
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index fc179a2732ac..8b104b639184 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -167,8 +167,8 @@ struct bond_dev_private {
 	struct rte_eth_desc_lim tx_desc_lim;	/**< Tx descriptor limits */
 
 	uint16_t reta_size;
-	struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_512 /
-			RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
+			RTE_ETH_RETA_GROUP_SIZE];
 
 	uint8_t rss_key[52];				/**< 52-byte hash key buffer. */
 	uint8_t rss_key_len;				/**< hash key length in bytes. */
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 2029955c1092..ca50583d62d8 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -770,25 +770,25 @@ link_speed_key(uint16_t speed) {
 	uint16_t key_speed;
 
 	switch (speed) {
-	case ETH_SPEED_NUM_NONE:
+	case RTE_ETH_SPEED_NUM_NONE:
 		key_speed = 0x00;
 		break;
-	case ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_10M:
 		key_speed = BOND_LINK_SPEED_KEY_10M;
 		break;
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		key_speed = BOND_LINK_SPEED_KEY_100M;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		key_speed = BOND_LINK_SPEED_KEY_1000M;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		key_speed = BOND_LINK_SPEED_KEY_10G;
 		break;
-	case ETH_SPEED_NUM_20G:
+	case RTE_ETH_SPEED_NUM_20G:
 		key_speed = BOND_LINK_SPEED_KEY_20G;
 		break;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		key_speed = BOND_LINK_SPEED_KEY_40G;
 		break;
 	default:
@@ -887,7 +887,7 @@ bond_mode_8023ad_periodic_cb(void *arg)
 
 		if (ret >= 0 && link_info.link_status != 0) {
 			key = link_speed_key(link_info.link_speed) << 1;
-			if (link_info.link_duplex == ETH_LINK_FULL_DUPLEX)
+			if (link_info.link_duplex == RTE_ETH_LINK_FULL_DUPLEX)
 				key |= BOND_LINK_FULL_DUPLEX_KEY;
 		} else {
 			key = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 5140ef14c2ee..84943cffe2bb 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -204,7 +204,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
 
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	if ((bonded_eth_dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_VLAN_FILTER) == 0)
+			RTE_ETH_RX_OFFLOAD_VLAN_FILTER) == 0)
 		return 0;
 
 	internals = bonded_eth_dev->data->dev_private;
@@ -592,7 +592,7 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
 			return -1;
 		}
 
-		 if (link_props.link_status == ETH_LINK_UP) {
+		if (link_props.link_status == RTE_ETH_LINK_UP) {
 			if (internals->active_slave_count == 0 &&
 			    !internals->user_defined_primary_port)
 				bond_ethdev_primary_set(internals,
@@ -727,7 +727,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
 		internals->tx_offload_capa = 0;
 		internals->rx_queue_offload_capa = 0;
 		internals->tx_queue_offload_capa = 0;
-		internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+		internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
 		internals->reta_size = 0;
 		internals->candidate_max_rx_pktlen = 0;
 		internals->max_rx_pktlen = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 8d038ba6b6c4..834a5937b3aa 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1369,8 +1369,8 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
 		 * In any other mode the link properties are set to default
 		 * values of AUTONEG/DUPLEX
 		 */
-		ethdev->data->dev_link.link_autoneg = ETH_LINK_AUTONEG;
-		ethdev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		ethdev->data->dev_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
+		ethdev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	}
 }
 
@@ -1700,7 +1700,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 		slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
 
 	/* If RSS is enabled for bonding, try to enable it for slaves  */
-	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		/* rss_key won't be empty if RSS is configured in bonded dev */
 		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
 					internals->rss_key_len;
@@ -1714,12 +1714,12 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 	}
 
 	if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_VLAN_FILTER)
+			RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 		slave_eth_dev->data->dev_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_VLAN_FILTER;
+				RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	else
 		slave_eth_dev->data->dev_conf.rxmode.offloads &=
-				~DEV_RX_OFFLOAD_VLAN_FILTER;
+				~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	slave_eth_dev->data->dev_conf.rxmode.mtu =
 			bonded_eth_dev->data->dev_conf.rxmode.mtu;
@@ -1823,7 +1823,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 	}
 
 	/* If RSS is enabled for bonding, synchronize RETA */
-	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
 		int i;
 		struct bond_dev_private *internals;
 
@@ -1946,7 +1946,7 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 		return -1;
 	}
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 1;
 
 	internals = eth_dev->data->dev_private;
@@ -2086,7 +2086,7 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 			tlb_last_obytets[internals->active_slaves[i]] = 0;
 	}
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 0;
 
 	internals->link_status_polling_enabled = 0;
@@ -2416,15 +2416,15 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 
 	bond_ctx = ethdev->data->dev_private;
 
-	ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+	ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	if (ethdev->data->dev_started == 0 ||
 			bond_ctx->active_slave_count == 0) {
-		ethdev->data->dev_link.link_status = ETH_LINK_DOWN;
+		ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 		return 0;
 	}
 
-	ethdev->data->dev_link.link_status = ETH_LINK_UP;
+	ethdev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	if (wait_to_complete)
 		link_update = rte_eth_link_get;
@@ -2449,7 +2449,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 					  &slave_link);
 			if (ret < 0) {
 				ethdev->data->dev_link.link_speed =
-					ETH_SPEED_NUM_NONE;
+					RTE_ETH_SPEED_NUM_NONE;
 				RTE_BOND_LOG(ERR,
 					"Slave (port %u) link get failed: %s",
 					bond_ctx->active_slaves[idx],
@@ -2491,7 +2491,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 		 * In these modes the maximum theoretical link speed is the sum
 		 * of all the slaves
 		 */
-		ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		one_link_update_succeeded = false;
 
 		for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
@@ -2865,7 +2865,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 			goto link_update;
 
 		/* check link state properties if bonded link is up */
-		if (bonded_eth_dev->data->dev_link.link_status == ETH_LINK_UP) {
+		if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
 			if (link_properties_valid(bonded_eth_dev, &link) != 0)
 				RTE_BOND_LOG(ERR, "Invalid link properties "
 					     "for slave %d in bonding mode %d",
@@ -2881,7 +2881,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 		if (internals->active_slave_count < 1) {
 			/* If first active slave, then change link status */
 			bonded_eth_dev->data->dev_link.link_status =
-								ETH_LINK_UP;
+								RTE_ETH_LINK_UP;
 			internals->current_primary_port = port_id;
 			lsc_flag = 1;
 
@@ -2973,12 +2973,12 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	 /* Copy RETA table */
-	reta_count = (reta_size + RTE_RETA_GROUP_SIZE - 1) /
-			RTE_RETA_GROUP_SIZE;
+	reta_count = (reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) /
+			RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < reta_count; i++) {
 		internals->reta_conf[i].mask = reta_conf[i].mask;
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				internals->reta_conf[i].reta[j] = reta_conf[i].reta[j];
 	}
@@ -3011,8 +3011,8 @@ bond_ethdev_rss_reta_query(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	 /* Copy RETA table */
-	for (i = 0; i < reta_size / RTE_RETA_GROUP_SIZE; i++)
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < reta_size / RTE_ETH_RETA_GROUP_SIZE; i++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = internals->reta_conf[i].reta[j];
 
@@ -3274,7 +3274,7 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
 	internals->max_rx_pktlen = 0;
 
 	/* Initially allow to choose any offload type */
-	internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+	internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
 
 	memset(&internals->default_rxconf, 0,
 	       sizeof(internals->default_rxconf));
@@ -3501,7 +3501,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 	 * set key to the value specified in port RSS configuration.
 	 * Fall back to default RSS key if the key is not specified
 	 */
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
 		struct rte_eth_rss_conf *rss_conf =
 			&dev->data->dev_conf.rx_adv_conf.rss_conf;
 		if (rss_conf->rss_key != NULL) {
@@ -3526,9 +3526,9 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 
 		for (i = 0; i < RTE_DIM(internals->reta_conf); i++) {
 			internals->reta_conf[i].mask = ~0LL;
-			for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+			for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 				internals->reta_conf[i].reta[j] =
-						(i * RTE_RETA_GROUP_SIZE + j) %
+						(i * RTE_ETH_RETA_GROUP_SIZE + j) %
 						dev->data->nb_rx_queues;
 		}
 	}
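
The bonding default above spreads RETA entries round-robin across the Rx queues; a minimal standalone sketch of the same pattern with the renamed RTE_ETH_RETA_GROUP_SIZE (illustrative, not part of the patch; port_id, reta_size and nb_queues are placeholders):

#include <string.h>
#include <rte_ethdev.h>

/* Minimal sketch: spread reta_size RETA entries evenly across nb_queues
 * queues using the RTE_ETH_-prefixed names. Not part of the patch. */
static int
spread_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_queues)
{
	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
			RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < reta_size; i++) {
		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask |=
				1ULL << (i % RTE_ETH_RETA_GROUP_SIZE);
		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].reta[i %
				RTE_ETH_RETA_GROUP_SIZE] = i % nb_queues;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
}
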
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 9dfea99db9b2..d52f8ffecf23 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -15,28 +15,28 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rxmode *rxmode = &conf->rxmode;
 	uint16_t flags = 0;
 
-	if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
-	    (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+	    (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		flags |= NIX_RX_OFFLOAD_RSS_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		flags |= NIX_RX_MULTI_SEG_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 
 	if (!dev->ptype_disable)
 		flags |= NIX_RX_OFFLOAD_PTYPE_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		flags |= NIX_RX_OFFLOAD_SECURITY_F;
 
 	return flags;
@@ -72,39 +72,39 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
 			 offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
 
-	if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
-	    conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+	    conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
 
-	if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= NIX_TX_MULTI_SEG_F;
 
 	/* Enable Inner checksum for TSO */
-	if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+	if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
 	/* Enable Inner and Outer checksum for Tunnel TSO */
-	if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		    DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
-	if (conf & DEV_TX_OFFLOAD_SECURITY)
+	if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
 		flags |= NIX_TX_OFFLOAD_SECURITY_F;
 
 	return flags;
diff --git a/drivers/net/cnxk/cn10k_rte_flow.c b/drivers/net/cnxk/cn10k_rte_flow.c
index 8c87452934eb..dff4c7746cf5 100644
--- a/drivers/net/cnxk/cn10k_rte_flow.c
+++ b/drivers/net/cnxk/cn10k_rte_flow.c
@@ -98,7 +98,7 @@ cn10k_rss_action_validate(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		plt_err("multi-queue mode is disabled");
 		return -ENOTSUP;
 	}
diff --git a/drivers/net/cnxk/cn10k_rx.c b/drivers/net/cnxk/cn10k_rx.c
index d6af54b56de6..5d603514c045 100644
--- a/drivers/net/cnxk/cn10k_rx.c
+++ b/drivers/net/cnxk/cn10k_rx.c
@@ -77,12 +77,12 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 			nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
 
 	if (dev->scalar_ena) {
-		if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 			return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
 		return pick_rx_func(eth_dev, nix_eth_rx_burst);
 	}
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
 	return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
 }
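
For the scatter/multi-seg selection above, a minimal sketch of how an application requests the renamed offload flags at configure time (illustrative, not part of the patch; port and queue counts are placeholders):

#include <string.h>
#include <rte_ethdev.h>

/* Minimal sketch: enable scattered Rx and multi-segment Tx using the
 * RTE_ETH_-prefixed offload names. Not part of the patch. */
static int
configure_multiseg(uint16_t port_id)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.offloads = RTE_ETH_RX_OFFLOAD_SCATTER;
	conf.txmode.offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
	return rte_eth_dev_configure(port_id, 1 /* nb_rxq */,
				     1 /* nb_txq */, &conf);
}
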
diff --git a/drivers/net/cnxk/cn10k_tx.c b/drivers/net/cnxk/cn10k_tx.c
index eb962ef08cab..5e6c5ee11188 100644
--- a/drivers/net/cnxk/cn10k_tx.c
+++ b/drivers/net/cnxk/cn10k_tx.c
@@ -78,11 +78,11 @@ cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 
 	if (dev->scalar_ena) {
 		pick_tx_func(eth_dev, nix_eth_tx_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
 	} else {
 		pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
 	}
 
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 08c86f9e6b7b..17f8f6debbc8 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -15,28 +15,28 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rxmode *rxmode = &conf->rxmode;
 	uint16_t flags = 0;
 
-	if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
-	    (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+	    (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		flags |= NIX_RX_OFFLOAD_RSS_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		flags |= NIX_RX_MULTI_SEG_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 
 	if (!dev->ptype_disable)
 		flags |= NIX_RX_OFFLOAD_PTYPE_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		flags |= NIX_RX_OFFLOAD_SECURITY_F;
 
 	return flags;
@@ -72,39 +72,39 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
 			 offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
 
-	if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
-	    conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+	    conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
 
-	if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= NIX_TX_MULTI_SEG_F;
 
 	/* Enable Inner checksum for TSO */
-	if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+	if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
 	/* Enable Inner and Outer checksum for Tunnel TSO */
-	if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		    DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
 		flags |= NIX_TX_OFFLOAD_SECURITY_F;
 
 	return flags;
@@ -298,9 +298,9 @@ cn9k_nix_configure(struct rte_eth_dev *eth_dev)
 
 	/* Platform specific checks */
 	if ((roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) &&
-	    (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
-	    ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
-	     (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+	    ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+	     (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
 		plt_err("Outer IP and SCTP checksum unsupported");
 		return -EINVAL;
 	}
@@ -553,17 +553,17 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 	 * TSO not supported for earlier chip revisions
 	 */
 	if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0())
-		dev->tx_offload_capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
-					  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-					  DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-					  DEV_TX_OFFLOAD_GRE_TNL_TSO);
+		dev->tx_offload_capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+					  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+					  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+					  RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
 
 	/* 50G and 100G to be supported for board version C0
 	 * and above of CN9K.
 	 */
 	if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) {
-		dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_50G;
-		dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_100G;
+		dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_50G;
+		dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_100G;
 	}
 
 	dev->hwcap = 0;
diff --git a/drivers/net/cnxk/cn9k_rx.c b/drivers/net/cnxk/cn9k_rx.c
index 5c4387e74e0b..8d504c4a6d92 100644
--- a/drivers/net/cnxk/cn9k_rx.c
+++ b/drivers/net/cnxk/cn9k_rx.c
@@ -77,12 +77,12 @@ cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 			nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
 
 	if (dev->scalar_ena) {
-		if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 			return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
 		return pick_rx_func(eth_dev, nix_eth_rx_burst);
 	}
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
 	return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
 }
diff --git a/drivers/net/cnxk/cn9k_tx.c b/drivers/net/cnxk/cn9k_tx.c
index e5691a2a7e16..f3f19fed9780 100644
--- a/drivers/net/cnxk/cn9k_tx.c
+++ b/drivers/net/cnxk/cn9k_tx.c
@@ -77,11 +77,11 @@ cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 
 	if (dev->scalar_ena) {
 		pick_tx_func(eth_dev, nix_eth_tx_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
 	} else {
 		pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
 	}
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 2e05d8bf1552..db54468dbca1 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -10,7 +10,7 @@ nix_get_rx_offload_capa(struct cnxk_eth_dev *dev)
 
 	if (roc_nix_is_vf_or_sdp(&dev->nix) ||
 	    dev->npc.switch_header_type == ROC_PRIV_FLAGS_HIGIG)
-		capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+		capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	return capa;
 }
@@ -28,11 +28,11 @@ nix_get_speed_capa(struct cnxk_eth_dev *dev)
 	uint32_t speed_capa;
 
 	/* Auto negotiation disabled */
-	speed_capa = ETH_LINK_SPEED_FIXED;
+	speed_capa = RTE_ETH_LINK_SPEED_FIXED;
 	if (!roc_nix_is_vf_or_sdp(&dev->nix) && !roc_nix_is_lbk(&dev->nix)) {
-		speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			      ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
-			      ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			      RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+			      RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
 	}
 
 	return speed_capa;
@@ -65,7 +65,7 @@ nix_security_setup(struct cnxk_eth_dev *dev)
 	struct roc_nix *nix = &dev->nix;
 	int i, rc = 0;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		/* Setup Inline Inbound */
 		rc = roc_nix_inl_inb_init(nix);
 		if (rc) {
@@ -80,8 +80,8 @@ nix_security_setup(struct cnxk_eth_dev *dev)
 		cnxk_nix_inb_mode_set(dev, true);
 	}
 
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
-	    dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY ||
+	    dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		struct plt_bitmap *bmap;
 		size_t bmap_sz;
 		void *mem;
@@ -100,8 +100,8 @@ nix_security_setup(struct cnxk_eth_dev *dev)
 
 		dev->outb.lf_base = roc_nix_inl_outb_lf_base_get(nix);
 
-		/* Skip the rest if DEV_TX_OFFLOAD_SECURITY is not enabled */
-		if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY))
+		/* Skip the rest if RTE_ETH_TX_OFFLOAD_SECURITY is not enabled */
+		if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY))
 			goto done;
 
 		rc = -ENOMEM;
@@ -136,7 +136,7 @@ nix_security_setup(struct cnxk_eth_dev *dev)
 done:
 	return 0;
 cleanup:
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		rc |= roc_nix_inl_inb_fini(nix);
 	return rc;
 }
@@ -182,7 +182,7 @@ nix_security_release(struct cnxk_eth_dev *dev)
 	int rc, ret = 0;
 
 	/* Cleanup Inline inbound */
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		/* Destroy inbound sessions */
 		tvar = NULL;
 		RTE_TAILQ_FOREACH_SAFE(eth_sec, &dev->inb.list, entry, tvar)
@@ -199,8 +199,8 @@ nix_security_release(struct cnxk_eth_dev *dev)
 	}
 
 	/* Cleanup Inline outbound */
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
-	    dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY ||
+	    dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		/* Destroy outbound sessions */
 		tvar = NULL;
 		RTE_TAILQ_FOREACH_SAFE(eth_sec, &dev->outb.list, entry, tvar)
@@ -242,8 +242,8 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
 	buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
 
 	if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD > buffsz) {
-		dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
-		dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+		dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	}
 }
 
@@ -273,7 +273,7 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
 	struct rte_eth_fc_conf fc_conf = {0};
 	int rc;
 
-	/* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+	/* Both Rx & Tx flow ctrl get enabled (RTE_ETH_FC_FULL) in HW
 	 * by AF driver, update those info in PMD structure.
 	 */
 	rc = cnxk_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -281,10 +281,10 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
 		goto exit;
 
 	fc->mode = fc_conf.mode;
-	fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_RX_PAUSE);
-	fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_TX_PAUSE);
+	fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+	fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
 
 exit:
 	return rc;
@@ -305,11 +305,11 @@ nix_update_flow_ctrl_config(struct rte_eth_dev *eth_dev)
 	/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
 	if (roc_model_is_cn96_ax() &&
 	    dev->npc.switch_header_type != ROC_PRIV_FLAGS_HIGIG &&
-	    (fc_cfg.mode == RTE_FC_FULL || fc_cfg.mode == RTE_FC_RX_PAUSE)) {
+	    (fc_cfg.mode == RTE_ETH_FC_FULL || fc_cfg.mode == RTE_ETH_FC_RX_PAUSE)) {
 		fc_cfg.mode =
-				(fc_cfg.mode == RTE_FC_FULL ||
-				fc_cfg.mode == RTE_FC_TX_PAUSE) ?
-				RTE_FC_TX_PAUSE : RTE_FC_NONE;
+				(fc_cfg.mode == RTE_ETH_FC_FULL ||
+				fc_cfg.mode == RTE_ETH_FC_TX_PAUSE) ?
+				RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
 	}
 
 	return cnxk_nix_flow_ctrl_set(eth_dev, &fc_cfg);
@@ -352,7 +352,7 @@ nix_sq_max_sqe_sz(struct cnxk_eth_dev *dev)
 	 * Maximum three segments can be supported with W8; choose
 	 * NIX_MAXSQESZ_W16 for multi-segment offload.
 	 */
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		return NIX_MAXSQESZ_W16;
 	else
 		return NIX_MAXSQESZ_W8;
@@ -380,7 +380,7 @@ cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	/* When Tx Security offload is enabled, increase tx desc count by
 	 * max possible outbound desc count.
 	 */
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
 		nb_desc += dev->outb.nb_desc;
 
 	/* Setup ROC SQ */
@@ -499,7 +499,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	 * to avoid meta packet drop as LBK does not currently support
 	 * backpressure.
 	 */
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY && roc_nix_is_lbk(nix)) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY && roc_nix_is_lbk(nix)) {
 		uint64_t pkt_pool_limit = roc_nix_inl_dev_rq_limit_get();
 
 		/* Use current RQ's aura limit if inl rq is not available */
@@ -561,7 +561,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rxq_sp->qconf.nb_desc = nb_desc;
 	rxq_sp->qconf.mp = mp;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		/* Setup rq reference for inline dev if present */
 		rc = roc_nix_inl_dev_rq_get(rq);
 		if (rc)
@@ -579,7 +579,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	 * These are needed in deriving raw clock value from tsc counter.
 	 * read_clock eth op returns raw clock value.
 	 */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
 		rc = cnxk_nix_tsc_convert(dev);
 		if (rc) {
 			plt_err("Failed to calculate delta and freq mult");
@@ -618,7 +618,7 @@ cnxk_nix_rx_queue_release(struct rte_eth_dev *eth_dev, uint16_t qid)
 	plt_nix_dbg("Releasing rxq %u", qid);
 
 	/* Release rq reference for inline dev if present */
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		roc_nix_inl_dev_rq_put(rq);
 
 	/* Cleanup ROC RQ */
@@ -657,24 +657,24 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
 
 	dev->ethdev_rss_hf = ethdev_rss;
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
 	    dev->npc.switch_header_type == ROC_PRIV_FLAGS_LEN_90B) {
 		flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
 	}
 
-	if (ethdev_rss & ETH_RSS_C_VLAN)
+	if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
 
-	if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
 
-	if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
 
-	if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
 
-	if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
 
 	if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -683,34 +683,34 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
 	if (ethdev_rss & RSS_IPV6_ENABLE)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
 
-	if (ethdev_rss & ETH_RSS_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_TCP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_UDP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_SCTP)
+	if (ethdev_rss & RTE_ETH_RSS_SCTP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
 
 	if (ethdev_rss & RSS_IPV6_EX_ENABLE)
 		flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
 
-	if (ethdev_rss & ETH_RSS_PORT)
+	if (ethdev_rss & RTE_ETH_RSS_PORT)
 		flowkey_cfg |= FLOW_KEY_TYPE_PORT;
 
-	if (ethdev_rss & ETH_RSS_NVGRE)
+	if (ethdev_rss & RTE_ETH_RSS_NVGRE)
 		flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
 
-	if (ethdev_rss & ETH_RSS_VXLAN)
+	if (ethdev_rss & RTE_ETH_RSS_VXLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
 
-	if (ethdev_rss & ETH_RSS_GENEVE)
+	if (ethdev_rss & RTE_ETH_RSS_GENEVE)
 		flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
 
-	if (ethdev_rss & ETH_RSS_GTPU)
+	if (ethdev_rss & RTE_ETH_RSS_GTPU)
 		flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
 
 	return flowkey_cfg;
@@ -746,7 +746,7 @@ nix_rss_default_setup(struct cnxk_eth_dev *dev)
 	uint64_t rss_hf;
 
 	rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
-	rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 
@@ -958,8 +958,8 @@ nix_lso_fmt_setup(struct cnxk_eth_dev *dev)
 
 	/* Nothing much to do if offload is not enabled */
 	if (!(dev->tx_offloads &
-	      (DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-	       DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO)))
+	      (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+	       RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))
 		return 0;
 
 	/* Setup LSO formats in AF. It's a no-op if other ethdev has
@@ -1007,13 +1007,13 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 		goto fail_configure;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-	    rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+	    rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		plt_err("Unsupported mq rx mode %d", rxmode->mq_mode);
 		goto fail_configure;
 	}
 
-	if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		plt_err("Unsupported mq tx mode %d", txmode->mq_mode);
 		goto fail_configure;
 	}
@@ -1054,7 +1054,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 	/* Prepare rx cfg */
 	rx_cfg = ROC_NIX_LF_RX_CFG_DIS_APAD;
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+	    (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
 		rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_OL4;
 		rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_IL4;
 	}
@@ -1062,7 +1062,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 		   ROC_NIX_LF_RX_CFG_LEN_IL4 | ROC_NIX_LF_RX_CFG_LEN_IL3 |
 		   ROC_NIX_LF_RX_CFG_LEN_OL4 | ROC_NIX_LF_RX_CFG_LEN_OL3);
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		rx_cfg |= ROC_NIX_LF_RX_CFG_IP6_UDP_OPT;
 		/* Disable drop re if rx offload security is enabled and
 		 * platform does not support it.
@@ -1454,12 +1454,12 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
 	 * enabled on PF owning this VF
 	 */
 	memset(&dev->tstamp, 0, sizeof(struct cnxk_timesync_info));
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
 		cnxk_eth_dev_ops.timesync_enable(eth_dev);
 	else
 		cnxk_eth_dev_ops.timesync_disable(eth_dev);
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 		rc = rte_mbuf_dyn_rx_timestamp_register
 			(&dev->tstamp.tstamp_dynfield_offset,
 			 &dev->tstamp.rx_tstamp_dynflag);
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 72f80ae948cf..29a3540ed3f8 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -58,41 +58,44 @@
 	 CNXK_NIX_TX_NB_SEG_MAX)
 
 #define CNXK_NIX_RSS_L3_L4_SRC_DST                                             \
-	(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | ETH_RSS_L4_SRC_ONLY |     \
-	 ETH_RSS_L4_DST_ONLY)
+	(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |                   \
+	 RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
 
 #define CNXK_NIX_RSS_OFFLOAD                                                   \
-	(ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP |               \
-	 ETH_RSS_SCTP | ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD |                  \
-	 CNXK_NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | ETH_RSS_C_VLAN)
+	(RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |                 \
+	 RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_TUNNEL |             \
+	 RTE_ETH_RSS_L2_PAYLOAD | CNXK_NIX_RSS_L3_L4_SRC_DST |                 \
+	 RTE_ETH_RSS_LEVEL_MASK | RTE_ETH_RSS_C_VLAN)
 
 #define CNXK_NIX_TX_OFFLOAD_CAPA                                               \
-	(DEV_TX_OFFLOAD_MBUF_FAST_FREE | DEV_TX_OFFLOAD_MT_LOCKFREE |          \
-	 DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT |             \
-	 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |    \
-	 DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM |                 \
-	 DEV_TX_OFFLOAD_SCTP_CKSUM | DEV_TX_OFFLOAD_TCP_TSO |                  \
-	 DEV_TX_OFFLOAD_VXLAN_TNL_TSO | DEV_TX_OFFLOAD_GENEVE_TNL_TSO |        \
-	 DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_MULTI_SEGS |              \
-	 DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_SECURITY)
+	(RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |          \
+	 RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT |             \
+	 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |    \
+	 RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM |                 \
+	 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO |                  \
+	 RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |        \
+	 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS |              \
+	 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_SECURITY)
 
 #define CNXK_NIX_RX_OFFLOAD_CAPA                                               \
-	(DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM |                 \
-	 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER |            \
-	 DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | DEV_RX_OFFLOAD_RSS_HASH |            \
-	 DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_VLAN_STRIP |                \
-	 DEV_RX_OFFLOAD_SECURITY)
+	(RTE_ETH_RX_OFFLOAD_CHECKSUM | RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |         \
+	 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_SCATTER |    \
+	 RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_RSS_HASH |    \
+	 RTE_ETH_RX_OFFLOAD_TIMESTAMP | RTE_ETH_RX_OFFLOAD_VLAN_STRIP |        \
+	 RTE_ETH_RX_OFFLOAD_SECURITY)
 
 #define RSS_IPV4_ENABLE                                                        \
-	(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP |         \
-	 ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_SCTP)
+	(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |                            \
+	 RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV4_TCP |         \
+	 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
 #define RSS_IPV6_ENABLE                                                        \
-	(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP |         \
-	 ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_SCTP)
+	(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |                            \
+	 RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |         \
+	 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 #define RSS_IPV6_EX_ENABLE                                                     \
-	(ETH_RSS_IPV6_EX | ETH_RSS_IPV6_TCP_EX | ETH_RSS_IPV6_UDP_EX)
+	(RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define RSS_MAX_LEVELS 3
 
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index c0b949e21ab0..e068f553495c 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -104,11 +104,11 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
 
 	val = atoi(value);
 
-	if (val <= ETH_RSS_RETA_SIZE_64)
+	if (val <= RTE_ETH_RSS_RETA_SIZE_64)
 		val = ROC_NIX_RSS_RETA_SZ_64;
-	else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
+	else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
 		val = ROC_NIX_RSS_RETA_SZ_128;
-	else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
+	else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
 		val = ROC_NIX_RSS_RETA_SZ_256;
 	else
 		val = ROC_NIX_RSS_RETA_SZ_64;
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index d0924df76152..67464302653d 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -81,24 +81,24 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
 		uint64_t flags;
 		const char *output;
 	} rx_offload_map[] = {
-		{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
-		{DEV_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
-		{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
-		{DEV_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
-		{DEV_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
-		{DEV_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
-		{DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
-		{DEV_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
-		{DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
-		{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
-		{DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
-		{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
-		{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
-		{DEV_RX_OFFLOAD_SECURITY, " Security,"},
-		{DEV_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
-		{DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
-		{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
-		{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+		{RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
+		{RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
+		{RTE_ETH_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
+		{RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
+		{RTE_ETH_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
+		{RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
+		{RTE_ETH_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
+		{RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+		{RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+		{RTE_ETH_RX_OFFLOAD_SECURITY, " Security,"},
+		{RTE_ETH_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
+		{RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
+		{RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
 	};
 	static const char *const burst_mode[] = {"Vector Neon, Rx Offloads:",
 						 "Scalar, Rx Offloads:"
@@ -142,28 +142,28 @@ cnxk_nix_tx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
 		uint64_t flags;
 		const char *output;
 	} tx_offload_map[] = {
-		{DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
-		{DEV_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
-		{DEV_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
-		{DEV_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
-		{DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
-		{DEV_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
-		{DEV_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
-		{DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
-		{DEV_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
-		{DEV_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
-		{DEV_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
-		{DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
-		{DEV_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
-		{DEV_TX_OFFLOAD_SECURITY, " Security,"},
-		{DEV_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
-		{DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
+		{RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+		{RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
+		{RTE_ETH_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
+		{RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
+		{RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
+		{RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
+		{RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
+		{RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
+		{RTE_ETH_TX_OFFLOAD_SECURITY, " Security,"},
+		{RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
 	};
 	static const char *const burst_mode[] = {"Vector Neon, Tx Offloads:",
 						 "Scalar, Tx Offloads:"
@@ -203,8 +203,8 @@ cnxk_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 	enum rte_eth_fc_mode mode_map[] = {
-					   RTE_FC_NONE, RTE_FC_RX_PAUSE,
-					   RTE_FC_TX_PAUSE, RTE_FC_FULL
+					   RTE_ETH_FC_NONE, RTE_ETH_FC_RX_PAUSE,
+					   RTE_ETH_FC_TX_PAUSE, RTE_ETH_FC_FULL
 					  };
 	struct roc_nix *nix = &dev->nix;
 	int mode;
@@ -264,10 +264,10 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	if (fc_conf->mode == fc->mode)
 		return 0;
 
-	rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_RX_PAUSE);
-	tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_TX_PAUSE);
+	rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+	tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
 
 	/* Check if TX pause frame is already enabled or not */
 	if (fc->tx_pause ^ tx_pause) {
@@ -408,13 +408,13 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	 * when this feature has not been enabled before.
 	 */
 	if (data->dev_started && frame_size > buffsz &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		plt_err("Scatter offload is not enabled for mtu");
 		goto exit;
 	}
 
 	/* Check <seg size> * <max_seg>  >= max_frame */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)	&&
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	&&
 	    frame_size > (buffsz * CNXK_NIX_RX_NB_SEG_MAX)) {
 		plt_err("Greater than maximum supported packet length");
 		goto exit;
@@ -734,8 +734,8 @@ cnxk_nix_reta_update(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Copy RETA table */
-	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta[idx] = reta_conf[i].reta[j];
 			idx++;
@@ -770,8 +770,8 @@ cnxk_nix_reta_query(struct rte_eth_dev *eth_dev,
 		goto fail;
 
 	/* Copy RETA table */
-	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = reta[idx];
 			idx++;
@@ -804,7 +804,7 @@ cnxk_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
 	if (rss_conf->rss_key)
 		roc_nix_rss_key_set(nix, rss_conf->rss_key);
 
-	rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 	flowkey_cfg =
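
Mirroring the flow-control renames above, a minimal application-side sketch with the new RTE_ETH_FC_* modes (illustrative, not part of the patch; port_id is a placeholder):

#include <rte_ethdev.h>

/* Minimal sketch: enable full Rx+Tx flow control with the renamed
 * RTE_ETH_FC_* constants. Not part of the patch; port_id is assumed. */
static int
enable_full_fc(uint16_t port_id)
{
	struct rte_eth_fc_conf fc_conf;
	int ret;

	ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
	if (ret != 0)
		return ret;

	fc_conf.mode = RTE_ETH_FC_FULL;
	return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}
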
diff --git a/drivers/net/cnxk/cnxk_link.c b/drivers/net/cnxk/cnxk_link.c
index 6a7080167598..f10a502826c6 100644
--- a/drivers/net/cnxk/cnxk_link.c
+++ b/drivers/net/cnxk/cnxk_link.c
@@ -38,7 +38,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
 		plt_info("Port %d: Link Up - speed %u Mbps - %s",
 			 (int)(eth_dev->data->port_id),
 			 (uint32_t)link->link_speed,
-			 link->link_duplex == ETH_LINK_FULL_DUPLEX
+			 link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX
 				 ? "full-duplex"
 				 : "half-duplex");
 	else
@@ -89,7 +89,7 @@ cnxk_eth_dev_link_status_cb(struct roc_nix *nix, struct roc_nix_link_info *link)
 
 	eth_link.link_status = link->status;
 	eth_link.link_speed = link->speed;
-	eth_link.link_autoneg = ETH_LINK_AUTONEG;
+	eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	eth_link.link_duplex = link->full_duplex;
 
 	/* Print link info */
@@ -117,17 +117,17 @@ cnxk_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
 		return 0;
 
 	if (roc_nix_is_lbk(&dev->nix)) {
-		link.link_status = ETH_LINK_UP;
-		link.link_speed = ETH_SPEED_NUM_100G;
-		link.link_autoneg = ETH_LINK_FIXED;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_status = RTE_ETH_LINK_UP;
+		link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	} else {
 		rc = roc_nix_mac_link_info_get(&dev->nix, &info);
 		if (rc)
 			return rc;
 		link.link_status = info.status;
 		link.link_speed = info.speed;
-		link.link_autoneg = ETH_LINK_AUTONEG;
+		link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 		if (info.full_duplex)
 			link.link_duplex = info.full_duplex;
 	}
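
The link reporting above maps directly to the application view; a minimal sketch using the renamed RTE_ETH_LINK_* constants (illustrative, not part of the patch; port_id is a placeholder):

#include <stdio.h>
#include <rte_ethdev.h>

/* Minimal sketch: query and print link state with the renamed
 * RTE_ETH_LINK_* names. Not part of the patch; port_id is assumed. */
static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;

	if (link.link_status == RTE_ETH_LINK_UP)
		printf("port %u: up, %u Mbps, %s\n", port_id,
		       link.link_speed,
		       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
		       "full-duplex" : "half-duplex");
	else
		printf("port %u: down\n", port_id);
}
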
diff --git a/drivers/net/cnxk/cnxk_ptp.c b/drivers/net/cnxk/cnxk_ptp.c
index 449489f599c4..139fea256ccd 100644
--- a/drivers/net/cnxk/cnxk_ptp.c
+++ b/drivers/net/cnxk/cnxk_ptp.c
@@ -227,7 +227,7 @@ cnxk_nix_timesync_enable(struct rte_eth_dev *eth_dev)
 	dev->rx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
 	dev->tx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
 
-	dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+	dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	rc = roc_nix_ptp_rx_ena_dis(nix, true);
 	if (!rc) {
@@ -257,7 +257,7 @@ int
 cnxk_nix_timesync_disable(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
-	uint64_t rx_offloads = DEV_RX_OFFLOAD_TIMESTAMP;
+	uint64_t rx_offloads = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	struct roc_nix *nix = &dev->nix;
 	int rc = 0;
 
diff --git a/drivers/net/cnxk/cnxk_rte_flow.c b/drivers/net/cnxk/cnxk_rte_flow.c
index ad89a2e105b1..c86c92ce4c2f 100644
--- a/drivers/net/cnxk/cnxk_rte_flow.c
+++ b/drivers/net/cnxk/cnxk_rte_flow.c
@@ -69,7 +69,7 @@ npc_rss_action_validate(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		plt_err("multi-queue mode is disabled");
 		return -ENOTSUP;
 	}
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 37625c5bfb69..dbcbfaf68a30 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -28,31 +28,31 @@
 #define CXGBE_LINK_STATUS_POLL_CNT 100 /* Max number of times to poll */
 
 #define CXGBE_DEFAULT_RSS_KEY_LEN     40 /* 320-bits */
-#define CXGBE_RSS_HF_IPV4_MASK (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
-				ETH_RSS_NONFRAG_IPV4_OTHER)
-#define CXGBE_RSS_HF_IPV6_MASK (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
-				ETH_RSS_NONFRAG_IPV6_OTHER | \
-				ETH_RSS_IPV6_EX)
-#define CXGBE_RSS_HF_TCP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_TCP | \
-				    ETH_RSS_IPV6_TCP_EX)
-#define CXGBE_RSS_HF_UDP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_UDP | \
-				    ETH_RSS_IPV6_UDP_EX)
-#define CXGBE_RSS_HF_ALL (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP)
+#define CXGBE_RSS_HF_IPV4_MASK (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+				RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
+#define CXGBE_RSS_HF_IPV6_MASK (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
+				RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+				RTE_ETH_RSS_IPV6_EX)
+#define CXGBE_RSS_HF_TCP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+				    RTE_ETH_RSS_IPV6_TCP_EX)
+#define CXGBE_RSS_HF_UDP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+				    RTE_ETH_RSS_IPV6_UDP_EX)
+#define CXGBE_RSS_HF_ALL (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP)
 
 /* Tx/Rx Offloads supported */
-#define CXGBE_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT | \
-			   DEV_TX_OFFLOAD_IPV4_CKSUM | \
-			   DEV_TX_OFFLOAD_UDP_CKSUM | \
-			   DEV_TX_OFFLOAD_TCP_CKSUM | \
-			   DEV_TX_OFFLOAD_TCP_TSO | \
-			   DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define CXGBE_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP | \
-			   DEV_RX_OFFLOAD_IPV4_CKSUM | \
-			   DEV_RX_OFFLOAD_UDP_CKSUM | \
-			   DEV_RX_OFFLOAD_TCP_CKSUM | \
-			   DEV_RX_OFFLOAD_SCATTER | \
-			   DEV_RX_OFFLOAD_RSS_HASH)
+#define CXGBE_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+			   RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+			   RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+			   RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+			   RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define CXGBE_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+			   RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+			   RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+			   RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+			   RTE_ETH_RX_OFFLOAD_SCATTER | \
+			   RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 /* Devargs filtermode and filtermask representation */
 enum cxgbe_devargs_filter_mode_flags {
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index f77b2976002c..4758321778d1 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -231,9 +231,9 @@ int cxgbe_dev_link_update(struct rte_eth_dev *eth_dev,
 	}
 
 	new_link.link_status = cxgbe_force_linkup(adapter) ?
-			       ETH_LINK_UP : pi->link_cfg.link_ok;
+			       RTE_ETH_LINK_UP : pi->link_cfg.link_ok;
 	new_link.link_autoneg = (lc->link_caps & FW_PORT_CAP32_ANEG) ? 1 : 0;
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	new_link.link_speed = t4_fwcap_to_speed(lc->link_caps);
 
 	return rte_eth_linkstatus_set(eth_dev, &new_link);
@@ -374,7 +374,7 @@ int cxgbe_dev_start(struct rte_eth_dev *eth_dev)
 			goto out;
 	}
 
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		eth_dev->data->scattered_rx = 1;
 	else
 		eth_dev->data->scattered_rx = 0;
@@ -438,9 +438,9 @@ int cxgbe_dev_configure(struct rte_eth_dev *eth_dev)
 
 	CXGBE_FUNC_TRACE();
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (!(adapter->flags & FW_QUEUE_BOUND)) {
 		err = cxgbe_setup_sge_fwevtq(adapter);
@@ -1080,13 +1080,13 @@ static int cxgbe_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 		rx_pause = 1;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	return 0;
 }
 
@@ -1099,12 +1099,12 @@ static int cxgbe_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	u8 tx_pause = 0, rx_pause = 0;
 	int ret;
 
-	if (fc_conf->mode == RTE_FC_FULL) {
+	if (fc_conf->mode == RTE_ETH_FC_FULL) {
 		tx_pause = 1;
 		rx_pause = 1;
-	} else if (fc_conf->mode == RTE_FC_TX_PAUSE) {
+	} else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE) {
 		tx_pause = 1;
-	} else if (fc_conf->mode == RTE_FC_RX_PAUSE) {
+	} else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE) {
 		rx_pause = 1;
 	}
 
@@ -1200,9 +1200,9 @@ static int cxgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 		rss_hf |= CXGBE_RSS_HF_IPV6_MASK;
 
 	if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN) {
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		if (flags & F_FW_RSS_VI_CONFIG_CMD_UDPEN)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	}
 
 	if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN)
@@ -1246,8 +1246,8 @@ static int cxgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 
 	rte_memcpy(rss, pi->rss, pi->rss_size * sizeof(u16));
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (!(reta_conf[idx].mask & (1ULL << shift)))
 			continue;
 
@@ -1277,8 +1277,8 @@ static int cxgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (!(reta_conf[idx].mask & (1ULL << shift)))
 			continue;
 
@@ -1479,7 +1479,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
 
 	if (lc->pcaps & FW_PORT_CAP32_SPEED_100G) {
 		if (capa_arr) {
-			capa_arr[num].speed = ETH_SPEED_NUM_100G;
+			capa_arr[num].speed = RTE_ETH_SPEED_NUM_100G;
 			capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(RS);
 		}
@@ -1488,7 +1488,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
 
 	if (lc->pcaps & FW_PORT_CAP32_SPEED_50G) {
 		if (capa_arr) {
-			capa_arr[num].speed = ETH_SPEED_NUM_50G;
+			capa_arr[num].speed = RTE_ETH_SPEED_NUM_50G;
 			capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(BASER);
 		}
@@ -1497,7 +1497,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
 
 	if (lc->pcaps & FW_PORT_CAP32_SPEED_25G) {
 		if (capa_arr) {
-			capa_arr[num].speed = ETH_SPEED_NUM_25G;
+			capa_arr[num].speed = RTE_ETH_SPEED_NUM_25G;
 			capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(RS);
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 91d6bb9bbcb0..f1ac32270961 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -1670,7 +1670,7 @@ int cxgbe_link_start(struct port_info *pi)
 	 * that step explicitly.
 	 */
 	ret = t4_set_rxmode(adapter, adapter->mbox, pi->viid, mtu, -1, -1, -1,
-			    !!(conf_offloads & DEV_RX_OFFLOAD_VLAN_STRIP),
+			    !!(conf_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP),
 			    true);
 	if (ret == 0) {
 		ret = cxgbe_mpstcam_modify(pi, (int)pi->xact_addr_filt,
@@ -1694,7 +1694,7 @@ int cxgbe_link_start(struct port_info *pi)
 	}
 
 	if (ret == 0 && cxgbe_force_linkup(adapter))
-		pi->eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+		pi->eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return ret;
 }
 
@@ -1725,10 +1725,10 @@ int cxgbe_write_rss_conf(const struct port_info *pi, uint64_t rss_hf)
 	if (rss_hf & CXGBE_RSS_HF_IPV4_MASK)
 		flags |= F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN |
 			 F_FW_RSS_VI_CONFIG_CMD_UDPEN;
 
@@ -1865,7 +1865,7 @@ static void fw_caps_to_speed_caps(enum fw_port_type port_type,
 {
 #define SET_SPEED(__speed_name) \
 	do { \
-		*speed_caps |= ETH_LINK_ ## __speed_name; \
+		*speed_caps |= RTE_ETH_LINK_ ## __speed_name; \
 	} while (0)
 
 #define FW_CAPS_TO_SPEED(__fw_name) \
@@ -1952,7 +1952,7 @@ void cxgbe_get_speed_caps(struct port_info *pi, u32 *speed_caps)
 			      speed_caps);
 
 	if (!(pi->link_cfg.pcaps & FW_PORT_CAP32_ANEG))
-		*speed_caps |= ETH_LINK_SPEED_FIXED;
+		*speed_caps |= RTE_ETH_LINK_SPEED_FIXED;
 }
 
 /**
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index c79cdb8d8ad7..89ea7dd47c0b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -54,29 +54,29 @@
 
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
-		DEV_RX_OFFLOAD_SCATTER;
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 /* Rx offloads which cannot be disabled */
 static uint64_t dev_rx_offloads_nodis =
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 /* Supported Tx offloads */
 static uint64_t dev_tx_offloads_sup =
-		DEV_TX_OFFLOAD_MT_LOCKFREE |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 /* Tx offloads which cannot be disabled */
 static uint64_t dev_tx_offloads_nodis =
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 /* Keep track of whether QMAN and BMAN have been globally initialized */
 static int is_global_init;
@@ -238,7 +238,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 
 	fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		DPAA_PMD_DEBUG("enabling scatter mode");
 		fman_if_set_sg(dev->process_private, 1);
 		dev->data->scattered_rx = 1;
@@ -283,43 +283,43 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 
 	/* Configure link only if link is UP*/
 	if (link->link_status) {
-		if (eth_conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
+		if (eth_conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 			/* Start autoneg only if link is not in autoneg mode */
 			if (!link->link_autoneg)
 				dpaa_restart_link_autoneg(__fif->node_name);
-		} else if (eth_conf->link_speeds & ETH_LINK_SPEED_FIXED) {
-			switch (eth_conf->link_speeds & ~ETH_LINK_SPEED_FIXED) {
-			case ETH_LINK_SPEED_10M_HD:
-				speed = ETH_SPEED_NUM_10M;
-				duplex = ETH_LINK_HALF_DUPLEX;
+		} else if (eth_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+			switch (eth_conf->link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+			case RTE_ETH_LINK_SPEED_10M_HD:
+				speed = RTE_ETH_SPEED_NUM_10M;
+				duplex = RTE_ETH_LINK_HALF_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_10M:
-				speed = ETH_SPEED_NUM_10M;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_10M:
+				speed = RTE_ETH_SPEED_NUM_10M;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_100M_HD:
-				speed = ETH_SPEED_NUM_100M;
-				duplex = ETH_LINK_HALF_DUPLEX;
+			case RTE_ETH_LINK_SPEED_100M_HD:
+				speed = RTE_ETH_SPEED_NUM_100M;
+				duplex = RTE_ETH_LINK_HALF_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_100M:
-				speed = ETH_SPEED_NUM_100M;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_100M:
+				speed = RTE_ETH_SPEED_NUM_100M;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_1G:
-				speed = ETH_SPEED_NUM_1G;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_1G:
+				speed = RTE_ETH_SPEED_NUM_1G;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_2_5G:
-				speed = ETH_SPEED_NUM_2_5G;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_2_5G:
+				speed = RTE_ETH_SPEED_NUM_2_5G;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_10G:
-				speed = ETH_SPEED_NUM_10G;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_10G:
+				speed = RTE_ETH_SPEED_NUM_10G;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
 			default:
-				speed = ETH_SPEED_NUM_NONE;
-				duplex = ETH_LINK_FULL_DUPLEX;
+				speed = RTE_ETH_SPEED_NUM_NONE;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
 			}
 			/* Set link speed */
@@ -535,30 +535,30 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_mac_addrs = DPAA_MAX_MAC_FILTER;
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
-	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
 
 	if (fif->mac_type == fman_mac_1g) {
-		dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
-					| ETH_LINK_SPEED_10M
-					| ETH_LINK_SPEED_100M_HD
-					| ETH_LINK_SPEED_100M
-					| ETH_LINK_SPEED_1G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+					| RTE_ETH_LINK_SPEED_10M
+					| RTE_ETH_LINK_SPEED_100M_HD
+					| RTE_ETH_LINK_SPEED_100M
+					| RTE_ETH_LINK_SPEED_1G;
 	} else if (fif->mac_type == fman_mac_2_5g) {
-		dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
-					| ETH_LINK_SPEED_10M
-					| ETH_LINK_SPEED_100M_HD
-					| ETH_LINK_SPEED_100M
-					| ETH_LINK_SPEED_1G
-					| ETH_LINK_SPEED_2_5G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+					| RTE_ETH_LINK_SPEED_10M
+					| RTE_ETH_LINK_SPEED_100M_HD
+					| RTE_ETH_LINK_SPEED_100M
+					| RTE_ETH_LINK_SPEED_1G
+					| RTE_ETH_LINK_SPEED_2_5G;
 	} else if (fif->mac_type == fman_mac_10g) {
-		dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
-					| ETH_LINK_SPEED_10M
-					| ETH_LINK_SPEED_100M_HD
-					| ETH_LINK_SPEED_100M
-					| ETH_LINK_SPEED_1G
-					| ETH_LINK_SPEED_2_5G
-					| ETH_LINK_SPEED_10G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+					| RTE_ETH_LINK_SPEED_10M
+					| RTE_ETH_LINK_SPEED_100M_HD
+					| RTE_ETH_LINK_SPEED_100M
+					| RTE_ETH_LINK_SPEED_1G
+					| RTE_ETH_LINK_SPEED_2_5G
+					| RTE_ETH_LINK_SPEED_10G;
 	} else {
 		DPAA_PMD_ERR("invalid link_speed: %s, %d",
 			     dpaa_intf->name, fif->mac_type);
@@ -591,12 +591,12 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} rx_offload_map[] = {
-			{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
-			{DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
-			{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
-			{DEV_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
-			{DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+			{RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+			{RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+			{RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+			{RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+			{RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
 	};
 
 	/* Update Rx offload info */
@@ -623,14 +623,14 @@ dpaa_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} tx_offload_map[] = {
-			{DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
-			{DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
-			{DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
-			{DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
-			{DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
-			{DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
-			{DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+			{RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+			{RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+			{RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+			{RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+			{RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+			{RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
 	};
 
 	/* Update Tx offload info */
@@ -664,7 +664,7 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 			ret = dpaa_get_link_status(__fif->node_name, link);
 			if (ret)
 				return ret;
-			if (link->link_status == ETH_LINK_DOWN &&
+			if (link->link_status == RTE_ETH_LINK_DOWN &&
 			    wait_to_complete)
 				rte_delay_ms(CHECK_INTERVAL);
 			else
@@ -675,15 +675,15 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	}
 
 	if (ioctl_version < 2) {
-		link->link_duplex = ETH_LINK_FULL_DUPLEX;
-		link->link_autoneg = ETH_LINK_AUTONEG;
+		link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+		link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 		if (fif->mac_type == fman_mac_1g)
-			link->link_speed = ETH_SPEED_NUM_1G;
+			link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		else if (fif->mac_type == fman_mac_2_5g)
-			link->link_speed = ETH_SPEED_NUM_2_5G;
+			link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		else if (fif->mac_type == fman_mac_10g)
-			link->link_speed = ETH_SPEED_NUM_10G;
+			link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		else
 			DPAA_PMD_ERR("invalid link_speed: %s, %d",
 				     dpaa_intf->name, fif->mac_type);
@@ -962,7 +962,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (max_rx_pktlen <= buffsz) {
 		;
 	} else if (dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_SCATTER) {
+			RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
 			DPAA_PMD_ERR("Maximum Rx packet size %d too big to fit "
 				"MaxSGlist %d",
@@ -1268,7 +1268,7 @@ static int dpaa_link_down(struct rte_eth_dev *dev)
 	__fif = container_of(fif, struct __fman_if, __if);
 
 	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
-		dpaa_update_link_status(__fif->node_name, ETH_LINK_DOWN);
+		dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_DOWN);
 	else
 		return dpaa_eth_dev_stop(dev);
 	return 0;
@@ -1284,7 +1284,7 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
 	__fif = container_of(fif, struct __fman_if, __if);
 
 	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
-		dpaa_update_link_status(__fif->node_name, ETH_LINK_UP);
+		dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_UP);
 	else
 		dpaa_eth_dev_start(dev);
 	return 0;
@@ -1314,10 +1314,10 @@ dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	if (fc_conf->mode == RTE_FC_NONE) {
+	if (fc_conf->mode == RTE_ETH_FC_NONE) {
 		return 0;
-	} else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
-		 fc_conf->mode == RTE_FC_FULL) {
+	} else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE ||
+		 fc_conf->mode == RTE_ETH_FC_FULL) {
 		fman_if_set_fc_threshold(dev->process_private,
 					 fc_conf->high_water,
 					 fc_conf->low_water,
@@ -1361,11 +1361,11 @@ dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
 	}
 	ret = fman_if_get_fc_threshold(dev->process_private);
 	if (ret) {
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		fc_conf->pause_time =
 			fman_if_get_fc_quanta(dev->process_private);
 	} else {
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return 0;
@@ -1626,10 +1626,10 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf,
 	fc_conf = dpaa_intf->fc_conf;
 	ret = fman_if_get_fc_threshold(fman_intf);
 	if (ret) {
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		fc_conf->pause_time = fman_if_get_fc_quanta(fman_intf);
 	} else {
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return 0;
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b5728e09c29f..c868e9d5bd9b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -74,11 +74,11 @@
 #define DPAA_DEBUG_FQ_TX_ERROR   1
 
 #define DPAA_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_L2_PAYLOAD | \
-	ETH_RSS_IP | \
-	ETH_RSS_UDP | \
-	ETH_RSS_TCP | \
-	ETH_RSS_SCTP)
+	RTE_ETH_RSS_L2_PAYLOAD | \
+	RTE_ETH_RSS_IP | \
+	RTE_ETH_RSS_UDP | \
+	RTE_ETH_RSS_TCP | \
+	RTE_ETH_RSS_SCTP)
 
 #define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
 		PKT_TX_IP_CKSUM |                \
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index c5b5ec869519..1ccd03602790 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -394,7 +394,7 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 		if (req_dist_set % 2 != 0) {
 			dist_field = 1U << loop;
 			switch (dist_field) {
-			case ETH_RSS_L2_PAYLOAD:
+			case RTE_ETH_RSS_L2_PAYLOAD:
 
 				if (l2_configured)
 					break;
@@ -404,9 +404,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_ETH;
 				break;
 
-			case ETH_RSS_IPV4:
-			case ETH_RSS_FRAG_IPV4:
-			case ETH_RSS_NONFRAG_IPV4_OTHER:
+			case RTE_ETH_RSS_IPV4:
+			case RTE_ETH_RSS_FRAG_IPV4:
+			case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
 
 				if (ipv4_configured)
 					break;
@@ -415,10 +415,10 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_IPV4;
 				break;
 
-			case ETH_RSS_IPV6:
-			case ETH_RSS_FRAG_IPV6:
-			case ETH_RSS_NONFRAG_IPV6_OTHER:
-			case ETH_RSS_IPV6_EX:
+			case RTE_ETH_RSS_IPV6:
+			case RTE_ETH_RSS_FRAG_IPV6:
+			case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+			case RTE_ETH_RSS_IPV6_EX:
 
 				if (ipv6_configured)
 					break;
@@ -427,9 +427,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_IPV6;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_TCP:
-			case ETH_RSS_NONFRAG_IPV6_TCP:
-			case ETH_RSS_IPV6_TCP_EX:
+			case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+			case RTE_ETH_RSS_IPV6_TCP_EX:
 
 				if (tcp_configured)
 					break;
@@ -438,9 +438,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_TCP;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_UDP:
-			case ETH_RSS_NONFRAG_IPV6_UDP:
-			case ETH_RSS_IPV6_UDP_EX:
+			case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+			case RTE_ETH_RSS_IPV6_UDP_EX:
 
 				if (udp_configured)
 					break;
@@ -449,8 +449,8 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_UDP;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_SCTP:
-			case ETH_RSS_NONFRAG_IPV6_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
 
 				if (sctp_configured)
 					break;
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 08f49af7685d..3170694841df 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -220,9 +220,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
 		if (req_dist_set % 2 != 0) {
 			dist_field = 1ULL << loop;
 			switch (dist_field) {
-			case ETH_RSS_L2_PAYLOAD:
-			case ETH_RSS_ETH:
-
+			case RTE_ETH_RSS_L2_PAYLOAD:
+			case RTE_ETH_RSS_ETH:
 				if (l2_configured)
 					break;
 				l2_configured = 1;
@@ -238,7 +237,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_PPPOE:
+			case RTE_ETH_RSS_PPPOE:
 				if (pppoe_configured)
 					break;
 				kg_cfg->extracts[i].extract.from_hdr.prot =
@@ -252,7 +251,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_ESP:
+			case RTE_ETH_RSS_ESP:
 				if (esp_configured)
 					break;
 				esp_configured = 1;
@@ -268,7 +267,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_AH:
+			case RTE_ETH_RSS_AH:
 				if (ah_configured)
 					break;
 				ah_configured = 1;
@@ -284,8 +283,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_C_VLAN:
-			case ETH_RSS_S_VLAN:
+			case RTE_ETH_RSS_C_VLAN:
+			case RTE_ETH_RSS_S_VLAN:
 				if (vlan_configured)
 					break;
 				vlan_configured = 1;
@@ -301,7 +300,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_MPLS:
+			case RTE_ETH_RSS_MPLS:
 
 				if (mpls_configured)
 					break;
@@ -338,13 +337,13 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_IPV4:
-			case ETH_RSS_FRAG_IPV4:
-			case ETH_RSS_NONFRAG_IPV4_OTHER:
-			case ETH_RSS_IPV6:
-			case ETH_RSS_FRAG_IPV6:
-			case ETH_RSS_NONFRAG_IPV6_OTHER:
-			case ETH_RSS_IPV6_EX:
+			case RTE_ETH_RSS_IPV4:
+			case RTE_ETH_RSS_FRAG_IPV4:
+			case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
+			case RTE_ETH_RSS_IPV6:
+			case RTE_ETH_RSS_FRAG_IPV6:
+			case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+			case RTE_ETH_RSS_IPV6_EX:
 
 				if (l3_configured)
 					break;
@@ -382,12 +381,12 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 			break;
 
-			case ETH_RSS_NONFRAG_IPV4_TCP:
-			case ETH_RSS_NONFRAG_IPV6_TCP:
-			case ETH_RSS_NONFRAG_IPV4_UDP:
-			case ETH_RSS_NONFRAG_IPV6_UDP:
-			case ETH_RSS_IPV6_TCP_EX:
-			case ETH_RSS_IPV6_UDP_EX:
+			case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+			case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+			case RTE_ETH_RSS_IPV6_TCP_EX:
+			case RTE_ETH_RSS_IPV6_UDP_EX:
 
 				if (l4_configured)
 					break;
@@ -414,8 +413,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_SCTP:
-			case ETH_RSS_NONFRAG_IPV6_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
 
 				if (sctp_configured)
 					break;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index a0270e78520e..59e728577f53 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -38,33 +38,33 @@
 
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
-		DEV_RX_OFFLOAD_CHECKSUM |
-		DEV_RX_OFFLOAD_SCTP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_CHECKSUM |
+		RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 /* Rx offloads which cannot be disabled */
 static uint64_t dev_rx_offloads_nodis =
-		DEV_RX_OFFLOAD_RSS_HASH |
-		DEV_RX_OFFLOAD_SCATTER;
+		RTE_ETH_RX_OFFLOAD_RSS_HASH |
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 /* Supported Tx offloads */
 static uint64_t dev_tx_offloads_sup =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_MT_LOCKFREE |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 /* Tx offloads which cannot be disabled */
 static uint64_t dev_tx_offloads_nodis =
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 /* enable timestamp in mbuf */
 bool dpaa2_enable_ts[RTE_MAX_ETHPORTS];
@@ -142,7 +142,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* VLAN Filter not available */
 		if (!priv->max_vlan_filters) {
 			DPAA2_PMD_INFO("VLAN filter not available");
@@ -150,7 +150,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		}
 
 		if (dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_VLAN_FILTER)
+			RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ret = dpni_enable_vlan_filter(dpni, CMD_PRI_LOW,
 						      priv->token, true);
 		else
@@ -251,13 +251,13 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 					dev_rx_offloads_nodis;
 	dev_info->tx_offload_capa = dev_tx_offloads_sup |
 					dev_tx_offloads_nodis;
-	dev_info->speed_capa = ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_2_5G |
-			ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_2_5G |
+			RTE_ETH_LINK_SPEED_10G;
 
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
-	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	dev_info->flow_type_rss_offloads = DPAA2_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxportconf.burst_size = dpaa2_dqrr_size;
@@ -270,10 +270,10 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->default_rxportconf.ring_size = DPAA2_RX_DEFAULT_NBDESC;
 
 	if (dpaa2_svr_family == SVR_LX2160A) {
-		dev_info->speed_capa |= ETH_LINK_SPEED_25G |
-				ETH_LINK_SPEED_40G |
-				ETH_LINK_SPEED_50G |
-				ETH_LINK_SPEED_100G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G |
+				RTE_ETH_LINK_SPEED_40G |
+				RTE_ETH_LINK_SPEED_50G |
+				RTE_ETH_LINK_SPEED_100G;
 	}
 
 	return 0;
@@ -291,15 +291,15 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} rx_offload_map[] = {
-			{DEV_RX_OFFLOAD_CHECKSUM, " Checksum,"},
-			{DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
-			{DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
-			{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
-			{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
-			{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
-			{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
-			{DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
+			{RTE_ETH_RX_OFFLOAD_CHECKSUM, " Checksum,"},
+			{RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+			{RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
+			{RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
+			{RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
+			{RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+			{RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"},
+			{RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"}
 	};
 
 	/* Update Rx offload info */
@@ -326,15 +326,15 @@ dpaa2_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} tx_offload_map[] = {
-			{DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
-			{DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
-			{DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
-			{DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
-			{DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
-			{DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
-			{DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
-			{DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+			{RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+			{RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+			{RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+			{RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+			{RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+			{RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+			{RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
 	};
 
 	/* Update Tx offload info */
@@ -573,7 +573,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		return -1;
 	}
 
-	if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (eth_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) {
 			ret = dpaa2_setup_flow_dist(dev,
 					eth_conf->rx_adv_conf.rss_conf.rss_hf,
@@ -587,12 +587,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		rx_l3_csum_offload = true;
 
-	if ((rx_offloads & DEV_RX_OFFLOAD_UDP_CKSUM) ||
-		(rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) ||
-		(rx_offloads & DEV_RX_OFFLOAD_SCTP_CKSUM))
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) ||
+		(rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) ||
+		(rx_offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM))
 		rx_l4_csum_offload = true;
 
 	ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -610,7 +610,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 	}
 
 #if !defined(RTE_LIBRTE_IEEE1588)
-	if (rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 #endif
 	{
 		ret = rte_mbuf_dyn_rx_timestamp_register(
@@ -623,12 +623,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		dpaa2_enable_ts[dev->data->port_id] = true;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		tx_l3_csum_offload = true;
 
-	if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ||
-		(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
-		(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ||
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM))
 		tx_l4_csum_offload = true;
 
 	ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -660,8 +660,8 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
-		dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+		dpaa2_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
 
 	dpaa2_tm_init(dev);
 
@@ -1856,7 +1856,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 			DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
 			return -1;
 		}
-		if (state.up == ETH_LINK_DOWN &&
+		if (state.up == RTE_ETH_LINK_DOWN &&
 		    wait_to_complete)
 			rte_delay_ms(CHECK_INTERVAL);
 		else
@@ -1868,9 +1868,9 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 	link.link_speed = state.rate;
 
 	if (state.options & DPNI_LINK_OPT_HALF_DUPLEX)
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	else
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	ret = rte_eth_linkstatus_set(dev, &link);
 	if (ret == -1)
@@ -2031,9 +2031,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *	No TX side flow control (send Pause frame disabled)
 		 */
 		if (!(state.options & DPNI_LINK_OPT_ASYM_PAUSE))
-			fc_conf->mode = RTE_FC_FULL;
+			fc_conf->mode = RTE_ETH_FC_FULL;
 		else
-			fc_conf->mode = RTE_FC_RX_PAUSE;
+			fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	} else {
 		/* DPNI_LINK_OPT_PAUSE not set
 		 *  if ASYM_PAUSE set,
@@ -2043,9 +2043,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *	Flow control disabled
 		 */
 		if (state.options & DPNI_LINK_OPT_ASYM_PAUSE)
-			fc_conf->mode = RTE_FC_TX_PAUSE;
+			fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		else
-			fc_conf->mode = RTE_FC_NONE;
+			fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return ret;
@@ -2089,14 +2089,14 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	/* update cfg with fc_conf */
 	switch (fc_conf->mode) {
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		/* Full flow control;
 		 * OPT_PAUSE set, ASYM_PAUSE not set
 		 */
 		cfg.options |= DPNI_LINK_OPT_PAUSE;
 		cfg.options &= ~DPNI_LINK_OPT_ASYM_PAUSE;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		/* Enable RX flow control
 		 * OPT_PAUSE not set;
 		 * ASYM_PAUSE set;
@@ -2104,7 +2104,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
 		cfg.options &= ~DPNI_LINK_OPT_PAUSE;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		/* Enable TX Flow control
 		 * OPT_PAUSE set
 		 * ASYM_PAUSE set
@@ -2112,7 +2112,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		cfg.options |= DPNI_LINK_OPT_PAUSE;
 		cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
 		break;
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		/* Disable Flow control
 		 * OPT_PAUSE not set
 		 * ASYM_PAUSE not set
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index fdc62ec30d22..c5e9267bf04d 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -65,17 +65,17 @@
 #define DPAA2_TX_CONF_ENABLE	0x08
 
 #define DPAA2_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_L2_PAYLOAD | \
-	ETH_RSS_IP | \
-	ETH_RSS_UDP | \
-	ETH_RSS_TCP | \
-	ETH_RSS_SCTP | \
-	ETH_RSS_MPLS | \
-	ETH_RSS_C_VLAN | \
-	ETH_RSS_S_VLAN | \
-	ETH_RSS_ESP | \
-	ETH_RSS_AH | \
-	ETH_RSS_PPPOE)
+	RTE_ETH_RSS_L2_PAYLOAD | \
+	RTE_ETH_RSS_IP | \
+	RTE_ETH_RSS_UDP | \
+	RTE_ETH_RSS_TCP | \
+	RTE_ETH_RSS_SCTP | \
+	RTE_ETH_RSS_MPLS | \
+	RTE_ETH_RSS_C_VLAN | \
+	RTE_ETH_RSS_S_VLAN | \
+	RTE_ETH_RSS_ESP | \
+	RTE_ETH_RSS_AH | \
+	RTE_ETH_RSS_PPPOE)
 
 /* LX2 FRC Parsed values (Little Endian) */
 #define DPAA2_PKT_TYPE_ETHER		0x0060
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f40369e2c3f9..7c77243b5d1a 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -773,7 +773,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 #endif
 
 		if (eth_data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_VLAN_STRIP)
+				RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			rte_vlan_strip(bufs[num_rx]);
 
 		dq_storage++;
@@ -987,7 +987,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 							eth_data->port_id);
 
 		if (eth_data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_VLAN_STRIP) {
+				RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 			rte_vlan_strip(bufs[num_rx]);
 		}
 
@@ -1230,7 +1230,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 					if (unlikely(((*bufs)->ol_flags
 						& PKT_TX_VLAN_PKT) ||
 						(eth_data->dev_conf.txmode.offloads
-						& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+						& RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
 						ret = rte_vlan_insert(bufs);
 						if (ret)
 							goto send_n_return;
@@ -1273,7 +1273,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 			if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN_PKT) ||
 				(eth_data->dev_conf.txmode.offloads
-				& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+				& RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
 				int ret = rte_vlan_insert(bufs);
 				if (ret)
 					goto send_n_return;
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 93bee734ae5d..031c92a66fa0 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -81,15 +81,15 @@
 #define E1000_FTQF_QUEUE_ENABLE          0x00000100
 
 #define IGB_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 /*
  * The overhead from MTU to max frame size.
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 73152dec6ed1..9da477e59def 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -597,8 +597,8 @@ eth_em_start(struct rte_eth_dev *dev)
 
 	e1000_clear_hw_cntrs_base_generic(hw);
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
-			ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+			RTE_ETH_VLAN_EXTEND_MASK;
 	ret = eth_em_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Unable to update vlan offload");
@@ -611,39 +611,39 @@ eth_em_start(struct rte_eth_dev *dev)
 
 	/* Setup link speed and duplex */
 	speeds = &dev->data->dev_conf.link_speeds;
-	if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
 		hw->mac.autoneg = 1;
 	} else {
 		num_speeds = 0;
-		autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+		autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 		/* Reset */
 		hw->phy.autoneg_advertised = 0;
 
-		if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+		if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
 			num_speeds = -1;
 			goto error_invalid_config;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_1G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_1G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
 			num_speeds++;
 		}
@@ -1102,9 +1102,9 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.nb_mtu_seg_max = EM_TX_MAX_MTU_SEG,
 	};
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G;
 
 	/* Preferred queue parameters */
 	dev_info->default_rxportconf.nb_queues = 1;
@@ -1162,17 +1162,17 @@ eth_em_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		uint16_t duplex, speed;
 		hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
 		link.link_duplex = (duplex == FULL_DUPLEX) ?
-				ETH_LINK_FULL_DUPLEX :
-				ETH_LINK_HALF_DUPLEX;
+				RTE_ETH_LINK_FULL_DUPLEX :
+				RTE_ETH_LINK_HALF_DUPLEX;
 		link.link_speed = speed;
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 		link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 	} else {
-		link.link_speed = ETH_SPEED_NUM_NONE;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_status = ETH_LINK_DOWN;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -1424,15 +1424,15 @@ eth_em_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if(mask & ETH_VLAN_STRIP_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			em_vlan_hw_strip_enable(dev);
 		else
 			em_vlan_hw_strip_disable(dev);
 	}
 
-	if(mask & ETH_VLAN_FILTER_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			em_vlan_hw_filter_enable(dev);
 		else
 			em_vlan_hw_filter_disable(dev);
@@ -1601,7 +1601,7 @@ eth_em_interrupt_action(struct rte_eth_dev *dev,
 	if (link.link_status) {
 		PMD_INIT_LOG(INFO, " Port %d: Link Up - speed %u Mbps - %s",
 			     dev->data->port_id, link.link_speed,
-			     link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			     link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 			     "full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down", dev->data->port_id);
@@ -1683,13 +1683,13 @@ eth_em_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		rx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 344149c19147..648b04154c5b 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -93,7 +93,7 @@ struct em_rx_queue {
 	struct em_rx_entry *sw_ring;   /**< address of RX software ring. */
 	struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
 	struct rte_mbuf *pkt_last_seg;  /**< Last segment of current packet. */
-	uint64_t	    offloads;   /**< Offloads of DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads;   /**< Offloads of RTE_ETH_RX_OFFLOAD_* */
 	uint16_t            nb_rx_desc; /**< number of RX descriptors. */
 	uint16_t            rx_tail;    /**< current value of RDT register. */
 	uint16_t            nb_rx_hold; /**< number of held free RX desc. */
@@ -173,7 +173,7 @@ struct em_tx_queue {
 	uint8_t                wthresh;  /**< Write-back threshold register. */
 	struct em_ctx_info ctx_cache;
 	/**< Hardware context history.*/
-	uint64_t	       offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+	uint64_t	       offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
@@ -1171,11 +1171,11 @@ em_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 
 	RTE_SET_USED(dev);
 	tx_offload_capa =
-		DEV_TX_OFFLOAD_MULTI_SEGS  |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	return tx_offload_capa;
 }
@@ -1369,13 +1369,13 @@ em_get_rx_port_offloads_capa(void)
 	uint64_t rx_offload_capa;
 
 	rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP  |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_IPV4_CKSUM  |
-		DEV_RX_OFFLOAD_UDP_CKSUM   |
-		DEV_RX_OFFLOAD_TCP_CKSUM   |
-		DEV_RX_OFFLOAD_KEEP_CRC    |
-		DEV_RX_OFFLOAD_SCATTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	return rx_offload_capa;
 }
@@ -1469,7 +1469,7 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
 	rxq->queue_id = queue_idx;
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1788,7 +1788,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 *  call to configure
 		 */
-		if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -1831,7 +1831,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (!dev->data->scattered_rx)
 			PMD_INIT_LOG(DEBUG, "forcing scatter mode");
 		dev->rx_pkt_burst = eth_em_recv_scattered_pkts;
@@ -1844,7 +1844,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 	 */
 	rxcsum = E1000_READ_REG(hw, E1000_RXCSUM);
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= E1000_RXCSUM_IPOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_IPOFL;
@@ -1870,7 +1870,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 	}
 
 	/* Setup the Receive Control Register. */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
 	else
 		rctl |= E1000_RCTL_SECRC; /* Strip Ethernet CRC. */
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index dbe811a1ad2f..ae3bc4a9c201 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -1073,21 +1073,21 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
 	uint16_t nb_rx_q = dev->data->nb_rx_queues;
 	uint16_t nb_tx_q = dev->data->nb_tx_queues;
 
-	if ((rx_mq_mode & ETH_MQ_RX_DCB_FLAG) ||
-	    tx_mq_mode == ETH_MQ_TX_DCB ||
-	    tx_mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	if ((rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) ||
+	    tx_mq_mode == RTE_ETH_MQ_TX_DCB ||
+	    tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		PMD_INIT_LOG(ERR, "DCB mode is not supported.");
 		return -EINVAL;
 	}
 	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
 		/* Check multi-queue mode.
-		 * To no break software we accept ETH_MQ_RX_NONE as this might
+		 * To not break software we accept RTE_ETH_MQ_RX_NONE as this might
 		 * be used to turn off VLAN filter.
 		 */
 
-		if (rx_mq_mode == ETH_MQ_RX_NONE ||
-		    rx_mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+		if (rx_mq_mode == RTE_ETH_MQ_RX_NONE ||
+		    rx_mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
 			RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
 		} else {
 			/* Only support one queue on VFs.
@@ -1099,12 +1099,12 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
 			return -EINVAL;
 		}
 		/* TX mode is not used here, so mode might be ignored.*/
-		if (tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+		if (tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
 			/* SRIOV only works in VMDq enable mode */
 			PMD_INIT_LOG(WARNING, "SRIOV is active,"
 					" TX mode %d is not supported. "
 					" Driver will behave as %d mode.",
-					tx_mq_mode, ETH_MQ_TX_VMDQ_ONLY);
+					tx_mq_mode, RTE_ETH_MQ_TX_VMDQ_ONLY);
 		}
 
 		/* check valid queue number */
@@ -1117,17 +1117,17 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
 		/* To not break software that sets invalid mode, only display
 		 * warning if invalid mode is used.
 		 */
-		if (rx_mq_mode != ETH_MQ_RX_NONE &&
-		    rx_mq_mode != ETH_MQ_RX_VMDQ_ONLY &&
-		    rx_mq_mode != ETH_MQ_RX_RSS) {
+		if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+		    rx_mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY &&
+		    rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
 			/* RSS together with VMDq not supported*/
 			PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
 				     rx_mq_mode);
 			return -EINVAL;
 		}
 
-		if (tx_mq_mode != ETH_MQ_TX_NONE &&
-		    tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+		if (tx_mq_mode != RTE_ETH_MQ_TX_NONE &&
+		    tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
 			PMD_INIT_LOG(WARNING, "TX mode %d is not supported."
 					" Due to txmode is meaningless in this"
 					" driver, just ignore.",
@@ -1146,8 +1146,8 @@ eth_igb_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multiple queue mode checking */
 	ret  = igb_check_mq_mode(dev);
@@ -1287,8 +1287,8 @@ eth_igb_start(struct rte_eth_dev *dev)
 	/*
 	 * VLAN Offload Settings
 	 */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
-			ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+			RTE_ETH_VLAN_EXTEND_MASK;
 	ret = eth_igb_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Unable to set vlan offload");
@@ -1296,7 +1296,7 @@ eth_igb_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
 		/* Enable VLAN filter since VMDq always uses VLAN filter */
 		igb_vmdq_vlan_hw_filter_enable(dev);
 	}
@@ -1310,39 +1310,39 @@ eth_igb_start(struct rte_eth_dev *dev)
 
 	/* Setup link speed and duplex */
 	speeds = &dev->data->dev_conf.link_speeds;
-	if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
 		hw->mac.autoneg = 1;
 	} else {
 		num_speeds = 0;
-		autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+		autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 		/* Reset */
 		hw->phy.autoneg_advertised = 0;
 
-		if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+		if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
 			num_speeds = -1;
 			goto error_invalid_config;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_1G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_1G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
 			num_speeds++;
 		}
@@ -2185,21 +2185,21 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	case e1000_82576:
 		dev_info->max_rx_queues = 16;
 		dev_info->max_tx_queues = 16;
-		dev_info->max_vmdq_pools = ETH_8_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
 		dev_info->vmdq_queue_num = 16;
 		break;
 
 	case e1000_82580:
 		dev_info->max_rx_queues = 8;
 		dev_info->max_tx_queues = 8;
-		dev_info->max_vmdq_pools = ETH_8_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
 		dev_info->vmdq_queue_num = 8;
 		break;
 
 	case e1000_i350:
 		dev_info->max_rx_queues = 8;
 		dev_info->max_tx_queues = 8;
-		dev_info->max_vmdq_pools = ETH_8_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
 		dev_info->vmdq_queue_num = 8;
 		break;
 
@@ -2225,7 +2225,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		return -EINVAL;
 	}
 	dev_info->hash_key_size = IGB_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = IGB_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -2251,9 +2251,9 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->rx_desc_lim = rx_desc_lim;
 	dev_info->tx_desc_lim = tx_desc_lim;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G;
 
 	dev_info->max_mtu = dev_info->max_rx_pktlen - E1000_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -2296,12 +2296,12 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_rx_bufsize = 256; /* See BSIZE field of RCTL register. */
 	dev_info->max_rx_pktlen  = 0x3FFF; /* See RLPML register. */
 	dev_info->max_mac_addrs = hw->mac.rar_entry_count;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-				DEV_TX_OFFLOAD_IPV4_CKSUM  |
-				DEV_TX_OFFLOAD_UDP_CKSUM   |
-				DEV_TX_OFFLOAD_TCP_CKSUM   |
-				DEV_TX_OFFLOAD_SCTP_CKSUM  |
-				DEV_TX_OFFLOAD_TCP_TSO;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	switch (hw->mac.type) {
 	case e1000_vfadapt:
 		dev_info->max_rx_queues = 2;
@@ -2402,17 +2402,17 @@ eth_igb_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		uint16_t duplex, speed;
 		hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
 		link.link_duplex = (duplex == FULL_DUPLEX) ?
-				ETH_LINK_FULL_DUPLEX :
-				ETH_LINK_HALF_DUPLEX;
+				RTE_ETH_LINK_FULL_DUPLEX :
+				RTE_ETH_LINK_HALF_DUPLEX;
 		link.link_speed = speed;
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 		link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 	} else if (!link_check) {
 		link.link_speed = 0;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_status = ETH_LINK_DOWN;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -2588,7 +2588,7 @@ eth_igb_vlan_tpid_set(struct rte_eth_dev *dev,
 	qinq &= E1000_CTRL_EXT_EXT_VLAN;
 
 	/* only outer TPID of double VLAN can be configured*/
-	if (qinq && vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (qinq && vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		reg = E1000_READ_REG(hw, E1000_VET);
 		reg = (reg & (~E1000_VET_VET_EXT)) |
 			((uint32_t)tpid << E1000_VET_VET_EXT_SHIFT);
@@ -2703,22 +2703,22 @@ eth_igb_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if(mask & ETH_VLAN_STRIP_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			igb_vlan_hw_strip_enable(dev);
 		else
 			igb_vlan_hw_strip_disable(dev);
 	}
 
-	if(mask & ETH_VLAN_FILTER_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			igb_vlan_hw_filter_enable(dev);
 		else
 			igb_vlan_hw_filter_disable(dev);
 	}
 
-	if(mask & ETH_VLAN_EXTEND_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			igb_vlan_hw_extend_enable(dev);
 		else
 			igb_vlan_hw_extend_disable(dev);
@@ -2870,7 +2870,7 @@ eth_igb_interrupt_action(struct rte_eth_dev *dev,
 				     " Port %d: Link Up - speed %u Mbps - %s",
 				     dev->data->port_id,
 				     (unsigned)link.link_speed,
-				     link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+				     link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 				     "full-duplex" : "half-duplex");
 		} else {
 			PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3024,13 +3024,13 @@ eth_igb_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		rx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -3099,18 +3099,18 @@ eth_igb_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 * on configuration
 		 */
 		switch (fc_conf->mode) {
-		case RTE_FC_NONE:
+		case RTE_ETH_FC_NONE:
 			ctrl &= ~E1000_CTRL_RFCE & ~E1000_CTRL_TFCE;
 			break;
-		case RTE_FC_RX_PAUSE:
+		case RTE_ETH_FC_RX_PAUSE:
 			ctrl |= E1000_CTRL_RFCE;
 			ctrl &= ~E1000_CTRL_TFCE;
 			break;
-		case RTE_FC_TX_PAUSE:
+		case RTE_ETH_FC_TX_PAUSE:
 			ctrl |= E1000_CTRL_TFCE;
 			ctrl &= ~E1000_CTRL_RFCE;
 			break;
-		case RTE_FC_FULL:
+		case RTE_ETH_FC_FULL:
 			ctrl |= E1000_CTRL_RFCE | E1000_CTRL_TFCE;
 			break;
 		default:
@@ -3258,22 +3258,22 @@ igbvf_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
 		     dev->data->port_id);
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/*
 	 * VF has no ability to enable/disable HW CRC
 	 * Keep the persistent behavior the same as Host PF
 	 */
 #ifndef RTE_LIBRTE_E1000_PF_DISABLE_STRIP_CRC
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
-		conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #else
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
 		PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #endif
 
@@ -3571,16 +3571,16 @@ eth_igb_rss_reta_update(struct rte_eth_dev *dev,
 	uint16_t idx, shift;
 	struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += IGB_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IGB_4_BIT_MASK);
 		if (!mask)
@@ -3612,16 +3612,16 @@ eth_igb_rss_reta_query(struct rte_eth_dev *dev,
 	uint16_t idx, shift;
 	struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += IGB_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IGB_4_BIT_MASK);
 		if (!mask)
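
The idx/shift arithmetic in the two RETA hunks above is the driver-side
decoding of the redirection table layout that an application fills in. For
reference, a minimal application-side sketch under the new names; the helper
name and the two-group (128-entry) table size are illustrative assumptions,
not part of this patch:

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

/* Spread a 128-entry redirection table across nb_rxq queues; each
 * rte_eth_rss_reta_entry64 covers RTE_ETH_RETA_GROUP_SIZE (64) entries,
 * hence the same idx/shift split that the drivers decode above. */
static int
spread_reta128(uint16_t port_id, uint16_t nb_rxq)
{
	struct rte_eth_rss_reta_entry64 reta_conf[2];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < 2 * RTE_ETH_RETA_GROUP_SIZE; i++) {
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

		reta_conf[idx].mask |= UINT64_C(1) << shift;
		reta_conf[idx].reta[shift] = i % nb_rxq;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta_conf,
					   2 * RTE_ETH_RETA_GROUP_SIZE);
}
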
diff --git a/drivers/net/e1000/igb_pf.c b/drivers/net/e1000/igb_pf.c
index 2ce74dd5a9a5..fe355ef6b3b5 100644
--- a/drivers/net/e1000/igb_pf.c
+++ b/drivers/net/e1000/igb_pf.c
@@ -88,7 +88,7 @@ void igb_pf_host_init(struct rte_eth_dev *eth_dev)
 	if (*vfinfo == NULL)
 		rte_panic("Cannot allocate memory for private VF data\n");
 
-	RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_8_POOLS;
+	RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_8_POOLS;
 	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
 	RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
 	RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index a1d5eecc14a1..bcce2fc726d8 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -111,7 +111,7 @@ struct igb_rx_queue {
 	uint8_t             crc_len;    /**< 0 if CRC stripped, 4 otherwise. */
 	uint8_t             drop_en;  /**< If not 0, set SRRCTL.Drop_En. */
 	uint32_t            flags;      /**< RX flags. */
-	uint64_t	    offloads;   /**< offloads of DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads;   /**< offloads of RTE_ETH_RX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
@@ -186,7 +186,7 @@ struct igb_tx_queue {
 	/**< Start context position for transmit queue. */
 	struct igb_advctx_info ctx_cache[IGB_CTX_NUM];
 	/**< Hardware context history.*/
-	uint64_t	       offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+	uint64_t	       offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
@@ -1459,13 +1459,13 @@ igb_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 	uint64_t tx_offload_capa;
 
 	RTE_SET_USED(dev);
-	tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-			  DEV_TX_OFFLOAD_IPV4_CKSUM  |
-			  DEV_TX_OFFLOAD_UDP_CKSUM   |
-			  DEV_TX_OFFLOAD_TCP_CKSUM   |
-			  DEV_TX_OFFLOAD_SCTP_CKSUM  |
-			  DEV_TX_OFFLOAD_TCP_TSO     |
-			  DEV_TX_OFFLOAD_MULTI_SEGS;
+	tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+			  RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+			  RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+			  RTE_ETH_TX_OFFLOAD_TCP_TSO     |
+			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return tx_offload_capa;
 }
@@ -1640,19 +1640,19 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
 
 	hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP  |
-			  DEV_RX_OFFLOAD_VLAN_FILTER |
-			  DEV_RX_OFFLOAD_IPV4_CKSUM  |
-			  DEV_RX_OFFLOAD_UDP_CKSUM   |
-			  DEV_RX_OFFLOAD_TCP_CKSUM   |
-			  DEV_RX_OFFLOAD_KEEP_CRC    |
-			  DEV_RX_OFFLOAD_SCATTER     |
-			  DEV_RX_OFFLOAD_RSS_HASH;
+	rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+			  RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+			  RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+			  RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+			  RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+			  RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+			  RTE_ETH_RX_OFFLOAD_SCATTER     |
+			  RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (hw->mac.type == e1000_i350 ||
 	    hw->mac.type == e1000_i210 ||
 	    hw->mac.type == e1000_i211)
-		rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+		rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
 	return rx_offload_capa;
 }
@@ -1733,7 +1733,7 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
 		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1950,23 +1950,23 @@ igb_hw_rss_hash_set(struct e1000_hw *hw, struct rte_eth_rss_conf *rss_conf)
 	/* Set configured hashing protocols in MRQC register */
 	rss_hf = rss_conf->rss_hf;
 	mrqc = E1000_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV4_TCP;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6;
-	if (rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_EX)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP;
-	if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV4_UDP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP;
-	if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP_EX;
 	E1000_WRITE_REG(hw, E1000_MRQC, mrqc);
 }
@@ -2032,23 +2032,23 @@ int eth_igb_rss_hash_conf_get(struct rte_eth_dev *dev,
 	}
 	rss_hf = 0;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_EX)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP_EX)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP_EX)
-		rss_hf |= ETH_RSS_IPV6_UDP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
 	rss_conf->rss_hf = rss_hf;
 	return 0;
 }
@@ -2170,15 +2170,15 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 			E1000_VMOLR_ROPE | E1000_VMOLR_BAM |
 			E1000_VMOLR_MPME);
 
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_UNTAG)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_UNTAG)
 			vmolr |= E1000_VMOLR_AUPE;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_MC)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
 			vmolr |= E1000_VMOLR_ROMPE;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_UC)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 			vmolr |= E1000_VMOLR_ROPE;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_BROADCAST)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 			vmolr |= E1000_VMOLR_BAM;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_MULTICAST)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 			vmolr |= E1000_VMOLR_MPME;
 
 		E1000_WRITE_REG(hw, E1000_VMOLR(i), vmolr);
@@ -2214,9 +2214,9 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 	/* VLVF: set up filters for vlan tags as configured */
 	for (i = 0; i < cfg->nb_pool_maps; i++) {
 		/* set vlan id in VF register and set the valid bit */
-		E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE | \
-                        (cfg->pool_map[i].vlan_id & ETH_VLAN_ID_MAX) | \
-			((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT ) & \
+		E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE |
+			(cfg->pool_map[i].vlan_id & RTE_ETH_VLAN_ID_MAX) |
+			((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT) &
 			E1000_VLVF_POOLSEL_MASK)));
 	}
 
@@ -2268,7 +2268,7 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	uint32_t mrqc;
 
-	if (RTE_ETH_DEV_SRIOV(dev).active == ETH_8_POOLS) {
+	if (RTE_ETH_DEV_SRIOV(dev).active == RTE_ETH_8_POOLS) {
 		/*
 		 * SRIOV active scheme
 		 * FIXME if support RSS together with VMDq & SRIOV
@@ -2282,14 +2282,14 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * SRIOV inactive scheme
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-			case ETH_MQ_RX_RSS:
+			case RTE_ETH_MQ_RX_RSS:
 				igb_rss_configure(dev);
 				break;
-			case ETH_MQ_RX_VMDQ_ONLY:
+			case RTE_ETH_MQ_RX_VMDQ_ONLY:
 				/*Configure general VMDQ only RX parameters*/
 				igb_vmdq_rx_hw_configure(dev);
 				break;
-			case ETH_MQ_RX_NONE:
+			case RTE_ETH_MQ_RX_NONE:
 				/* if mq_mode is none, disable rss mode.*/
 			default:
 				igb_rss_disable(dev);
@@ -2338,7 +2338,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		 * Set maximum packet length by default, and might be updated
 		 * together with enabling/disabling dual VLAN.
 		 */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			max_len += VLAN_TAG_SIZE;
 
 		E1000_WRITE_REG(hw, E1000_RLPML, max_len);
@@ -2374,7 +2374,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 *  call to configure
 		 */
-		if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -2444,7 +2444,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		E1000_WRITE_REG(hw, E1000_RXDCTL(rxq->reg_idx), rxdctl);
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (!dev->data->scattered_rx)
 			PMD_INIT_LOG(DEBUG, "forcing scatter mode");
 		dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
@@ -2488,16 +2488,16 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 	rxcsum |= E1000_RXCSUM_PCSD;
 
 	/* Enable both L3/L4 rx checksum offload */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		rxcsum |= E1000_RXCSUM_IPOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_IPOFL;
 	if (rxmode->offloads &
-		(DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+		(RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		rxcsum |= E1000_RXCSUM_TUOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_TUOFL;
-	if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= E1000_RXCSUM_CRCOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_CRCOFL;
@@ -2505,7 +2505,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 	E1000_WRITE_REG(hw, E1000_RXCSUM, rxcsum);
 
 	/* Setup the Receive Control Register. */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
 
 		/* clear STRCRC bit in all queues */
@@ -2545,7 +2545,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		(hw->mac.mc_filter_type << E1000_RCTL_MO_SHIFT);
 
 	/* Make sure VLAN Filters are off. */
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_VMDQ_ONLY)
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY)
 		rctl &= ~E1000_RCTL_VFE;
 	/* Don't store bad packets. */
 	rctl &= ~E1000_RCTL_SBP;
@@ -2743,7 +2743,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
 		E1000_WRITE_REG(hw, E1000_RXDCTL(i), rxdctl);
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (!dev->data->scattered_rx)
 			PMD_INIT_LOG(DEBUG, "forcing scatter mode");
 		dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
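
The MRQC programming above is where the application's rss_hf request gets
translated into hardware hash fields. For context, a hedged sketch of the
configure-time request with the renamed flags (the chosen hash types are
purely illustrative):

#include <rte_ethdev.h>

/* Request RSS on IPv4 plus IPv4 TCP/UDP flows at configure time. */
static const struct rte_eth_conf port_conf = {
	.rxmode = {
		.mq_mode = RTE_ETH_MQ_RX_RSS,
	},
	.rx_adv_conf = {
		.rss_conf = {
			.rss_key = NULL,	/* keep the PMD default key */
			.rss_hf = RTE_ETH_RSS_IPV4 |
				  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
				  RTE_ETH_RSS_NONFRAG_IPV4_UDP,
		},
	},
};

/* Applied via rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf). */
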
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index f3b17d70c9a4..4d2601d15a57 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -117,10 +117,10 @@ static const struct ena_stats ena_stats_rx_strings[] = {
 #define ENA_STATS_ARRAY_TX	ARRAY_SIZE(ena_stats_tx_strings)
 #define ENA_STATS_ARRAY_RX	ARRAY_SIZE(ena_stats_rx_strings)
 
-#define QUEUE_OFFLOADS (DEV_TX_OFFLOAD_TCP_CKSUM |\
-			DEV_TX_OFFLOAD_UDP_CKSUM |\
-			DEV_TX_OFFLOAD_IPV4_CKSUM |\
-			DEV_TX_OFFLOAD_TCP_TSO)
+#define QUEUE_OFFLOADS (RTE_ETH_TX_OFFLOAD_TCP_CKSUM |\
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |\
+			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |\
+			RTE_ETH_TX_OFFLOAD_TCP_TSO)
 #define MBUF_OFFLOADS (PKT_TX_L4_MASK |\
 		       PKT_TX_IP_CKSUM |\
 		       PKT_TX_TCP_SEG)
@@ -332,7 +332,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 	    (queue_offloads & QUEUE_OFFLOADS)) {
 		/* check if TSO is required */
 		if ((mbuf->ol_flags & PKT_TX_TCP_SEG) &&
-		    (queue_offloads & DEV_TX_OFFLOAD_TCP_TSO)) {
+		    (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
 			ena_tx_ctx->tso_enable = true;
 
 			ena_meta->l4_hdr_len = GET_L4_HDR_LEN(mbuf);
@@ -340,7 +340,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 
 		/* check if L3 checksum is needed */
 		if ((mbuf->ol_flags & PKT_TX_IP_CKSUM) &&
-		    (queue_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM))
+		    (queue_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM))
 			ena_tx_ctx->l3_csum_enable = true;
 
 		if (mbuf->ol_flags & PKT_TX_IPV6) {
@@ -357,12 +357,12 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 
 		/* check if L4 checksum is needed */
 		if (((mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) &&
-		    (queue_offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) {
+		    (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_TCP;
 			ena_tx_ctx->l4_csum_enable = true;
 		} else if (((mbuf->ol_flags & PKT_TX_L4_MASK) ==
 				PKT_TX_UDP_CKSUM) &&
-				(queue_offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+				(queue_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UDP;
 			ena_tx_ctx->l4_csum_enable = true;
 		} else {
@@ -643,9 +643,9 @@ static int ena_link_update(struct rte_eth_dev *dev,
 	struct rte_eth_link *link = &dev->data->dev_link;
 	struct ena_adapter *adapter = dev->data->dev_private;
 
-	link->link_status = adapter->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
-	link->link_speed = ETH_SPEED_NUM_NONE;
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_status = adapter->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
+	link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	return 0;
 }
@@ -923,7 +923,7 @@ static int ena_start(struct rte_eth_dev *dev)
 	if (rc)
 		goto err_start_tx;
 
-	if (adapter->edev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (adapter->edev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		rc = ena_rss_configure(adapter);
 		if (rc)
 			goto err_rss_init;
@@ -2004,9 +2004,9 @@ static int ena_dev_configure(struct rte_eth_dev *dev)
 
 	adapter->state = ENA_ADAPTER_STATE_CONFIG;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
-	dev->data->dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+	dev->data->dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	/* Scattered Rx cannot be turned off in the HW, so this capability must
 	 * be forced.
@@ -2067,17 +2067,17 @@ static uint64_t ena_get_rx_port_offloads(struct ena_adapter *adapter)
 	uint64_t port_offloads = 0;
 
 	if (adapter->offloads.rx_offloads & ENA_L3_IPV4_CSUM)
-		port_offloads |= DEV_RX_OFFLOAD_IPV4_CKSUM;
+		port_offloads |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 
 	if (adapter->offloads.rx_offloads &
 	    (ENA_L4_IPV4_CSUM | ENA_L4_IPV6_CSUM))
 		port_offloads |=
-			DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM;
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 	if (adapter->offloads.rx_offloads & ENA_RX_RSS_HASH)
-		port_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+		port_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
-	port_offloads |= DEV_RX_OFFLOAD_SCATTER;
+	port_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	return port_offloads;
 }
@@ -2087,17 +2087,17 @@ static uint64_t ena_get_tx_port_offloads(struct ena_adapter *adapter)
 	uint64_t port_offloads = 0;
 
 	if (adapter->offloads.tx_offloads & ENA_IPV4_TSO)
-		port_offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+		port_offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (adapter->offloads.tx_offloads & ENA_L3_IPV4_CSUM)
-		port_offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+		port_offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 	if (adapter->offloads.tx_offloads &
 	    (ENA_L4_IPV4_CSUM_PARTIAL | ENA_L4_IPV4_CSUM |
 	     ENA_L4_IPV6_CSUM | ENA_L4_IPV6_CSUM_PARTIAL))
 		port_offloads |=
-			DEV_TX_OFFLOAD_UDP_CKSUM | DEV_TX_OFFLOAD_TCP_CKSUM;
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
-	port_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	port_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return port_offloads;
 }
@@ -2130,14 +2130,14 @@ static int ena_infos_get(struct rte_eth_dev *dev,
 	ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
 
 	dev_info->speed_capa =
-			ETH_LINK_SPEED_1G   |
-			ETH_LINK_SPEED_2_5G |
-			ETH_LINK_SPEED_5G   |
-			ETH_LINK_SPEED_10G  |
-			ETH_LINK_SPEED_25G  |
-			ETH_LINK_SPEED_40G  |
-			ETH_LINK_SPEED_50G  |
-			ETH_LINK_SPEED_100G;
+			RTE_ETH_LINK_SPEED_1G   |
+			RTE_ETH_LINK_SPEED_2_5G |
+			RTE_ETH_LINK_SPEED_5G   |
+			RTE_ETH_LINK_SPEED_10G  |
+			RTE_ETH_LINK_SPEED_25G  |
+			RTE_ETH_LINK_SPEED_40G  |
+			RTE_ETH_LINK_SPEED_50G  |
+			RTE_ETH_LINK_SPEED_100G;
 
 	/* Inform framework about available features */
 	dev_info->rx_offload_capa = ena_get_rx_port_offloads(adapter);
@@ -2303,7 +2303,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	}
 #endif
 
-	fill_hash = rx_ring->offloads & DEV_RX_OFFLOAD_RSS_HASH;
+	fill_hash = rx_ring->offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	descs_in_use = rx_ring->ring_size -
 		ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1;
@@ -2416,11 +2416,11 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 #ifdef RTE_LIBRTE_ETHDEV_DEBUG
 		/* Check if requested offload is also enabled for the queue */
 		if ((ol_flags & PKT_TX_IP_CKSUM &&
-		     !(tx_ring->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)) ||
+		     !(tx_ring->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)) ||
 		    (l4_csum_flag == PKT_TX_TCP_CKSUM &&
-		     !(tx_ring->offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) ||
+		     !(tx_ring->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) ||
 		    (l4_csum_flag == PKT_TX_UDP_CKSUM &&
-		     !(tx_ring->offloads & DEV_TX_OFFLOAD_UDP_CKSUM))) {
+		     !(tx_ring->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM))) {
 			PMD_TX_LOG(DEBUG,
 				"mbuf[%" PRIu32 "]: requested offloads: %" PRIu16 " are not enabled for the queue[%u]\n",
 				i, m->nb_segs, tx_ring->id);
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 4f4142ed12d0..865e1241e0ce 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -58,8 +58,8 @@
 
 #define ENA_HASH_KEY_SIZE		40
 
-#define ENA_ALL_RSS_HF (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
-			ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_UDP)
+#define ENA_ALL_RSS_HF (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define ENA_IO_TXQ_IDX(q)		(2 * (q))
 #define ENA_IO_RXQ_IDX(q)		(2 * (q) + 1)
diff --git a/drivers/net/ena/ena_rss.c b/drivers/net/ena/ena_rss.c
index 152098410fa2..be4007e3f3fe 100644
--- a/drivers/net/ena/ena_rss.c
+++ b/drivers/net/ena/ena_rss.c
@@ -76,7 +76,7 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
 	if (reta_size == 0 || reta_conf == NULL)
 		return -EINVAL;
 
-	if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		PMD_DRV_LOG(ERR,
 			"RSS was not configured for the PMD\n");
 		return -ENOTSUP;
@@ -93,8 +93,8 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
 		/* Each reta_conf is for 64 entries.
 		 * To support 128 we use 2 conf of 64.
 		 */
-		conf_idx = i / RTE_RETA_GROUP_SIZE;
-		idx = i % RTE_RETA_GROUP_SIZE;
+		conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		idx = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (TEST_BIT(reta_conf[conf_idx].mask, idx)) {
 			entry_value =
 				ENA_IO_RXQ_IDX(reta_conf[conf_idx].reta[idx]);
@@ -139,7 +139,7 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
 	if (reta_size == 0 || reta_conf == NULL)
 		return -EINVAL;
 
-	if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		PMD_DRV_LOG(ERR,
 			"RSS was not configured for the PMD\n");
 		return -ENOTSUP;
@@ -154,8 +154,8 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0 ; i < reta_size ; i++) {
-		reta_conf_idx = i / RTE_RETA_GROUP_SIZE;
-		reta_idx = i % RTE_RETA_GROUP_SIZE;
+		reta_conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (TEST_BIT(reta_conf[reta_conf_idx].mask, reta_idx))
 			reta_conf[reta_conf_idx].reta[reta_idx] =
 				ENA_IO_RXQ_IDX_REV(indirect_table[i]);
@@ -199,34 +199,34 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
 	/* Convert proto to ETH flag */
 	switch (proto) {
 	case ENA_ADMIN_RSS_TCP4:
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		break;
 	case ENA_ADMIN_RSS_UDP4:
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 		break;
 	case ENA_ADMIN_RSS_TCP6:
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 		break;
 	case ENA_ADMIN_RSS_UDP6:
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 		break;
 	case ENA_ADMIN_RSS_IP4:
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 		break;
 	case ENA_ADMIN_RSS_IP6:
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 		break;
 	case ENA_ADMIN_RSS_IP4_FRAG:
-		rss_hf |= ETH_RSS_FRAG_IPV4;
+		rss_hf |= RTE_ETH_RSS_FRAG_IPV4;
 		break;
 	case ENA_ADMIN_RSS_NOT_IP:
-		rss_hf |= ETH_RSS_L2_PAYLOAD;
+		rss_hf |= RTE_ETH_RSS_L2_PAYLOAD;
 		break;
 	case ENA_ADMIN_RSS_TCP6_EX:
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 		break;
 	case ENA_ADMIN_RSS_IP6_EX:
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 		break;
 	default:
 		break;
@@ -235,10 +235,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
 	/* Check if only DA or SA is being used for L3. */
 	switch (fields & ENA_HF_RSS_ALL_L3) {
 	case ENA_ADMIN_RSS_L3_SA:
-		rss_hf |= ETH_RSS_L3_SRC_ONLY;
+		rss_hf |= RTE_ETH_RSS_L3_SRC_ONLY;
 		break;
 	case ENA_ADMIN_RSS_L3_DA:
-		rss_hf |= ETH_RSS_L3_DST_ONLY;
+		rss_hf |= RTE_ETH_RSS_L3_DST_ONLY;
 		break;
 	default:
 		break;
@@ -247,10 +247,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
 	/* Check if only DA or SA is being used for L4. */
 	switch (fields & ENA_HF_RSS_ALL_L4) {
 	case ENA_ADMIN_RSS_L4_SP:
-		rss_hf |= ETH_RSS_L4_SRC_ONLY;
+		rss_hf |= RTE_ETH_RSS_L4_SRC_ONLY;
 		break;
 	case ENA_ADMIN_RSS_L4_DP:
-		rss_hf |= ETH_RSS_L4_DST_ONLY;
+		rss_hf |= RTE_ETH_RSS_L4_DST_ONLY;
 		break;
 	default:
 		break;
@@ -268,11 +268,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
 	fields_mask = ENA_ADMIN_RSS_L2_DA | ENA_ADMIN_RSS_L2_SA;
 
 	/* Determine which fields of L3 should be used. */
-	switch (rss_hf & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) {
-	case ETH_RSS_L3_DST_ONLY:
+	switch (rss_hf & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) {
+	case RTE_ETH_RSS_L3_DST_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L3_DA;
 		break;
-	case ETH_RSS_L3_SRC_ONLY:
+	case RTE_ETH_RSS_L3_SRC_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L3_SA;
 		break;
 	default:
@@ -284,11 +284,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
 	}
 
 	/* Determine which fields of L4 should be used. */
-	switch (rss_hf & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) {
-	case ETH_RSS_L4_DST_ONLY:
+	switch (rss_hf & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) {
+	case RTE_ETH_RSS_L4_DST_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L4_DP;
 		break;
-	case ETH_RSS_L4_SRC_ONLY:
+	case RTE_ETH_RSS_L4_SRC_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L4_SP;
 		break;
 	default:
@@ -334,43 +334,43 @@ static int ena_set_hash_fields(struct ena_com_dev *ena_dev, uint64_t rss_hf)
 	int rc, i;
 
 	/* Turn on appropriate fields for each requested packet type */
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) != 0)
 		selected_fields[ENA_ADMIN_RSS_TCP4].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP4, rss_hf);
 
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) != 0)
 		selected_fields[ENA_ADMIN_RSS_UDP4].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP4, rss_hf);
 
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) != 0)
 		selected_fields[ENA_ADMIN_RSS_TCP6].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6, rss_hf);
 
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) != 0)
 		selected_fields[ENA_ADMIN_RSS_UDP6].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP6, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV4) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV4) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP4].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV6) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV6) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP6].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6, rss_hf);
 
-	if ((rss_hf & ETH_RSS_FRAG_IPV4) != 0)
+	if ((rss_hf & RTE_ETH_RSS_FRAG_IPV4) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP4_FRAG].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4_FRAG, rss_hf);
 
-	if ((rss_hf & ETH_RSS_L2_PAYLOAD) != 0)
+	if ((rss_hf & RTE_ETH_RSS_L2_PAYLOAD) != 0)
 		selected_fields[ENA_ADMIN_RSS_NOT_IP].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_NOT_IP, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV6_TCP_EX) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) != 0)
 		selected_fields[ENA_ADMIN_RSS_TCP6_EX].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6_EX, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV6_EX) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV6_EX) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP6_EX].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6_EX, rss_hf);
 
@@ -541,7 +541,7 @@ int ena_rss_hash_conf_get(struct rte_eth_dev *dev,
 	uint16_t admin_hf;
 	static bool warn_once;
 
-	if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		PMD_DRV_LOG(ERR, "RSS was not configured for the PMD\n");
 		return -ENOTSUP;
 	}
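
Beyond the plain hash types, ena also decodes the RTE_ETH_RSS_L3_*/L4_*
SRC_ONLY and DST_ONLY modifiers, which restrict hashing to one side of the
address or port pair. A small readback sketch, assuming RSS was enabled at
configure time (the function name is hypothetical):

#include <stdio.h>
#include <rte_ethdev.h>

static void
check_udp4_hash(uint16_t port_id)
{
	struct rte_eth_rss_conf rss_conf = { .rss_key = NULL };

	if (rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf) != 0)
		return;
	if (rss_conf.rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
		printf("UDP/IPv4 hashing is active\n");
	if (rss_conf.rss_hf & RTE_ETH_RSS_L4_SRC_ONLY)
		printf("hashing on the source port only\n");
}
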
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index 1b567f01eae0..7cdb8ce463ed 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -100,27 +100,27 @@ enetc_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 	status = enetc_port_rd(enetc_hw, ENETC_PM0_STATUS);
 
 	if (status & ENETC_LINK_MODE)
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	else
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 
 	if (status & ENETC_LINK_STATUS)
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 	else
-		link.link_status = ETH_LINK_DOWN;
+		link.link_status = RTE_ETH_LINK_DOWN;
 
 	switch (status & ENETC_LINK_SPEED_MASK) {
 	case ENETC_LINK_SPEED_1G:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case ENETC_LINK_SPEED_100M:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	default:
 	case ENETC_LINK_SPEED_10M:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -207,10 +207,10 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
 	dev_info->max_tx_queues = MAX_TX_RINGS;
 	dev_info->max_rx_pktlen = ENETC_MAC_MAXFRM_SIZE;
 	dev_info->rx_offload_capa =
-		(DEV_RX_OFFLOAD_IPV4_CKSUM |
-		 DEV_RX_OFFLOAD_UDP_CKSUM |
-		 DEV_RX_OFFLOAD_TCP_CKSUM |
-		 DEV_RX_OFFLOAD_KEEP_CRC);
+		(RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		 RTE_ETH_RX_OFFLOAD_KEEP_CRC);
 
 	return 0;
 }
@@ -463,7 +463,7 @@ enetc_rx_queue_setup(struct rte_eth_dev *dev,
 			       RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 
-	rx_ring->crc_len = (uint8_t)((rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+	rx_ring->crc_len = (uint8_t)((rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
 				     RTE_ETHER_CRC_LEN : 0);
 
 	return 0;
@@ -705,7 +705,7 @@ enetc_dev_configure(struct rte_eth_dev *dev)
 	enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
 	enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		int config;
 
 		config = enetc_port_rd(enetc_hw, ENETC_PM0_CMD_CFG);
@@ -713,10 +713,10 @@ enetc_dev_configure(struct rte_eth_dev *dev)
 		enetc_port_wr(enetc_hw, ENETC_PM0_CMD_CFG, config);
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		checksum &= ~L3_CKSUM;
 
-	if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM))
+	if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
 		checksum &= ~L4_CKSUM;
 
 	enetc_port_wr(enetc_hw, ENETC_PAR_PORT_CFG, checksum);
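
The enetc hunks map MAC status bits onto the renamed link fields. The
consumer side, for reference, sketched with rte_eth_link_get_nowait() and
error handling trimmed (the helper name is an assumption):

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;
	printf("port %u: %s, %u Mbps, %s-duplex\n", port_id,
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
	       link.link_speed,
	       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
			"full" : "half");
}
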
diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index 47bfdac2cfdd..d5493c98345d 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -178,7 +178,7 @@ struct enic {
 	 */
 	uint8_t rss_hash_type; /* NIC_CFG_RSS_HASH_TYPE flags */
 	uint8_t rss_enable;
-	uint64_t rss_hf; /* ETH_RSS flags */
+	uint64_t rss_hf; /* RTE_ETH_RSS flags */
 	union vnic_rss_key rss_key;
 	union vnic_rss_cpu rss_cpu;
 
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8df7332bc5e0..c8bdaf1a8e79 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -38,30 +38,30 @@ static const struct vic_speed_capa {
 	uint16_t sub_devid;
 	uint32_t capa;
 } vic_speed_capa_map[] = {
-	{ 0x0043, ETH_LINK_SPEED_10G }, /* VIC */
-	{ 0x0047, ETH_LINK_SPEED_10G }, /* P81E PCIe */
-	{ 0x0048, ETH_LINK_SPEED_10G }, /* M81KR Mezz */
-	{ 0x004f, ETH_LINK_SPEED_10G }, /* 1280 Mezz */
-	{ 0x0084, ETH_LINK_SPEED_10G }, /* 1240 MLOM */
-	{ 0x0085, ETH_LINK_SPEED_10G }, /* 1225 PCIe */
-	{ 0x00cd, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1285 PCIe */
-	{ 0x00ce, ETH_LINK_SPEED_10G }, /* 1225T PCIe */
-	{ 0x012a, ETH_LINK_SPEED_40G }, /* M4308 */
-	{ 0x012c, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1340 MLOM */
-	{ 0x012e, ETH_LINK_SPEED_10G }, /* 1227 PCIe */
-	{ 0x0137, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1380 Mezz */
-	{ 0x014d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1385 PCIe */
-	{ 0x015d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1387 MLOM */
-	{ 0x0215, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
-		  ETH_LINK_SPEED_40G }, /* 1440 Mezz */
-	{ 0x0216, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
-		  ETH_LINK_SPEED_40G }, /* 1480 MLOM */
-	{ 0x0217, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1455 PCIe */
-	{ 0x0218, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1457 MLOM */
-	{ 0x0219, ETH_LINK_SPEED_40G }, /* 1485 PCIe */
-	{ 0x021a, ETH_LINK_SPEED_40G }, /* 1487 MLOM */
-	{ 0x024a, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1495 PCIe */
-	{ 0x024b, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1497 MLOM */
+	{ 0x0043, RTE_ETH_LINK_SPEED_10G }, /* VIC */
+	{ 0x0047, RTE_ETH_LINK_SPEED_10G }, /* P81E PCIe */
+	{ 0x0048, RTE_ETH_LINK_SPEED_10G }, /* M81KR Mezz */
+	{ 0x004f, RTE_ETH_LINK_SPEED_10G }, /* 1280 Mezz */
+	{ 0x0084, RTE_ETH_LINK_SPEED_10G }, /* 1240 MLOM */
+	{ 0x0085, RTE_ETH_LINK_SPEED_10G }, /* 1225 PCIe */
+	{ 0x00cd, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1285 PCIe */
+	{ 0x00ce, RTE_ETH_LINK_SPEED_10G }, /* 1225T PCIe */
+	{ 0x012a, RTE_ETH_LINK_SPEED_40G }, /* M4308 */
+	{ 0x012c, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1340 MLOM */
+	{ 0x012e, RTE_ETH_LINK_SPEED_10G }, /* 1227 PCIe */
+	{ 0x0137, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1380 Mezz */
+	{ 0x014d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1385 PCIe */
+	{ 0x015d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1387 MLOM */
+	{ 0x0215, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+		  RTE_ETH_LINK_SPEED_40G }, /* 1440 Mezz */
+	{ 0x0216, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+		  RTE_ETH_LINK_SPEED_40G }, /* 1480 MLOM */
+	{ 0x0217, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1455 PCIe */
+	{ 0x0218, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1457 MLOM */
+	{ 0x0219, RTE_ETH_LINK_SPEED_40G }, /* 1485 PCIe */
+	{ 0x021a, RTE_ETH_LINK_SPEED_40G }, /* 1487 MLOM */
+	{ 0x024a, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1495 PCIe */
+	{ 0x024b, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1497 MLOM */
 	{ 0, 0 }, /* End marker */
 };
 
@@ -297,8 +297,8 @@ static int enicpmd_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 	ENICPMD_FUNC_TRACE();
 
 	offloads = eth_dev->data->dev_conf.rxmode.offloads;
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			enic->ig_vlan_strip_en = 1;
 		else
 			enic->ig_vlan_strip_en = 0;
@@ -323,17 +323,17 @@ static int enicpmd_dev_configure(struct rte_eth_dev *eth_dev)
 		return ret;
 	}
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	enic->mc_count = 0;
 	enic->hw_ip_checksum = !!(eth_dev->data->dev_conf.rxmode.offloads &
-				  DEV_RX_OFFLOAD_CHECKSUM);
+				  RTE_ETH_RX_OFFLOAD_CHECKSUM);
 	/* All vlan offload masks to apply the current settings */
-	mask = ETH_VLAN_STRIP_MASK |
-		ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK |
+		RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	ret = enicpmd_vlan_offload_set(eth_dev, mask);
 	if (ret) {
 		dev_err(enic, "Failed to configure VLAN offloads\n");
@@ -435,14 +435,14 @@ static uint32_t speed_capa_from_pci_id(struct rte_eth_dev *eth_dev)
 	}
 	/* 1300 and later models are at least 40G */
 	if (id >= 0x0100)
-		return ETH_LINK_SPEED_40G;
+		return RTE_ETH_LINK_SPEED_40G;
 	/* VFs have subsystem id 0, check device id */
 	if (id == 0) {
 		/* Newer VF implies at least 40G model */
 		if (pdev->id.device_id == PCI_DEVICE_ID_CISCO_VIC_ENET_SN)
-			return ETH_LINK_SPEED_40G;
+			return RTE_ETH_LINK_SPEED_40G;
 	}
-	return ETH_LINK_SPEED_10G;
+	return RTE_ETH_LINK_SPEED_10G;
 }
 
 static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
@@ -774,8 +774,8 @@ static int enicpmd_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = enic_sop_rq_idx_to_rte_idx(
 				enic->rss_cpu.cpu[i / 4].b[i % 4]);
@@ -806,8 +806,8 @@ static int enicpmd_dev_rss_reta_update(struct rte_eth_dev *dev,
 	 */
 	rss_cpu = enic->rss_cpu;
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			rss_cpu.cpu[i / 4].b[i % 4] =
 				enic_rte_rq_idx_to_sop_idx(
@@ -883,7 +883,7 @@ static void enicpmd_dev_rxq_info_get(struct rte_eth_dev *dev,
 	 */
 	conf->offloads = enic->rx_offload_capa;
 	if (!enic->ig_vlan_strip_en)
-		conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	/* rx_thresh and other fields are not applicable for enic */
 }
 
@@ -969,8 +969,8 @@ static int enicpmd_dev_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
 static int udp_tunnel_common_check(struct enic *enic,
 				   struct rte_eth_udp_tunnel *tnl)
 {
-	if (tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN &&
-	    tnl->prot_type != RTE_TUNNEL_TYPE_GENEVE)
+	if (tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN &&
+	    tnl->prot_type != RTE_ETH_TUNNEL_TYPE_GENEVE)
 		return -ENOTSUP;
 	if (!enic->overlay_offload) {
 		ENICPMD_LOG(DEBUG, " overlay offload is not supported\n");
@@ -1010,7 +1010,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
 	ret = udp_tunnel_common_check(enic, tnl);
 	if (ret)
 		return ret;
-	vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+	vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
 	if (vxlan)
 		port = enic->vxlan_port;
 	else
@@ -1039,7 +1039,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
 	ret = udp_tunnel_common_check(enic, tnl);
 	if (ret)
 		return ret;
-	vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+	vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
 	if (vxlan)
 		port = enic->vxlan_port;
 	else
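
enicpmd_vlan_offload_set() above only honours the strip bit of the mask; an
application drives the same renamed flags through
rte_eth_dev_set_vlan_offload(), which takes the full desired set of
RTE_ETH_VLAN_*_OFFLOAD bits. A hedged sketch (helper name hypothetical):

#include <rte_ethdev.h>

static int
enable_vlan_strip(uint16_t port_id)
{
	int flags = rte_eth_dev_get_vlan_offload(port_id);

	if (flags < 0)
		return flags;
	flags |= RTE_ETH_VLAN_STRIP_OFFLOAD;
	return rte_eth_dev_set_vlan_offload(port_id, flags);
}
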
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index dfc7f5d1f94f..21b1fffb14f0 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -430,7 +430,7 @@ int enic_link_update(struct rte_eth_dev *eth_dev)
 
 	memset(&link, 0, sizeof(link));
 	link.link_status = enic_get_link_status(enic);
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_speed = vnic_dev_port_speed(enic->vdev);
 
 	return rte_eth_linkstatus_set(eth_dev, &link);
@@ -597,7 +597,7 @@ int enic_enable(struct enic *enic)
 	}
 
 	eth_dev->data->dev_link.link_speed = vnic_dev_port_speed(enic->vdev);
-	eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	/* vnic notification of link status has already been turned on in
 	 * enic_dev_init() which is called during probe time.  Here we are
@@ -638,11 +638,11 @@ int enic_enable(struct enic *enic)
 	 * and vlan insertion are supported.
 	 */
 	simple_tx_offloads = enic->tx_offload_capa &
-		(DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		 DEV_TX_OFFLOAD_VLAN_INSERT |
-		 DEV_TX_OFFLOAD_IPV4_CKSUM |
-		 DEV_TX_OFFLOAD_UDP_CKSUM |
-		 DEV_TX_OFFLOAD_TCP_CKSUM);
+		(RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		 RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		 RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
 	if ((eth_dev->data->dev_conf.txmode.offloads &
 	     ~simple_tx_offloads) == 0) {
 		ENICPMD_LOG(DEBUG, " use the simple tx handler");
@@ -858,7 +858,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
 	max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
 
 	if (enic->rte_dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_SCATTER) {
+	    RTE_ETH_RX_OFFLOAD_SCATTER) {
 		dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
 		/* ceil((max pkt len)/mbuf_size) */
 		mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
@@ -1385,15 +1385,15 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
 	rss_hash_type = 0;
 	rss_hf = rss_conf->rss_hf & enic->flow_type_rss_offloads;
 	if (enic->rq_count > 1 &&
-	    (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) &&
+	    (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) &&
 	    rss_hf != 0) {
 		rss_enable = 1;
-		if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			      ETH_RSS_NONFRAG_IPV4_OTHER))
+		if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			      RTE_ETH_RSS_NONFRAG_IPV4_OTHER))
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV4;
 			if (enic->udp_rss_weak) {
 				/*
@@ -1404,12 +1404,12 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
 				rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
 			}
 		}
-		if (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_IPV6_EX |
-			      ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER))
+		if (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_IPV6_EX |
+			      RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER))
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV6;
-		if (rss_hf & (ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX))
+		if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX))
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
-		if (rss_hf & (ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX)) {
+		if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX)) {
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV6;
 			if (enic->udp_rss_weak)
 				rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
@@ -1745,9 +1745,9 @@ enic_enable_overlay_offload(struct enic *enic)
 		return -EINVAL;
 	}
 	enic->tx_offload_capa |=
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		(enic->geneve ? DEV_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
-		(enic->vxlan ? DEV_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		(enic->geneve ? RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
+		(enic->vxlan ? RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
 	enic->tx_offload_mask |=
 		PKT_TX_OUTER_IPV6 |
 		PKT_TX_OUTER_IPV4 |
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index c5777772a09e..918a9e170ff6 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -147,31 +147,31 @@ int enic_get_vnic_config(struct enic *enic)
 		 * IPV4 hash type handles both non-frag and frag packet types.
 		 * TCP/UDP is controlled via a separate flag below.
 		 */
-		enic->flow_type_rss_offloads |= ETH_RSS_IPV4 |
-			ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV4 |
+			RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER;
 	if (ENIC_SETTING(enic, RSSHASH_TCPIPV4))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_TCP;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (ENIC_SETTING(enic, RSSHASH_IPV6))
 		/*
 		 * The VIC adapter can perform RSS on IPv6 packets with and
 		 * without extension headers. An IPv6 "fragment" is an IPv6
 		 * packet with the fragment extension header.
 		 */
-		enic->flow_type_rss_offloads |= ETH_RSS_IPV6 |
-			ETH_RSS_IPV6_EX | ETH_RSS_FRAG_IPV6 |
-			ETH_RSS_NONFRAG_IPV6_OTHER;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV6 |
+			RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_FRAG_IPV6 |
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER;
 	if (ENIC_SETTING(enic, RSSHASH_TCPIPV6))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_TCP |
-			ETH_RSS_IPV6_TCP_EX;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			RTE_ETH_RSS_IPV6_TCP_EX;
 	if (enic->udp_rss_weak)
 		enic->flow_type_rss_offloads |=
-			ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
-			ETH_RSS_IPV6_UDP_EX;
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			RTE_ETH_RSS_IPV6_UDP_EX;
 	if (ENIC_SETTING(enic, RSSHASH_UDPIPV4))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_UDP;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (ENIC_SETTING(enic, RSSHASH_UDPIPV6))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_UDP |
-			ETH_RSS_IPV6_UDP_EX;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			RTE_ETH_RSS_IPV6_UDP_EX;
 
 	/* Zero offloads if RSS is not enabled */
 	if (!ENIC_SETTING(enic, RSS))
@@ -201,19 +201,19 @@ int enic_get_vnic_config(struct enic *enic)
 	enic->tx_queue_offload_capa = 0;
 	enic->tx_offload_capa =
 		enic->tx_queue_offload_capa |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	enic->rx_offload_capa =
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	enic->tx_offload_mask =
 		PKT_TX_IPV6 |
 		PKT_TX_IPV4 |
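
The capability flags built above are what applications probe before enabling
an offload in rte_eth_conf. A minimal sketch using rte_eth_dev_info_get()
(the helper name is an assumption):

#include <rte_ethdev.h>

/* Return nonzero when the port can checksum IPv4/UDP/TCP on Rx;
 * RTE_ETH_RX_OFFLOAD_CHECKSUM is the combined L3+L4 checksum mask. */
static int
port_has_rx_cksum(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return 0;
	return (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_CHECKSUM) ==
	       RTE_ETH_RX_OFFLOAD_CHECKSUM;
}
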
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index b87c036e6014..82d595b1d1a0 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -17,10 +17,10 @@
 
 const char pmd_failsafe_driver_name[] = FAILSAFE_DRIVER_NAME;
 static const struct rte_eth_link eth_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_UP,
-	.link_autoneg = ETH_LINK_AUTONEG,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_UP,
+	.link_autoneg = RTE_ETH_LINK_AUTONEG,
 };
 
 static int
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 602c04033c18..5f4810051dac 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -326,7 +326,7 @@ int failsafe_rx_intr_install_subdevice(struct sub_device *sdev)
 	int qid;
 	struct rte_eth_dev *fsdev;
 	struct rxq **rxq;
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 				&ETH(sdev)->data->dev_conf.intr_conf;
 
 	fsdev = fs_dev(sdev);
@@ -519,7 +519,7 @@ int
 failsafe_rx_intr_install(struct rte_eth_dev *dev)
 {
 	struct fs_priv *priv = PRIV(dev);
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 			&priv->data->dev_conf.intr_conf;
 
 	if (intr_conf->rxq == 0 || dev->intr_handle != NULL)
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 29de39910c6e..a3a8a1c82e3a 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1172,51 +1172,51 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
 	 * configuring a sub-device.
 	 */
 	infos->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_LRO |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_MACSEC_STRIP |
-		DEV_RX_OFFLOAD_HEADER_SPLIT |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_TIMESTAMP |
-		DEV_RX_OFFLOAD_SECURITY |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_LRO |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+		RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+		RTE_ETH_RX_OFFLOAD_SECURITY |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	infos->rx_queue_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_LRO |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_MACSEC_STRIP |
-		DEV_RX_OFFLOAD_HEADER_SPLIT |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_TIMESTAMP |
-		DEV_RX_OFFLOAD_SECURITY |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_LRO |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+		RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+		RTE_ETH_RX_OFFLOAD_SECURITY |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	infos->tx_offload_capa =
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	infos->flow_type_rss_offloads =
-		ETH_RSS_IP |
-		ETH_RSS_UDP |
-		ETH_RSS_TCP;
+		RTE_ETH_RSS_IP |
+		RTE_ETH_RSS_UDP |
+		RTE_ETH_RSS_TCP;
 	infos->dev_capa =
 		RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 		RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 17c73c4dc5ae..b7522a47a80b 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -177,7 +177,7 @@ struct fm10k_rx_queue {
 	uint8_t drop_en;
 	uint8_t rx_deferred_start; /* don't start this queue in dev start. */
 	uint16_t rx_ftag_en; /* indicates FTAG RX supported */
-	uint64_t offloads; /* offloads of DEV_RX_OFFLOAD_* */
+	uint64_t offloads; /* offloads of RTE_ETH_RX_OFFLOAD_* */
 };
 
 /*
@@ -209,7 +209,7 @@ struct fm10k_tx_queue {
 	uint16_t next_rs; /* Next pos to set RS flag */
 	uint16_t next_dd; /* Next pos to check DD flag */
 	volatile uint32_t *tail_ptr;
-	uint64_t offloads; /* Offloads of DEV_TX_OFFLOAD_* */
+	uint64_t offloads; /* Offloads of RTE_ETH_TX_OFFLOAD_* */
 	uint16_t nb_desc;
 	uint16_t port_id;
 	uint8_t tx_deferred_start; /** don't start this queue in dev start. */
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 66f4a5c6df2c..d256334bfde9 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -413,12 +413,12 @@ fm10k_check_mq_mode(struct rte_eth_dev *dev)
 
 	vmdq_conf = &dev->data->dev_conf.rx_adv_conf.vmdq_rx_conf;
 
-	if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		PMD_INIT_LOG(ERR, "DCB mode is not supported.");
 		return -EINVAL;
 	}
 
-	if (!(rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+	if (!(rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
 		return 0;
 
 	if (hw->mac.type == fm10k_mac_vf) {
@@ -449,8 +449,8 @@ fm10k_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multipe queue mode checking */
 	ret  = fm10k_check_mq_mode(dev);
@@ -510,7 +510,7 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
 		0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
 	};
 
-	if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_RSS ||
+	if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_RSS ||
 		dev_conf->rx_adv_conf.rss_conf.rss_hf == 0) {
 		FM10K_WRITE_REG(hw, FM10K_MRQC(0), 0);
 		return;
@@ -547,15 +547,15 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
 	 */
 	hf = dev_conf->rx_adv_conf.rss_conf.rss_hf;
 	mrqc = 0;
-	mrqc |= (hf & ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
 
 	if (mrqc == 0) {
 		PMD_INIT_LOG(ERR, "Specified RSS mode 0x%"PRIx64"is not"
@@ -602,7 +602,7 @@ fm10k_dev_mq_rx_configure(struct rte_eth_dev *dev)
 	if (hw->mac.type != fm10k_mac_pf)
 		return;
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
 		nb_queue_pools = vmdq_conf->nb_queue_pools;
 
 	/* no pool number change, no need to update logic port and VLAN/MAC */
@@ -759,7 +759,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
 		/* It adds dual VLAN length for supporting dual VLAN */
 		if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
 				2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
-			rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
+			rxq->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 			uint32_t reg;
 			dev->data->scattered_rx = 1;
 			reg = FM10K_READ_REG(hw, FM10K_SRRCTL(i));
@@ -1145,7 +1145,7 @@ fm10k_dev_start(struct rte_eth_dev *dev)
 	}
 
 	/* Update default vlan when not in VMDQ mode */
-	if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+	if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
 		fm10k_vlan_filter_set(dev, hw->mac.default_vid, true);
 
 	fm10k_link_update(dev, 0);
@@ -1222,11 +1222,11 @@ fm10k_link_update(struct rte_eth_dev *dev,
 		FM10K_DEV_PRIVATE_TO_INFO(dev->data->dev_private);
 	PMD_INIT_FUNC_TRACE();
 
-	dev->data->dev_link.link_speed  = ETH_SPEED_NUM_50G;
-	dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	dev->data->dev_link.link_speed  = RTE_ETH_SPEED_NUM_50G;
+	dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	dev->data->dev_link.link_status =
-		dev_info->sm_down ? ETH_LINK_DOWN : ETH_LINK_UP;
-	dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
+		dev_info->sm_down ? RTE_ETH_LINK_DOWN : RTE_ETH_LINK_UP;
+	dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
 
 	return 0;
 }
@@ -1378,7 +1378,7 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 	dev_info->max_vfs            = pdev->max_vfs;
 	dev_info->vmdq_pool_base     = 0;
 	dev_info->vmdq_queue_base    = 0;
-	dev_info->max_vmdq_pools     = ETH_32_POOLS;
+	dev_info->max_vmdq_pools     = RTE_ETH_32_POOLS;
 	dev_info->vmdq_queue_num     = FM10K_MAX_QUEUES_PF;
 	dev_info->rx_queue_offload_capa = fm10k_get_rx_queue_offloads_capa(dev);
 	dev_info->rx_offload_capa = fm10k_get_rx_port_offloads_capa(dev) |
@@ -1389,15 +1389,15 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 
 	dev_info->hash_key_size = FM10K_RSSRK_SIZE * sizeof(uint32_t);
 	dev_info->reta_size = FM10K_MAX_RSS_INDICES;
-	dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
-					ETH_RSS_IPV6 |
-					ETH_RSS_IPV6_EX |
-					ETH_RSS_NONFRAG_IPV4_TCP |
-					ETH_RSS_NONFRAG_IPV6_TCP |
-					ETH_RSS_IPV6_TCP_EX |
-					ETH_RSS_NONFRAG_IPV4_UDP |
-					ETH_RSS_NONFRAG_IPV6_UDP |
-					ETH_RSS_IPV6_UDP_EX;
+	dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+					RTE_ETH_RSS_IPV6 |
+					RTE_ETH_RSS_IPV6_EX |
+					RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+					RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+					RTE_ETH_RSS_IPV6_TCP_EX |
+					RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+					RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+					RTE_ETH_RSS_IPV6_UDP_EX;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -1435,9 +1435,9 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 		.nb_mtu_seg_max = FM10K_TX_MAX_MTU_SEG,
 	};
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G |
-			ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
-			ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G |
+			RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+			RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -1509,7 +1509,7 @@ fm10k_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 		return -EINVAL;
 	}
 
-	if (vlan_id > ETH_VLAN_ID_MAX) {
+	if (vlan_id > RTE_ETH_VLAN_ID_MAX) {
 		PMD_INIT_LOG(ERR, "Invalid vlan_id: must be < 4096");
 		return -EINVAL;
 	}
@@ -1767,20 +1767,20 @@ static uint64_t fm10k_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
 
-	return (uint64_t)(DEV_RX_OFFLOAD_SCATTER);
+	return (uint64_t)(RTE_ETH_RX_OFFLOAD_SCATTER);
 }
 
 static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
 
-	return  (uint64_t)(DEV_RX_OFFLOAD_VLAN_STRIP  |
-			   DEV_RX_OFFLOAD_VLAN_FILTER |
-			   DEV_RX_OFFLOAD_IPV4_CKSUM  |
-			   DEV_RX_OFFLOAD_UDP_CKSUM   |
-			   DEV_RX_OFFLOAD_TCP_CKSUM   |
-			   DEV_RX_OFFLOAD_HEADER_SPLIT |
-			   DEV_RX_OFFLOAD_RSS_HASH);
+	return  (uint64_t)(RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+			   RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+			   RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+			   RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+			   RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+			   RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+			   RTE_ETH_RX_OFFLOAD_RSS_HASH);
 }
 
 static int
@@ -1965,12 +1965,12 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
 
-	return (uint64_t)(DEV_TX_OFFLOAD_VLAN_INSERT |
-			  DEV_TX_OFFLOAD_MULTI_SEGS  |
-			  DEV_TX_OFFLOAD_IPV4_CKSUM  |
-			  DEV_TX_OFFLOAD_UDP_CKSUM   |
-			  DEV_TX_OFFLOAD_TCP_CKSUM   |
-			  DEV_TX_OFFLOAD_TCP_TSO);
+	return (uint64_t)(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+			  RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+			  RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_TCP_TSO);
 }
 
 static int
@@ -2111,8 +2111,8 @@ fm10k_reta_update(struct rte_eth_dev *dev,
 	 * 128-entries in 32 registers
 	 */
 	for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				BIT_MASK_PER_UINT32);
 		if (mask == 0)
@@ -2160,8 +2160,8 @@ fm10k_reta_query(struct rte_eth_dev *dev,
 	 * 128-entries in 32 registers
 	 */
 	for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				BIT_MASK_PER_UINT32);
 		if (mask == 0)
@@ -2198,15 +2198,15 @@ fm10k_rss_hash_update(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	mrqc = 0;
-	mrqc |= (hf & ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
 
 	/* If the mapping doesn't fit any supported, return */
 	if (mrqc == 0)
@@ -2243,15 +2243,15 @@ fm10k_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	mrqc = FM10K_READ_REG(hw, FM10K_MRQC(0));
 	hf = 0;
-	hf |= (mrqc & FM10K_MRQC_IPV4)     ? ETH_RSS_IPV4              : 0;
-	hf |= (mrqc & FM10K_MRQC_IPV6)     ? ETH_RSS_IPV6              : 0;
-	hf |= (mrqc & FM10K_MRQC_IPV6)     ? ETH_RSS_IPV6_EX           : 0;
-	hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? ETH_RSS_NONFRAG_IPV4_TCP  : 0;
-	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_NONFRAG_IPV6_TCP  : 0;
-	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP_EX       : 0;
-	hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? ETH_RSS_NONFRAG_IPV4_UDP  : 0;
-	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_NONFRAG_IPV6_UDP  : 0;
-	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP_EX       : 0;
+	hf |= (mrqc & FM10K_MRQC_IPV4)     ? RTE_ETH_RSS_IPV4              : 0;
+	hf |= (mrqc & FM10K_MRQC_IPV6)     ? RTE_ETH_RSS_IPV6              : 0;
+	hf |= (mrqc & FM10K_MRQC_IPV6)     ? RTE_ETH_RSS_IPV6_EX           : 0;
+	hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_TCP  : 0;
+	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_TCP  : 0;
+	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_IPV6_TCP_EX       : 0;
+	hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_UDP  : 0;
+	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_UDP  : 0;
+	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_IPV6_UDP_EX       : 0;
 
 	rss_conf->rss_hf = hf;
 
@@ -2606,7 +2606,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
 
 			/* first clear the internal SW recording structure */
 			if (!(dev->data->dev_conf.rxmode.mq_mode &
-						ETH_MQ_RX_VMDQ_FLAG))
+						RTE_ETH_MQ_RX_VMDQ_FLAG))
 				fm10k_vlan_filter_set(dev, hw->mac.default_vid,
 					false);
 
@@ -2622,7 +2622,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
 					MAIN_VSI_POOL_NUMBER);
 
 			if (!(dev->data->dev_conf.rxmode.mq_mode &
-						ETH_MQ_RX_VMDQ_FLAG))
+						RTE_ETH_MQ_RX_VMDQ_FLAG))
 				fm10k_vlan_filter_set(dev, hw->mac.default_vid,
 					true);
 
diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c
index 83af01dc2da6..50973a662c67 100644
--- a/drivers/net/fm10k/fm10k_rxtx_vec.c
+++ b/drivers/net/fm10k/fm10k_rxtx_vec.c
@@ -208,11 +208,11 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
 {
 #ifndef RTE_LIBRTE_IEEE1588
 	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 
 #ifndef RTE_FM10K_RX_OLFLAGS_ENABLE
 	/* without rx ol_flags, no VP flag report */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 		return -1;
 #endif
 
@@ -221,7 +221,7 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
 		return -1;
 
 	/* no header split support */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
 		return -1;
 
 	return 0;
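
The fm10k hunks above are a purely mechanical rename: every ETH_MQ_*,
ETH_RSS_*, DEV_RX_OFFLOAD_* and DEV_TX_OFFLOAD_* reference gains the
RTE_ETH_ prefix with no behavioural change. Out-of-tree code that still has
to build against pre-21.11 DPDK can adopt the new spellings through a small
compatibility shim; the following is a hypothetical sketch, not part of this
patch:

    /* compat_ethdev.h -- hypothetical shim, not part of this patch.
     * Maps a few of the 21.11 RTE_ETH_* names back to the legacy
     * macros when compiling against an older DPDK release. */
    #include <rte_version.h>
    #include <rte_ethdev.h>
    #if RTE_VERSION < RTE_VERSION_NUM(21, 11, 0, 0)
    #define RTE_ETH_MQ_RX_RSS_FLAG      ETH_MQ_RX_RSS_FLAG
    #define RTE_ETH_RX_OFFLOAD_RSS_HASH DEV_RX_OFFLOAD_RSS_HASH
    #define RTE_ETH_RX_OFFLOAD_SCATTER  DEV_RX_OFFLOAD_SCATTER
    #endif
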
diff --git a/drivers/net/hinic/base/hinic_pmd_hwdev.c b/drivers/net/hinic/base/hinic_pmd_hwdev.c
index cb9cf6efa287..80f9eb5c3031 100644
--- a/drivers/net/hinic/base/hinic_pmd_hwdev.c
+++ b/drivers/net/hinic/base/hinic_pmd_hwdev.c
@@ -1320,28 +1320,28 @@ hinic_cable_status_event(u8 cmd, void *buf_in, __rte_unused u16 in_size,
 static int hinic_link_event_process(struct hinic_hwdev *hwdev,
 				    struct rte_eth_dev *eth_dev, u8 status)
 {
-	uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
-					ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
-					ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
-					ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+	uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+					RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+					RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+					RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
 	struct nic_port_info port_info;
 	struct rte_eth_link link;
 	int rc = HINIC_OK;
 
 	if (!status) {
-		link.link_status = ETH_LINK_DOWN;
+		link.link_status = RTE_ETH_LINK_DOWN;
 		link.link_speed = 0;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	} else {
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 
 		memset(&port_info, 0, sizeof(port_info));
 		rc = hinic_get_port_info(hwdev, &port_info);
 		if (rc) {
-			link.link_speed = ETH_SPEED_NUM_NONE;
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
-			link.link_autoneg = ETH_LINK_FIXED;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+			link.link_autoneg = RTE_ETH_LINK_FIXED;
 		} else {
 			link.link_speed = port_speed[port_info.speed %
 						LINK_SPEED_MAX];
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c2374ebb6759..4cd5a85d5f8d 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -311,8 +311,8 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* mtu size is 256~9600 */
 	if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
@@ -338,7 +338,7 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
 
 	/* init vlan offload */
 	err = hinic_vlan_offload_set(dev,
-				ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+				RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Initialize vlan filter and strip failed");
 		(void)hinic_config_mq_mode(dev, FALSE);
@@ -696,15 +696,15 @@ static void hinic_get_speed_capa(struct rte_eth_dev *dev, uint32_t *speed_capa)
 	} else {
 		*speed_capa = 0;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_1G))
-			*speed_capa |= ETH_LINK_SPEED_1G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_1G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_10G))
-			*speed_capa |= ETH_LINK_SPEED_10G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_10G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_25G))
-			*speed_capa |= ETH_LINK_SPEED_25G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_25G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_40G))
-			*speed_capa |= ETH_LINK_SPEED_40G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_40G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_100G))
-			*speed_capa |= ETH_LINK_SPEED_100G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	}
 }
 
@@ -732,24 +732,24 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 
 	hinic_get_speed_capa(dev, &info->speed_capa);
 	info->rx_queue_offload_capa = 0;
-	info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
-				DEV_RX_OFFLOAD_IPV4_CKSUM |
-				DEV_RX_OFFLOAD_UDP_CKSUM |
-				DEV_RX_OFFLOAD_TCP_CKSUM |
-				DEV_RX_OFFLOAD_VLAN_FILTER |
-				DEV_RX_OFFLOAD_SCATTER |
-				DEV_RX_OFFLOAD_TCP_LRO |
-				DEV_RX_OFFLOAD_RSS_HASH;
+	info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+				RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				RTE_ETH_RX_OFFLOAD_SCATTER |
+				RTE_ETH_RX_OFFLOAD_TCP_LRO |
+				RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	info->tx_queue_offload_capa = 0;
-	info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-				DEV_TX_OFFLOAD_IPV4_CKSUM |
-				DEV_TX_OFFLOAD_UDP_CKSUM |
-				DEV_TX_OFFLOAD_TCP_CKSUM |
-				DEV_TX_OFFLOAD_SCTP_CKSUM |
-				DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				DEV_TX_OFFLOAD_TCP_TSO |
-				DEV_TX_OFFLOAD_MULTI_SEGS;
+	info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	info->hash_key_size = HINIC_RSS_KEY_SIZE;
 	info->reta_size = HINIC_RSS_INDIR_SIZE;
@@ -846,20 +846,20 @@ static int hinic_priv_get_dev_link_status(struct hinic_nic_dev *nic_dev,
 	u8 port_link_status = 0;
 	struct nic_port_info port_link_info;
 	struct hinic_hwdev *nic_hwdev = nic_dev->hwdev;
-	uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
-					ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
-					ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
-					ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+	uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+					RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+					RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+					RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
 
 	rc = hinic_get_link_status(nic_hwdev, &port_link_status);
 	if (rc)
 		return rc;
 
 	if (!port_link_status) {
-		link->link_status = ETH_LINK_DOWN;
+		link->link_status = RTE_ETH_LINK_DOWN;
 		link->link_speed = 0;
-		link->link_duplex = ETH_LINK_HALF_DUPLEX;
-		link->link_autoneg = ETH_LINK_FIXED;
+		link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link->link_autoneg = RTE_ETH_LINK_FIXED;
 		return HINIC_OK;
 	}
 
@@ -901,8 +901,8 @@ static int hinic_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		/* Get link status information from hardware */
 		rc = hinic_priv_get_dev_link_status(nic_dev, &link);
 		if (rc != HINIC_OK) {
-			link.link_speed = ETH_SPEED_NUM_NONE;
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR, "Get link status failed");
 			goto out;
 		}
@@ -1650,8 +1650,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	int err;
 
 	/* Enable or disable VLAN filter */
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) ?
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) ?
 			TRUE : FALSE;
 		err = hinic_config_vlan_filter(nic_dev->hwdev, on);
 		if (err == HINIC_MGMT_CMD_UNSUPPORTED) {
@@ -1672,8 +1672,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	}
 
 	/* Enable or disable VLAN stripping */
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) ?
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) ?
 			TRUE : FALSE;
 		err = hinic_set_rx_vlan_offload(nic_dev->hwdev, on);
 		if (err) {
@@ -1859,13 +1859,13 @@ static int hinic_flow_ctrl_get(struct rte_eth_dev *dev,
 	fc_conf->autoneg = nic_pause.auto_neg;
 
 	if (nic_pause.tx_pause && nic_pause.rx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (nic_pause.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else if (nic_pause.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -1879,14 +1879,14 @@ static int hinic_flow_ctrl_set(struct rte_eth_dev *dev,
 
 	nic_pause.auto_neg = fc_conf->autoneg;
 
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-		(fc_conf->mode & RTE_FC_TX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+		(fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
 		nic_pause.tx_pause = true;
 	else
 		nic_pause.tx_pause = false;
 
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-		(fc_conf->mode & RTE_FC_RX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+		(fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
 		nic_pause.rx_pause = true;
 	else
 		nic_pause.rx_pause = false;
@@ -1930,7 +1930,7 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
 	struct nic_rss_type rss_type = {0};
 	int err = 0;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		PMD_DRV_LOG(WARNING, "RSS is not enabled");
 		return HINIC_OK;
 	}
@@ -1951,14 +1951,14 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
 		}
 	}
 
-	rss_type.ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
-	rss_type.tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
-	rss_type.ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
-	rss_type.ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
-	rss_type.tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
-	rss_type.tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
-	rss_type.udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
-	rss_type.udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+	rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+	rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+	rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+	rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+	rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+	rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+	rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+	rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
 
 	err = hinic_set_rss_type(nic_dev->hwdev, tmpl_idx, rss_type);
 	if (err) {
@@ -1994,7 +1994,7 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
 	struct nic_rss_type rss_type = {0};
 	int err;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		PMD_DRV_LOG(WARNING, "RSS is not enabled");
 		return HINIC_ERROR;
 	}
@@ -2015,15 +2015,15 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
 
 	rss_conf->rss_hf = 0;
 	rss_conf->rss_hf |=  rss_type.ipv4 ?
-		(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4) : 0;
-	rss_conf->rss_hf |=  rss_type.tcp_ipv4 ? ETH_RSS_NONFRAG_IPV4_TCP : 0;
+		(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4) : 0;
+	rss_conf->rss_hf |=  rss_type.tcp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_TCP : 0;
 	rss_conf->rss_hf |=  rss_type.ipv6 ?
-		(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6) : 0;
-	rss_conf->rss_hf |=  rss_type.ipv6_ext ? ETH_RSS_IPV6_EX : 0;
-	rss_conf->rss_hf |=  rss_type.tcp_ipv6 ? ETH_RSS_NONFRAG_IPV6_TCP : 0;
-	rss_conf->rss_hf |=  rss_type.tcp_ipv6_ext ? ETH_RSS_IPV6_TCP_EX : 0;
-	rss_conf->rss_hf |=  rss_type.udp_ipv4 ? ETH_RSS_NONFRAG_IPV4_UDP : 0;
-	rss_conf->rss_hf |=  rss_type.udp_ipv6 ? ETH_RSS_NONFRAG_IPV6_UDP : 0;
+		(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6) : 0;
+	rss_conf->rss_hf |=  rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0;
+	rss_conf->rss_hf |=  rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0;
+	rss_conf->rss_hf |=  rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0;
+	rss_conf->rss_hf |=  rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0;
+	rss_conf->rss_hf |=  rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0;
 
 	return HINIC_OK;
 }
@@ -2053,7 +2053,7 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
 	u16 i = 0;
 	u16 idx, shift;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG))
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG))
 		return HINIC_OK;
 
 	if (reta_size != NIC_RSS_INDIR_SIZE) {
@@ -2067,8 +2067,8 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
 
 	/* update rss indir_tbl */
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (reta_conf[idx].reta[shift] >= nic_dev->num_rq) {
 			PMD_DRV_LOG(ERR, "Invalid reta entry, indirtbl[%d]: %d "
@@ -2133,8 +2133,8 @@ static int hinic_rss_indirtbl_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = (uint16_t)indirtbl[i];
 	}
diff --git a/drivers/net/hinic/hinic_pmd_rx.c b/drivers/net/hinic/hinic_pmd_rx.c
index 842399cc4cd8..d347afe9a6a9 100644
--- a/drivers/net/hinic/hinic_pmd_rx.c
+++ b/drivers/net/hinic/hinic_pmd_rx.c
@@ -504,14 +504,14 @@ static void hinic_fill_rss_type(struct nic_rss_type *rss_type,
 {
 	u64 rss_hf = rss_conf->rss_hf;
 
-	rss_type->ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
-	rss_type->tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
-	rss_type->ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
-	rss_type->ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
-	rss_type->tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
-	rss_type->tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
-	rss_type->udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
-	rss_type->udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+	rss_type->ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+	rss_type->tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+	rss_type->ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+	rss_type->ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+	rss_type->tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+	rss_type->tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+	rss_type->udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+	rss_type->udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
 }
 
 static void hinic_fillout_indir_tbl(struct hinic_nic_dev *nic_dev, u32 *indir)
@@ -588,8 +588,8 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
 {
 	int err, i;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
-		nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
+		nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
 		nic_dev->num_rss = 0;
 		if (nic_dev->num_rq > 1) {
 			/* get rss template id */
@@ -599,7 +599,7 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
 				PMD_DRV_LOG(WARNING, "Alloc rss template failed");
 				return err;
 			}
-			nic_dev->flags |= ETH_MQ_RX_RSS_FLAG;
+			nic_dev->flags |= RTE_ETH_MQ_RX_RSS_FLAG;
 			for (i = 0; i < nic_dev->num_rq; i++)
 				hinic_add_rq_to_rx_queue_list(nic_dev, i);
 		}
@@ -610,12 +610,12 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
 
 static void hinic_destroy_num_qps(struct hinic_nic_dev *nic_dev)
 {
-	if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+	if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (hinic_rss_template_free(nic_dev->hwdev,
 					    nic_dev->rss_tmpl_idx))
 			PMD_DRV_LOG(WARNING, "Free rss template failed");
 
-		nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+		nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
 	}
 }
 
@@ -641,7 +641,7 @@ int hinic_config_mq_mode(struct rte_eth_dev *dev, bool on)
 	int ret = 0;
 
 	switch (dev_conf->rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		ret = hinic_config_mq_rx_rss(nic_dev, on);
 		break;
 	default:
@@ -662,7 +662,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
 	int lro_wqe_num;
 	int buf_size;
 
-	if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+	if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (rss_conf.rss_hf == 0) {
 			rss_conf.rss_hf = HINIC_RSS_OFFLOAD_ALL;
 		} else if ((rss_conf.rss_hf & HINIC_RSS_OFFLOAD_ALL) == 0) {
@@ -678,7 +678,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
 	}
 
 	/* Enable both L3/L4 rx checksum offload */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		nic_dev->rx_csum_en = HINIC_RX_CSUM_OFFLOAD_EN;
 
 	err = hinic_set_rx_csum_offload(nic_dev->hwdev,
@@ -687,7 +687,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
 		goto rx_csum_ofl_err;
 
 	/* config lro */
-	lro_en = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ?
+	lro_en = dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ?
 			true : false;
 	max_lro_size = dev->data->dev_conf.rxmode.max_lro_pkt_size;
 	buf_size = nic_dev->hwdev->nic_io->rq_buf_size;
@@ -726,7 +726,7 @@ void hinic_rx_remove_configure(struct rte_eth_dev *dev)
 {
 	struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
 
-	if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+	if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
 		hinic_rss_deinit(nic_dev);
 		hinic_destroy_num_qps(nic_dev);
 	}
diff --git a/drivers/net/hinic/hinic_pmd_rx.h b/drivers/net/hinic/hinic_pmd_rx.h
index 8a45f2d9fc50..5c303398b635 100644
--- a/drivers/net/hinic/hinic_pmd_rx.h
+++ b/drivers/net/hinic/hinic_pmd_rx.h
@@ -8,17 +8,17 @@
 #define HINIC_DEFAULT_RX_FREE_THRESH	32
 
 #define HINIC_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 |\
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 |\
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 enum rq_completion_fmt {
 	RQ_COMPLETE_SGE = 1
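
Among the hinic changes above, the RETA handling
(hinic_rss_indirtbl_update/query) picks up the renamed
RTE_ETH_RETA_GROUP_SIZE; the idx/shift arithmetic there is the same pattern
an application uses with the generic API. A minimal sketch, assuming
hypothetical port_id/entry/queue values (not part of this patch):

    /* Point redirection-table entry i at queue q via the generic API. */
    #include <string.h>
    #include <rte_ethdev.h>

    static int
    reta_set_one(uint16_t port_id, uint16_t reta_size, uint16_t i, uint16_t q)
    {
    	struct rte_eth_rss_reta_entry64 reta[reta_size / RTE_ETH_RETA_GROUP_SIZE];
    	uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
    	uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

    	memset(reta, 0, sizeof(reta));
    	reta[idx].mask = 1ULL << shift;	/* only this entry is valid */
    	reta[idx].reta[shift] = q;
    	return rte_eth_dev_rss_reta_update(port_id, reta, reta_size);
    }
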
diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
index 8753c340e790..3d0159d78778 100644
--- a/drivers/net/hns3/hns3_dcb.c
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -1536,7 +1536,7 @@ hns3_dcb_hw_configure(struct hns3_adapter *hns)
 		return ret;
 	}
 
-	if (hw->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (hw->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		dcb_rx_conf = &hw->data->dev_conf.rx_adv_conf.dcb_rx_conf;
 		if (dcb_rx_conf->nb_tcs == 0)
 			hw->dcb_info.pfc_en = 1; /* tc0 only */
@@ -1693,7 +1693,7 @@ hns3_update_queue_map_configure(struct hns3_adapter *hns)
 	uint16_t nb_tx_q = hw->data->nb_tx_queues;
 	int ret;
 
-	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		return 0;
 
 	ret = hns3_dcb_update_tc_queue_mapping(hw, nb_rx_q, nb_tx_q);
@@ -1713,22 +1713,22 @@ static void
 hns3_get_fc_mode(struct hns3_hw *hw, enum rte_eth_fc_mode mode)
 {
 	switch (mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		hw->requested_fc_mode = HNS3_FC_NONE;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		hw->requested_fc_mode = HNS3_FC_RX_PAUSE;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		hw->requested_fc_mode = HNS3_FC_TX_PAUSE;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		hw->requested_fc_mode = HNS3_FC_FULL;
 		break;
 	default:
 		hw->requested_fc_mode = HNS3_FC_NONE;
 		hns3_warn(hw, "fc_mode(%u) exceeds member scope and is "
-			  "configured to RTE_FC_NONE", mode);
+			  "configured to RTE_ETH_FC_NONE", mode);
 		break;
 	}
 }
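
The hns3_get_fc_mode() switch above is the driver-side half of the flow
control rename; the application side uses the same RTE_ETH_FC_* values
through the generic API. A minimal sketch with a hypothetical port_id
(not part of this patch):

    /* Request full (rx+tx) pause on a port, using the new names. */
    #include <string.h>
    #include <rte_ethdev.h>

    static int
    enable_full_pause(uint16_t port_id)
    {
    	struct rte_eth_fc_conf fc_conf;

    	memset(&fc_conf, 0, sizeof(fc_conf));
    	if (rte_eth_dev_flow_ctrl_get(port_id, &fc_conf) != 0)
    		return -1;
    	fc_conf.mode = RTE_ETH_FC_FULL;	/* was RTE_FC_FULL */
    	return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
    }
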
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 693048f58704..8e0ccecb57a6 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -60,29 +60,29 @@ enum hns3_evt_cause {
 };
 
 static const struct rte_eth_fec_capa speed_fec_capa_tbl[] = {
-	{ ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
 
-	{ ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
 
-	{ ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
 
-	{ ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
 
-	{ ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
 
-	{ ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(RS) }
 };
@@ -500,8 +500,8 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
 	struct hns3_cmd_desc desc;
 	int ret;
 
-	if ((vlan_type != ETH_VLAN_TYPE_INNER &&
-	     vlan_type != ETH_VLAN_TYPE_OUTER)) {
+	if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+	     vlan_type != RTE_ETH_VLAN_TYPE_OUTER)) {
 		hns3_err(hw, "Unsupported vlan type, vlan_type =%d", vlan_type);
 		return -EINVAL;
 	}
@@ -514,10 +514,10 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
 	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MAC_VLAN_TYPE_ID, false);
 	rx_req = (struct hns3_rx_vlan_type_cfg_cmd *)desc.data;
 
-	if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
 		rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
-	} else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+	} else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
 		rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
 		rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
 		rx_req->in_fst_vlan_type = rte_cpu_to_le_16(tpid);
@@ -725,11 +725,11 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	rte_spinlock_lock(&hw->lock);
 	rxmode = &dev->data->dev_conf.rxmode;
 	tmp_mask = (unsigned int)mask;
-	if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* ignore vlan filter configuration during promiscuous mode */
 		if (!dev->data->promiscuous) {
 			/* Enable or disable VLAN filter */
-			enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER ?
+			enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ?
 				 true : false;
 
 			ret = hns3_enable_vlan_filter(hns, enable);
@@ -742,9 +742,9 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		}
 	}
 
-	if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP ?
+		enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ?
 		    true : false;
 
 		ret = hns3_en_hw_strip_rxvtag(hns, enable);
@@ -1118,7 +1118,7 @@ hns3_init_vlan_config(struct hns3_adapter *hns)
 		return ret;
 	}
 
-	ret = hns3_vlan_tpid_configure(hns, ETH_VLAN_TYPE_INNER,
+	ret = hns3_vlan_tpid_configure(hns, RTE_ETH_VLAN_TYPE_INNER,
 				       RTE_ETHER_TYPE_VLAN);
 	if (ret) {
 		hns3_err(hw, "tpid set fail in pf, ret =%d", ret);
@@ -1161,7 +1161,7 @@ hns3_restore_vlan_conf(struct hns3_adapter *hns)
 	if (!hw->data->promiscuous) {
 		/* restore vlan filter states */
 		offloads = hw->data->dev_conf.rxmode.offloads;
-		enable = offloads & DEV_RX_OFFLOAD_VLAN_FILTER ? true : false;
+		enable = offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ? true : false;
 		ret = hns3_enable_vlan_filter(hns, enable);
 		if (ret) {
 			hns3_err(hw, "failed to restore vlan rx filter conf, "
@@ -1204,7 +1204,7 @@ hns3_dev_configure_vlan(struct rte_eth_dev *dev)
 			  txmode->hw_vlan_reject_untagged);
 
 	/* Apply vlan offload setting */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
 	ret = hns3_vlan_offload_set(dev, mask);
 	if (ret) {
 		hns3_err(hw, "dev config rx vlan offload failed, ret = %d",
@@ -2213,9 +2213,9 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
 	int max_tc = 0;
 	int i;
 
-	if ((rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG) ||
-	    (tx_mq_mode == ETH_MQ_TX_VMDQ_DCB ||
-	     tx_mq_mode == ETH_MQ_TX_VMDQ_ONLY)) {
+	if ((rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) ||
+	    (tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB ||
+	     tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)) {
 		hns3_err(hw, "VMDQ is not supported, rx_mq_mode = %d, tx_mq_mode = %d.",
 			 rx_mq_mode, tx_mq_mode);
 		return -EOPNOTSUPP;
@@ -2223,7 +2223,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
 
 	dcb_rx_conf = &dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
 	dcb_tx_conf = &dev->data->dev_conf.tx_adv_conf.dcb_tx_conf;
-	if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		if (dcb_rx_conf->nb_tcs > pf->tc_max) {
 			hns3_err(hw, "nb_tcs(%u) > max_tc(%u) driver supported.",
 				 dcb_rx_conf->nb_tcs, pf->tc_max);
@@ -2232,7 +2232,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
 
 		if (!(dcb_rx_conf->nb_tcs == HNS3_4_TCS ||
 		      dcb_rx_conf->nb_tcs == HNS3_8_TCS)) {
-			hns3_err(hw, "on ETH_MQ_RX_DCB_RSS mode, "
+			hns3_err(hw, "on RTE_ETH_MQ_RX_DCB_RSS mode, "
 				 "nb_tcs(%d) != %d or %d in rx direction.",
 				 dcb_rx_conf->nb_tcs, HNS3_4_TCS, HNS3_8_TCS);
 			return -EINVAL;
@@ -2400,11 +2400,11 @@ hns3_check_link_speed(struct hns3_hw *hw, uint32_t link_speeds)
 	 * configure link_speeds (default 0), which means auto-negotiation.
 	 * In this case, it should return success.
 	 */
-	if (link_speeds == ETH_LINK_SPEED_AUTONEG &&
+	if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG &&
 	    hw->mac.support_autoneg == 0)
 		return 0;
 
-	if (link_speeds != ETH_LINK_SPEED_AUTONEG) {
+	if (link_speeds != RTE_ETH_LINK_SPEED_AUTONEG) {
 		ret = hns3_check_port_speed(hw, link_speeds);
 		if (ret)
 			return ret;
@@ -2464,15 +2464,15 @@ hns3_dev_configure(struct rte_eth_dev *dev)
 	if (ret)
 		goto cfg_err;
 
-	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		ret = hns3_setup_dcb(dev);
 		if (ret)
 			goto cfg_err;
 	}
 
 	/* When RSS is not configured, packets are directed to queue 0 */
-	if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 		rss_conf = conf->rx_adv_conf.rss_conf;
 		hw->rss_dis_flag = false;
 		ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -2493,7 +2493,7 @@ hns3_dev_configure(struct rte_eth_dev *dev)
 		goto cfg_err;
 
 	/* config hardware GRO */
-	gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+	gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
 	ret = hns3_config_gro(hw, gro_en);
 	if (ret)
 		goto cfg_err;
@@ -2600,15 +2600,15 @@ hns3_get_copper_port_speed_capa(uint32_t supported_speed)
 	uint32_t speed_capa = 0;
 
 	if (supported_speed & HNS3_PHY_LINK_SPEED_10M_HD_BIT)
-		speed_capa |= ETH_LINK_SPEED_10M_HD;
+		speed_capa |= RTE_ETH_LINK_SPEED_10M_HD;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_10M_BIT)
-		speed_capa |= ETH_LINK_SPEED_10M;
+		speed_capa |= RTE_ETH_LINK_SPEED_10M;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_100M_HD_BIT)
-		speed_capa |= ETH_LINK_SPEED_100M_HD;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_100M_BIT)
-		speed_capa |= ETH_LINK_SPEED_100M;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_1000M_BIT)
-		speed_capa |= ETH_LINK_SPEED_1G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G;
 
 	return speed_capa;
 }
@@ -2619,19 +2619,19 @@ hns3_get_firber_port_speed_capa(uint32_t supported_speed)
 	uint32_t speed_capa = 0;
 
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_1G_BIT)
-		speed_capa |= ETH_LINK_SPEED_1G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_10G_BIT)
-		speed_capa |= ETH_LINK_SPEED_10G;
+		speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_25G_BIT)
-		speed_capa |= ETH_LINK_SPEED_25G;
+		speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_40G_BIT)
-		speed_capa |= ETH_LINK_SPEED_40G;
+		speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_50G_BIT)
-		speed_capa |= ETH_LINK_SPEED_50G;
+		speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_100G_BIT)
-		speed_capa |= ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_200G_BIT)
-		speed_capa |= ETH_LINK_SPEED_200G;
+		speed_capa |= RTE_ETH_LINK_SPEED_200G;
 
 	return speed_capa;
 }
@@ -2650,7 +2650,7 @@ hns3_get_speed_capa(struct hns3_hw *hw)
 			hns3_get_firber_port_speed_capa(mac->supported_speed);
 
 	if (mac->support_autoneg == 0)
-		speed_capa |= ETH_LINK_SPEED_FIXED;
+		speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
 
 	return speed_capa;
 }
@@ -2676,40 +2676,40 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 	info->max_mac_addrs = HNS3_UC_MACADDR_NUM;
 	info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
 	info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
-	info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_TCP_CKSUM |
-				 DEV_RX_OFFLOAD_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_SCTP_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_KEEP_CRC |
-				 DEV_RX_OFFLOAD_SCATTER |
-				 DEV_RX_OFFLOAD_VLAN_STRIP |
-				 DEV_RX_OFFLOAD_VLAN_FILTER |
-				 DEV_RX_OFFLOAD_RSS_HASH |
-				 DEV_RX_OFFLOAD_TCP_LRO);
-	info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_TCP_CKSUM |
-				 DEV_TX_OFFLOAD_UDP_CKSUM |
-				 DEV_TX_OFFLOAD_SCTP_CKSUM |
-				 DEV_TX_OFFLOAD_MULTI_SEGS |
-				 DEV_TX_OFFLOAD_TCP_TSO |
-				 DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				 DEV_TX_OFFLOAD_GRE_TNL_TSO |
-				 DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-				 DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+	info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+				 RTE_ETH_RX_OFFLOAD_SCATTER |
+				 RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				 RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				 RTE_ETH_RX_OFFLOAD_RSS_HASH |
+				 RTE_ETH_RX_OFFLOAD_TCP_LRO);
+	info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				 RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				 RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
 				 hns3_txvlan_cap_get(hw));
 
 	if (hns3_dev_get_support(hw, OUTER_UDP_CKSUM))
-		info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+		info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 
 	if (hns3_dev_get_support(hw, INDEP_TXRX))
 		info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 				 RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
 
 	if (hns3_dev_get_support(hw, PTP))
-		info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+		info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	info->rx_desc_lim = (struct rte_eth_desc_lim) {
 		.nb_max = HNS3_MAX_RING_DESC,
@@ -2793,7 +2793,7 @@ hns3_update_port_link_info(struct rte_eth_dev *eth_dev)
 
 	ret = hns3_update_link_info(eth_dev);
 	if (ret)
-		hw->mac.link_status = ETH_LINK_DOWN;
+		hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	return ret;
 }
@@ -2806,29 +2806,29 @@ hns3_setup_linkstatus(struct rte_eth_dev *eth_dev,
 	struct hns3_mac *mac = &hw->mac;
 
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_10M:
-	case ETH_SPEED_NUM_100M:
-	case ETH_SPEED_NUM_1G:
-	case ETH_SPEED_NUM_10G:
-	case ETH_SPEED_NUM_25G:
-	case ETH_SPEED_NUM_40G:
-	case ETH_SPEED_NUM_50G:
-	case ETH_SPEED_NUM_100G:
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_200G:
 		if (mac->link_status)
 			new_link->link_speed = mac->link_speed;
 		break;
 	default:
 		if (mac->link_status)
-			new_link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+			new_link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 	}
 
 	if (!mac->link_status)
-		new_link->link_speed = ETH_SPEED_NUM_NONE;
+		new_link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	new_link->link_duplex = mac->link_duplex;
-	new_link->link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+	new_link->link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 	new_link->link_autoneg = mac->link_autoneg;
 }
 
@@ -2848,8 +2848,8 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
 	if (eth_dev->data->dev_started == 0) {
 		new_link.link_autoneg = mac->link_autoneg;
 		new_link.link_duplex = mac->link_duplex;
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
-		new_link.link_status = ETH_LINK_DOWN;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		new_link.link_status = RTE_ETH_LINK_DOWN;
 		goto out;
 	}
 
@@ -2861,7 +2861,7 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
 			break;
 		}
 
-		if (!wait_to_complete || mac->link_status == ETH_LINK_UP)
+		if (!wait_to_complete || mac->link_status == RTE_ETH_LINK_UP)
 			break;
 
 		rte_delay_ms(HNS3_LINK_CHECK_INTERVAL);
@@ -3207,31 +3207,31 @@ hns3_parse_speed(int speed_cmd, uint32_t *speed)
 {
 	switch (speed_cmd) {
 	case HNS3_CFG_SPEED_10M:
-		*speed = ETH_SPEED_NUM_10M;
+		*speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case HNS3_CFG_SPEED_100M:
-		*speed = ETH_SPEED_NUM_100M;
+		*speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case HNS3_CFG_SPEED_1G:
-		*speed = ETH_SPEED_NUM_1G;
+		*speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case HNS3_CFG_SPEED_10G:
-		*speed = ETH_SPEED_NUM_10G;
+		*speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case HNS3_CFG_SPEED_25G:
-		*speed = ETH_SPEED_NUM_25G;
+		*speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case HNS3_CFG_SPEED_40G:
-		*speed = ETH_SPEED_NUM_40G;
+		*speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case HNS3_CFG_SPEED_50G:
-		*speed = ETH_SPEED_NUM_50G;
+		*speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case HNS3_CFG_SPEED_100G:
-		*speed = ETH_SPEED_NUM_100G;
+		*speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	case HNS3_CFG_SPEED_200G:
-		*speed = ETH_SPEED_NUM_200G;
+		*speed = RTE_ETH_SPEED_NUM_200G;
 		break;
 	default:
 		return -EINVAL;
@@ -3559,39 +3559,39 @@ hns3_cfg_mac_speed_dup_hw(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
 	hns3_set_bit(req->speed_dup, HNS3_CFG_DUPLEX_B, !!duplex ? 1 : 0);
 
 	switch (speed) {
-	case ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_10M:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10M);
 		break;
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100M);
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_1G);
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10G);
 		break;
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_25G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_25G);
 		break;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_40G);
 		break;
-	case ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_50G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_50G);
 		break;
-	case ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_100G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100G);
 		break;
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_200G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_200G);
 		break;
@@ -4254,14 +4254,14 @@ hns3_mac_init(struct hns3_hw *hw)
 	int ret;
 
 	pf->support_sfp_query = true;
-	mac->link_duplex = ETH_LINK_FULL_DUPLEX;
+	mac->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	ret = hns3_cfg_mac_speed_dup_hw(hw, mac->link_speed, mac->link_duplex);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Config mac speed dup fail ret = %d", ret);
 		return ret;
 	}
 
-	mac->link_status = ETH_LINK_DOWN;
+	mac->link_status = RTE_ETH_LINK_DOWN;
 
 	return hns3_config_mtu(hw, pf->mps);
 }
@@ -4511,7 +4511,7 @@ hns3_dev_promiscuous_enable(struct rte_eth_dev *dev)
 	 * all packets coming in the receiving direction.
 	 */
 	offloads = dev->data->dev_conf.rxmode.offloads;
-	if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		ret = hns3_enable_vlan_filter(hns, false);
 		if (ret) {
 			hns3_err(hw, "failed to enable promiscuous mode due to "
@@ -4552,7 +4552,7 @@ hns3_dev_promiscuous_disable(struct rte_eth_dev *dev)
 	}
 	/* when promiscuous mode was disabled, restore the vlan filter status */
 	offloads = dev->data->dev_conf.rxmode.offloads;
-	if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		ret = hns3_enable_vlan_filter(hns, true);
 		if (ret) {
 			hns3_err(hw, "failed to disable promiscuous mode due to"
@@ -4672,8 +4672,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
 		mac_info->supported_speed =
 					rte_le_to_cpu_32(resp->supported_speed);
 		mac_info->support_autoneg = resp->autoneg_ability;
-		mac_info->link_autoneg = (resp->autoneg == 0) ? ETH_LINK_FIXED
-					: ETH_LINK_AUTONEG;
+		mac_info->link_autoneg = (resp->autoneg == 0) ? RTE_ETH_LINK_FIXED
+					: RTE_ETH_LINK_AUTONEG;
 	} else {
 		mac_info->query_type = HNS3_DEFAULT_QUERY;
 	}
@@ -4684,8 +4684,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
 static uint8_t
 hns3_check_speed_dup(uint8_t duplex, uint32_t speed)
 {
-	if (!(speed == ETH_SPEED_NUM_10M || speed == ETH_SPEED_NUM_100M))
-		duplex = ETH_LINK_FULL_DUPLEX;
+	if (!(speed == RTE_ETH_SPEED_NUM_10M || speed == RTE_ETH_SPEED_NUM_100M))
+		duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	return duplex;
 }
@@ -4735,7 +4735,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
 		return ret;
 
 	/* Do nothing if no SFP */
-	if (mac_info.link_speed == ETH_SPEED_NUM_NONE)
+	if (mac_info.link_speed == RTE_ETH_SPEED_NUM_NONE)
 		return 0;
 
 	/*
@@ -4762,7 +4762,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
 
 	/* Config full duplex for SFP */
 	return hns3_cfg_mac_speed_dup(hw, mac_info.link_speed,
-				      ETH_LINK_FULL_DUPLEX);
+				      RTE_ETH_LINK_FULL_DUPLEX);
 }
 
 static void
@@ -4881,10 +4881,10 @@ hns3_cfg_mac_mode(struct hns3_hw *hw, bool enable)
 	hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_B, val);
 
 	/*
-	 * If DEV_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
+	 * If RTE_ETH_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
 	 * when receiving frames. Otherwise, CRC will be stripped.
 	 */
-	if (hw->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (hw->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, 0);
 	else
 		hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, val);
@@ -4912,7 +4912,7 @@ hns3_get_mac_link_status(struct hns3_hw *hw)
 	ret = hns3_cmd_send(hw, &desc, 1);
 	if (ret) {
 		hns3_err(hw, "get link status cmd failed %d", ret);
-		return ETH_LINK_DOWN;
+		return RTE_ETH_LINK_DOWN;
 	}
 
 	req = (struct hns3_link_status_cmd *)desc.data;
@@ -5094,19 +5094,19 @@ hns3_set_firber_default_support_speed(struct hns3_hw *hw)
 	struct hns3_mac *mac = &hw->mac;
 
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		return HNS3_FIBER_LINK_SPEED_1G_BIT;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		return HNS3_FIBER_LINK_SPEED_10G_BIT;
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_25G:
 		return HNS3_FIBER_LINK_SPEED_25G_BIT;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		return HNS3_FIBER_LINK_SPEED_40G_BIT;
-	case ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_50G:
 		return HNS3_FIBER_LINK_SPEED_50G_BIT;
-	case ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_100G:
 		return HNS3_FIBER_LINK_SPEED_100G_BIT;
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_200G:
 		return HNS3_FIBER_LINK_SPEED_200G_BIT;
 	default:
 		hns3_warn(hw, "invalid speed %u Mbps.", mac->link_speed);
@@ -5344,20 +5344,20 @@ hns3_convert_link_speeds2bitmap_copper(uint32_t link_speeds)
 {
 	uint32_t speed_bit;
 
-	switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
-	case ETH_LINK_SPEED_10M:
+	switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+	case RTE_ETH_LINK_SPEED_10M:
 		speed_bit = HNS3_PHY_LINK_SPEED_10M_BIT;
 		break;
-	case ETH_LINK_SPEED_10M_HD:
+	case RTE_ETH_LINK_SPEED_10M_HD:
 		speed_bit = HNS3_PHY_LINK_SPEED_10M_HD_BIT;
 		break;
-	case ETH_LINK_SPEED_100M:
+	case RTE_ETH_LINK_SPEED_100M:
 		speed_bit = HNS3_PHY_LINK_SPEED_100M_BIT;
 		break;
-	case ETH_LINK_SPEED_100M_HD:
+	case RTE_ETH_LINK_SPEED_100M_HD:
 		speed_bit = HNS3_PHY_LINK_SPEED_100M_HD_BIT;
 		break;
-	case ETH_LINK_SPEED_1G:
+	case RTE_ETH_LINK_SPEED_1G:
 		speed_bit = HNS3_PHY_LINK_SPEED_1000M_BIT;
 		break;
 	default:
@@ -5373,26 +5373,26 @@ hns3_convert_link_speeds2bitmap_fiber(uint32_t link_speeds)
 {
 	uint32_t speed_bit;
 
-	switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
-	case ETH_LINK_SPEED_1G:
+	switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+	case RTE_ETH_LINK_SPEED_1G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_1G_BIT;
 		break;
-	case ETH_LINK_SPEED_10G:
+	case RTE_ETH_LINK_SPEED_10G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_10G_BIT;
 		break;
-	case ETH_LINK_SPEED_25G:
+	case RTE_ETH_LINK_SPEED_25G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_25G_BIT;
 		break;
-	case ETH_LINK_SPEED_40G:
+	case RTE_ETH_LINK_SPEED_40G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_40G_BIT;
 		break;
-	case ETH_LINK_SPEED_50G:
+	case RTE_ETH_LINK_SPEED_50G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_50G_BIT;
 		break;
-	case ETH_LINK_SPEED_100G:
+	case RTE_ETH_LINK_SPEED_100G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_100G_BIT;
 		break;
-	case ETH_LINK_SPEED_200G:
+	case RTE_ETH_LINK_SPEED_200G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_200G_BIT;
 		break;
 	default:
@@ -5427,28 +5427,28 @@ hns3_check_port_speed(struct hns3_hw *hw, uint32_t link_speeds)
 static inline uint32_t
 hns3_get_link_speed(uint32_t link_speeds)
 {
-	uint32_t speed = ETH_SPEED_NUM_NONE;
-
-	if (link_speeds & ETH_LINK_SPEED_10M ||
-	    link_speeds & ETH_LINK_SPEED_10M_HD)
-		speed = ETH_SPEED_NUM_10M;
-	if (link_speeds & ETH_LINK_SPEED_100M ||
-	    link_speeds & ETH_LINK_SPEED_100M_HD)
-		speed = ETH_SPEED_NUM_100M;
-	if (link_speeds & ETH_LINK_SPEED_1G)
-		speed = ETH_SPEED_NUM_1G;
-	if (link_speeds & ETH_LINK_SPEED_10G)
-		speed = ETH_SPEED_NUM_10G;
-	if (link_speeds & ETH_LINK_SPEED_25G)
-		speed = ETH_SPEED_NUM_25G;
-	if (link_speeds & ETH_LINK_SPEED_40G)
-		speed = ETH_SPEED_NUM_40G;
-	if (link_speeds & ETH_LINK_SPEED_50G)
-		speed = ETH_SPEED_NUM_50G;
-	if (link_speeds & ETH_LINK_SPEED_100G)
-		speed = ETH_SPEED_NUM_100G;
-	if (link_speeds & ETH_LINK_SPEED_200G)
-		speed = ETH_SPEED_NUM_200G;
+	uint32_t speed = RTE_ETH_SPEED_NUM_NONE;
+
+	if (link_speeds & RTE_ETH_LINK_SPEED_10M ||
+	    link_speeds & RTE_ETH_LINK_SPEED_10M_HD)
+		speed = RTE_ETH_SPEED_NUM_10M;
+	if (link_speeds & RTE_ETH_LINK_SPEED_100M ||
+	    link_speeds & RTE_ETH_LINK_SPEED_100M_HD)
+		speed = RTE_ETH_SPEED_NUM_100M;
+	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
+		speed = RTE_ETH_SPEED_NUM_1G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
+		speed = RTE_ETH_SPEED_NUM_10G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
+		speed = RTE_ETH_SPEED_NUM_25G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
+		speed = RTE_ETH_SPEED_NUM_40G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_50G)
+		speed = RTE_ETH_SPEED_NUM_50G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_100G)
+		speed = RTE_ETH_SPEED_NUM_100G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_200G)
+		speed = RTE_ETH_SPEED_NUM_200G;
 
 	return speed;
 }
@@ -5456,11 +5456,11 @@ hns3_get_link_speed(uint32_t link_speeds)
 static uint8_t
 hns3_get_link_duplex(uint32_t link_speeds)
 {
-	if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
-	    (link_speeds & ETH_LINK_SPEED_100M_HD))
-		return ETH_LINK_HALF_DUPLEX;
+	if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+	    (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+		return RTE_ETH_LINK_HALF_DUPLEX;
 	else
-		return ETH_LINK_FULL_DUPLEX;
+		return RTE_ETH_LINK_FULL_DUPLEX;
 }
 
 static int
@@ -5594,9 +5594,9 @@ hns3_apply_link_speed(struct hns3_hw *hw)
 	struct hns3_set_link_speed_cfg cfg;
 
 	memset(&cfg, 0, sizeof(struct hns3_set_link_speed_cfg));
-	cfg.autoneg = (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) ?
-			ETH_LINK_AUTONEG : ETH_LINK_FIXED;
-	if (cfg.autoneg != ETH_LINK_AUTONEG) {
+	cfg.autoneg = (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) ?
+			RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
+	if (cfg.autoneg != RTE_ETH_LINK_AUTONEG) {
 		cfg.speed = hns3_get_link_speed(conf->link_speeds);
 		cfg.duplex = hns3_get_link_duplex(conf->link_speeds);
 	}
@@ -5869,7 +5869,7 @@ hns3_do_stop(struct hns3_adapter *hns)
 	ret = hns3_cfg_mac_mode(hw, false);
 	if (ret)
 		return ret;
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED) == 0) {
 		hns3_configure_all_mac_addr(hns, true);
@@ -6080,17 +6080,17 @@ hns3_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	current_mode = hns3_get_current_fc_mode(dev);
 	switch (current_mode) {
 	case HNS3_FC_FULL:
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	case HNS3_FC_TX_PAUSE:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case HNS3_FC_RX_PAUSE:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case HNS3_FC_NONE:
 	default:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		break;
 	}
 
@@ -6236,7 +6236,7 @@ hns3_get_dcb_info(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info)
 	int i;
 
 	rte_spinlock_lock(&hw->lock);
-	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = pf->local_max_tc;
 	else
 		dcb_info->nb_tcs = 1;
@@ -6536,7 +6536,7 @@ hns3_stop_service(struct hns3_adapter *hns)
 	struct rte_eth_dev *eth_dev;
 
 	eth_dev = &rte_eth_devices[hw->data->port_id];
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 	if (hw->adapter_state == HNS3_NIC_STARTED) {
 		rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
 		hns3_update_linkstatus_and_event(hw, false);
@@ -6826,7 +6826,7 @@ get_current_fec_auto_state(struct hns3_hw *hw, uint8_t *state)
 	 * in device of link speed
 	 * below 10 Gbps.
 	 */
-	if (hw->mac.link_speed < ETH_SPEED_NUM_10G) {
+	if (hw->mac.link_speed < RTE_ETH_SPEED_NUM_10G) {
 		*state = 0;
 		return 0;
 	}
@@ -6858,7 +6858,7 @@ hns3_fec_get_internal(struct hns3_hw *hw, uint32_t *fec_capa)
 	 * configured FEC mode is returned.
 	 * If link is up, current FEC mode is returned.
 	 */
-	if (hw->mac.link_status == ETH_LINK_DOWN) {
+	if (hw->mac.link_status == RTE_ETH_LINK_DOWN) {
 		ret = get_current_fec_auto_state(hw, &auto_state);
 		if (ret)
 			return ret;
@@ -6957,12 +6957,12 @@ get_current_speed_fec_cap(struct hns3_hw *hw, struct rte_eth_fec_capa *fec_capa)
 	uint32_t cur_capa;
 
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		cur_capa = fec_capa[1].capa;
 		break;
-	case ETH_SPEED_NUM_25G:
-	case ETH_SPEED_NUM_100G:
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_200G:
 		cur_capa = fec_capa[0].capa;
 		break;
 	default:
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index e28056b1bd60..0f55fd4c83ad 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -190,10 +190,10 @@ struct hns3_mac {
 	uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
 	uint8_t media_type;
 	uint8_t phy_addr;
-	uint8_t link_duplex  : 1; /* ETH_LINK_[HALF/FULL]_DUPLEX */
-	uint8_t link_autoneg : 1; /* ETH_LINK_[AUTONEG/FIXED] */
-	uint8_t link_status  : 1; /* ETH_LINK_[DOWN/UP] */
-	uint32_t link_speed;      /* ETH_SPEED_NUM_ */
+	uint8_t link_duplex  : 1; /* RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+	uint8_t link_autoneg : 1; /* RTE_ETH_LINK_[AUTONEG/FIXED] */
+	uint8_t link_status  : 1; /* RTE_ETH_LINK_[DOWN/UP] */
+	uint32_t link_speed;      /* RTE_ETH_SPEED_NUM_ */
 	/*
 	 * Some firmware versions support only the SFP speed query. In addition
 	 * to the SFP speed query, some firmware supports the query of the speed
@@ -1076,9 +1076,9 @@ static inline uint64_t
 hns3_txvlan_cap_get(struct hns3_hw *hw)
 {
 	if (hw->port_base_vlan_cfg.state)
-		return DEV_TX_OFFLOAD_VLAN_INSERT;
+		return RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	else
-		return DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT;
+		return RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
 }
 
 #endif /* _HNS3_ETHDEV_H_ */
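
The capability names above are what applications now see from rte_eth_dev_info_get(); for instance, probing QinQ insertion under the prefixed names looks roughly like this (illustrative helper, not part of the patch):

#include <rte_ethdev.h>

/* Illustrative helper (not part of the patch): probe QinQ insertion
 * support under the prefixed capability names. */
static int
port_supports_qinq_insert(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return 0;
	return (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) != 0;
}
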
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 54dbd4b798f2..7b784048b518 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -807,15 +807,15 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
 	}
 
 	hw->adapter_state = HNS3_NIC_CONFIGURING;
-	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		hns3_err(hw, "setting link speed/duplex not supported");
 		ret = -EINVAL;
 		goto cfg_err;
 	}
 
 	/* When RSS is not configured, redirect the packet queue 0 */
-	if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 		hw->rss_dis_flag = false;
 		rss_conf = conf->rx_adv_conf.rss_conf;
 		ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -832,7 +832,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
 		goto cfg_err;
 
 	/* config hardware GRO */
-	gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+	gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
 	ret = hns3_config_gro(hw, gro_en);
 	if (ret)
 		goto cfg_err;
@@ -935,32 +935,32 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 	info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
 	info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
 
-	info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_TCP_CKSUM |
-				 DEV_RX_OFFLOAD_SCTP_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_SCATTER |
-				 DEV_RX_OFFLOAD_VLAN_STRIP |
-				 DEV_RX_OFFLOAD_VLAN_FILTER |
-				 DEV_RX_OFFLOAD_RSS_HASH |
-				 DEV_RX_OFFLOAD_TCP_LRO);
-	info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_TCP_CKSUM |
-				 DEV_TX_OFFLOAD_UDP_CKSUM |
-				 DEV_TX_OFFLOAD_SCTP_CKSUM |
-				 DEV_TX_OFFLOAD_MULTI_SEGS |
-				 DEV_TX_OFFLOAD_TCP_TSO |
-				 DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				 DEV_TX_OFFLOAD_GRE_TNL_TSO |
-				 DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-				 DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+	info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_SCATTER |
+				 RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				 RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				 RTE_ETH_RX_OFFLOAD_RSS_HASH |
+				 RTE_ETH_RX_OFFLOAD_TCP_LRO);
+	info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				 RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				 RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
 				 hns3_txvlan_cap_get(hw));
 
 	if (hns3_dev_get_support(hw, OUTER_UDP_CKSUM))
-		info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+		info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 
 	if (hns3_dev_get_support(hw, INDEP_TXRX))
 		info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -1640,10 +1640,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	tmp_mask = (unsigned int)mask;
 
-	if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
 		rte_spinlock_lock(&hw->lock);
 		/* Enable or disable VLAN filter */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ret = hns3vf_en_vlan_filter(hw, true);
 		else
 			ret = hns3vf_en_vlan_filter(hw, false);
@@ -1653,10 +1653,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	}
 
 	/* Vlan stripping setting */
-	if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
 		rte_spinlock_lock(&hw->lock);
 		/* Enable or disable VLAN stripping */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			ret = hns3vf_en_hw_strip_rxvtag(hw, true);
 		else
 			ret = hns3vf_en_hw_strip_rxvtag(hw, false);
@@ -1724,7 +1724,7 @@ hns3vf_restore_vlan_conf(struct hns3_adapter *hns)
 	int ret;
 
 	dev_conf = &hw->data->dev_conf;
-	en = dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP ? true
+	en = dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ? true
 								   : false;
 	ret = hns3vf_en_hw_strip_rxvtag(hw, en);
 	if (ret)
@@ -1749,8 +1749,8 @@ hns3vf_dev_configure_vlan(struct rte_eth_dev *dev)
 	}
 
 	/* Apply vlan offload setting */
-	ret = hns3vf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK |
-					ETH_VLAN_FILTER_MASK);
+	ret = hns3vf_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK |
+					RTE_ETH_VLAN_FILTER_MASK);
 	if (ret)
 		hns3_err(hw, "dev config vlan offload failed, ret = %d.", ret);
 
@@ -2059,7 +2059,7 @@ hns3vf_do_stop(struct hns3_adapter *hns)
 	struct hns3_hw *hw = &hns->hw;
 	int ret;
 
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	/*
 	 * The "hns3vf_do_stop" function will also be called by .stop_service to
@@ -2218,31 +2218,31 @@ hns3vf_dev_link_update(struct rte_eth_dev *eth_dev,
 
 	memset(&new_link, 0, sizeof(new_link));
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_10M:
-	case ETH_SPEED_NUM_100M:
-	case ETH_SPEED_NUM_1G:
-	case ETH_SPEED_NUM_10G:
-	case ETH_SPEED_NUM_25G:
-	case ETH_SPEED_NUM_40G:
-	case ETH_SPEED_NUM_50G:
-	case ETH_SPEED_NUM_100G:
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_200G:
 		if (mac->link_status)
 			new_link.link_speed = mac->link_speed;
 		break;
 	default:
 		if (mac->link_status)
-			new_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+			new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 	}
 
 	if (!mac->link_status)
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	new_link.link_duplex = mac->link_duplex;
-	new_link.link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+	new_link.link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg =
-	    !(eth_dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED);
+	    !(eth_dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(eth_dev, &new_link);
 }
@@ -2570,11 +2570,11 @@ hns3vf_stop_service(struct hns3_adapter *hns)
 		 * Make sure the link status is updated before hns3vf_stop_poll_job
 		 * because updating the link status depends on the polling job.
 		 */
-		hns3vf_update_link_status(hw, ETH_LINK_DOWN, hw->mac.link_speed,
+		hns3vf_update_link_status(hw, RTE_ETH_LINK_DOWN, hw->mac.link_speed,
 					  hw->mac.link_duplex);
 		hns3vf_stop_poll_job(eth_dev);
 	}
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	hns3_set_rxtx_function(eth_dev);
 	rte_wmb();
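
On the application side, the same prefixed link constants come back from rte_eth_link_get_nowait(); a minimal sketch, assuming a started, valid port:

#include <stdio.h>
#include <rte_ethdev.h>

/* Illustrative sketch: report the link state with the prefixed
 * constants; port_id is assumed to be a started, valid port. */
static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;
	if (link.link_status == RTE_ETH_LINK_DOWN) {
		printf("port %u: link down\n", port_id);
		return;
	}
	printf("port %u: %u Mbps, %s duplex\n", port_id, link.link_speed,
	       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ? "full" : "half");
}
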
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 38a2ee58a651..da6918fddda3 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1298,10 +1298,10 @@ hns3_rss_input_tuple_supported(struct hns3_hw *hw,
 	 * Kunpeng930 and future kunpeng series support to use src/dst port
 	 * fields to RSS hash for IPv6 SCTP packet type.
 	 */
-	if (rss->types & (ETH_RSS_L4_DST_ONLY | ETH_RSS_L4_SRC_ONLY) &&
-	    (rss->types & ETH_RSS_IP ||
+	if (rss->types & (RTE_ETH_RSS_L4_DST_ONLY | RTE_ETH_RSS_L4_SRC_ONLY) &&
+	    (rss->types & RTE_ETH_RSS_IP ||
 	    (!hw->rss_info.ipv6_sctp_offload_supported &&
-	    rss->types & ETH_RSS_NONFRAG_IPV6_SCTP)))
+	    rss->types & RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
 		return false;
 
 	return true;
diff --git a/drivers/net/hns3/hns3_ptp.c b/drivers/net/hns3/hns3_ptp.c
index 5dfe68cc4dbd..9a829d7011ad 100644
--- a/drivers/net/hns3/hns3_ptp.c
+++ b/drivers/net/hns3/hns3_ptp.c
@@ -21,7 +21,7 @@ hns3_mbuf_dyn_rx_timestamp_register(struct rte_eth_dev *dev,
 	struct hns3_hw *hw = &hns->hw;
 	int ret;
 
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		return 0;
 
 	ret = rte_mbuf_dyn_rx_timestamp_register
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index 3a81e90e0911..85495bbe89d9 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -76,69 +76,69 @@ static const struct {
 	uint64_t rss_types;
 	uint64_t rss_field;
 } hns3_set_tuple_table[] = {
-	{ ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
-	{ ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
-	{ ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) },
-	{ ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) },
 };
 
@@ -146,44 +146,44 @@ static const struct {
 	uint64_t rss_types;
 	uint64_t rss_field;
 } hns3_set_rss_types[] = {
-	{ ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
+	{ RTE_ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_VER) },
-	{ ETH_RSS_NONFRAG_IPV4_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
-	{ ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
+	{ RTE_ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) |
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_VER) },
-	{ ETH_RSS_NONFRAG_IPV6_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }
 };
@@ -365,10 +365,10 @@ hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw,
	 * When the user does not specify the following types or a combination
	 * of them, all fields are enabled for the supported RSS types. The
	 * types in question are:
-	 * - ETH_RSS_L3_SRC_ONLY
-	 * - ETH_RSS_L3_DST_ONLY
-	 * - ETH_RSS_L4_SRC_ONLY
-	 * - ETH_RSS_L4_DST_ONLY
+	 * - RTE_ETH_RSS_L3_SRC_ONLY
+	 * - RTE_ETH_RSS_L3_DST_ONLY
+	 * - RTE_ETH_RSS_L4_SRC_ONLY
+	 * - RTE_ETH_RSS_L4_DST_ONLY
 	 */
 	if (fields_count == 0) {
 		for (i = 0; i < RTE_DIM(hns3_set_rss_types); i++) {
@@ -520,8 +520,8 @@ hns3_dev_rss_reta_update(struct rte_eth_dev *dev,
 	memcpy(indirection_tbl, rss_cfg->rss_indirection_tbl,
 	       sizeof(rss_cfg->rss_indirection_tbl));
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].reta[shift] >= hw->alloc_rss_size) {
 			rte_spinlock_unlock(&hw->lock);
 			hns3_err(hw, "queue id(%u) set to redirection table "
@@ -572,8 +572,8 @@ hns3_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 	rte_spinlock_lock(&hw->lock);
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] =
 						rss_cfg->rss_indirection_tbl[i];
@@ -692,7 +692,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 	}
 
 	/* When RSS is off, redirect the packet queue 0 */
-	if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) == 0)
+	if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0)
 		hns3_rss_uninit(hns);
 
 	/* Configure RSS hash algorithm and hash key offset */
@@ -709,7 +709,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 	 * When RSS is off, it doesn't need to configure rss redirection table
 	 * to hardware.
 	 */
-	if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+	if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		ret = hns3_set_rss_indir_table(hw, rss_cfg->rss_indirection_tbl,
 					       hw->rss_ind_tbl_size);
 		if (ret)
@@ -723,7 +723,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 	return ret;
 
 rss_indir_table_uninit:
-	if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+	if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		ret1 = hns3_rss_reset_indir_table(hw);
 		if (ret1 != 0)
 			return ret;
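
The idx/shift arithmetic above is also how callers fill the RETA from the application side; a hedged sketch (helper name hypothetical) that spreads entries round-robin over nb_queues:

#include <errno.h>
#include <string.h>
#include <rte_ethdev.h>

/* Hedged sketch (helper name hypothetical): fill the RETA round-robin
 * over nb_queues using the same idx/shift arithmetic as above. */
static int
reta_round_robin(uint16_t port_id, uint16_t reta_size, uint16_t nb_queues)
{
	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
						  RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	if (nb_queues == 0 || reta_size > RTE_ETH_RSS_RETA_SIZE_512)
		return -EINVAL;
	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < reta_size; i++) {
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

		reta_conf[idx].mask |= 1ULL << shift;
		reta_conf[idx].reta[shift] = i % nb_queues;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
}
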
diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h
index 996083b88b25..6f153a1b7bfb 100644
--- a/drivers/net/hns3/hns3_rss.h
+++ b/drivers/net/hns3/hns3_rss.h
@@ -8,20 +8,20 @@
 #include <rte_flow.h>
 
 #define HNS3_ETH_RSS_SUPPORT ( \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L3_SRC_ONLY | \
-	ETH_RSS_L3_DST_ONLY | \
-	ETH_RSS_L4_SRC_ONLY | \
-	ETH_RSS_L4_DST_ONLY)
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L3_SRC_ONLY | \
+	RTE_ETH_RSS_L3_DST_ONLY | \
+	RTE_ETH_RSS_L4_SRC_ONLY | \
+	RTE_ETH_RSS_L4_DST_ONLY)
 
 #define HNS3_RSS_IND_TBL_SIZE	512 /* The size of hash lookup table */
 #define HNS3_RSS_IND_TBL_SIZE_MAX 2048
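
For reference, an application requests a subset of HNS3_ETH_RSS_SUPPORT at configure time roughly as follows; this is a sketch, and on real hardware rss_hf should additionally be masked by dev_info.flow_type_rss_offloads:

#include <string.h>
#include <rte_ethdev.h>

/* Sketch: request RSS over the renamed type bits at configure time. */
static int
configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
	conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
					   RTE_ETH_RSS_TCP |
					   RTE_ETH_RSS_UDP;
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}
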
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 602548a4f25b..920ee8ceeab9 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1924,7 +1924,7 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	memset(&rxq->dfx_stats, 0, sizeof(struct hns3_rx_dfx_stats));
 
 	/* CRC len set here is used for amending packet length */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1969,7 +1969,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
 						 rxq->rx_buf_len);
 	}
 
-	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
+	if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
 	    dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
 		dev->data->scattered_rx = true;
 }
@@ -2845,7 +2845,7 @@ hns3_get_rx_function(struct rte_eth_dev *dev)
 	vec_allowed = vec_support && hns3_get_default_vec_support();
 	sve_allowed = vec_support && hns3_get_sve_support();
 	simple_allowed = !dev->data->scattered_rx &&
-			 (offloads & DEV_RX_OFFLOAD_TCP_LRO) == 0;
+			 (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) == 0;
 
 	if (hns->rx_func_hint == HNS3_IO_FUNC_HINT_VEC && vec_allowed)
 		return hns3_recv_pkts_vec;
@@ -3139,7 +3139,7 @@ hns3_restore_gro_conf(struct hns3_hw *hw)
 	int ret;
 
 	offloads = hw->data->dev_conf.rxmode.offloads;
-	gro_en = offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+	gro_en = offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
 	ret = hns3_config_gro(hw, gro_en);
 	if (ret)
 		hns3_err(hw, "restore hardware GRO to %s failed, ret = %d",
@@ -4291,7 +4291,7 @@ hns3_tx_check_simple_support(struct rte_eth_dev *dev)
 	if (hns3_dev_get_support(hw, PTP))
 		return false;
 
-	return (offloads == (offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE));
+	return (offloads == (offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE));
 }
 
 static bool
@@ -4303,16 +4303,16 @@ hns3_get_tx_prep_needed(struct rte_eth_dev *dev)
 	return true;
 #else
 #define HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK (\
-		DEV_TX_OFFLOAD_IPV4_CKSUM | \
-		DEV_TX_OFFLOAD_TCP_CKSUM | \
-		DEV_TX_OFFLOAD_UDP_CKSUM | \
-		DEV_TX_OFFLOAD_SCTP_CKSUM | \
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-		DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
-		DEV_TX_OFFLOAD_TCP_TSO | \
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-		DEV_TX_OFFLOAD_GRE_TNL_TSO | \
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO)
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)
 
 	uint64_t tx_offload = dev->data->dev_conf.txmode.offloads;
 	if (tx_offload & HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index c8229e9076b5..dfea5d5b4c2f 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -307,7 +307,7 @@ struct hns3_rx_queue {
 	uint16_t rx_rearm_start; /* index of BD that driver re-arming from */
 	uint16_t rx_rearm_nb;    /* number of remaining BDs to be re-armed */
 
-	/* 4 if DEV_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
+	/* 4 if RTE_ETH_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
 	uint8_t crc_len;
 
 	/*
diff --git a/drivers/net/hns3/hns3_rxtx_vec.c b/drivers/net/hns3/hns3_rxtx_vec.c
index ff434d2d33ed..455110361aac 100644
--- a/drivers/net/hns3/hns3_rxtx_vec.c
+++ b/drivers/net/hns3/hns3_rxtx_vec.c
@@ -22,8 +22,8 @@ hns3_tx_check_vec_support(struct rte_eth_dev *dev)
 	if (hns3_dev_get_support(hw, PTP))
 		return -ENOTSUP;
 
-	/* Only support DEV_TX_OFFLOAD_MBUF_FAST_FREE */
-	if (txmode->offloads != DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	/* Only support RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE */
+	if (txmode->offloads != RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		return -ENOTSUP;
 
 	return 0;
@@ -228,10 +228,10 @@ hns3_rxq_vec_check(struct hns3_rx_queue *rxq, void *arg)
 int
 hns3_rx_check_vec_support(struct rte_eth_dev *dev)
 {
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-	uint64_t offloads_mask = DEV_RX_OFFLOAD_TCP_LRO |
-				 DEV_RX_OFFLOAD_VLAN;
+	uint64_t offloads_mask = RTE_ETH_RX_OFFLOAD_TCP_LRO |
+				 RTE_ETH_RX_OFFLOAD_VLAN;
 
 	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if (hns3_dev_get_support(hw, PTP))
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 0a4db0891d4a..293df887bf7c 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1629,7 +1629,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
 
 	/* Set the global registers with default ether type value */
 	if (!pf->support_multi_driver) {
-		ret = i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+		ret = i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					 RTE_ETHER_TYPE_VLAN);
 		if (ret != I40E_SUCCESS) {
 			PMD_INIT_LOG(ERR,
@@ -1896,8 +1896,8 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 	ad->tx_simple_allowed = true;
 	ad->tx_vec_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Only legacy filter API needs the following fdir config. So when the
 	 * legacy filter API is deprecated, the following codes should also be
@@ -1931,13 +1931,13 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 	 *  number, which will be available after rx_queue_setup(). dev_start()
 	 *  function is good to place RSS setup.
 	 */
-	if (mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+	if (mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) {
 		ret = i40e_vmdq_setup(dev);
 		if (ret)
 			goto err;
 	}
 
-	if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if (mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		ret = i40e_dcb_setup(dev);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "failed to configure DCB.");
@@ -2214,17 +2214,17 @@ i40e_parse_link_speeds(uint16_t link_speeds)
 {
 	uint8_t link_speed = I40E_LINK_SPEED_UNKNOWN;
 
-	if (link_speeds & ETH_LINK_SPEED_40G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
 		link_speed |= I40E_LINK_SPEED_40GB;
-	if (link_speeds & ETH_LINK_SPEED_25G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
 		link_speed |= I40E_LINK_SPEED_25GB;
-	if (link_speeds & ETH_LINK_SPEED_20G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_20G)
 		link_speed |= I40E_LINK_SPEED_20GB;
-	if (link_speeds & ETH_LINK_SPEED_10G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 		link_speed |= I40E_LINK_SPEED_10GB;
-	if (link_speeds & ETH_LINK_SPEED_1G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
 		link_speed |= I40E_LINK_SPEED_1GB;
-	if (link_speeds & ETH_LINK_SPEED_100M)
+	if (link_speeds & RTE_ETH_LINK_SPEED_100M)
 		link_speed |= I40E_LINK_SPEED_100MB;
 
 	return link_speed;
@@ -2332,13 +2332,13 @@ i40e_apply_link_speed(struct rte_eth_dev *dev)
 	abilities |= I40E_AQ_PHY_ENABLE_ATOMIC_LINK |
 		     I40E_AQ_PHY_LINK_ENABLED;
 
-	if (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
-		conf->link_speeds = ETH_LINK_SPEED_40G |
-				    ETH_LINK_SPEED_25G |
-				    ETH_LINK_SPEED_20G |
-				    ETH_LINK_SPEED_10G |
-				    ETH_LINK_SPEED_1G |
-				    ETH_LINK_SPEED_100M;
+	if (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
+		conf->link_speeds = RTE_ETH_LINK_SPEED_40G |
+				    RTE_ETH_LINK_SPEED_25G |
+				    RTE_ETH_LINK_SPEED_20G |
+				    RTE_ETH_LINK_SPEED_10G |
+				    RTE_ETH_LINK_SPEED_1G |
+				    RTE_ETH_LINK_SPEED_100M;
 
 		abilities |= I40E_AQ_PHY_AN_ENABLED;
 	} else {
@@ -2876,34 +2876,34 @@ update_link_reg(struct i40e_hw *hw, struct rte_eth_link *link)
 	/* Parse the link status */
 	switch (link_speed) {
 	case I40E_REG_SPEED_0:
-		link->link_speed = ETH_SPEED_NUM_100M;
+		link->link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case I40E_REG_SPEED_1:
-		link->link_speed = ETH_SPEED_NUM_1G;
+		link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case I40E_REG_SPEED_2:
 		if (hw->mac.type == I40E_MAC_X722)
-			link->link_speed = ETH_SPEED_NUM_2_5G;
+			link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		else
-			link->link_speed = ETH_SPEED_NUM_10G;
+			link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case I40E_REG_SPEED_3:
 		if (hw->mac.type == I40E_MAC_X722) {
-			link->link_speed = ETH_SPEED_NUM_5G;
+			link->link_speed = RTE_ETH_SPEED_NUM_5G;
 		} else {
 			reg_val = I40E_READ_REG(hw, I40E_PRTMAC_MACC);
 
 			if (reg_val & I40E_REG_MACC_25GB)
-				link->link_speed = ETH_SPEED_NUM_25G;
+				link->link_speed = RTE_ETH_SPEED_NUM_25G;
 			else
-				link->link_speed = ETH_SPEED_NUM_40G;
+				link->link_speed = RTE_ETH_SPEED_NUM_40G;
 		}
 		break;
 	case I40E_REG_SPEED_4:
 		if (hw->mac.type == I40E_MAC_X722)
-			link->link_speed = ETH_SPEED_NUM_10G;
+			link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		else
-			link->link_speed = ETH_SPEED_NUM_20G;
+			link->link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "Unknown link speed info %u", link_speed);
@@ -2930,8 +2930,8 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
 		status = i40e_aq_get_link_info(hw, enable_lse,
 						&link_status, NULL);
 		if (unlikely(status != I40E_SUCCESS)) {
-			link->link_speed = ETH_SPEED_NUM_NONE;
-			link->link_duplex = ETH_LINK_FULL_DUPLEX;
+			link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+			link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR, "Failed to get link info");
 			return;
 		}
@@ -2946,28 +2946,28 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
 	/* Parse the link status */
 	switch (link_status.link_speed) {
 	case I40E_LINK_SPEED_100MB:
-		link->link_speed = ETH_SPEED_NUM_100M;
+		link->link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case I40E_LINK_SPEED_1GB:
-		link->link_speed = ETH_SPEED_NUM_1G;
+		link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case I40E_LINK_SPEED_10GB:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case I40E_LINK_SPEED_20GB:
-		link->link_speed = ETH_SPEED_NUM_20G;
+		link->link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case I40E_LINK_SPEED_25GB:
-		link->link_speed = ETH_SPEED_NUM_25G;
+		link->link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case I40E_LINK_SPEED_40GB:
-		link->link_speed = ETH_SPEED_NUM_40G;
+		link->link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	default:
 		if (link->link_status)
-			link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+			link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		else
-			link->link_speed = ETH_SPEED_NUM_NONE;
+			link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 }
@@ -2984,9 +2984,9 @@ i40e_dev_link_update(struct rte_eth_dev *dev,
 	memset(&link, 0, sizeof(link));
 
 	/* i40e uses full duplex only */
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 
 	if (!wait_to_complete && !enable_lse)
 		update_link_reg(hw, &link);
@@ -3720,33 +3720,33 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_KEEP_CRC |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_RSS_HASH;
-
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
 		dev_info->tx_queue_offload_capa;
 	dev_info->dev_capa =
 		RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -3805,7 +3805,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	if (I40E_PHY_TYPE_SUPPORT_40G(hw->phy.phy_types)) {
 		/* For XL710 */
-		dev_info->speed_capa = ETH_LINK_SPEED_40G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_40G;
 		dev_info->default_rxportconf.nb_queues = 2;
 		dev_info->default_txportconf.nb_queues = 2;
 		if (dev->data->nb_rx_queues == 1)
@@ -3819,17 +3819,17 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	} else if (I40E_PHY_TYPE_SUPPORT_25G(hw->phy.phy_types)) {
 		/* For XXV710 */
-		dev_info->speed_capa = ETH_LINK_SPEED_25G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_25G;
 		dev_info->default_rxportconf.nb_queues = 1;
 		dev_info->default_txportconf.nb_queues = 1;
 		dev_info->default_rxportconf.ring_size = 256;
 		dev_info->default_txportconf.ring_size = 256;
 	} else {
 		/* For X710 */
-		dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
 		dev_info->default_rxportconf.nb_queues = 1;
 		dev_info->default_txportconf.nb_queues = 1;
-		if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_10G) {
+		if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_10G) {
 			dev_info->default_rxportconf.ring_size = 512;
 			dev_info->default_txportconf.ring_size = 256;
 		} else {
@@ -3868,7 +3868,7 @@ i40e_vlan_tpid_set_by_registers(struct rte_eth_dev *dev,
 	int ret;
 
 	if (qinq) {
-		if (vlan_type == ETH_VLAN_TYPE_OUTER)
+		if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
 			reg_id = 2;
 	}
 
@@ -3915,12 +3915,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
 	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	int qinq = dev->data->dev_conf.rxmode.offloads &
-		   DEV_RX_OFFLOAD_VLAN_EXTEND;
+		   RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	int ret = 0;
 
-	if ((vlan_type != ETH_VLAN_TYPE_INNER &&
-	     vlan_type != ETH_VLAN_TYPE_OUTER) ||
-	    (!qinq && vlan_type == ETH_VLAN_TYPE_INNER)) {
+	if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+	     vlan_type != RTE_ETH_VLAN_TYPE_OUTER) ||
+	    (!qinq && vlan_type == RTE_ETH_VLAN_TYPE_INNER)) {
 		PMD_DRV_LOG(ERR,
 			    "Unsupported vlan type.");
 		return -EINVAL;
@@ -3934,12 +3934,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
 	/* 802.1ad frames ability is added in NVM API 1.7*/
 	if (hw->flags & I40E_HW_FLAG_802_1AD_CAPABLE) {
 		if (qinq) {
-			if (vlan_type == ETH_VLAN_TYPE_OUTER)
+			if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
 				hw->first_tag = rte_cpu_to_le_16(tpid);
-			else if (vlan_type == ETH_VLAN_TYPE_INNER)
+			else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER)
 				hw->second_tag = rte_cpu_to_le_16(tpid);
 		} else {
-			if (vlan_type == ETH_VLAN_TYPE_OUTER)
+			if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
 				hw->second_tag = rte_cpu_to_le_16(tpid);
 		}
 		ret = i40e_aq_set_switch_config(hw, 0, 0, 0, NULL);
@@ -3998,37 +3998,37 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			i40e_vsi_config_vlan_filter(vsi, TRUE);
 		else
 			i40e_vsi_config_vlan_filter(vsi, FALSE);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			i40e_vsi_config_vlan_stripping(vsi, TRUE);
 		else
 			i40e_vsi_config_vlan_stripping(vsi, FALSE);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
 			i40e_vsi_config_double_vlan(vsi, TRUE);
 			/* Set global registers with default ethertype. */
-			i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+			i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					   RTE_ETHER_TYPE_VLAN);
-			i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+			i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
 					   RTE_ETHER_TYPE_VLAN);
 		}
 		else
 			i40e_vsi_config_double_vlan(vsi, FALSE);
 	}
 
-	if (mask & ETH_QINQ_STRIP_MASK) {
+	if (mask & RTE_ETH_QINQ_STRIP_MASK) {
 		/* Enable or disable outer VLAN stripping */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
 			i40e_vsi_config_outer_vlan_stripping(vsi, TRUE);
 		else
 			i40e_vsi_config_outer_vlan_stripping(vsi, FALSE);
@@ -4111,17 +4111,17 @@ i40e_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	 /* Return current mode according to actual setting*/
 	switch (hw->fc.current_mode) {
 	case I40E_FC_FULL:
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	case I40E_FC_TX_PAUSE:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case I40E_FC_RX_PAUSE:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case I40E_FC_NONE:
 	default:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	};
 
 	return 0;
@@ -4137,10 +4137,10 @@ i40e_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	struct i40e_hw *hw;
 	struct i40e_pf *pf;
 	enum i40e_fc_mode rte_fcmode_2_i40e_fcmode[] = {
-		[RTE_FC_NONE] = I40E_FC_NONE,
-		[RTE_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
-		[RTE_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
-		[RTE_FC_FULL] = I40E_FC_FULL
+		[RTE_ETH_FC_NONE] = I40E_FC_NONE,
+		[RTE_ETH_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
+		[RTE_ETH_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
+		[RTE_ETH_FC_FULL] = I40E_FC_FULL
 	};
 
 	/* high_water field in the rte_eth_fc_conf using the kilobytes unit */
@@ -4287,7 +4287,7 @@ i40e_macaddr_add(struct rte_eth_dev *dev,
 	}
 
 	rte_memcpy(&mac_filter.mac_addr, mac_addr, RTE_ETHER_ADDR_LEN);
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 		mac_filter.filter_type = I40E_MACVLAN_PERFECT_MATCH;
 	else
 		mac_filter.filter_type = I40E_MAC_PERFECT_MATCH;
@@ -4440,7 +4440,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
 	int ret;
 
 	if (reta_size != lut_size ||
-		reta_size > ETH_RSS_RETA_SIZE_512) {
+		reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
 		PMD_DRV_LOG(ERR,
 			"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
 			reta_size, lut_size);
@@ -4456,8 +4456,8 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
 	if (ret)
 		goto out;
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			lut[i] = reta_conf[idx].reta[shift];
 	}
@@ -4483,7 +4483,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
 	int ret;
 
 	if (reta_size != lut_size ||
-		reta_size > ETH_RSS_RETA_SIZE_512) {
+		reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
 		PMD_DRV_LOG(ERR,
 			"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
 			reta_size, lut_size);
@@ -4500,8 +4500,8 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
 	if (ret)
 		goto out;
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = lut[i];
 	}
@@ -4818,7 +4818,7 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
 			pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
 				hw->func_caps.num_vsis - vsi_count);
 			pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
-				ETH_64_POOLS);
+				RTE_ETH_64_POOLS);
 			if (pf->max_nb_vmdq_vsi) {
 				pf->flags |= I40E_FLAG_VMDQ;
 				pf->vmdq_nb_qps = pf->vmdq_nb_qp_max;
@@ -6104,10 +6104,10 @@ i40e_dev_init_vlan(struct rte_eth_dev *dev)
 	int mask = 0;
 
 	/* Apply vlan offload setting */
-	mask = ETH_VLAN_STRIP_MASK |
-	       ETH_QINQ_STRIP_MASK |
-	       ETH_VLAN_FILTER_MASK |
-	       ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK |
+	       RTE_ETH_QINQ_STRIP_MASK |
+	       RTE_ETH_VLAN_FILTER_MASK |
+	       RTE_ETH_VLAN_EXTEND_MASK;
 	ret = i40e_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_DRV_LOG(INFO, "Failed to update vlan offload");
@@ -6236,9 +6236,9 @@ i40e_pf_setup(struct i40e_pf *pf)
 
 	/* Configure filter control */
 	memset(&settings, 0, sizeof(settings));
-	if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_128)
+	if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_128)
 		settings.hash_lut_size = I40E_HASH_LUT_SIZE_128;
-	else if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_512)
+	else if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_512)
 		settings.hash_lut_size = I40E_HASH_LUT_SIZE_512;
 	else {
 		PMD_DRV_LOG(ERR, "Hash lookup table size (%u) not supported",
@@ -7098,7 +7098,7 @@ i40e_find_vlan_filter(struct i40e_vsi *vsi,
 {
 	uint32_t vid_idx, vid_bit;
 
-	if (vlan_id > ETH_VLAN_ID_MAX)
+	if (vlan_id > RTE_ETH_VLAN_ID_MAX)
 		return 0;
 
 	vid_idx = I40E_VFTA_IDX(vlan_id);
@@ -7133,7 +7133,7 @@ i40e_set_vlan_filter(struct i40e_vsi *vsi,
 	struct i40e_aqc_add_remove_vlan_element_data vlan_data = {0};
 	int ret;
 
-	if (vlan_id > ETH_VLAN_ID_MAX)
+	if (vlan_id > RTE_ETH_VLAN_ID_MAX)
 		return;
 
 	i40e_store_vlan_filter(vsi, vlan_id, on);
@@ -7727,25 +7727,25 @@ static int
 i40e_dev_get_filter_type(uint16_t filter_type, uint16_t *flag)
 {
 	switch (filter_type) {
-	case RTE_TUNNEL_FILTER_IMAC_IVLAN:
+	case RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN;
 		break;
-	case RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID:
+	case RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID;
 		break;
-	case RTE_TUNNEL_FILTER_IMAC_TENID:
+	case RTE_ETH_TUNNEL_FILTER_IMAC_TENID:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID;
 		break;
-	case RTE_TUNNEL_FILTER_OMAC_TENID_IMAC:
+	case RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC;
 		break;
-	case ETH_TUNNEL_FILTER_IMAC:
+	case RTE_ETH_TUNNEL_FILTER_IMAC:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC;
 		break;
-	case ETH_TUNNEL_FILTER_OIP:
+	case RTE_ETH_TUNNEL_FILTER_OIP:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_OIP;
 		break;
-	case ETH_TUNNEL_FILTER_IIP:
+	case RTE_ETH_TUNNEL_FILTER_IIP:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IIP;
 		break;
 	default:
@@ -8711,16 +8711,16 @@ i40e_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
 					  I40E_AQC_TUNNEL_TYPE_VXLAN);
 		break;
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
 					  I40E_AQC_TUNNEL_TYPE_VXLAN_GPE);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -1;
 		break;
@@ -8746,12 +8746,12 @@ i40e_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		ret = i40e_del_vxlan_port(pf, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -1;
 		break;
@@ -8843,7 +8843,7 @@ int
 i40e_pf_reset_rss_reta(struct i40e_pf *pf)
 {
 	struct i40e_hw *hw = &pf->adapter->hw;
-	uint8_t lut[ETH_RSS_RETA_SIZE_512];
+	uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
 	uint32_t i;
 	int num;
 
@@ -8851,7 +8851,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
 	 * configured. It's necessary to calculate the actual PF
 	 * queues that are configured.
 	 */
-	if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+	if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
 		num = i40e_pf_calc_configured_queues_num(pf);
 	else
 		num = pf->dev_data->nb_rx_queues;
@@ -8930,7 +8930,7 @@ i40e_pf_config_rss(struct i40e_pf *pf)
 	rss_hf = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
 	mq_mode = pf->dev_data->dev_conf.rxmode.mq_mode;
 	if (!(rss_hf & pf->adapter->flow_types_mask) ||
-	    !(mq_mode & ETH_MQ_RX_RSS_FLAG))
+	    !(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 		return 0;
 
 	hw = I40E_PF_TO_HW(pf);
@@ -10267,16 +10267,16 @@ i40e_start_timecounters(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 
 	switch (link.link_speed) {
-	case ETH_SPEED_NUM_40G:
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_25G:
 		tsync_inc_l = I40E_PTP_40GB_INCVAL & 0xFFFFFFFF;
 		tsync_inc_h = I40E_PTP_40GB_INCVAL >> 32;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		tsync_inc_l = I40E_PTP_10GB_INCVAL & 0xFFFFFFFF;
 		tsync_inc_h = I40E_PTP_10GB_INCVAL >> 32;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		tsync_inc_l = I40E_PTP_1GB_INCVAL & 0xFFFFFFFF;
 		tsync_inc_h = I40E_PTP_1GB_INCVAL >> 32;
 		break;
@@ -10504,7 +10504,7 @@ i40e_parse_dcb_configure(struct rte_eth_dev *dev,
 	else
 		*tc_map = RTE_LEN2MASK(dcb_rx_conf->nb_tcs, uint8_t);
 
-	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		dcb_cfg->pfc.willing = 0;
 		dcb_cfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS;
 		dcb_cfg->pfc.pfcenable = *tc_map;
@@ -11012,7 +11012,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
 	uint16_t bsf, tc_mapping;
 	int i, j = 0;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = rte_bsf32(vsi->enabled_tc + 1);
 	else
 		dcb_info->nb_tcs = 1;
@@ -11060,7 +11060,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
 				dcb_info->tc_queue.tc_rxq[j][i].nb_queue;
 		}
 		j++;
-	} while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, ETH_MAX_VMDQ_POOL));
+	} while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, RTE_ETH_MAX_VMDQ_POOL));
 	return 0;
 }
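
From the API consumer's view, the renamed RTE_ETH_FC_* modes round-trip through rte_eth_dev_flow_ctrl_get()/set(); a minimal read-modify-write sketch, assuming the PMD implements both callbacks:

#include <string.h>
#include <rte_ethdev.h>

/* Minimal read-modify-write sketch with the renamed RTE_ETH_FC_* modes. */
static int
set_fc_full(uint16_t port_id)
{
	struct rte_eth_fc_conf fc_conf;
	int ret;

	memset(&fc_conf, 0, sizeof(fc_conf));
	ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
	if (ret != 0)
		return ret;
	fc_conf.mode = RTE_ETH_FC_FULL;
	return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}
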
 
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 1d57b9617e66..d8042abbd9be 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -147,17 +147,17 @@ enum i40e_flxpld_layer_idx {
 		       I40E_FLAG_RSS_AQ_CAPABLE)
 
 #define I40E_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD)
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L2_PAYLOAD)
 
 /* All bits of RSS hash enable for X722*/
 #define I40E_RSS_HENA_ALL_X722 ( \
@@ -1063,7 +1063,7 @@ struct i40e_rte_flow_rss_conf {
 	uint8_t key[(I40E_VFQF_HKEY_MAX_INDEX > I40E_PFQF_HKEY_MAX_INDEX ?
 		     I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
 		    sizeof(uint32_t)];		/**< Hash key. */
-	uint16_t queue[ETH_RSS_RETA_SIZE_512];	/**< Queues indices to use. */
+	uint16_t queue[RTE_ETH_RSS_RETA_SIZE_512];	/**< Queues indices to use. */
 
 	bool symmetric_enable;		/**< true, if enable symmetric */
 	uint64_t config_pctypes;	/**< All PCTYPES with the flow  */
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index e41a84f1d737..9acaa1875105 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2015,7 +2015,7 @@ i40e_get_outer_vlan(struct rte_eth_dev *dev)
 {
 	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	int qinq = dev->data->dev_conf.rxmode.offloads &
-		DEV_RX_OFFLOAD_VLAN_EXTEND;
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	uint64_t reg_r = 0;
 	uint16_t reg_id;
 	uint16_t tpid;
@@ -3601,13 +3601,13 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
 }
 
 static uint16_t i40e_supported_tunnel_filter_types[] = {
-	ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_TENID |
-	ETH_TUNNEL_FILTER_IVLAN,
-	ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_IVLAN,
-	ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_TENID,
-	ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_TENID |
-	ETH_TUNNEL_FILTER_IMAC,
-	ETH_TUNNEL_FILTER_IMAC,
+	RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID |
+	RTE_ETH_TUNNEL_FILTER_IVLAN,
+	RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
+	RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID,
+	RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_TENID |
+	RTE_ETH_TUNNEL_FILTER_IMAC,
+	RTE_ETH_TUNNEL_FILTER_IMAC,
 };
 
 static int
@@ -3697,12 +3697,12 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 					rte_memcpy(&filter->outer_mac,
 						   &eth_spec->dst,
 						   RTE_ETHER_ADDR_LEN);
-					filter_type |= ETH_TUNNEL_FILTER_OMAC;
+					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
 						   &eth_spec->dst,
 						   RTE_ETHER_ADDR_LEN);
-					filter_type |= ETH_TUNNEL_FILTER_IMAC;
+					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
 			}
 			break;
@@ -3724,7 +3724,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 					filter->inner_vlan =
 					      rte_be_to_cpu_16(vlan_spec->tci) &
 					      I40E_VLAN_TCI_MASK;
-				filter_type |= ETH_TUNNEL_FILTER_IVLAN;
+				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
@@ -3798,7 +3798,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 					   vxlan_spec->vni, 3);
 				filter->tenant_id =
 					rte_be_to_cpu_32(tenant_id_be);
-				filter_type |= ETH_TUNNEL_FILTER_TENID;
+				filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
 			}
 
 			vxlan_flag = 1;
@@ -3927,12 +3927,12 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 					rte_memcpy(&filter->outer_mac,
 						   &eth_spec->dst,
 						   RTE_ETHER_ADDR_LEN);
-					filter_type |= ETH_TUNNEL_FILTER_OMAC;
+					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
 						   &eth_spec->dst,
 						   RTE_ETHER_ADDR_LEN);
-					filter_type |= ETH_TUNNEL_FILTER_IMAC;
+					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
 			}
 
@@ -3955,7 +3955,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 					filter->inner_vlan =
 					      rte_be_to_cpu_16(vlan_spec->tci) &
 					      I40E_VLAN_TCI_MASK;
-				filter_type |= ETH_TUNNEL_FILTER_IVLAN;
+				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
@@ -4050,7 +4050,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 					   nvgre_spec->tni, 3);
 				filter->tenant_id =
 					rte_be_to_cpu_32(tenant_id_be);
-				filter_type |= ETH_TUNNEL_FILTER_TENID;
+				filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
 			}
 
 			nvgre_flag = 1;
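
For context on the filter_type bits renamed in the two tunnel parsers
above: the parser ORs one flag per matched field, and the result must
equal one of the combinations in i40e_supported_tunnel_filter_types[].
An illustrative sketch (assuming the renamed ethdev macros are in
scope via rte_ethdev.h):

    #include <stdint.h>
    #include <rte_ethdev.h>

    /* Illustrative only: matching inner MAC plus VNI yields the
     * IMAC|TENID combination listed in the supported table. */
    static uint16_t
    example_vxlan_filter_type(void)
    {
    	uint16_t filter_type = 0;

    	filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;  /* inner MAC */
    	filter_type |= RTE_ETH_TUNNEL_FILTER_TENID; /* VNI */
    	return filter_type;
    }
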
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 5da3d187076e..8962e9d97aa7 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -105,47 +105,47 @@ struct i40e_hash_map_rss_inset {
 
 const struct i40e_hash_map_rss_inset i40e_hash_rss_inset[] = {
 	/* IPv4 */
-	{ ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
-	{ ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+	{ RTE_ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+	{ RTE_ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
 
-	{ ETH_RSS_NONFRAG_IPV4_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	  I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
 
-	{ ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
 
 	/* IPv6 */
-	{ ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
-	{ ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+	{ RTE_ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+	{ RTE_ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
 
-	{ ETH_RSS_NONFRAG_IPV6_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	  I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
 
-	{ ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
 
 	/* Port */
-	{ ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
+	{ RTE_ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
 	/* Ether */
-	{ ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
-	{ ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
+	{ RTE_ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
+	{ RTE_ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
 
 	/* VLAN */
-	{ ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
-	{ ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
+	{ RTE_ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
+	{ RTE_ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
 };
 
 #define I40E_HASH_VOID_NEXT_ALLOW	BIT_ULL(RTE_FLOW_ITEM_TYPE_ETH)
@@ -208,30 +208,30 @@ struct i40e_hash_match_pattern {
 #define I40E_HASH_MAP_CUS_PATTERN(pattern, rss_mask, cus_pctype) { \
 	pattern, rss_mask, true, cus_pctype }
 
-#define I40E_HASH_L2_RSS_MASK		(ETH_RSS_VLAN | ETH_RSS_ETH | \
-					ETH_RSS_L2_SRC_ONLY | \
-					ETH_RSS_L2_DST_ONLY)
+#define I40E_HASH_L2_RSS_MASK		(RTE_ETH_RSS_VLAN | RTE_ETH_RSS_ETH | \
+					RTE_ETH_RSS_L2_SRC_ONLY | \
+					RTE_ETH_RSS_L2_DST_ONLY)
 
 #define I40E_HASH_L23_RSS_MASK		(I40E_HASH_L2_RSS_MASK | \
-					ETH_RSS_L3_SRC_ONLY | \
-					ETH_RSS_L3_DST_ONLY)
+					RTE_ETH_RSS_L3_SRC_ONLY | \
+					RTE_ETH_RSS_L3_DST_ONLY)
 
-#define I40E_HASH_IPV4_L23_RSS_MASK	(ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
-#define I40E_HASH_IPV6_L23_RSS_MASK	(ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV4_L23_RSS_MASK	(RTE_ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV6_L23_RSS_MASK	(RTE_ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
 
 #define I40E_HASH_L234_RSS_MASK		(I40E_HASH_L23_RSS_MASK | \
-					ETH_RSS_PORT | ETH_RSS_L4_SRC_ONLY | \
-					ETH_RSS_L4_DST_ONLY)
+					RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY | \
+					RTE_ETH_RSS_L4_DST_ONLY)
 
-#define I40E_HASH_IPV4_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV4)
-#define I40E_HASH_IPV6_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV6)
+#define I40E_HASH_IPV4_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV4)
+#define I40E_HASH_IPV6_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV6)
 
-#define I40E_HASH_L4_TYPES		(ETH_RSS_NONFRAG_IPV4_TCP | \
-					ETH_RSS_NONFRAG_IPV4_UDP | \
-					ETH_RSS_NONFRAG_IPV4_SCTP | \
-					ETH_RSS_NONFRAG_IPV6_TCP | \
-					ETH_RSS_NONFRAG_IPV6_UDP | \
-					ETH_RSS_NONFRAG_IPV6_SCTP)
+#define I40E_HASH_L4_TYPES		(RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+					RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+					RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+					RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+					RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+					RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 /* Current supported patterns and RSS types.
  * All items that have the same pattern types are together.
@@ -239,72 +239,72 @@ struct i40e_hash_match_pattern {
 static const struct i40e_hash_match_pattern match_patterns[] = {
 	/* Ether */
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_ETH,
-			      ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
+			      RTE_ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
 			      I40E_FILTER_PCTYPE_L2_PAYLOAD),
 
 	/* IPv4 */
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
-			      ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
+			      RTE_ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_FRAG_IPV4),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
-			      ETH_RSS_NONFRAG_IPV4_OTHER |
+			      RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
 			      I40E_HASH_IPV4_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_OTHER),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_TCP,
-			      ETH_RSS_NONFRAG_IPV4_TCP |
+			      RTE_ETH_RSS_NONFRAG_IPV4_TCP |
 			      I40E_HASH_IPV4_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_TCP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_UDP,
-			      ETH_RSS_NONFRAG_IPV4_UDP |
+			      RTE_ETH_RSS_NONFRAG_IPV4_UDP |
 			      I40E_HASH_IPV4_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_UDP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_SCTP,
-			      ETH_RSS_NONFRAG_IPV4_SCTP |
+			      RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
 			      I40E_HASH_IPV4_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_SCTP),
 
 	/* IPv6 */
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
-			      ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
+			      RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_FRAG_IPV6),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
-			      ETH_RSS_NONFRAG_IPV6_OTHER |
+			      RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			      I40E_HASH_IPV6_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_OTHER),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_FRAG,
-			      ETH_RSS_FRAG_IPV6 | I40E_HASH_L23_RSS_MASK,
+			      RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_FRAG_IPV6),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_TCP,
-			      ETH_RSS_NONFRAG_IPV6_TCP |
+			      RTE_ETH_RSS_NONFRAG_IPV6_TCP |
 			      I40E_HASH_IPV6_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_TCP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_UDP,
-			      ETH_RSS_NONFRAG_IPV6_UDP |
+			      RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			      I40E_HASH_IPV6_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_UDP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_SCTP,
-			      ETH_RSS_NONFRAG_IPV6_SCTP |
+			      RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
 			      I40E_HASH_IPV6_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_SCTP),
 
 	/* ESP */
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_UDP_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_UDP_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
 
 	/* GTPC */
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPC,
@@ -319,27 +319,27 @@ static const struct i40e_hash_match_pattern match_patterns[] = {
 				  I40E_HASH_IPV4_L234_RSS_MASK,
 				  I40E_CUSTOMIZED_GTPU),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV4,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV6,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU,
 				  I40E_HASH_IPV6_L234_RSS_MASK,
 				  I40E_CUSTOMIZED_GTPU),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV4,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV6,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
 
 	/* L2TPV3 */
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_L2TPV3,
-				  ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
+				  RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_L2TPV3,
-				  ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
+				  RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
 
 	/* AH */
-	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, ETH_RSS_AH,
+	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, RTE_ETH_RSS_AH,
 				  I40E_CUSTOMIZED_AH_IPV4),
-	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, ETH_RSS_AH,
+	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, RTE_ETH_RSS_AH,
 				  I40E_CUSTOMIZED_AH_IPV6),
 };
 
@@ -575,29 +575,29 @@ i40e_hash_get_inset(uint64_t rss_types)
 	/* If SRC_ONLY and DST_ONLY of the same level are used simultaneously,
 	 * it is the same case as none of them are added.
 	 */
-	mask = rss_types & (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY);
-	if (mask == ETH_RSS_L2_SRC_ONLY)
+	mask = rss_types & (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY);
+	if (mask == RTE_ETH_RSS_L2_SRC_ONLY)
 		inset &= ~I40E_INSET_DMAC;
-	else if (mask == ETH_RSS_L2_DST_ONLY)
+	else if (mask == RTE_ETH_RSS_L2_DST_ONLY)
 		inset &= ~I40E_INSET_SMAC;
 
-	mask = rss_types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
-	if (mask == ETH_RSS_L3_SRC_ONLY)
+	mask = rss_types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
+	if (mask == RTE_ETH_RSS_L3_SRC_ONLY)
 		inset &= ~(I40E_INSET_IPV4_DST | I40E_INSET_IPV6_DST);
-	else if (mask == ETH_RSS_L3_DST_ONLY)
+	else if (mask == RTE_ETH_RSS_L3_DST_ONLY)
 		inset &= ~(I40E_INSET_IPV4_SRC | I40E_INSET_IPV6_SRC);
 
-	mask = rss_types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
-	if (mask == ETH_RSS_L4_SRC_ONLY)
+	mask = rss_types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
+	if (mask == RTE_ETH_RSS_L4_SRC_ONLY)
 		inset &= ~I40E_INSET_DST_PORT;
-	else if (mask == ETH_RSS_L4_DST_ONLY)
+	else if (mask == RTE_ETH_RSS_L4_DST_ONLY)
 		inset &= ~I40E_INSET_SRC_PORT;
 
 	if (rss_types & I40E_HASH_L4_TYPES) {
 		uint64_t l3_mask = rss_types &
-				   (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+				   (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
 		uint64_t l4_mask = rss_types &
-				   (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+				   (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
 
 		if (l3_mask && !l4_mask)
 			inset &= ~(I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT);
@@ -836,7 +836,7 @@ i40e_hash_config(struct i40e_pf *pf,
 
 	/* Update lookup table */
 	if (rss_info->queue_num > 0) {
-		uint8_t lut[ETH_RSS_RETA_SIZE_512];
+		uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
 		uint32_t i, j = 0;
 
 		for (i = 0; i < hw->func_caps.rss_table_size; i++) {
@@ -943,7 +943,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
 			    "RSS key is ignored when queues specified");
 
 	pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+	if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
 		max_queue = i40e_pf_calc_configured_queues_num(pf);
 	else
 		max_queue = pf->dev_data->nb_rx_queues;
@@ -1081,22 +1081,22 @@ i40e_hash_validate_rss_types(uint64_t rss_types)
 	uint64_t type, mask;
 
 	/* Validate L2 */
-	type = ETH_RSS_ETH & rss_types;
-	mask = (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY) & rss_types;
+	type = RTE_ETH_RSS_ETH & rss_types;
+	mask = (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY) & rss_types;
 	if (!type && mask)
 		return false;
 
 	/* Validate L3 */
-	type = (I40E_HASH_L4_TYPES | ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-	       ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_IPV6 |
-	       ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
-	mask = (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY) & rss_types;
+	type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+	       RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_IPV6 |
+	       RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
+	mask = (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY) & rss_types;
 	if (!type && mask)
 		return false;
 
 	/* Validate L4 */
-	type = (I40E_HASH_L4_TYPES | ETH_RSS_PORT) & rss_types;
-	mask = (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY) & rss_types;
+	type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_PORT) & rss_types;
+	mask = (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY) & rss_types;
 	if (!type && mask)
 		return false;
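
The SRC_ONLY/DST_ONLY handling renamed above follows one rule worth
spelling out: selecting both halves of the same layer is treated
exactly like selecting neither, so the full tuple is hashed. Reduced
to a sketch for L3:

    #include <stdint.h>
    #include <rte_ethdev.h>

    /* Sketch of the reduction: both L3 attribute bits set means
     * "hash on src and dst as usual", i.e. no attribute at all. */
    static uint64_t
    effective_l3_mask(uint64_t rss_types)
    {
    	uint64_t m = rss_types &
    		(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);

    	if (m == (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY))
    		return 0;
    	return m;
    }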
 
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index e2d8b2b5f7f1..ccb3924a5f68 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -1207,24 +1207,24 @@ i40e_notify_vf_link_status(struct rte_eth_dev *dev, struct i40e_pf_vf *vf)
 	event.event_data.link_event.link_status =
 		dev->data->dev_link.link_status;
 
-	/* need to convert the ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
+	/* need to convert the RTE_ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
 	switch (dev->data->dev_link.link_speed) {
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_100MB;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_1GB;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_10GB;
 		break;
-	case ETH_SPEED_NUM_20G:
+	case RTE_ETH_SPEED_NUM_20G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_20GB;
 		break;
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_25G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_25GB;
 		break;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_40GB;
 		break;
 	default:
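
The RTE_ETH_SPEED_NUM_* constants in this switch are plain Mb/s
values, which is what allows the one-to-one mapping to virtchnl
speeds. On the application side the same values come back through the
link API; a small sketch (assuming a started port):

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Sketch: print link speed and status; link.link_speed carries
     * the same RTE_ETH_SPEED_NUM_* values (Mb/s) mapped above. */
    static void
    print_link(uint16_t port_id)
    {
    	struct rte_eth_link link;

    	if (rte_eth_link_get_nowait(port_id, &link) != 0)
    		return;
    	printf("port %u: %s, %s\n", port_id,
    	       rte_eth_link_speed_to_str(link.link_speed),
    	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down");
    }
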
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 554b1142c136..a13bb81115f4 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1329,7 +1329,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 	for (i = 0; i < tx_rs_thresh; i++)
 		rte_prefetch0((txep + i)->mbuf);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
 		if (k) {
 			for (j = 0; j != k; j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
 				for (i = 0; i < RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
@@ -1995,7 +1995,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->queue_id = queue_idx;
 	rxq->reg_idx = reg_idx;
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -2243,7 +2243,7 @@ i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
 	}
 	/* check simple tx conflict */
 	if (ad->tx_simple_allowed) {
-		if ((txq->offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
+		if ((txq->offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
 				txq->tx_rs_thresh < RTE_PMD_I40E_TX_MAX_BURST) {
 			PMD_DRV_LOG(ERR, "No-simple tx is required.");
 			return -EINVAL;
@@ -3417,7 +3417,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
 	/* Use a simple Tx queue if possible (only fast free is allowed) */
 	ad->tx_simple_allowed =
 		(txq->offloads ==
-		 (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+		 (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
 		 txq->tx_rs_thresh >= RTE_PMD_I40E_TX_MAX_BURST);
 	ad->tx_vec_allowed = (ad->tx_simple_allowed &&
 			txq->tx_rs_thresh <= RTE_I40E_TX_MAX_FREE_BUF_SZ);
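
The simple-Tx gate above admits exactly one offload, MBUF_FAST_FREE.
A sketch of requesting it at queue setup (hypothetical port/queue
ids; the offload must still be within the device's advertised
capabilities):

    #include <rte_ethdev.h>

    /* Sketch: request only fast-free so the driver can stay on the
     * simple Tx path checked in i40e_set_tx_function_flag(). */
    static int
    setup_fast_free_txq(uint16_t port_id, uint16_t qid, uint16_t nb_desc)
    {
    	struct rte_eth_txconf txconf = {
    		.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
    	};

    	return rte_eth_tx_queue_setup(port_id, qid, nb_desc,
    			rte_eth_dev_socket_id(port_id), &txconf);
    }
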
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 2301e6301d7d..5e6eecc50116 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -120,7 +120,7 @@ struct i40e_rx_queue {
 	bool rx_deferred_start; /**< don't start this queue in dev start */
 	uint16_t rx_using_sse; /**<flag indicate the usage of vPMD for rx */
 	uint8_t dcb_tc;         /**< Traffic class of rx queue */
-	uint64_t offloads; /**< Rx offload flags of DEV_RX_OFFLOAD_* */
+	uint64_t offloads; /**< Rx offload flags of RTE_ETH_RX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
@@ -166,7 +166,7 @@ struct i40e_tx_queue {
 	bool q_set; /**< indicate if tx queue has been configured */
 	bool tx_deferred_start; /**< don't start this queue in dev start */
 	uint8_t dcb_tc;         /**< Traffic class of tx queue */
-	uint64_t offloads; /**< Tx offload flags of DEV_RX_OFFLOAD_* */
+	uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 4ffe030fcb64..7abc0821d119 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -900,7 +900,7 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->tx_next_dd - (n - 1);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		void **cache_objs;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index f52e3c567558..f9a7f4655050 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -100,7 +100,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 	  */
 	txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
 		for (i = 0; i < n; i++) {
 			free[i] = txep[i].mbuf;
 			txep[i].mbuf = NULL;
@@ -211,7 +211,7 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
 	struct i40e_adapter *ad =
 		I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 	struct i40e_rx_queue *rxq;
 	uint16_t desc, i;
 	bool first_queue;
@@ -221,11 +221,11 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
 		return -1;
 
 	 /* no header split support */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
 		return -1;
 
 	/* no QinQ support */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 		return -1;
 
 	/**
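
The vector-path gate above rejects HEADER_SPLIT and VLAN_EXTEND, so
an application that wants the vector Rx path should simply not
request them; and any offload request should be guarded by the
capability bits anyway. A sketch of such a guard:

    #include <rte_ethdev.h>

    /* Sketch: only request VLAN_EXTEND when the device reports it,
     * accepting that this disqualifies the vector Rx path here. */
    static int
    request_vlan_extend(uint16_t port_id, struct rte_eth_conf *conf)
    {
    	struct rte_eth_dev_info info;

    	if (rte_eth_dev_info_get(port_id, &info) != 0)
    		return -1;
    	if (!(info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND))
    		return -1;
    	conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
    	return 0;
    }
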
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 12d5a2e48a9b..663c46b91dc5 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -42,30 +42,30 @@ i40e_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX;
 	dev_info->hash_key_size = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
 		sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_64;
 	dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
 	dev_info->max_mac_addrs = I40E_NUM_MACADDR_MAX;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_FILTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_MULTI_SEGS  |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -385,19 +385,19 @@ i40e_vf_representor_vlan_offload_set(struct rte_eth_dev *ethdev, int mask)
 		return -EINVAL;
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* Enable or disable VLAN filtering offload */
 		if (ethdev->data->dev_conf.rxmode.offloads &
-		    DEV_RX_OFFLOAD_VLAN_FILTER)
+		    RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			return i40e_vsi_config_vlan_filter(vsi, TRUE);
 		else
 			return i40e_vsi_config_vlan_filter(vsi, FALSE);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping offload */
 		if (ethdev->data->dev_conf.rxmode.offloads &
-		    DEV_RX_OFFLOAD_VLAN_STRIP)
+		    RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			return i40e_vsi_config_vlan_stripping(vsi, TRUE);
 		else
 			return i40e_vsi_config_vlan_stripping(vsi, FALSE);
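
The *_MASK bits tested above are what ethdev hands the driver when an
application calls rte_eth_dev_set_vlan_offload(); the current setting
is read back with rte_eth_dev_get_vlan_offload(). A read-modify-write
sketch from the application side:

    #include <rte_ethdev.h>

    /* Sketch: enable VLAN stripping on top of whatever VLAN offloads
     * are already active. */
    static int
    enable_vlan_strip(uint16_t port_id)
    {
    	int mask = rte_eth_dev_get_vlan_offload(port_id);

    	if (mask < 0)
    		return mask;
    	mask |= RTE_ETH_VLAN_STRIP_OFFLOAD;
    	return rte_eth_dev_set_vlan_offload(port_id, mask);
    }
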
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 34bfa9af4734..12f541f53926 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -50,18 +50,18 @@
 	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
 
 #define IAVF_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 |         \
-	ETH_RSS_NONFRAG_IPV4_TCP |  \
-	ETH_RSS_NONFRAG_IPV4_UDP |  \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 |         \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP |  \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP |  \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
 
 #define IAVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
 #define IAVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
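
IAVF_RSS_OFFLOAD_ALL above is what the driver reports in
flow_type_rss_offloads, so an application should clip its requested
rss_hf against that field. A minimal configuration sketch
(hypothetical queue counts):

    #include <string.h>
    #include <rte_ethdev.h>

    /* Sketch: enable RSS with only the hash types the device
     * actually advertises. */
    static int
    configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
    	struct rte_eth_dev_info info;
    	struct rte_eth_conf conf;

    	if (rte_eth_dev_info_get(port_id, &info) != 0)
    		return -1;
    	memset(&conf, 0, sizeof(conf));
    	conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
    	conf.rx_adv_conf.rss_conf.rss_hf =
    		(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP) &
    		info.flow_type_rss_offloads;
    	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
    }
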
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 611f1f7722b0..df44df772e4e 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -266,53 +266,53 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
 	static const uint64_t map_hena_rss[] = {
 		/* IPv4 */
 		[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP] =
-				ETH_RSS_NONFRAG_IPV4_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP] =
-				ETH_RSS_NONFRAG_IPV4_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_UDP] =
-				ETH_RSS_NONFRAG_IPV4_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK] =
-				ETH_RSS_NONFRAG_IPV4_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP] =
-				ETH_RSS_NONFRAG_IPV4_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_SCTP] =
-				ETH_RSS_NONFRAG_IPV4_SCTP,
+				RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_OTHER] =
-				ETH_RSS_NONFRAG_IPV4_OTHER,
-		[IAVF_FILTER_PCTYPE_FRAG_IPV4] = ETH_RSS_FRAG_IPV4,
+				RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+		[IAVF_FILTER_PCTYPE_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
 
 		/* IPv6 */
 		[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP] =
-				ETH_RSS_NONFRAG_IPV6_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP] =
-				ETH_RSS_NONFRAG_IPV6_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_UDP] =
-				ETH_RSS_NONFRAG_IPV6_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK] =
-				ETH_RSS_NONFRAG_IPV6_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP] =
-				ETH_RSS_NONFRAG_IPV6_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_SCTP] =
-				ETH_RSS_NONFRAG_IPV6_SCTP,
+				RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_OTHER] =
-				ETH_RSS_NONFRAG_IPV6_OTHER,
-		[IAVF_FILTER_PCTYPE_FRAG_IPV6] = ETH_RSS_FRAG_IPV6,
+				RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+		[IAVF_FILTER_PCTYPE_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
 
 		/* L2 Payload */
-		[IAVF_FILTER_PCTYPE_L2_PAYLOAD] = ETH_RSS_L2_PAYLOAD
+		[IAVF_FILTER_PCTYPE_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
 	};
 
-	const uint64_t ipv4_rss = ETH_RSS_NONFRAG_IPV4_UDP |
-				  ETH_RSS_NONFRAG_IPV4_TCP |
-				  ETH_RSS_NONFRAG_IPV4_SCTP |
-				  ETH_RSS_NONFRAG_IPV4_OTHER |
-				  ETH_RSS_FRAG_IPV4;
+	const uint64_t ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+				  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+				  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+				  RTE_ETH_RSS_FRAG_IPV4;
 
-	const uint64_t ipv6_rss = ETH_RSS_NONFRAG_IPV6_UDP |
-				  ETH_RSS_NONFRAG_IPV6_TCP |
-				  ETH_RSS_NONFRAG_IPV6_SCTP |
-				  ETH_RSS_NONFRAG_IPV6_OTHER |
-				  ETH_RSS_FRAG_IPV6;
+	const uint64_t ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+				  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+				  RTE_ETH_RSS_FRAG_IPV6;
 
 	struct iavf_info *vf =  IAVF_DEV_PRIVATE_TO_VF(adapter);
 	uint64_t caps = 0, hena = 0, valid_rss_hf = 0;
@@ -331,13 +331,13 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
 	}
 
 	/**
-	 * ETH_RSS_IPV4 and ETH_RSS_IPV6 can be considered as 2
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
 	 * generalizations of all other IPv4 and IPv6 RSS types.
 	 */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		rss_hf |= ipv4_rss;
 
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		rss_hf |= ipv6_rss;
 
 	RTE_BUILD_BUG_ON(RTE_DIM(map_hena_rss) > sizeof(uint64_t) * CHAR_BIT);
@@ -363,10 +363,10 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
 	}
 
 	if (valid_rss_hf & ipv4_rss)
-		valid_rss_hf |= rss_hf & ETH_RSS_IPV4;
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
 
 	if (valid_rss_hf & ipv6_rss)
-		valid_rss_hf |= rss_hf & ETH_RSS_IPV6;
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
 
 	if (rss_hf & ~valid_rss_hf)
 		PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
@@ -467,7 +467,7 @@ iavf_dev_vlan_insert_set(struct rte_eth_dev *dev)
 		return 0;
 
 	enable = !!(dev->data->dev_conf.txmode.offloads &
-		    DEV_TX_OFFLOAD_VLAN_INSERT);
+		    RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
 	iavf_config_vlan_insert_v2(adapter, enable);
 
 	return 0;
@@ -479,10 +479,10 @@ iavf_dev_init_vlan(struct rte_eth_dev *dev)
 	int err;
 
 	err = iavf_dev_vlan_offload_set(dev,
-					ETH_VLAN_STRIP_MASK |
-					ETH_QINQ_STRIP_MASK |
-					ETH_VLAN_FILTER_MASK |
-					ETH_VLAN_EXTEND_MASK);
+					RTE_ETH_VLAN_STRIP_MASK |
+					RTE_ETH_QINQ_STRIP_MASK |
+					RTE_ETH_VLAN_FILTER_MASK |
+					RTE_ETH_VLAN_EXTEND_MASK);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Failed to update vlan offload");
 		return err;
@@ -512,8 +512,8 @@ iavf_dev_configure(struct rte_eth_dev *dev)
 	ad->rx_vec_allowed = true;
 	ad->tx_vec_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Large VF setting */
 	if (num_queue_pairs > IAVF_MAX_NUM_QUEUES_DFLT) {
@@ -611,7 +611,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
 	}
 
 	rxq->max_pkt_len = max_pkt_len;
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 	    rxq->max_pkt_len > buf_size) {
 		dev_data->scattered_rx = 1;
 	}
@@ -961,34 +961,34 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->flow_type_rss_offloads = IAVF_RSS_OFFLOAD_ALL;
 	dev_info->max_mac_addrs = IAVF_NUM_MACADDR_MAX;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_CRC)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_KEEP_CRC;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_free_thresh = IAVF_DEFAULT_RX_FREE_THRESH,
@@ -1048,42 +1048,42 @@ iavf_dev_link_update(struct rte_eth_dev *dev,
 	 */
 	switch (vf->link_speed) {
 	case 10:
-		new_link.link_speed = ETH_SPEED_NUM_10M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case 100:
-		new_link.link_speed = ETH_SPEED_NUM_100M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case 1000:
-		new_link.link_speed = ETH_SPEED_NUM_1G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case 10000:
-		new_link.link_speed = ETH_SPEED_NUM_10G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case 20000:
-		new_link.link_speed = ETH_SPEED_NUM_20G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case 25000:
-		new_link.link_speed = ETH_SPEED_NUM_25G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case 40000:
-		new_link.link_speed = ETH_SPEED_NUM_40G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case 50000:
-		new_link.link_speed = ETH_SPEED_NUM_50G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case 100000:
-		new_link.link_speed = ETH_SPEED_NUM_100G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	default:
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	new_link.link_status = vf->link_up ? ETH_LINK_UP :
-					     ETH_LINK_DOWN;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vf->link_up ? RTE_ETH_LINK_UP :
+					     RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(dev, &new_link);
 }
@@ -1231,14 +1231,14 @@ iavf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask)
 	bool enable;
 	int err;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
 
 		iavf_iterate_vlan_filters_v2(dev, enable);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 		err = iavf_config_vlan_strip_v2(adapter, enable);
 		/* If not support, the stripping is already disabled by PF */
@@ -1267,9 +1267,9 @@ iavf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		return -ENOTSUP;
 
 	/* Vlan stripping setting */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			err = iavf_enable_vlan_strip(adapter);
 		else
 			err = iavf_disable_vlan_strip(adapter);
@@ -1311,8 +1311,8 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
 	rte_memcpy(lut, vf->rss_lut, reta_size);
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			lut[i] = reta_conf[idx].reta[shift];
 	}
@@ -1348,8 +1348,8 @@ iavf_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = vf->rss_lut[i];
 	}
@@ -1556,7 +1556,7 @@ iavf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	ret = iavf_query_stats(adapter, &pstats);
 	if (ret == 0) {
 		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
-					 DEV_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
 					 RTE_ETHER_CRC_LEN;
 		iavf_update_stats(vsi, pstats);
 		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
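
The idx/shift arithmetic renamed above mirrors what callers do when
building the reta_conf[] array: RTE_ETH_RETA_GROUP_SIZE (64) entries
per rte_eth_rss_reta_entry64 element. A sketch spreading queues
round-robin across the table:

    #include <errno.h>
    #include <string.h>
    #include <rte_ethdev.h>

    /* Sketch: fill a redirection table of up to 512 entries and push
     * it to the device. */
    static int
    spread_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_queues)
    {
    	struct rte_eth_rss_reta_entry64
    		reta[RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE];
    	uint16_t i;

    	if (nb_queues == 0 || reta_size > RTE_ETH_RSS_RETA_SIZE_512)
    		return -EINVAL;
    	memset(reta, 0, sizeof(reta));
    	for (i = 0; i < reta_size; i++) {
    		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
    		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

    		reta[idx].mask |= 1ULL << shift;
    		reta[idx].reta[shift] = i % nb_queues;
    	}
    	return rte_eth_dev_rss_reta_update(port_id, reta, reta_size);
    }
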
diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index 1f2d3772d105..248054f79efd 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -341,90 +341,90 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
 /* rss type super set */
 
 /* IPv4 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV4	(ETH_RSS_ETH | ETH_RSS_IPV4 | \
-					 ETH_RSS_FRAG_IPV4 | \
-					 ETH_RSS_IPV4_CHKSUM)
+#define IAVF_RSS_TYPE_OUTER_IPV4	(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_FRAG_IPV4 | \
+					 RTE_ETH_RSS_IPV4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV4_UDP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV4_TCP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV4_SCTP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 /* IPv6 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV6	(ETH_RSS_ETH | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_OUTER_IPV6	(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
 #define IAVF_RSS_TYPE_OUTER_IPV6_FRAG	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_FRAG_IPV6)
+					 RTE_ETH_RSS_FRAG_IPV6)
 #define IAVF_RSS_TYPE_OUTER_IPV6_UDP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV6_TCP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV6_SCTP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 /* VLAN IPV4 */
 #define IAVF_RSS_TYPE_VLAN_IPV4		(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV4_UDP	(IAVF_RSS_TYPE_OUTER_IPV4_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV4_TCP	(IAVF_RSS_TYPE_OUTER_IPV4_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV4_SCTP	(IAVF_RSS_TYPE_OUTER_IPV4_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 /* VLAN IPv6 */
 #define IAVF_RSS_TYPE_VLAN_IPV6		(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_FRAG	(IAVF_RSS_TYPE_OUTER_IPV6_FRAG | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_UDP	(IAVF_RSS_TYPE_OUTER_IPV6_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_TCP	(IAVF_RSS_TYPE_OUTER_IPV6_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_SCTP	(IAVF_RSS_TYPE_OUTER_IPV6_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 /* IPv4 inner */
-#define IAVF_RSS_TYPE_INNER_IPV4	ETH_RSS_IPV4
-#define IAVF_RSS_TYPE_INNER_IPV4_UDP	(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV4_TCP	(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV4_SCTP	(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV4	RTE_ETH_RSS_IPV4
+#define IAVF_RSS_TYPE_INNER_IPV4_UDP	(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV4_TCP	(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV4_SCTP	(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 /* IPv6 inner */
-#define IAVF_RSS_TYPE_INNER_IPV6	ETH_RSS_IPV6
-#define IAVF_RSS_TYPE_INNER_IPV6_UDP	(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV6_TCP	(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV6_SCTP	(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV6	RTE_ETH_RSS_IPV6
+#define IAVF_RSS_TYPE_INNER_IPV6_UDP	(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV6_TCP	(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV6_SCTP	(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 /* GTPU IPv4 */
 #define IAVF_RSS_TYPE_GTPU_IPV4		(IAVF_RSS_TYPE_INNER_IPV4 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV4_UDP	(IAVF_RSS_TYPE_INNER_IPV4_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV4_TCP	(IAVF_RSS_TYPE_INNER_IPV4_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 /* GTPU IPv6 */
 #define IAVF_RSS_TYPE_GTPU_IPV6		(IAVF_RSS_TYPE_INNER_IPV6 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV6_UDP	(IAVF_RSS_TYPE_INNER_IPV6_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV6_TCP	(IAVF_RSS_TYPE_INNER_IPV6_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 /* ESP, AH, L2TPV3 and PFCP */
-#define IAVF_RSS_TYPE_IPV4_ESP		(ETH_RSS_ESP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV4_AH		(ETH_RSS_AH | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_ESP		(ETH_RSS_ESP | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV6_AH		(ETH_RSS_AH | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV4_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV6_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
 
 /**
  * Supported pattern for hash.
@@ -442,7 +442,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_vlan_ipv4_udp,		IAVF_RSS_TYPE_VLAN_IPV4_UDP,	&outer_ipv4_udp_tmplt},
 	{iavf_pattern_eth_vlan_ipv4_tcp,		IAVF_RSS_TYPE_VLAN_IPV4_TCP,	&outer_ipv4_tcp_tmplt},
 	{iavf_pattern_eth_vlan_ipv4_sctp,		IAVF_RSS_TYPE_VLAN_IPV4_SCTP,	&outer_ipv4_sctp_tmplt},
-	{iavf_pattern_eth_ipv4_gtpu,			ETH_RSS_IPV4,			&outer_ipv4_udp_tmplt},
+	{iavf_pattern_eth_ipv4_gtpu,			RTE_ETH_RSS_IPV4,			&outer_ipv4_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv4,		IAVF_RSS_TYPE_GTPU_IPV4,	&inner_ipv4_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv4_udp,		IAVF_RSS_TYPE_GTPU_IPV4_UDP,	&inner_ipv4_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv4_tcp,		IAVF_RSS_TYPE_GTPU_IPV4_TCP,	&inner_ipv4_tcp_tmplt},
@@ -484,9 +484,9 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_ipv4_ah,			IAVF_RSS_TYPE_IPV4_AH,		&ipv4_ah_tmplt},
 	{iavf_pattern_eth_ipv4_l2tpv3,			IAVF_RSS_TYPE_IPV4_L2TPV3,	&ipv4_l2tpv3_tmplt},
 	{iavf_pattern_eth_ipv4_pfcp,			IAVF_RSS_TYPE_IPV4_PFCP,	&ipv4_pfcp_tmplt},
-	{iavf_pattern_eth_ipv4_gtpc,			ETH_RSS_IPV4,			&ipv4_udp_gtpc_tmplt},
-	{iavf_pattern_eth_ecpri,			ETH_RSS_ECPRI,			&eth_ecpri_tmplt},
-	{iavf_pattern_eth_ipv4_ecpri,			ETH_RSS_ECPRI,			&ipv4_ecpri_tmplt},
+	{iavf_pattern_eth_ipv4_gtpc,			RTE_ETH_RSS_IPV4,			&ipv4_udp_gtpc_tmplt},
+	{iavf_pattern_eth_ecpri,			RTE_ETH_RSS_ECPRI,			&eth_ecpri_tmplt},
+	{iavf_pattern_eth_ipv4_ecpri,			RTE_ETH_RSS_ECPRI,			&ipv4_ecpri_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv4,		IAVF_RSS_TYPE_INNER_IPV4,	&inner_ipv4_tmplt},
 	{iavf_pattern_eth_ipv6_gre_ipv4,		IAVF_RSS_TYPE_INNER_IPV4, &inner_ipv4_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv4_tcp,	IAVF_RSS_TYPE_INNER_IPV4_TCP, &inner_ipv4_tcp_tmplt},
@@ -504,7 +504,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_vlan_ipv6_udp,		IAVF_RSS_TYPE_VLAN_IPV6_UDP,	&outer_ipv6_udp_tmplt},
 	{iavf_pattern_eth_vlan_ipv6_tcp,		IAVF_RSS_TYPE_VLAN_IPV6_TCP,	&outer_ipv6_tcp_tmplt},
 	{iavf_pattern_eth_vlan_ipv6_sctp,		IAVF_RSS_TYPE_VLAN_IPV6_SCTP,	&outer_ipv6_sctp_tmplt},
-	{iavf_pattern_eth_ipv6_gtpu,			ETH_RSS_IPV6,			&outer_ipv6_udp_tmplt},
+	{iavf_pattern_eth_ipv6_gtpu,			RTE_ETH_RSS_IPV6,			&outer_ipv6_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv6,		IAVF_RSS_TYPE_GTPU_IPV6,	&inner_ipv6_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv6_udp,		IAVF_RSS_TYPE_GTPU_IPV6_UDP,	&inner_ipv6_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv6_tcp,		IAVF_RSS_TYPE_GTPU_IPV6_TCP,	&inner_ipv6_tcp_tmplt},
@@ -546,7 +546,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_ipv6_ah,			IAVF_RSS_TYPE_IPV6_AH,		&ipv6_ah_tmplt},
 	{iavf_pattern_eth_ipv6_l2tpv3,			IAVF_RSS_TYPE_IPV6_L2TPV3,	&ipv6_l2tpv3_tmplt},
 	{iavf_pattern_eth_ipv6_pfcp,			IAVF_RSS_TYPE_IPV6_PFCP,	&ipv6_pfcp_tmplt},
-	{iavf_pattern_eth_ipv6_gtpc,			ETH_RSS_IPV6,			&ipv6_udp_gtpc_tmplt},
+	{iavf_pattern_eth_ipv6_gtpc,			RTE_ETH_RSS_IPV6,			&ipv6_udp_gtpc_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv6,		IAVF_RSS_TYPE_INNER_IPV6,	&inner_ipv6_tmplt},
 	{iavf_pattern_eth_ipv6_gre_ipv6,		IAVF_RSS_TYPE_INNER_IPV6, &inner_ipv6_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv6_tcp,	IAVF_RSS_TYPE_INNER_IPV6_TCP, &inner_ipv6_tcp_tmplt},
@@ -580,52 +580,52 @@ iavf_rss_hash_set(struct iavf_adapter *ad, uint64_t rss_hf, bool add)
 	struct virtchnl_rss_cfg rss_cfg;
 
 #define IAVF_RSS_HF_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 	rss_cfg.rss_algorithm = VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC;
-	if (rss_hf & ETH_RSS_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_IPV4) {
 		rss_cfg.proto_hdrs = inner_ipv4_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		rss_cfg.proto_hdrs = inner_ipv4_udp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		rss_cfg.proto_hdrs = inner_ipv4_tcp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
 		rss_cfg.proto_hdrs = inner_ipv4_sctp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_IPV6) {
 		rss_cfg.proto_hdrs = inner_ipv6_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
 		rss_cfg.proto_hdrs = inner_ipv6_udp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 		rss_cfg.proto_hdrs = inner_ipv6_tcp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
 		rss_cfg.proto_hdrs = inner_ipv6_sctp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
@@ -779,28 +779,28 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 		hdr = &proto_hdrs->proto_hdr[i];
 		switch (hdr->type) {
 		case VIRTCHNL_PROTO_HDR_ETH:
-			if (!(rss_type & ETH_RSS_ETH))
+			if (!(rss_type & RTE_ETH_RSS_ETH))
 				hdr->field_selector = 0;
-			else if (rss_type & ETH_RSS_L2_SRC_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
 				REFINE_PROTO_FLD(DEL, ETH_DST);
-			else if (rss_type & ETH_RSS_L2_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
 				REFINE_PROTO_FLD(DEL, ETH_SRC);
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV4:
 			if (rss_type &
-			    (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			     ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV4_SCTP)) {
-				if (rss_type & ETH_RSS_FRAG_IPV4) {
+			    (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			     RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
 					iavf_hash_add_fragment_hdr(proto_hdrs, i + 1);
-				} else if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+				} else if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV4_DST);
-				} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+				} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV4_SRC);
 				} else if (rss_type &
-					   (ETH_RSS_L4_SRC_ONLY |
-					    ETH_RSS_L4_DST_ONLY)) {
+					   (RTE_ETH_RSS_L4_SRC_ONLY |
+					    RTE_ETH_RSS_L4_DST_ONLY)) {
 					REFINE_PROTO_FLD(DEL, IPV4_DST);
 					REFINE_PROTO_FLD(DEL, IPV4_SRC);
 				}
@@ -808,39 +808,39 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_IPV4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, IPV4_CHKSUM);
 
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV4_FRAG:
 			if (rss_type &
-			    (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			     ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV4_SCTP)) {
-				if (rss_type & ETH_RSS_FRAG_IPV4)
+			    (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			     RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_FRAG_IPV4)
 					REFINE_PROTO_FLD(ADD, IPV4_FRAG_PKID);
 			} else {
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_IPV4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, IPV4_CHKSUM);
 
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV6:
 			if (rss_type &
-			    (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-			     ETH_RSS_NONFRAG_IPV6_UDP |
-			     ETH_RSS_NONFRAG_IPV6_TCP |
-			     ETH_RSS_NONFRAG_IPV6_SCTP)) {
-				if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			    (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+			     RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV6_DST);
-				} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+				} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV6_SRC);
 				} else if (rss_type &
-					   (ETH_RSS_L4_SRC_ONLY |
-					    ETH_RSS_L4_DST_ONLY)) {
+					   (RTE_ETH_RSS_L4_SRC_ONLY |
+					    RTE_ETH_RSS_L4_DST_ONLY)) {
 					REFINE_PROTO_FLD(DEL, IPV6_DST);
 					REFINE_PROTO_FLD(DEL, IPV6_SRC);
 				}
@@ -857,7 +857,7 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			}
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG:
-			if (rss_type & ETH_RSS_FRAG_IPV6)
+			if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
 				REFINE_PROTO_FLD(ADD, IPV6_EH_FRAG_PKID);
 			else
 				hdr->field_selector = 0;
@@ -865,87 +865,87 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			break;
 		case VIRTCHNL_PROTO_HDR_UDP:
 			if (rss_type &
-			    (ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV6_UDP)) {
-				if (rss_type & ETH_RSS_L4_SRC_ONLY)
+			    (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+				if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 					REFINE_PROTO_FLD(DEL, UDP_DST_PORT);
-				else if (rss_type & ETH_RSS_L4_DST_ONLY)
+				else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 					REFINE_PROTO_FLD(DEL, UDP_SRC_PORT);
 				else if (rss_type &
-					 (ETH_RSS_L3_SRC_ONLY |
-					  ETH_RSS_L3_DST_ONLY))
+					 (RTE_ETH_RSS_L3_SRC_ONLY |
+					  RTE_ETH_RSS_L3_DST_ONLY))
 					hdr->field_selector = 0;
 			} else {
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_L4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, UDP_CHKSUM);
 			break;
 		case VIRTCHNL_PROTO_HDR_TCP:
 			if (rss_type &
-			    (ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV6_TCP)) {
-				if (rss_type & ETH_RSS_L4_SRC_ONLY)
+			    (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+				if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 					REFINE_PROTO_FLD(DEL, TCP_DST_PORT);
-				else if (rss_type & ETH_RSS_L4_DST_ONLY)
+				else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 					REFINE_PROTO_FLD(DEL, TCP_SRC_PORT);
 				else if (rss_type &
-					 (ETH_RSS_L3_SRC_ONLY |
-					  ETH_RSS_L3_DST_ONLY))
+					 (RTE_ETH_RSS_L3_SRC_ONLY |
+					  RTE_ETH_RSS_L3_DST_ONLY))
 					hdr->field_selector = 0;
 			} else {
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_L4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, TCP_CHKSUM);
 			break;
 		case VIRTCHNL_PROTO_HDR_SCTP:
 			if (rss_type &
-			    (ETH_RSS_NONFRAG_IPV4_SCTP |
-			     ETH_RSS_NONFRAG_IPV6_SCTP)) {
-				if (rss_type & ETH_RSS_L4_SRC_ONLY)
+			    (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 					REFINE_PROTO_FLD(DEL, SCTP_DST_PORT);
-				else if (rss_type & ETH_RSS_L4_DST_ONLY)
+				else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 					REFINE_PROTO_FLD(DEL, SCTP_SRC_PORT);
 				else if (rss_type &
-					 (ETH_RSS_L3_SRC_ONLY |
-					  ETH_RSS_L3_DST_ONLY))
+					 (RTE_ETH_RSS_L3_SRC_ONLY |
+					  RTE_ETH_RSS_L3_DST_ONLY))
 					hdr->field_selector = 0;
 			} else {
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_L4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, SCTP_CHKSUM);
 			break;
 		case VIRTCHNL_PROTO_HDR_S_VLAN:
-			if (!(rss_type & ETH_RSS_S_VLAN))
+			if (!(rss_type & RTE_ETH_RSS_S_VLAN))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_C_VLAN:
-			if (!(rss_type & ETH_RSS_C_VLAN))
+			if (!(rss_type & RTE_ETH_RSS_C_VLAN))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_L2TPV3:
-			if (!(rss_type & ETH_RSS_L2TPV3))
+			if (!(rss_type & RTE_ETH_RSS_L2TPV3))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_ESP:
-			if (!(rss_type & ETH_RSS_ESP))
+			if (!(rss_type & RTE_ETH_RSS_ESP))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_AH:
-			if (!(rss_type & ETH_RSS_AH))
+			if (!(rss_type & RTE_ETH_RSS_AH))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_PFCP:
-			if (!(rss_type & ETH_RSS_PFCP))
+			if (!(rss_type & RTE_ETH_RSS_PFCP))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_ECPRI:
-			if (!(rss_type & ETH_RSS_ECPRI))
+			if (!(rss_type & RTE_ETH_RSS_ECPRI))
 				hdr->field_selector = 0;
 			break;
 		default:
@@ -962,7 +962,7 @@ iavf_refine_proto_hdrs_gtpu(struct virtchnl_proto_hdrs *proto_hdrs,
 	struct virtchnl_proto_hdr *hdr;
 	int i;
 
-	if (!(rss_type & ETH_RSS_GTPU))
+	if (!(rss_type & RTE_ETH_RSS_GTPU))
 		return;
 
 	for (i = 0; i < proto_hdrs->count; i++) {
@@ -1059,10 +1059,10 @@ static void iavf_refine_proto_hdrs(struct virtchnl_proto_hdrs *proto_hdrs,
 }
 
 static uint64_t invalid_rss_comb[] = {
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	RTE_ETH_RSS_L3_PRE32 | RTE_ETH_RSS_L3_PRE40 |
 	RTE_ETH_RSS_L3_PRE48 | RTE_ETH_RSS_L3_PRE56 |
 	RTE_ETH_RSS_L3_PRE96
@@ -1073,27 +1073,27 @@ struct rss_attr_type {
 	uint64_t type;
 };
 
-#define VALID_RSS_IPV4_L4	(ETH_RSS_NONFRAG_IPV4_UDP	| \
-				 ETH_RSS_NONFRAG_IPV4_TCP	| \
-				 ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4	(RTE_ETH_RSS_NONFRAG_IPV4_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
-#define VALID_RSS_IPV6_L4	(ETH_RSS_NONFRAG_IPV6_UDP	| \
-				 ETH_RSS_NONFRAG_IPV6_TCP	| \
-				 ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4	(RTE_ETH_RSS_NONFRAG_IPV6_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
-#define VALID_RSS_IPV4		(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4		(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
 				 VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6		(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6		(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
 				 VALID_RSS_IPV6_L4)
 #define VALID_RSS_L3		(VALID_RSS_IPV4 | VALID_RSS_IPV6)
 #define VALID_RSS_L4		(VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
 
-#define VALID_RSS_ATTR		(ETH_RSS_L3_SRC_ONLY	| \
-				 ETH_RSS_L3_DST_ONLY	| \
-				 ETH_RSS_L4_SRC_ONLY	| \
-				 ETH_RSS_L4_DST_ONLY	| \
-				 ETH_RSS_L2_SRC_ONLY	| \
-				 ETH_RSS_L2_DST_ONLY	| \
+#define VALID_RSS_ATTR		(RTE_ETH_RSS_L3_SRC_ONLY	| \
+				 RTE_ETH_RSS_L3_DST_ONLY	| \
+				 RTE_ETH_RSS_L4_SRC_ONLY	| \
+				 RTE_ETH_RSS_L4_DST_ONLY	| \
+				 RTE_ETH_RSS_L2_SRC_ONLY	| \
+				 RTE_ETH_RSS_L2_DST_ONLY	| \
 				 RTE_ETH_RSS_L3_PRE64)
 
 #define INVALID_RSS_ATTR	(RTE_ETH_RSS_L3_PRE32	| \
@@ -1103,9 +1103,9 @@ struct rss_attr_type {
 				 RTE_ETH_RSS_L3_PRE96)
 
 static struct rss_attr_type rss_attr_to_valid_type[] = {
-	{ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY,	ETH_RSS_ETH},
-	{ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
-	{ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
+	{RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY,	RTE_ETH_RSS_ETH},
+	{RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
+	{RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
 	/* current ipv6 prefix only supports prefix 64 bits*/
 	{RTE_ETH_RSS_L3_PRE64,				VALID_RSS_IPV6},
 	{INVALID_RSS_ATTR,				0}
@@ -1122,15 +1122,15 @@ iavf_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
 	 * hash function.
 	 */
 	if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
-		if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
-		    ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+		if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+		    RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
 			return true;
 
 		if (!(rss_type &
-		   (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
-		    ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
-		    ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
-		    ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+		   (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+		    RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
 			return true;
 	}
 
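For context, this is how an application now spells an RSS hash update with the prefixed flags -- a minimal illustrative sketch, not part of the patch; iavf_any_invalid_rss_type() above would reject the same request if a *_SRC_ONLY/*_DST_ONLY attribute were combined with symmetric Toeplitz:

#include <rte_ethdev.h>

/* Illustrative only: hash on TCP/UDP over IPv4 using the new names. */
static int
app_update_rss(uint16_t port_id)
{
	struct rte_eth_rss_conf rss_conf = {
		.rss_key = NULL,	/* keep the driver's current key */
		.rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
			  RTE_ETH_RSS_NONFRAG_IPV4_UDP,
	};

	return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
}
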
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 88bbd40c1027..ac4db117f5cd 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -617,7 +617,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rxq->vsi = vsi;
 	rxq->offloads = offloads;
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index f4ae2fd6e123..2d7f6b1b2dca 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -24,22 +24,22 @@
 #define IAVF_VPMD_TX_MAX_FREE_BUF 64
 
 #define IAVF_TX_NO_VECTOR_FLAGS (				 \
-		DEV_TX_OFFLOAD_MULTI_SEGS |		 \
-		DEV_TX_OFFLOAD_TCP_TSO)
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |		 \
+		RTE_ETH_TX_OFFLOAD_TCP_TSO)
 
 #define IAVF_TX_VECTOR_OFFLOAD (				 \
-		DEV_TX_OFFLOAD_VLAN_INSERT |		 \
-		DEV_TX_OFFLOAD_QINQ_INSERT |		 \
-		DEV_TX_OFFLOAD_IPV4_CKSUM |		 \
-		DEV_TX_OFFLOAD_SCTP_CKSUM |		 \
-		DEV_TX_OFFLOAD_UDP_CKSUM |		 \
-		DEV_TX_OFFLOAD_TCP_CKSUM)
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |		 \
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |		 \
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |		 \
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |		 \
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |		 \
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 
 #define IAVF_RX_VECTOR_OFFLOAD (				 \
-		DEV_RX_OFFLOAD_CHECKSUM |		 \
-		DEV_RX_OFFLOAD_SCTP_CKSUM |		 \
-		DEV_RX_OFFLOAD_VLAN |		 \
-		DEV_RX_OFFLOAD_RSS_HASH)
+		RTE_ETH_RX_OFFLOAD_CHECKSUM |		 \
+		RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |		 \
+		RTE_ETH_RX_OFFLOAD_VLAN |		 \
+		RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define IAVF_VECTOR_PATH 0
 #define IAVF_VECTOR_OFFLOAD_PATH 1
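
The three masks above drive the Rx/Tx path selection; a condensed sketch of the Tx side of that decision (simplified pseudologic, not the driver's exact code):

/* Simplified: plain vector Tx requires that no blocking flag is set and
 * that every requested offload is covered by IAVF_TX_VECTOR_OFFLOAD. */
static inline int
tx_vector_path_allowed(uint64_t offloads)
{
	if (offloads & IAVF_TX_NO_VECTOR_FLAGS)
		return 0;			/* scalar path only */
	return (offloads & ~IAVF_TX_VECTOR_OFFLOAD) == 0;
}
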
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 72a4fcab04a5..b47c51b8ebe4 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -906,7 +906,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 		 * needs to load 2nd 16B of each desc for RSS hash parsing,
 		 * will cause performance drop to get into this context.
 		 */
-		if (offloads & DEV_RX_OFFLOAD_RSS_HASH ||
+		if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH ||
 		    rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh7 =
@@ -958,7 +958,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 					(_mm256_castsi128_si256(raw_desc_bh0),
 					raw_desc_bh1, 1);
 
-			if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+			if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 				/**
 				 * to shift the 32b RSS hash value to the
 				 * highest 32b of each 128b before mask
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 12375d3d80bd..b8f2f69f12fc 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1141,7 +1141,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 			 * needs to load 2nd 16B of each desc for RSS hash parsing,
 			 * will cause performance drop to get into this context.
 			 */
-			if (offloads & DEV_RX_OFFLOAD_RSS_HASH ||
+			if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH ||
 			    rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
@@ -1193,7 +1193,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 						(_mm256_castsi128_si256(raw_desc_bh0),
 						 raw_desc_bh1, 1);
 
-				if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+				if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 					/**
 					 * to shift the 32b RSS hash value to the
 					 * highest 32b of each 128b before mask
@@ -1721,7 +1721,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->next_dd - (n - 1);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
 								rte_lcore_id());
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index edb54991e298..1de43b9b8ee2 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -819,7 +819,7 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
 		 * needs to load 2nd 16B of each desc for RSS hash parsing,
 		 * will cause performance drop to get into this context.
 		 */
-		if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh3 =
 				_mm_load_si128
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index c9c01a14e349..7b7df5eebb6d 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -835,7 +835,7 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
 		PMD_DRV_LOG(DEBUG, "RSS is not supported");
 		return -ENOTSUP;
 	}
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
 		/* set all lut items to default queue */
 		memset(hw->rss_lut, 0, hw->vf_res->rss_lut_size);
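
From the application side, the warning above is avoided by asking for RSS at configure time; an illustrative sketch with the renamed enum and flags (port and queue counts are placeholders):

#include <rte_ethdev.h>

/* Illustrative only: request RSS so ice_dcf_init_rss() programs a real
 * LUT instead of the all-default-queue fallback. */
static int
app_configure(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
		.rx_adv_conf = { .rss_conf = { .rss_hf = RTE_ETH_RSS_IP } },
	};

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}
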
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index b8a537cb8556..a90e40964ec5 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -95,7 +95,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
 	}
 
 	rxq->max_pkt_len = max_pkt_len;
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 	    (rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size) {
 		dev_data->scattered_rx = 1;
 	}
@@ -576,7 +576,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -637,7 +637,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
 	}
 
 	ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	ad->pf.adapter_stopped = 1;
 
 	return 0;
@@ -652,8 +652,8 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
 	ad->rx_bulk_alloc_allowed = true;
 	ad->tx_simple_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	return 0;
 }
@@ -675,27 +675,27 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -925,42 +925,42 @@ ice_dcf_link_update(struct rte_eth_dev *dev,
 	 */
 	switch (hw->link_speed) {
 	case 10:
-		new_link.link_speed = ETH_SPEED_NUM_10M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case 100:
-		new_link.link_speed = ETH_SPEED_NUM_100M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case 1000:
-		new_link.link_speed = ETH_SPEED_NUM_1G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case 10000:
-		new_link.link_speed = ETH_SPEED_NUM_10G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case 20000:
-		new_link.link_speed = ETH_SPEED_NUM_20G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case 25000:
-		new_link.link_speed = ETH_SPEED_NUM_25G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case 40000:
-		new_link.link_speed = ETH_SPEED_NUM_40G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case 50000:
-		new_link.link_speed = ETH_SPEED_NUM_50G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case 100000:
-		new_link.link_speed = ETH_SPEED_NUM_100G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	default:
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	new_link.link_status = hw->link_up ? ETH_LINK_UP :
-					     ETH_LINK_DOWN;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = hw->link_up ? RTE_ETH_LINK_UP :
+					     RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(dev, &new_link);
 }
@@ -979,11 +979,11 @@ ice_dcf_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ice_create_tunnel(parent_hw, TNL_VXLAN,
 					udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_ECPRI:
+	case RTE_ETH_TUNNEL_TYPE_ECPRI:
 		ret = ice_create_tunnel(parent_hw, TNL_ECPRI,
 					udp_tunnel->udp_port);
 		break;
@@ -1010,8 +1010,8 @@ ice_dcf_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
-	case RTE_TUNNEL_TYPE_ECPRI:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_ECPRI:
 		ret = ice_destroy_tunnel(parent_hw, udp_tunnel->udp_port, 0);
 		break;
 	default:
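
On the consumer side the values filled in by ice_dcf_link_update() surface through the generic API unchanged; a small illustrative reader, not part of the patch:

#include <stdio.h>
#include <rte_ethdev.h>

/* Illustrative only: link_speed is in Mbps, so the RTE_ETH_SPEED_NUM_*
 * values mapped above print directly. */
static void
app_print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) < 0)
		return;
	if (link.link_status == RTE_ETH_LINK_UP)
		printf("port %u: %u Mbps, %s duplex\n", port_id,
		       link.link_speed,
		       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
				"full" : "half");
}
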
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 44fb38dbe7b1..b9fcfc80ad9b 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -37,7 +37,7 @@ ice_dcf_vf_repr_dev_configure(struct rte_eth_dev *dev)
 static int
 ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -45,7 +45,7 @@ ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
 static int
 ice_dcf_vf_repr_dev_stop(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -143,28 +143,28 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -246,9 +246,9 @@ ice_dcf_vf_repr_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		return -ENOTSUP;
 
 	/* Vlan stripping setting */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		bool enable = !!(dev_conf->rxmode.offloads &
-				 DEV_RX_OFFLOAD_VLAN_STRIP);
+				 RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 		if (enable && repr->outer_vlan_info.port_vlan_ena) {
 			PMD_DRV_LOG(ERR,
@@ -345,7 +345,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
 	if (!ice_dcf_vlan_offload_ena(repr))
 		return -ENOTSUP;
 
-	if (vlan_type != ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
 		PMD_DRV_LOG(ERR,
 			    "Can accelerate only outer VLAN in QinQ\n");
 		return -EINVAL;
@@ -375,7 +375,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
 
 	if (repr->outer_vlan_info.stripping_ena) {
 		err = ice_dcf_vf_repr_vlan_offload_set(dev,
-						       ETH_VLAN_STRIP_MASK);
+						       RTE_ETH_VLAN_STRIP_MASK);
 		if (err) {
 			PMD_DRV_LOG(ERR,
 				    "Failed to reset VLAN stripping : %d\n",
@@ -449,7 +449,7 @@ ice_dcf_vf_repr_init_vlan(struct rte_eth_dev *vf_rep_eth_dev)
 	int err;
 
 	err = ice_dcf_vf_repr_vlan_offload_set(vf_rep_eth_dev,
-					       ETH_VLAN_STRIP_MASK);
+					       RTE_ETH_VLAN_STRIP_MASK);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Failed to set VLAN offload");
 		return err;
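
An application toggles the same stripping setting through the generic VLAN offload API; an illustrative sketch using the renamed bits:

#include <rte_ethdev.h>

/* Illustrative only: set the strip bit and leave the others untouched;
 * ethdev turns the delta into the RTE_ETH_VLAN_STRIP_MASK path handled
 * by ice_dcf_vf_repr_vlan_offload_set() above. */
static int
app_enable_vlan_strip(uint16_t port_id)
{
	int mask = rte_eth_dev_get_vlan_offload(port_id);

	if (mask < 0)
		return mask;
	return rte_eth_dev_set_vlan_offload(port_id,
					mask | RTE_ETH_VLAN_STRIP_OFFLOAD);
}
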
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index edbc74632711..6a6637a15af7 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1487,9 +1487,9 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
 	TAILQ_INIT(&vsi->mac_list);
 	TAILQ_INIT(&vsi->vlan_list);
 
-	/* Be sync with ETH_RSS_RETA_SIZE_x maximum value definition */
+	/* Be sync with RTE_ETH_RSS_RETA_SIZE_x maximum value definition */
 	pf->hash_lut_size = hw->func_caps.common_cap.rss_table_size >
-			ETH_RSS_RETA_SIZE_512 ? ETH_RSS_RETA_SIZE_512 :
+			RTE_ETH_RSS_RETA_SIZE_512 ? RTE_ETH_RSS_RETA_SIZE_512 :
 			hw->func_caps.common_cap.rss_table_size;
 	pf->flags |= ICE_FLAG_RSS_AQ_CAPABLE;
 
@@ -2993,14 +2993,14 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	int ret;
 
 #define ICE_RSS_HF_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 	ret = ice_rem_vsi_rss_cfg(hw, vsi->idx);
 	if (ret)
@@ -3010,7 +3010,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	cfg.symm = 0;
 	cfg.hdr_type = ICE_RSS_OUTER_HEADERS;
 	/* Configure RSS for IPv4 with src/dst addr as input set */
-	if (rss_hf & ETH_RSS_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_IPV4) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV4;
 		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -3020,7 +3020,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for IPv6 with src/dst addr as input set */
-	if (rss_hf & ETH_RSS_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_IPV6) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV6;
 		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -3030,7 +3030,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for udp4 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -3041,7 +3041,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for udp6 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -3052,7 +3052,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for tcp4 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -3063,7 +3063,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for tcp6 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -3074,7 +3074,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for sctp4 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_SCTP_IPV4;
@@ -3085,7 +3085,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for sctp6 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_SCTP_IPV6;
@@ -3095,7 +3095,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_IPV4) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV4;
@@ -3105,7 +3105,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_IPV6) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV6;
@@ -3115,7 +3115,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
 				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -3125,7 +3125,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
 				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -3135,7 +3135,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
 				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -3145,7 +3145,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
 				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -3288,8 +3288,8 @@ ice_dev_configure(struct rte_eth_dev *dev)
 	ad->rx_bulk_alloc_allowed = true;
 	ad->tx_simple_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (dev->data->nb_rx_queues) {
 		ret = ice_init_rss(pf);
@@ -3569,8 +3569,8 @@ ice_dev_start(struct rte_eth_dev *dev)
 	ice_set_rx_function(dev);
 	ice_set_tx_function(dev);
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-			ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+			RTE_ETH_VLAN_EXTEND_MASK;
 	ret = ice_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
@@ -3682,40 +3682,40 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_KEEP_CRC |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_FILTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->flow_type_rss_offloads = 0;
 
 	if (!is_safe_mode) {
 		dev_info->rx_offload_capa |=
-			DEV_RX_OFFLOAD_IPV4_CKSUM |
-			DEV_RX_OFFLOAD_UDP_CKSUM |
-			DEV_RX_OFFLOAD_TCP_CKSUM |
-			DEV_RX_OFFLOAD_QINQ_STRIP |
-			DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-			DEV_RX_OFFLOAD_VLAN_EXTEND |
-			DEV_RX_OFFLOAD_RSS_HASH |
-			DEV_RX_OFFLOAD_TIMESTAMP;
+			RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+			RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+			RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+			RTE_ETH_RX_OFFLOAD_RSS_HASH |
+			RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 		dev_info->tx_offload_capa |=
-			DEV_TX_OFFLOAD_QINQ_INSERT |
-			DEV_TX_OFFLOAD_IPV4_CKSUM |
-			DEV_TX_OFFLOAD_UDP_CKSUM |
-			DEV_TX_OFFLOAD_TCP_CKSUM |
-			DEV_TX_OFFLOAD_SCTP_CKSUM |
-			DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-			DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+			RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+			RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 		dev_info->flow_type_rss_offloads |= ICE_RSS_OFFLOAD_ALL;
 	}
 
 	dev_info->rx_queue_offload_capa = 0;
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->reta_size = pf->hash_lut_size;
 	dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
@@ -3754,24 +3754,24 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.nb_align = ICE_ALIGN_RING_DESC,
 	};
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M |
-			       ETH_LINK_SPEED_100M |
-			       ETH_LINK_SPEED_1G |
-			       ETH_LINK_SPEED_2_5G |
-			       ETH_LINK_SPEED_5G |
-			       ETH_LINK_SPEED_10G |
-			       ETH_LINK_SPEED_20G |
-			       ETH_LINK_SPEED_25G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			       RTE_ETH_LINK_SPEED_100M |
+			       RTE_ETH_LINK_SPEED_1G |
+			       RTE_ETH_LINK_SPEED_2_5G |
+			       RTE_ETH_LINK_SPEED_5G |
+			       RTE_ETH_LINK_SPEED_10G |
+			       RTE_ETH_LINK_SPEED_20G |
+			       RTE_ETH_LINK_SPEED_25G;
 
 	phy_type_low = hw->port_info->phy.phy_type_low;
 	phy_type_high = hw->port_info->phy.phy_type_high;
 
 	if (ICE_PHY_TYPE_SUPPORT_50G(phy_type_low))
-		dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
 
 	if (ICE_PHY_TYPE_SUPPORT_100G_LOW(phy_type_low) ||
 			ICE_PHY_TYPE_SUPPORT_100G_HIGH(phy_type_high))
-		dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
 
 	dev_info->nb_rx_queues = dev->data->nb_rx_queues;
 	dev_info->nb_tx_queues = dev->data->nb_tx_queues;
@@ -3836,8 +3836,8 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		status = ice_aq_get_link_info(hw->port_info, enable_lse,
 					      &link_status, NULL);
 		if (status != ICE_SUCCESS) {
-			link.link_speed = ETH_SPEED_NUM_100M;
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			link.link_speed = RTE_ETH_SPEED_NUM_100M;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR, "Failed to get link info");
 			goto out;
 		}
@@ -3853,55 +3853,55 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		goto out;
 
 	/* Full-duplex operation at all supported speeds */
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	/* Parse the link status */
 	switch (link_status.link_speed) {
 	case ICE_AQ_LINK_SPEED_10MB:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case ICE_AQ_LINK_SPEED_100MB:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case ICE_AQ_LINK_SPEED_1000MB:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case ICE_AQ_LINK_SPEED_2500MB:
-		link.link_speed = ETH_SPEED_NUM_2_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	case ICE_AQ_LINK_SPEED_5GB:
-		link.link_speed = ETH_SPEED_NUM_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 	case ICE_AQ_LINK_SPEED_10GB:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case ICE_AQ_LINK_SPEED_20GB:
-		link.link_speed = ETH_SPEED_NUM_20G;
+		link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case ICE_AQ_LINK_SPEED_25GB:
-		link.link_speed = ETH_SPEED_NUM_25G;
+		link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case ICE_AQ_LINK_SPEED_40GB:
-		link.link_speed = ETH_SPEED_NUM_40G;
+		link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case ICE_AQ_LINK_SPEED_50GB:
-		link.link_speed = ETH_SPEED_NUM_50G;
+		link.link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case ICE_AQ_LINK_SPEED_100GB:
-		link.link_speed = ETH_SPEED_NUM_100G;
+		link.link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	case ICE_AQ_LINK_SPEED_UNKNOWN:
 		PMD_DRV_LOG(ERR, "Unknown link speed");
-		link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "None link speed");
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			      ETH_LINK_SPEED_FIXED);
+			      RTE_ETH_LINK_SPEED_FIXED);
 
 out:
 	ice_atomic_write_link_status(dev, &link);
@@ -4377,15 +4377,15 @@ ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ice_vsi_config_vlan_filter(vsi, true);
 		else
 			ice_vsi_config_vlan_filter(vsi, false);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			ice_vsi_config_vlan_stripping(vsi, true);
 		else
 			ice_vsi_config_vlan_stripping(vsi, false);
@@ -4500,8 +4500,8 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
 		goto out;
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			lut[i] = reta_conf[idx].reta[shift];
 	}
@@ -4550,8 +4550,8 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
 		goto out;
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = lut[i];
 	}
@@ -5460,7 +5460,7 @@ ice_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ice_create_tunnel(hw, TNL_VXLAN, udp_tunnel->udp_port);
 		break;
 	default:
@@ -5484,7 +5484,7 @@ ice_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ice_destroy_tunnel(hw, udp_tunnel->udp_port, 0);
 		break;
 	default:
@@ -5505,7 +5505,7 @@ ice_timesync_enable(struct rte_eth_dev *dev)
 	int ret;
 
 	if (dev->data->dev_started && !(dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_TIMESTAMP)) {
+	    RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
 		PMD_DRV_LOG(ERR, "Rx timestamp offload not configured");
 		return -1;
 	}
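
For completeness, the idx/shift arithmetic above matches what callers do when filling the table; an illustrative sketch (not part of the patch) that programs one entry:

#include <string.h>
#include <rte_ethdev.h>

/* Illustrative only: hash_index must be below reta_size (<= 512 here);
 * the per-group indexing mirrors ice_rss_reta_update(). */
static int
app_reta_set_entry(uint16_t port_id, uint16_t reta_size,
		   uint16_t hash_index, uint16_t queue)
{
	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
						  RTE_ETH_RETA_GROUP_SIZE];
	uint16_t idx = hash_index / RTE_ETH_RETA_GROUP_SIZE;
	uint16_t shift = hash_index % RTE_ETH_RETA_GROUP_SIZE;

	memset(reta_conf, 0, sizeof(reta_conf));
	reta_conf[idx].mask = 1ULL << shift;
	reta_conf[idx].reta[shift] = queue;

	return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
}
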
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 1cd3753ccc5f..599e0028f7e8 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -117,19 +117,19 @@
 		       ICE_FLAG_VF_MAC_BY_PF)
 
 #define ICE_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L2_PAYLOAD)
 
 /**
  * The overhead from MTU to max frame size.
diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c
index 20a3204fab7e..35eff8b17d28 100644
--- a/drivers/net/ice/ice_hash.c
+++ b/drivers/net/ice/ice_hash.c
@@ -39,27 +39,27 @@
 #define ICE_IPV4_PROT		BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_PROT)
 #define ICE_IPV6_PROT		BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PROT)
 
-#define VALID_RSS_IPV4_L4	(ETH_RSS_NONFRAG_IPV4_UDP	| \
-				 ETH_RSS_NONFRAG_IPV4_TCP	| \
-				 ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4	(RTE_ETH_RSS_NONFRAG_IPV4_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
-#define VALID_RSS_IPV6_L4	(ETH_RSS_NONFRAG_IPV6_UDP	| \
-				 ETH_RSS_NONFRAG_IPV6_TCP	| \
-				 ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4	(RTE_ETH_RSS_NONFRAG_IPV6_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
-#define VALID_RSS_IPV4		(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4		(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
 				 VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6		(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6		(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
 				 VALID_RSS_IPV6_L4)
 #define VALID_RSS_L3		(VALID_RSS_IPV4 | VALID_RSS_IPV6)
 #define VALID_RSS_L4		(VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
 
-#define VALID_RSS_ATTR		(ETH_RSS_L3_SRC_ONLY	| \
-				 ETH_RSS_L3_DST_ONLY	| \
-				 ETH_RSS_L4_SRC_ONLY	| \
-				 ETH_RSS_L4_DST_ONLY	| \
-				 ETH_RSS_L2_SRC_ONLY	| \
-				 ETH_RSS_L2_DST_ONLY	| \
+#define VALID_RSS_ATTR		(RTE_ETH_RSS_L3_SRC_ONLY	| \
+				 RTE_ETH_RSS_L3_DST_ONLY	| \
+				 RTE_ETH_RSS_L4_SRC_ONLY	| \
+				 RTE_ETH_RSS_L4_DST_ONLY	| \
+				 RTE_ETH_RSS_L2_SRC_ONLY	| \
+				 RTE_ETH_RSS_L2_DST_ONLY	| \
 				 RTE_ETH_RSS_L3_PRE32	| \
 				 RTE_ETH_RSS_L3_PRE48	| \
 				 RTE_ETH_RSS_L3_PRE64)
@@ -373,87 +373,87 @@ struct ice_rss_hash_cfg eth_tmplt = {
 };
 
 /* IPv4 */
-#define ICE_RSS_TYPE_ETH_IPV4		(ETH_RSS_ETH | ETH_RSS_IPV4 | \
-					 ETH_RSS_FRAG_IPV4 | \
-					 ETH_RSS_IPV4_CHKSUM)
+#define ICE_RSS_TYPE_ETH_IPV4		(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_FRAG_IPV4 | \
+					 RTE_ETH_RSS_IPV4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV4_UDP	(ICE_RSS_TYPE_ETH_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV4_TCP	(ICE_RSS_TYPE_ETH_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV4_SCTP	(ICE_RSS_TYPE_ETH_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP | \
-					 ETH_RSS_L4_CHKSUM)
-#define ICE_RSS_TYPE_IPV4		ETH_RSS_IPV4
-#define ICE_RSS_TYPE_IPV4_UDP		(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP)
-#define ICE_RSS_TYPE_IPV4_TCP		(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP)
-#define ICE_RSS_TYPE_IPV4_SCTP		(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP)
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
+#define ICE_RSS_TYPE_IPV4		RTE_ETH_RSS_IPV4
+#define ICE_RSS_TYPE_IPV4_UDP		(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define ICE_RSS_TYPE_IPV4_TCP		(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define ICE_RSS_TYPE_IPV4_SCTP		(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
 /* IPv6 */
-#define ICE_RSS_TYPE_ETH_IPV6		(ETH_RSS_ETH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_ETH_IPV6_FRAG	(ETH_RSS_ETH | ETH_RSS_IPV6 | \
-					 ETH_RSS_FRAG_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6		(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6_FRAG	(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_FRAG_IPV6)
 #define ICE_RSS_TYPE_ETH_IPV6_UDP	(ICE_RSS_TYPE_ETH_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV6_TCP	(ICE_RSS_TYPE_ETH_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV6_SCTP	(ICE_RSS_TYPE_ETH_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP | \
-					 ETH_RSS_L4_CHKSUM)
-#define ICE_RSS_TYPE_IPV6		ETH_RSS_IPV6
-#define ICE_RSS_TYPE_IPV6_UDP		(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP)
-#define ICE_RSS_TYPE_IPV6_TCP		(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP)
-#define ICE_RSS_TYPE_IPV6_SCTP		(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP)
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
+#define ICE_RSS_TYPE_IPV6		RTE_ETH_RSS_IPV6
+#define ICE_RSS_TYPE_IPV6_UDP		(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define ICE_RSS_TYPE_IPV6_TCP		(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define ICE_RSS_TYPE_IPV6_SCTP		(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 /* VLAN IPV4 */
 #define ICE_RSS_TYPE_VLAN_IPV4		(ICE_RSS_TYPE_IPV4 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
-					 ETH_RSS_FRAG_IPV4)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+					 RTE_ETH_RSS_FRAG_IPV4)
 #define ICE_RSS_TYPE_VLAN_IPV4_UDP	(ICE_RSS_TYPE_IPV4_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV4_TCP	(ICE_RSS_TYPE_IPV4_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV4_SCTP	(ICE_RSS_TYPE_IPV4_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 /* VLAN IPv6 */
 #define ICE_RSS_TYPE_VLAN_IPV6		(ICE_RSS_TYPE_IPV6 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV6_FRAG	(ICE_RSS_TYPE_IPV6 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
-					 ETH_RSS_FRAG_IPV6)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+					 RTE_ETH_RSS_FRAG_IPV6)
 #define ICE_RSS_TYPE_VLAN_IPV6_UDP	(ICE_RSS_TYPE_IPV6_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV6_TCP	(ICE_RSS_TYPE_IPV6_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV6_SCTP	(ICE_RSS_TYPE_IPV6_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 
 /* GTPU IPv4 */
 #define ICE_RSS_TYPE_GTPU_IPV4		(ICE_RSS_TYPE_IPV4 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV4_UDP	(ICE_RSS_TYPE_IPV4_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV4_TCP	(ICE_RSS_TYPE_IPV4_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 /* GTPU IPv6 */
 #define ICE_RSS_TYPE_GTPU_IPV6		(ICE_RSS_TYPE_IPV6 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV6_UDP	(ICE_RSS_TYPE_IPV6_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV6_TCP	(ICE_RSS_TYPE_IPV6_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 
 /* PPPOE */
-#define ICE_RSS_TYPE_PPPOE		(ETH_RSS_ETH | ETH_RSS_PPPOE)
+#define ICE_RSS_TYPE_PPPOE		(RTE_ETH_RSS_ETH | RTE_ETH_RSS_PPPOE)
 
 /* PPPOE IPv4 */
 #define ICE_RSS_TYPE_PPPOE_IPV4		(ICE_RSS_TYPE_IPV4 | \
@@ -472,17 +472,17 @@ struct ice_rss_hash_cfg eth_tmplt = {
 					 ICE_RSS_TYPE_PPPOE)
 
 /* ESP, AH, L2TPV3 and PFCP */
-#define ICE_RSS_TYPE_IPV4_ESP		(ETH_RSS_ESP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_ESP		(ETH_RSS_ESP | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_AH		(ETH_RSS_AH | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_AH		(ETH_RSS_AH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
 
 /* MAC */
-#define ICE_RSS_TYPE_ETH		ETH_RSS_ETH
+#define ICE_RSS_TYPE_ETH		RTE_ETH_RSS_ETH
 
 /**
  * Supported pattern for hash.
@@ -647,86 +647,86 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 	uint64_t *hash_flds = &hash_cfg->hash_flds;
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH) {
-		if (!(rss_type & ETH_RSS_ETH))
+		if (!(rss_type & RTE_ETH_RSS_ETH))
 			*hash_flds &= ~ICE_FLOW_HASH_ETH;
-		if (rss_type & ETH_RSS_L2_SRC_ONLY)
+		if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
 			*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_DA));
-		else if (rss_type & ETH_RSS_L2_DST_ONLY)
+		else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
 			*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_SA));
 		*addl_hdrs &= ~ICE_FLOW_SEG_HDR_ETH;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH_NON_IP) {
-		if (rss_type & ETH_RSS_ETH)
+		if (rss_type & RTE_ETH_RSS_ETH)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_TYPE);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_VLAN) {
-		if (rss_type & ETH_RSS_C_VLAN)
+		if (rss_type & RTE_ETH_RSS_C_VLAN)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_C_VLAN);
-		else if (rss_type & ETH_RSS_S_VLAN)
+		else if (rss_type & RTE_ETH_RSS_S_VLAN)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_S_VLAN);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_PPPOE) {
-		if (!(rss_type & ETH_RSS_PPPOE))
+		if (!(rss_type & RTE_ETH_RSS_PPPOE))
 			*hash_flds &= ~ICE_FLOW_HASH_PPPOE_SESS_ID;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV4) {
 		if (rss_type &
-		   (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-		    ETH_RSS_NONFRAG_IPV4_UDP |
-		    ETH_RSS_NONFRAG_IPV4_TCP |
-		    ETH_RSS_NONFRAG_IPV4_SCTP)) {
-			if (rss_type & ETH_RSS_FRAG_IPV4) {
+		   (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+		    RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+			if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
 				*addl_hdrs |= ICE_FLOW_SEG_HDR_IPV_FRAG;
 				*addl_hdrs &= ~(ICE_FLOW_SEG_HDR_IPV_OTHER);
 				*hash_flds |=
 					BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_ID);
 			}
-			if (rss_type & ETH_RSS_L3_SRC_ONLY)
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_DA));
-			else if (rss_type & ETH_RSS_L3_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_SA));
 			else if (rss_type &
-				(ETH_RSS_L4_SRC_ONLY |
-				ETH_RSS_L4_DST_ONLY))
+				(RTE_ETH_RSS_L4_SRC_ONLY |
+				RTE_ETH_RSS_L4_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_IPV4;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_IPV4;
 		}
 
-		if (rss_type & ETH_RSS_IPV4_CHKSUM)
+		if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_CHKSUM);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV6) {
 		if (rss_type &
-		   (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-		    ETH_RSS_NONFRAG_IPV6_UDP |
-		    ETH_RSS_NONFRAG_IPV6_TCP |
-		    ETH_RSS_NONFRAG_IPV6_SCTP)) {
-			if (rss_type & ETH_RSS_FRAG_IPV6)
+		   (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+		    RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+			if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
 				*hash_flds |=
 					BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_ID);
-			if (rss_type & ETH_RSS_L3_SRC_ONLY)
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
-			else if (rss_type & ETH_RSS_L3_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 			else if (rss_type &
-				(ETH_RSS_L4_SRC_ONLY |
-				ETH_RSS_L4_DST_ONLY))
+				(RTE_ETH_RSS_L4_SRC_ONLY |
+				RTE_ETH_RSS_L4_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_IPV6;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_IPV6;
 		}
 
 		if (rss_type & RTE_ETH_RSS_L3_PRE32) {
-			if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_SA));
-			} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+			} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_DA));
 			} else {
@@ -735,10 +735,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 			}
 		}
 		if (rss_type & RTE_ETH_RSS_L3_PRE48) {
-			if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_SA));
-			} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+			} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_DA));
 			} else {
@@ -747,10 +747,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 			}
 		}
 		if (rss_type & RTE_ETH_RSS_L3_PRE64) {
-			if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_SA));
-			} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+			} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_DA));
 			} else {
@@ -762,81 +762,81 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_UDP) {
 		if (rss_type &
-		   (ETH_RSS_NONFRAG_IPV4_UDP |
-		    ETH_RSS_NONFRAG_IPV6_UDP)) {
-			if (rss_type & ETH_RSS_L4_SRC_ONLY)
+		   (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+			if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_DST_PORT));
-			else if (rss_type & ETH_RSS_L4_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_SRC_PORT));
 			else if (rss_type &
-				(ETH_RSS_L3_SRC_ONLY |
-				  ETH_RSS_L3_DST_ONLY))
+				(RTE_ETH_RSS_L3_SRC_ONLY |
+				  RTE_ETH_RSS_L3_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
 		}
 
-		if (rss_type & ETH_RSS_L4_CHKSUM)
+		if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_CHKSUM);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_TCP) {
 		if (rss_type &
-		   (ETH_RSS_NONFRAG_IPV4_TCP |
-		    ETH_RSS_NONFRAG_IPV6_TCP)) {
-			if (rss_type & ETH_RSS_L4_SRC_ONLY)
+		   (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+			if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_DST_PORT));
-			else if (rss_type & ETH_RSS_L4_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_SRC_PORT));
 			else if (rss_type &
-				(ETH_RSS_L3_SRC_ONLY |
-				  ETH_RSS_L3_DST_ONLY))
+				(RTE_ETH_RSS_L3_SRC_ONLY |
+				  RTE_ETH_RSS_L3_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
 		}
 
-		if (rss_type & ETH_RSS_L4_CHKSUM)
+		if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_CHKSUM);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_SCTP) {
 		if (rss_type &
-		   (ETH_RSS_NONFRAG_IPV4_SCTP |
-		    ETH_RSS_NONFRAG_IPV6_SCTP)) {
-			if (rss_type & ETH_RSS_L4_SRC_ONLY)
+		   (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+			if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_DST_PORT));
-			else if (rss_type & ETH_RSS_L4_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_SRC_PORT));
 			else if (rss_type &
-				(ETH_RSS_L3_SRC_ONLY |
-				  ETH_RSS_L3_DST_ONLY))
+				(RTE_ETH_RSS_L3_SRC_ONLY |
+				  RTE_ETH_RSS_L3_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
 		}
 
-		if (rss_type & ETH_RSS_L4_CHKSUM)
+		if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_CHKSUM);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_L2TPV3) {
-		if (!(rss_type & ETH_RSS_L2TPV3))
+		if (!(rss_type & RTE_ETH_RSS_L2TPV3))
 			*hash_flds &= ~ICE_FLOW_HASH_L2TPV3_SESS_ID;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_ESP) {
-		if (!(rss_type & ETH_RSS_ESP))
+		if (!(rss_type & RTE_ETH_RSS_ESP))
 			*hash_flds &= ~ICE_FLOW_HASH_ESP_SPI;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_AH) {
-		if (!(rss_type & ETH_RSS_AH))
+		if (!(rss_type & RTE_ETH_RSS_AH))
 			*hash_flds &= ~ICE_FLOW_HASH_AH_SPI;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_PFCP_SESSION) {
-		if (!(rss_type & ETH_RSS_PFCP))
+		if (!(rss_type & RTE_ETH_RSS_PFCP))
 			*hash_flds &= ~ICE_FLOW_HASH_PFCP_SEID;
 	}
 }
@@ -870,7 +870,7 @@ ice_refine_hash_cfg_gtpu(struct ice_rss_hash_cfg *hash_cfg,
 	uint64_t *hash_flds = &hash_cfg->hash_flds;
 
 	/* update hash field for gtpu eh/gtpu dwn/gtpu up. */
-	if (!(rss_type & ETH_RSS_GTPU))
+	if (!(rss_type & RTE_ETH_RSS_GTPU))
 		return;
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_GTPU_DWN)
@@ -892,10 +892,10 @@ static void ice_refine_hash_cfg(struct ice_rss_hash_cfg *hash_cfg,
 }
 
 static uint64_t invalid_rss_comb[] = {
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	RTE_ETH_RSS_L3_PRE40 |
 	RTE_ETH_RSS_L3_PRE56 |
 	RTE_ETH_RSS_L3_PRE96
@@ -907,9 +907,9 @@ struct rss_attr_type {
 };
 
 static struct rss_attr_type rss_attr_to_valid_type[] = {
-	{ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY,	ETH_RSS_ETH},
-	{ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
-	{ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
+	{RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY,	RTE_ETH_RSS_ETH},
+	{RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
+	{RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
 	/* current ipv6 prefix only supports prefix 64 bits*/
 	{RTE_ETH_RSS_L3_PRE32,				VALID_RSS_IPV6},
 	{RTE_ETH_RSS_L3_PRE48,				VALID_RSS_IPV6},
@@ -928,16 +928,16 @@ ice_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
 	 * hash function.
 	 */
 	if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
-		if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
-		    ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+		if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+		    RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
 			return true;
 
 		if (!(rss_type &
-		   (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
-		    ETH_RSS_FRAG_IPV4 | ETH_RSS_FRAG_IPV6 |
-		    ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
-		    ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
-		    ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+		   (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+		    RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_FRAG_IPV6 |
+		    RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
 			return true;
 	}
 
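The same validation runs when the RSS configuration arrives as a flow action; an illustrative rte_flow fragment under the new names (symmetric Toeplitz plus any *_SRC_ONLY/*_DST_ONLY attribute would be rejected by ice_any_invalid_rss_type() above):

#include <rte_flow.h>

/* Illustrative only: a symmetric-Toeplitz RSS action that passes the
 * checks above -- plain 4-tuple types, no SRC/DST_ONLY attributes. */
static const struct rte_flow_action_rss app_rss = {
	.func = RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ,
	.types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
};
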
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index ff362c21d9f5..8406240d7209 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -303,7 +303,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
 		}
 	}
 
-	if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 		/* Register mbuf field and flag for Rx timestamp */
 		err = rte_mbuf_dyn_rx_timestamp_register(
 				&ice_timestamp_dynfield_offset,
@@ -367,7 +367,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
 	regval |= (0x03 << QRXFLXP_CNTXT_RXDID_PRIO_S) &
 		QRXFLXP_CNTXT_RXDID_PRIO_M;
 
-	if (ad->ptp_ena || rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (ad->ptp_ena || rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 		regval |= QRXFLXP_CNTXT_TS_M;
 
 	ICE_WRITE_REG(hw, QRXFLXP_CNTXT(rxq->reg_idx), regval);
@@ -1117,7 +1117,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
 
 	rxq->reg_idx = vsi->base_queue + queue_idx;
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1624,7 +1624,7 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq)
 			ice_rxd_to_vlan_tci(mb, &rxdp[j]);
 			rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
-			if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+			if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 				ts_ns = ice_tstamp_convert_32b_64b(hw,
 					rte_le_to_cpu_32(rxdp[j].wb.flex_ts.ts_high));
 				if (ice_timestamp_dynflag > 0) {
@@ -1942,7 +1942,7 @@ ice_recv_scattered_pkts(void *rx_queue,
 		rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
 		pkt_flags = ice_rxd_error_to_pkt_flags(rx_stat_err0);
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
-		if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 			ts_ns = ice_tstamp_convert_32b_64b(hw,
 				rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high));
 			if (ice_timestamp_dynflag > 0) {
@@ -2373,7 +2373,7 @@ ice_recv_pkts(void *rx_queue,
 		rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
 		pkt_flags = ice_rxd_error_to_pkt_flags(rx_stat_err0);
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
-		if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 			ts_ns = ice_tstamp_convert_32b_64b(hw,
 				rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high));
 			if (ice_timestamp_dynflag > 0) {
@@ -2889,7 +2889,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
 	for (i = 0; i < txq->tx_rs_thresh; i++)
 		rte_prefetch0((txep + i)->mbuf);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
 		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
 			rte_mempool_put(txep->mbuf->pool, txep->mbuf);
 			txep->mbuf = NULL;
@@ -3365,7 +3365,7 @@ ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq)
 	/* Use a simple Tx queue if possible (only fast free is allowed) */
 	ad->tx_simple_allowed =
 		(txq->offloads ==
-		(txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+		(txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
 		txq->tx_rs_thresh >= ICE_TX_MAX_BURST);
 
 	if (ad->tx_simple_allowed)
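
As the hunk above shows, ice keeps its simple Tx path only when
RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE is the sole offload requested. A hedged
application-side sketch (the port id, queue id and ring size are
assumptions for illustration):

#include <rte_ethdev.h>

static int
setup_simple_txq(uint16_t port_id)
{
	struct rte_eth_dev_info info;
	struct rte_eth_txconf txconf;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;

	txconf = info.default_txconf;
	/* single offload: keeps tx_simple_allowed true in the driver */
	txconf.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;

	return rte_eth_tx_queue_setup(port_id, 0, 1024,
			rte_eth_dev_socket_id(port_id), &txconf);
}
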
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 490693bff218..86955539bea8 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -474,7 +474,7 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			 * will cause performance drop to get into this context.
 			 */
 			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-					DEV_RX_OFFLOAD_RSS_HASH) {
+					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
 					_mm_load_si128
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 7efe7b50a206..af23f6a34e58 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -585,7 +585,7 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
 			 * will cause performance drop to get into this context.
 			 */
 			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-					DEV_RX_OFFLOAD_RSS_HASH) {
+					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
 					_mm_load_si128
@@ -995,7 +995,7 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->tx_next_dd - (n - 1);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		void **cache_objs;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index f0f99265857e..b1d975b31a5a 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -248,23 +248,23 @@ ice_rxq_vec_setup_default(struct ice_rx_queue *rxq)
 }
 
 #define ICE_TX_NO_VECTOR_FLAGS (			\
-		DEV_TX_OFFLOAD_MULTI_SEGS |		\
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |	\
-		DEV_TX_OFFLOAD_TCP_TSO)
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |		\
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |	\
+		RTE_ETH_TX_OFFLOAD_TCP_TSO)
 
 #define ICE_TX_VECTOR_OFFLOAD (				\
-		DEV_TX_OFFLOAD_VLAN_INSERT |		\
-		DEV_TX_OFFLOAD_QINQ_INSERT |		\
-		DEV_TX_OFFLOAD_IPV4_CKSUM |		\
-		DEV_TX_OFFLOAD_SCTP_CKSUM |		\
-		DEV_TX_OFFLOAD_UDP_CKSUM |		\
-		DEV_TX_OFFLOAD_TCP_CKSUM)
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |		\
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |		\
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 
 #define ICE_RX_VECTOR_OFFLOAD (				\
-		DEV_RX_OFFLOAD_CHECKSUM |		\
-		DEV_RX_OFFLOAD_SCTP_CKSUM |		\
-		DEV_RX_OFFLOAD_VLAN |			\
-		DEV_RX_OFFLOAD_RSS_HASH)
+		RTE_ETH_RX_OFFLOAD_CHECKSUM |		\
+		RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |		\
+		RTE_ETH_RX_OFFLOAD_VLAN |			\
+		RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define ICE_VECTOR_PATH		0
 #define ICE_VECTOR_OFFLOAD_PATH	1
@@ -287,7 +287,7 @@ ice_rx_vec_queue_default(struct ice_rx_queue *rxq)
 	if (rxq->proto_xtr != PROTO_XTR_NONE)
 		return -1;
 
-	if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 		return -1;
 
 	if (rxq->offloads & ICE_RX_VECTOR_OFFLOAD)
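
Putting the three masks together, the path selection implied above can be
sketched as follows (simplified; it assumes the ICE_* macros defined in
this header and mirrors, not replaces, the driver logic):

static int
pick_rx_path(uint64_t offloads)
{
	if (offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
		return -1;                      /* scalar path only */
	if (offloads & ICE_RX_VECTOR_OFFLOAD)
		return ICE_VECTOR_OFFLOAD_PATH; /* vector path with offloads */
	return ICE_VECTOR_PATH;                 /* plain vector path */
}
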
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 653bd28b417c..117494131f32 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -479,7 +479,7 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 		 * will cause performance drop to get into this context.
 		 */
 		if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_RSS_HASH) {
+				RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh3 =
 				_mm_load_si128
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 2a1ed90b641b..7ce80a442b35 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -307,8 +307,8 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (rx_mq_mode != ETH_MQ_RX_NONE &&
-		rx_mq_mode != ETH_MQ_RX_RSS) {
+	if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+		rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
 		/* RSS together with VMDq not supported */
 		PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
 				rx_mq_mode);
@@ -318,7 +318,7 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
 	/* To not break software that sets an invalid mode, only display
 	 * a warning if an invalid mode is used.
 	 */
-	if (tx_mq_mode != ETH_MQ_TX_NONE)
+	if (tx_mq_mode != RTE_ETH_MQ_TX_NONE)
 		PMD_INIT_LOG(WARNING,
 			"TX mode %d is not supported; it is meaningless in this driver and will be ignored",
 			tx_mq_mode);
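
A configuration that passes the check above, sketched with the renamed
mq_mode values (the port id, queue counts and hash protocols are
placeholders):

#include <string.h>
#include <rte_ethdev.h>

static int
configure_igc_rss(uint16_t port_id)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;  /* NONE is also accepted */
	conf.txmode.mq_mode = RTE_ETH_MQ_TX_NONE; /* anything else only warns */
	conf.rx_adv_conf.rss_conf.rss_hf =
		RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP;

	return rte_eth_dev_configure(port_id, 2, 2, &conf);
}
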
@@ -334,8 +334,8 @@ eth_igc_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	ret  = igc_check_mq_mode(dev);
 	if (ret != 0)
@@ -473,12 +473,12 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		uint16_t duplex, speed;
 		hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
 		link.link_duplex = (duplex == FULL_DUPLEX) ?
-				ETH_LINK_FULL_DUPLEX :
-				ETH_LINK_HALF_DUPLEX;
+				RTE_ETH_LINK_FULL_DUPLEX :
+				RTE_ETH_LINK_HALF_DUPLEX;
 		link.link_speed = speed;
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 		link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 		if (speed == SPEED_2500) {
 			uint32_t tipg = IGC_READ_REG(hw, IGC_TIPG);
@@ -490,9 +490,9 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		}
 	} else {
 		link.link_speed = 0;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_status = ETH_LINK_DOWN;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -525,7 +525,7 @@ eth_igc_interrupt_action(struct rte_eth_dev *dev)
 				" Port %d: Link Up - speed %u Mbps - %s",
 				dev->data->port_id,
 				(unsigned int)link.link_speed,
-				link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+				link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 				"full-duplex" : "half-duplex");
 		else
 			PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -972,18 +972,18 @@ eth_igc_start(struct rte_eth_dev *dev)
 
 	/* VLAN Offload Settings */
 	eth_igc_vlan_offload_set(dev,
-		ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK);
+		RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK);
 
 	/* Setup link speed and duplex */
 	speeds = &dev->data->dev_conf.link_speeds;
-	if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		hw->phy.autoneg_advertised = IGC_ALL_SPEED_DUPLEX_2500;
 		hw->mac.autoneg = 1;
 	} else {
 		int num_speeds = 0;
 
-		if (*speeds & ETH_LINK_SPEED_FIXED) {
+		if (*speeds & RTE_ETH_LINK_SPEED_FIXED) {
 			PMD_DRV_LOG(ERR,
 				    "Force speed mode currently not supported");
 			igc_dev_clear_queues(dev);
@@ -993,33 +993,33 @@ eth_igc_start(struct rte_eth_dev *dev)
 		hw->phy.autoneg_advertised = 0;
 		hw->mac.autoneg = 1;
 
-		if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G)) {
+		if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G)) {
 			num_speeds = -1;
 			goto error_invalid_config;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_1G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_1G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_2_5G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_2_5G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_2500_FULL;
 			num_speeds++;
 		}
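
For reference, the link_speeds bitmap this loop consumes is built by the
application with the same renamed constants; a short sketch (note the
function above rejects RTE_ETH_LINK_SPEED_FIXED on this device):

static void
set_igc_speeds(struct rte_eth_conf *conf)
{
	/* full autonegotiation over everything the port supports */
	conf->link_speeds = RTE_ETH_LINK_SPEED_AUTONEG;

	/* or: autonegotiate but advertise only 1G and 2.5G */
	conf->link_speeds = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G;
}
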
@@ -1482,14 +1482,14 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = hw->mac.rar_entry_count;
 	dev_info->rx_offload_capa = IGC_RX_OFFLOAD_ALL;
 	dev_info->tx_offload_capa = IGC_TX_OFFLOAD_ALL;
-	dev_info->rx_queue_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->rx_queue_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	dev_info->max_rx_queues = IGC_QUEUE_PAIRS_NUM;
 	dev_info->max_tx_queues = IGC_QUEUE_PAIRS_NUM;
 	dev_info->max_vmdq_pools = 0;
 
 	dev_info->hash_key_size = IGC_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = IGC_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1515,9 +1515,9 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->rx_desc_lim = rx_desc_lim;
 	dev_info->tx_desc_lim = tx_desc_lim;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G;
 
 	dev_info->max_mtu = dev_info->max_rx_pktlen - IGC_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -2141,13 +2141,13 @@ eth_igc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		rx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -2179,16 +2179,16 @@ eth_igc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		hw->fc.requested_mode = igc_fc_none;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		hw->fc.requested_mode = igc_fc_rx_pause;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		hw->fc.requested_mode = igc_fc_tx_pause;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		hw->fc.requested_mode = igc_fc_full;
 		break;
 	default:
@@ -2234,29 +2234,29 @@ eth_igc_rss_reta_update(struct rte_eth_dev *dev,
 	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
 	uint16_t i;
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR,
 			"The size of the configured RSS redirection table (%d) doesn't match the number the hardware can support (%d)",
-			reta_size, ETH_RSS_RETA_SIZE_128);
+			reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
-	RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+	RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
 
 	/* set redirection table */
-	for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
 		union igc_rss_reta_reg reta, reg;
 		uint16_t idx, shift;
 		uint8_t j, mask;
 
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				IGC_RSS_RDT_REG_SIZE_MASK);
 
 		/* if no need to update the register */
 		if (!mask ||
-		    shift > (RTE_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
+		    shift > (RTE_ETH_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
 			continue;
 
 		/* check mask to see whether the register value needs to be read first */
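
The application-side counterpart of this loop uses the same
RTE_ETH_RETA_GROUP_SIZE arithmetic. A sketch assuming the 128-entry table
this device reports (the helper name and the round-robin spreading policy
are illustrative):

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

static int
spread_reta(uint16_t port_id, uint16_t nb_queues)
{
	struct rte_eth_rss_reta_entry64
		reta[RTE_ETH_RSS_RETA_SIZE_128 / RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta, 0, sizeof(reta));
	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++) {
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

		reta[idx].mask |= UINT64_C(1) << shift;
		reta[idx].reta[shift] = i % nb_queues; /* round-robin */
	}
	return rte_eth_dev_rss_reta_update(port_id, reta,
			RTE_ETH_RSS_RETA_SIZE_128);
}
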
@@ -2290,29 +2290,29 @@ eth_igc_rss_reta_query(struct rte_eth_dev *dev,
 	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
 	uint16_t i;
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR,
 			"The size of the configured RSS redirection table (%d) doesn't match the number the hardware can support (%d)",
-			reta_size, ETH_RSS_RETA_SIZE_128);
+			reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
-	RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+	RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
 
 	/* read redirection table */
-	for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
 		union igc_rss_reta_reg reta;
 		uint16_t idx, shift;
 		uint8_t j, mask;
 
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				IGC_RSS_RDT_REG_SIZE_MASK);
 
 		/* if no need to read register */
 		if (!mask ||
-		    shift > (RTE_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
+		    shift > (RTE_ETH_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
 			continue;
 
 		/* read register and get the queue index */
@@ -2369,23 +2369,23 @@ eth_igc_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	rss_hf = 0;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_EX)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP_EX)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP_EX)
-		rss_hf |= ETH_RSS_IPV6_UDP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
 
 	rss_conf->rss_hf |= rss_hf;
 	return 0;
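
Reading these flags back from an application is symmetric; a sketch with
an assumed port id (key retrieval is skipped by leaving rss_key NULL):

#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>

static void
show_rss_types(uint16_t port_id)
{
	struct rte_eth_rss_conf rss_conf;

	memset(&rss_conf, 0, sizeof(rss_conf));
	if (rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf) != 0)
		return;

	if (rss_conf.rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
		printf("hashing on IPv4/TCP\n");
	if (rss_conf.rss_hf & RTE_ETH_RSS_IPV6_EX)
		printf("hashing on IPv6 with extension headers\n");
}
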
@@ -2514,22 +2514,22 @@ eth_igc_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			igc_vlan_hw_strip_enable(dev);
 		else
 			igc_vlan_hw_strip_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			igc_vlan_hw_filter_enable(dev);
 		else
 			igc_vlan_hw_filter_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			return igc_vlan_hw_extend_enable(dev);
 		else
 			return igc_vlan_hw_extend_disable(dev);
@@ -2547,7 +2547,7 @@ eth_igc_vlan_tpid_set(struct rte_eth_dev *dev,
 	uint32_t reg_val;
 
 	/* only outer TPID of double VLAN can be configured */
-	if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		reg_val = IGC_READ_REG(hw, IGC_VET);
 		reg_val = (reg_val & (~IGC_VET_EXT)) |
 			((uint32_t)tpid << IGC_VET_EXT_SHIFT);
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index 5e6c2ff30157..f56cad79e939 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -66,37 +66,37 @@ extern "C" {
 #define IGC_TX_MAX_MTU_SEG	UINT8_MAX
 
 #define IGC_RX_OFFLOAD_ALL	(    \
-	DEV_RX_OFFLOAD_VLAN_STRIP  | \
-	DEV_RX_OFFLOAD_VLAN_FILTER | \
-	DEV_RX_OFFLOAD_VLAN_EXTEND | \
-	DEV_RX_OFFLOAD_IPV4_CKSUM  | \
-	DEV_RX_OFFLOAD_UDP_CKSUM   | \
-	DEV_RX_OFFLOAD_TCP_CKSUM   | \
-	DEV_RX_OFFLOAD_SCTP_CKSUM  | \
-	DEV_RX_OFFLOAD_KEEP_CRC    | \
-	DEV_RX_OFFLOAD_SCATTER     | \
-	DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RX_OFFLOAD_VLAN_STRIP  | \
+	RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+	RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+	RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  | \
+	RTE_ETH_RX_OFFLOAD_UDP_CKSUM   | \
+	RTE_ETH_RX_OFFLOAD_TCP_CKSUM   | \
+	RTE_ETH_RX_OFFLOAD_SCTP_CKSUM  | \
+	RTE_ETH_RX_OFFLOAD_KEEP_CRC    | \
+	RTE_ETH_RX_OFFLOAD_SCATTER     | \
+	RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define IGC_TX_OFFLOAD_ALL	(    \
-	DEV_TX_OFFLOAD_VLAN_INSERT | \
-	DEV_TX_OFFLOAD_IPV4_CKSUM  | \
-	DEV_TX_OFFLOAD_UDP_CKSUM   | \
-	DEV_TX_OFFLOAD_TCP_CKSUM   | \
-	DEV_TX_OFFLOAD_SCTP_CKSUM  | \
-	DEV_TX_OFFLOAD_TCP_TSO     | \
-	DEV_TX_OFFLOAD_UDP_TSO	   | \
-	DEV_TX_OFFLOAD_MULTI_SEGS)
+	RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  | \
+	RTE_ETH_TX_OFFLOAD_UDP_CKSUM   | \
+	RTE_ETH_TX_OFFLOAD_TCP_CKSUM   | \
+	RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  | \
+	RTE_ETH_TX_OFFLOAD_TCP_TSO     | \
+	RTE_ETH_TX_OFFLOAD_UDP_TSO	   | \
+	RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define IGC_RSS_OFFLOAD_ALL	(    \
-	ETH_RSS_IPV4               | \
-	ETH_RSS_NONFRAG_IPV4_TCP   | \
-	ETH_RSS_NONFRAG_IPV4_UDP   | \
-	ETH_RSS_IPV6               | \
-	ETH_RSS_NONFRAG_IPV6_TCP   | \
-	ETH_RSS_NONFRAG_IPV6_UDP   | \
-	ETH_RSS_IPV6_EX            | \
-	ETH_RSS_IPV6_TCP_EX        | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4               | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP   | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP   | \
+	RTE_ETH_RSS_IPV6               | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP   | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP   | \
+	RTE_ETH_RSS_IPV6_EX            | \
+	RTE_ETH_RSS_IPV6_TCP_EX        | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define IGC_MAX_ETQF_FILTERS		3	/* etqf(3) is used for 1588 */
 #define IGC_ETQF_FILTER_1588		3
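
These capability masks surface through rte_eth_dev_info_get(), so an
application can intersect its wishes with what igc reports; a hedged
sketch (the requested offload set is an example only):

static uint64_t
usable_tx_offloads(uint16_t port_id)
{
	struct rte_eth_dev_info info;
	uint64_t want = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
			RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
			RTE_ETH_TX_OFFLOAD_TCP_TSO;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return 0;

	return want & info.tx_offload_capa; /* keep only supported bits */
}
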
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 56132e8c6cd6..1d34ae2e1b15 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -127,7 +127,7 @@ struct igc_rx_queue {
 	uint8_t             crc_len;    /**< 0 if CRC stripped, 4 otherwise. */
 	uint8_t             drop_en;	/**< If not 0, set SRRCTL.Drop_En. */
 	uint32_t            flags;      /**< RX flags. */
-	uint64_t	    offloads;   /**< offloads of DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads;   /**< offloads of RTE_ETH_RX_OFFLOAD_* */
 };
 
 /** Offload features */
@@ -209,7 +209,7 @@ struct igc_tx_queue {
 	/**< Start context position for transmit queue. */
 	struct igc_advctx_info ctx_cache[IGC_CTX_NUM];
 	/**< Hardware context history.*/
-	uint64_t	       offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+	uint64_t	       offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
 };
 
 static inline uint64_t
@@ -847,23 +847,23 @@ igc_hw_rss_hash_set(struct igc_hw *hw, struct rte_eth_rss_conf *rss_conf)
 	/* Set configured hashing protocols in MRQC register */
 	rss_hf = rss_conf->rss_hf;
 	mrqc = IGC_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV4_TCP;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6;
-	if (rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_EX)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP;
-	if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV4_UDP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP;
-	if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP_EX;
 	IGC_WRITE_REG(hw, IGC_MRQC, mrqc);
 }
@@ -1037,10 +1037,10 @@ igc_dev_mq_rx_configure(struct rte_eth_dev *dev)
 	}
 
 	switch (dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		igc_rss_configure(dev);
 		break;
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 		/*
 		 * configure the RSS registers for the following,
 		 * then disable the RSS logic
@@ -1111,7 +1111,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 * call to configure
 		 */
-		rxq->crc_len = (offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+		rxq->crc_len = (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
 				RTE_ETHER_CRC_LEN : 0;
 
 		bus_addr = rxq->rx_ring_phys_addr;
@@ -1177,7 +1177,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 		IGC_WRITE_REG(hw, IGC_RXDCTL(rxq->reg_idx), rxdctl);
 	}
 
-	if (offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		dev->data->scattered_rx = 1;
 
 	if (dev->data->scattered_rx) {
@@ -1221,20 +1221,20 @@ igc_rx_init(struct rte_eth_dev *dev)
 	rxcsum |= IGC_RXCSUM_PCSD;
 
 	/* Enable both L3/L4 rx checksum offload */
-	if (offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		rxcsum |= IGC_RXCSUM_IPOFL;
 	else
 		rxcsum &= ~IGC_RXCSUM_IPOFL;
 
 	if (offloads &
-		(DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+		(RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
 		rxcsum |= IGC_RXCSUM_TUOFL;
-		offloads |= DEV_RX_OFFLOAD_SCTP_CKSUM;
+		offloads |= RTE_ETH_RX_OFFLOAD_SCTP_CKSUM;
 	} else {
 		rxcsum &= ~IGC_RXCSUM_TUOFL;
 	}
 
-	if (offloads & DEV_RX_OFFLOAD_SCTP_CKSUM)
+	if (offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM)
 		rxcsum |= IGC_RXCSUM_CRCOFL;
 	else
 		rxcsum &= ~IGC_RXCSUM_CRCOFL;
@@ -1242,7 +1242,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 	IGC_WRITE_REG(hw, IGC_RXCSUM, rxcsum);
 
 	/* Setup the Receive Control Register. */
-	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rctl &= ~IGC_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
 	else
 		rctl |= IGC_RCTL_SECRC; /* Strip Ethernet CRC. */
@@ -1279,12 +1279,12 @@ igc_rx_init(struct rte_eth_dev *dev)
 		IGC_WRITE_REG(hw, IGC_RDT(rxq->reg_idx), rxq->nb_rx_desc - 1);
 
 		dvmolr = IGC_READ_REG(hw, IGC_DVMOLR(rxq->reg_idx));
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			dvmolr |= IGC_DVMOLR_STRVLAN;
 		else
 			dvmolr &= ~IGC_DVMOLR_STRVLAN;
 
-		if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			dvmolr &= ~IGC_DVMOLR_STRCRC;
 		else
 			dvmolr |= IGC_DVMOLR_STRCRC;
@@ -2253,10 +2253,10 @@ eth_igc_vlan_strip_queue_set(struct rte_eth_dev *dev,
 	reg_val = IGC_READ_REG(hw, IGC_DVMOLR(rx_queue_id));
 	if (on) {
 		reg_val |= IGC_DVMOLR_STRVLAN;
-		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
 		reg_val &= ~(IGC_DVMOLR_STRVLAN | IGC_DVMOLR_HIDVLAN);
-		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	IGC_WRITE_REG(hw, IGC_DVMOLR(rx_queue_id), reg_val);
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index f94a1fed0a38..c688c3735c06 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -280,37 +280,37 @@ ionic_dev_link_update(struct rte_eth_dev *eth_dev,
 	memset(&link, 0, sizeof(link));
 
 	if (adapter->idev.port_info->config.an_enable) {
-		link.link_autoneg = ETH_LINK_AUTONEG;
+		link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	}
 
 	if (!adapter->link_up ||
 	    !(lif->state & IONIC_LIF_F_UP)) {
 		/* Interface is down */
-		link.link_status = ETH_LINK_DOWN;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	} else {
 		/* Interface is up */
-		link.link_status = ETH_LINK_UP;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_status = RTE_ETH_LINK_UP;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		switch (adapter->link_speed) {
 		case  10000:
-			link.link_speed = ETH_SPEED_NUM_10G;
+			link.link_speed = RTE_ETH_SPEED_NUM_10G;
 			break;
 		case  25000:
-			link.link_speed = ETH_SPEED_NUM_25G;
+			link.link_speed = RTE_ETH_SPEED_NUM_25G;
 			break;
 		case  40000:
-			link.link_speed = ETH_SPEED_NUM_40G;
+			link.link_speed = RTE_ETH_SPEED_NUM_40G;
 			break;
 		case  50000:
-			link.link_speed = ETH_SPEED_NUM_50G;
+			link.link_speed = RTE_ETH_SPEED_NUM_50G;
 			break;
 		case 100000:
-			link.link_speed = ETH_SPEED_NUM_100G;
+			link.link_speed = RTE_ETH_SPEED_NUM_100G;
 			break;
 		default:
-			link.link_speed = ETH_SPEED_NUM_NONE;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 			break;
 		}
 	}
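
The struct filled in above is what rte_eth_link_get_nowait() hands back
to applications; a small consumer sketch (port id assumed):

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;

	if (link.link_status == RTE_ETH_LINK_UP)
		printf("up, %u Mbps, %s duplex\n", link.link_speed,
		       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
		       "full" : "half");
	else
		printf("down\n");
}
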
@@ -387,17 +387,17 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->flow_type_rss_offloads = IONIC_ETH_RSS_OFFLOAD_ALL;
 
 	dev_info->speed_capa =
-		ETH_LINK_SPEED_10G |
-		ETH_LINK_SPEED_25G |
-		ETH_LINK_SPEED_40G |
-		ETH_LINK_SPEED_50G |
-		ETH_LINK_SPEED_100G;
+		RTE_ETH_LINK_SPEED_10G |
+		RTE_ETH_LINK_SPEED_25G |
+		RTE_ETH_LINK_SPEED_40G |
+		RTE_ETH_LINK_SPEED_50G |
+		RTE_ETH_LINK_SPEED_100G;
 
 	/*
 	 * Per-queue capabilities
 	 * RTE does not support disabling a feature on a queue if it is
 	 * enabled globally on the device. Thus the driver does not advertise
-	 * capabilities like DEV_TX_OFFLOAD_IPV4_CKSUM as per-queue even
+	 * capabilities like RTE_ETH_TX_OFFLOAD_IPV4_CKSUM as per-queue even
 	 * though the driver would be otherwise capable of disabling it on
 	 * a per-queue basis.
 	 */
@@ -411,24 +411,24 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
 	 */
 
 	dev_info->rx_offload_capa = dev_info->rx_queue_offload_capa |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_RSS_HASH |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH |
 		0;
 
 	dev_info->tx_offload_capa = dev_info->tx_queue_offload_capa |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
 		0;
 
 	dev_info->rx_desc_lim = rx_desc_lim;
@@ -463,9 +463,9 @@ ionic_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 		fc_conf->autoneg = 0;
 
 		if (idev->port_info->config.pause_type)
-			fc_conf->mode = RTE_FC_FULL;
+			fc_conf->mode = RTE_ETH_FC_FULL;
 		else
-			fc_conf->mode = RTE_FC_NONE;
+			fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return 0;
@@ -487,14 +487,14 @@ ionic_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		pause_type = IONIC_PORT_PAUSE_TYPE_NONE;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		pause_type = IONIC_PORT_PAUSE_TYPE_LINK;
 		break;
-	case RTE_FC_RX_PAUSE:
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		return -ENOTSUP;
 	}
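
Per the switch above, only NONE and FULL are reachable on ionic;
requesting full flow control from an application looks like this sketch
(port id assumed):

#include <string.h>
#include <rte_ethdev.h>

static int
enable_pause(uint16_t port_id)
{
	struct rte_eth_fc_conf fc;

	memset(&fc, 0, sizeof(fc));
	fc.mode = RTE_ETH_FC_FULL; /* RX_PAUSE/TX_PAUSE return -ENOTSUP */

	return rte_eth_dev_flow_ctrl_set(port_id, &fc);
}
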
 
@@ -545,12 +545,12 @@ ionic_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	num = tbl_sz / RTE_RETA_GROUP_SIZE;
+	num = tbl_sz / RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < num; i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if (reta_conf[i].mask & ((uint64_t)1 << j)) {
-				index = (i * RTE_RETA_GROUP_SIZE) + j;
+				index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
 				lif->rss_ind_tbl[index] = reta_conf[i].reta[j];
 			}
 		}
@@ -585,12 +585,12 @@ ionic_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	num = reta_size / RTE_RETA_GROUP_SIZE;
+	num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < num; i++) {
 		memcpy(reta_conf->reta,
-			&lif->rss_ind_tbl[i * RTE_RETA_GROUP_SIZE],
-			RTE_RETA_GROUP_SIZE);
+			&lif->rss_ind_tbl[i * RTE_ETH_RETA_GROUP_SIZE],
+			RTE_ETH_RETA_GROUP_SIZE);
 		reta_conf++;
 	}
 
@@ -618,17 +618,17 @@ ionic_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
 			IONIC_RSS_HASH_KEY_SIZE);
 
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 	rss_conf->rss_hf = rss_hf;
 
@@ -660,17 +660,17 @@ ionic_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
 		if (!lif->rss_ind_tbl)
 			return -EINVAL;
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV4)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
 			rss_types |= IONIC_RSS_TYPE_IPV4;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			rss_types |= IONIC_RSS_TYPE_IPV4_TCP;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 			rss_types |= IONIC_RSS_TYPE_IPV4_UDP;
-		if (rss_conf->rss_hf & ETH_RSS_IPV6)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
 			rss_types |= IONIC_RSS_TYPE_IPV6;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 			rss_types |= IONIC_RSS_TYPE_IPV6_TCP;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 			rss_types |= IONIC_RSS_TYPE_IPV6_UDP;
 
 		ionic_lif_rss_config(lif, rss_types, key, NULL);
@@ -842,15 +842,15 @@ ionic_dev_configure(struct rte_eth_dev *eth_dev)
 static inline uint32_t
 ionic_parse_link_speeds(uint16_t link_speeds)
 {
-	if (link_speeds & ETH_LINK_SPEED_100G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_100G)
 		return 100000;
-	else if (link_speeds & ETH_LINK_SPEED_50G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_50G)
 		return 50000;
-	else if (link_speeds & ETH_LINK_SPEED_40G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_40G)
 		return 40000;
-	else if (link_speeds & ETH_LINK_SPEED_25G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_25G)
 		return 25000;
-	else if (link_speeds & ETH_LINK_SPEED_10G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 		return 10000;
 	else
 		return 0;
@@ -874,12 +874,12 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
 	IONIC_PRINT_CALL();
 
 	allowed_speeds =
-		ETH_LINK_SPEED_FIXED |
-		ETH_LINK_SPEED_10G |
-		ETH_LINK_SPEED_25G |
-		ETH_LINK_SPEED_40G |
-		ETH_LINK_SPEED_50G |
-		ETH_LINK_SPEED_100G;
+		RTE_ETH_LINK_SPEED_FIXED |
+		RTE_ETH_LINK_SPEED_10G |
+		RTE_ETH_LINK_SPEED_25G |
+		RTE_ETH_LINK_SPEED_40G |
+		RTE_ETH_LINK_SPEED_50G |
+		RTE_ETH_LINK_SPEED_100G;
 
 	if (dev_conf->link_speeds & ~allowed_speeds) {
 		IONIC_PRINT(ERR, "Invalid link setting");
@@ -896,7 +896,7 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
 	}
 
 	/* Configure link */
-	an_enable = (dev_conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+	an_enable = (dev_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 	ionic_dev_cmd_port_autoneg(idev, an_enable);
 	err = ionic_dev_cmd_wait_check(idev, IONIC_DEVCMD_TIMEOUT);
diff --git a/drivers/net/ionic/ionic_ethdev.h b/drivers/net/ionic/ionic_ethdev.h
index 6cbcd0f825a3..652f28c97d57 100644
--- a/drivers/net/ionic/ionic_ethdev.h
+++ b/drivers/net/ionic/ionic_ethdev.h
@@ -8,12 +8,12 @@
 #include <rte_ethdev.h>
 
 #define IONIC_ETH_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define IONIC_ETH_DEV_TO_LIF(eth_dev) ((struct ionic_lif *) \
 	(eth_dev)->data->dev_private)
diff --git a/drivers/net/ionic/ionic_lif.c b/drivers/net/ionic/ionic_lif.c
index a1f9ce2d81cb..5e8fdf3893ad 100644
--- a/drivers/net/ionic/ionic_lif.c
+++ b/drivers/net/ionic/ionic_lif.c
@@ -1688,12 +1688,12 @@ ionic_lif_configure_vlan_offload(struct ionic_lif *lif, int mask)
 
 	/*
 	 * IONIC_ETH_HW_VLAN_RX_FILTER cannot be turned off, so
-	 * set DEV_RX_OFFLOAD_VLAN_FILTER and ignore ETH_VLAN_FILTER_MASK
+	 * set RTE_ETH_RX_OFFLOAD_VLAN_FILTER and ignore RTE_ETH_VLAN_FILTER_MASK
 	 */
-	rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			lif->features |= IONIC_ETH_HW_VLAN_RX_STRIP;
 		else
 			lif->features &= ~IONIC_ETH_HW_VLAN_RX_STRIP;
@@ -1733,19 +1733,19 @@ ionic_lif_configure(struct ionic_lif *lif)
 	/*
 	 * NB: While it is true that RSS_HASH is always enabled on ionic,
 	 *     setting this flag unconditionally causes problems in DTS.
-	 * rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	 * rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	 */
 
 	/* RX per-port */
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM ||
-	    rxmode->offloads & DEV_RX_OFFLOAD_UDP_CKSUM ||
-	    rxmode->offloads & DEV_RX_OFFLOAD_TCP_CKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM ||
+	    rxmode->offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM ||
+	    rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
 		lif->features |= IONIC_ETH_HW_RX_CSUM;
 	else
 		lif->features &= ~IONIC_ETH_HW_RX_CSUM;
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		lif->features |= IONIC_ETH_HW_RX_SG;
 		lif->eth_dev->data->scattered_rx = 1;
 	} else {
@@ -1754,30 +1754,30 @@ ionic_lif_configure(struct ionic_lif *lif)
 	}
 
 	/* Covers VLAN_STRIP */
-	ionic_lif_configure_vlan_offload(lif, ETH_VLAN_STRIP_MASK);
+	ionic_lif_configure_vlan_offload(lif, RTE_ETH_VLAN_STRIP_MASK);
 
 	/* TX per-port */
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		lif->features |= IONIC_ETH_HW_TX_CSUM;
 	else
 		lif->features &= ~IONIC_ETH_HW_TX_CSUM;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		lif->features |= IONIC_ETH_HW_VLAN_TX_TAG;
 	else
 		lif->features &= ~IONIC_ETH_HW_VLAN_TX_TAG;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		lif->features |= IONIC_ETH_HW_TX_SG;
 	else
 		lif->features &= ~IONIC_ETH_HW_TX_SG;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		lif->features |= IONIC_ETH_HW_TSO;
 		lif->features |= IONIC_ETH_HW_TSO_IPV6;
 		lif->features |= IONIC_ETH_HW_TSO_ECN;
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index 4d16a39c6b6d..e3df7c56debe 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -203,11 +203,11 @@ ionic_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id,
 		txq->flags |= IONIC_QCQ_F_DEFERRED;
 
 	/* Convert the offload flags into queue flags */
-	if (offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+	if (offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		txq->flags |= IONIC_QCQ_F_CSUM_L3;
-	if (offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+	if (offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 		txq->flags |= IONIC_QCQ_F_CSUM_TCP;
-	if (offloads & DEV_TX_OFFLOAD_UDP_CKSUM)
+	if (offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)
 		txq->flags |= IONIC_QCQ_F_CSUM_UDP;
 
 	eth_dev->data->tx_queues[tx_queue_id] = txq;
@@ -743,11 +743,11 @@ ionic_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 
 	/*
 	 * Note: the interface does not currently support
-	 * DEV_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
+	 * RTE_ETH_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
 	 * when the adapter will be able to keep the CRC and subtract
 	 * it from the length for all received packets:
 	 * if (eth_dev->data->dev_conf.rxmode.offloads &
-	 *     DEV_RX_OFFLOAD_KEEP_CRC)
+	 *     RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 	 *   rxq->crc_len = ETHER_CRC_LEN;
 	 */
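
Until that support lands, applications should only request KEEP_CRC where
the port advertises it; a defensive sketch (the helper name is
illustrative):

static uint64_t
rx_offloads_with_crc(uint16_t port_id, uint64_t offloads)
{
	struct rte_eth_dev_info info;

	if (rte_eth_dev_info_get(port_id, &info) == 0 &&
	    (info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_KEEP_CRC))
		offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
	return offloads;
}
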
 
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 063a9c6a6f7f..17088585757f 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -50,11 +50,11 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->speed_capa =
 		(hw->retimer.mac_type ==
 			IFPGA_RAWDEV_RETIMER_MAC_TYPE_10GE_XFI) ?
-		ETH_LINK_SPEED_10G :
+		RTE_ETH_LINK_SPEED_10G :
 		((hw->retimer.mac_type ==
 			IFPGA_RAWDEV_RETIMER_MAC_TYPE_25GE_25GAUI) ?
-		ETH_LINK_SPEED_25G :
-		ETH_LINK_SPEED_AUTONEG);
+		RTE_ETH_LINK_SPEED_25G :
+		RTE_ETH_LINK_SPEED_AUTONEG);
 
 	dev_info->max_rx_queues  = 1;
 	dev_info->max_tx_queues  = 1;
@@ -67,30 +67,30 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
 	};
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_VLAN_FILTER;
-
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
 		dev_info->tx_queue_offload_capa;
 
 	dev_info->dev_capa =
@@ -2399,10 +2399,10 @@ ipn3ke_update_link(struct rte_rawdev *rawdev,
 				(uint64_t *)&link_speed);
 	switch (link_speed) {
 	case IFPGA_RAWDEV_LINK_SPEED_10GB:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case IFPGA_RAWDEV_LINK_SPEED_25GB:
-		link->link_speed = ETH_SPEED_NUM_25G;
+		link->link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	default:
 		IPN3KE_AFU_PMD_ERR("Unknown link speed info %u", link_speed);
@@ -2460,9 +2460,9 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,
 
 	memset(&link, 0, sizeof(link));
 
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_autoneg = !(ethdev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	rawdev = hw->rawdev;
 	ipn3ke_update_link(rawdev, rpst->port_id, &link);
@@ -2518,9 +2518,9 @@ ipn3ke_rpst_link_check(struct ipn3ke_rpst *rpst)
 
 	memset(&link, 0, sizeof(link));
 
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_autoneg = !(rpst->ethdev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	rawdev = hw->rawdev;
 	ipn3ke_update_link(rawdev, rpst->port_id, &link);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 46c95425adfb..7fd2c539e002 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1857,7 +1857,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 	qinq &= IXGBE_DMATXCTL_GDV;
 
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
+	case RTE_ETH_VLAN_TYPE_INNER:
 		if (qinq) {
 			reg = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
 			reg = (reg & (~IXGBE_VLNCTRL_VET)) | (uint32_t)tpid;
@@ -1872,7 +1872,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 				    " by single VLAN");
 		}
 		break;
-	case ETH_VLAN_TYPE_OUTER:
+	case RTE_ETH_VLAN_TYPE_OUTER:
 		if (qinq) {
 			/* Only the high 16-bits is valid */
 			IXGBE_WRITE_REG(hw, IXGBE_EXVET, (uint32_t)tpid <<
@@ -1959,10 +1959,10 @@ ixgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 
 	if (on) {
 		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
-		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
 		rxq->vlan_flags = PKT_RX_VLAN;
-		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
 
@@ -2083,7 +2083,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	if (hw->mac.type == ixgbe_mac_82598EB) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 			ctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
 			ctrl |= IXGBE_VLNCTRL_VME;
 			IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, ctrl);
@@ -2100,7 +2100,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
 			ctrl = IXGBE_READ_REG(hw, IXGBE_RXDCTL(rxq->reg_idx));
-			if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+			if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 				ctrl |= IXGBE_RXDCTL_VME;
 				on = TRUE;
 			} else {
@@ -2122,17 +2122,17 @@ ixgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	struct ixgbe_rx_queue *rxq;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		rxmode = &dev->data->dev_conf.rxmode;
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 		else
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 	}
 }
@@ -2143,19 +2143,18 @@ ixgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	rxmode = &dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK)
 		ixgbe_vlan_hw_strip_config(dev);
-	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ixgbe_vlan_hw_filter_enable(dev);
 		else
 			ixgbe_vlan_hw_filter_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			ixgbe_vlan_hw_extend_enable(dev);
 		else
 			ixgbe_vlan_hw_extend_disable(dev);
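
The mask handled above arrives from rte_eth_dev_set_vlan_offload(); its
runtime counterpart on the application side, sketched with an assumed
port id:

static int
enable_vlan_strip(uint16_t port_id)
{
	int mask = rte_eth_dev_get_vlan_offload(port_id);

	if (mask < 0)
		return mask;
	mask |= RTE_ETH_VLAN_STRIP_OFFLOAD;

	return rte_eth_dev_set_vlan_offload(port_id, mask);
}
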
@@ -2194,10 +2193,10 @@ ixgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
 	switch (nb_rx_q) {
 	case 1:
 	case 2:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
 		break;
 	case 4:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
 		break;
 	default:
 		return -EINVAL;
@@ -2221,18 +2220,18 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
 		/* check multi-queue mode */
 		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
 			break;
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
 			/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
 			PMD_INIT_LOG(ERR, "SRIOV active,"
 					" unsupported mq_mode rx %d.",
 					dev_conf->rxmode.mq_mode);
 			return -EINVAL;
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
 			if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
 				if (ixgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
 					PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -2242,12 +2241,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 					return -EINVAL;
 				}
 			break;
-		case ETH_MQ_RX_VMDQ_ONLY:
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_NONE:
 			/* if nothing mq mode configure, use default scheme */
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
 			break;
-		default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+		default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB */
 			/* SRIOV only works in VMDq enable mode */
 			PMD_INIT_LOG(ERR, "SRIOV is active,"
 					" wrong mq_mode rx %d.",
@@ -2256,12 +2255,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 		}
 
 		switch (dev_conf->txmode.mq_mode) {
-		case ETH_MQ_TX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+		case RTE_ETH_MQ_TX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+			dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
 			break;
-		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
+		default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
+			dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_ONLY;
 			break;
 		}
 
@@ -2276,13 +2275,13 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 			return -EINVAL;
 		}
 	} else {
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
 			PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
 					  " not supported.");
 			return -EINVAL;
 		}
 		/* check configuration for vmdq+dcb mode */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_conf *conf;
 
 			if (nb_rx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2291,15 +2290,15 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools must be %d or %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_tx_conf *conf;
 
 			if (nb_tx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2308,39 +2307,39 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools != %d and"
 						" nb_queue_pools != %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
 
 		/* For DCB mode check our configuration before we go further */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
 			const struct rte_eth_dcb_rx_conf *conf;
 
 			conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
 
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 			const struct rte_eth_dcb_tx_conf *conf;
 
 			conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
@@ -2349,7 +2348,7 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 		 * When DCB/VT is off, maximum number of queues changes,
 		 * except for 82598EB, which remains constant.
 		 */
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
 				hw->mac.type != ixgbe_mac_82598EB) {
 			if (nb_tx_q > IXGBE_NONE_MODE_TX_NB_QUEUES) {
 				PMD_INIT_LOG(ERR,
@@ -2373,8 +2372,8 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multiple queue mode checking */
 	ret  = ixgbe_check_mq_mode(dev);
@@ -2619,15 +2618,15 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 		goto error;
 	}
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = ixgbe_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
 		goto error;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
 		/* Enable vlan filtering for VMDq */
 		ixgbe_vmdq_vlan_hw_filter_enable(dev);
 	}
@@ -2704,17 +2703,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 	case ixgbe_mac_X550:
 	case ixgbe_mac_X550EM_x:
 	case ixgbe_mac_X550EM_a:
-		allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_2_5G |  ETH_LINK_SPEED_5G |
-			ETH_LINK_SPEED_10G;
+		allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_2_5G |  RTE_ETH_LINK_SPEED_5G |
+			RTE_ETH_LINK_SPEED_10G;
 		if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
 				hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
-			allowed_speeds = ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+			allowed_speeds = RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
 		break;
 	default:
-		allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_10G;
+		allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G;
 	}
 
 	link_speeds = &dev->data->dev_conf.link_speeds;
@@ -2728,7 +2727,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 	}
 
 	speed = 0x0;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		switch (hw->mac.type) {
 		case ixgbe_mac_82598EB:
 			speed = IXGBE_LINK_SPEED_82598_AUTONEG;
@@ -2746,17 +2745,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 			speed = IXGBE_LINK_SPEED_82599_AUTONEG;
 		}
 	} else {
-		if (*link_speeds & ETH_LINK_SPEED_10G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
 			speed |= IXGBE_LINK_SPEED_10GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
 			speed |= IXGBE_LINK_SPEED_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_2_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
 			speed |= IXGBE_LINK_SPEED_2_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_1G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed |= IXGBE_LINK_SPEED_1GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_100M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed |= IXGBE_LINK_SPEED_100_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_10M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
 			speed |= IXGBE_LINK_SPEED_10_FULL;
 	}
 
@@ -3832,7 +3831,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		 * When DCB/VT is off, maximum number of queues changes,
 		 * except for 82598EB, which remains constant.
 		 */
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
 				hw->mac.type != ixgbe_mac_82598EB)
 			dev_info->max_tx_queues = IXGBE_NONE_MODE_TX_NB_QUEUES;
 	}
@@ -3842,9 +3841,9 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
 	if (hw->mac.type == ixgbe_mac_82598EB)
-		dev_info->max_vmdq_pools = ETH_16_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	else
-		dev_info->max_vmdq_pools = ETH_64_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->max_mtu =  dev_info->max_rx_pktlen - IXGBE_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 	dev_info->vmdq_queue_num = dev_info->max_rx_queues;
@@ -3883,21 +3882,21 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->reta_size = ixgbe_reta_size_get(hw->mac.type);
 	dev_info->flow_type_rss_offloads = IXGBE_RSS_OFFLOAD_ALL;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
 	if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
 			hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
-		dev_info->speed_capa = ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
 
 	if (hw->mac.type == ixgbe_mac_X540 ||
 	    hw->mac.type == ixgbe_mac_X540_vf ||
 	    hw->mac.type == ixgbe_mac_X550 ||
 	    hw->mac.type == ixgbe_mac_X550_vf) {
-		dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
 	}
 	if (hw->mac.type == ixgbe_mac_X550) {
-		dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
-		dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
 	}
 
 	/* Driver-preferred Rx/Tx parameters */
@@ -3966,9 +3965,9 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
 	if (hw->mac.type == ixgbe_mac_82598EB)
-		dev_info->max_vmdq_pools = ETH_16_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	else
-		dev_info->max_vmdq_pools = ETH_64_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->rx_queue_offload_capa = ixgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (ixgbe_get_rx_port_offloads(dev) |
 				     dev_info->rx_queue_offload_capa);
@@ -4211,11 +4210,11 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	u32 esdp_reg;
 
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 
 	hw->mac.get_link_status = true;
 
@@ -4237,8 +4236,8 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
 		diag = ixgbe_check_link(hw, &link_speed, &link_up, wait);
 
 	if (diag != 0) {
-		link.link_speed = ETH_SPEED_NUM_100M;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
@@ -4274,37 +4273,37 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (link_speed) {
 	default:
 	case IXGBE_LINK_SPEED_UNKNOWN:
-		link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 
 	case IXGBE_LINK_SPEED_10_FULL:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 
 	case IXGBE_LINK_SPEED_100_FULL:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	case IXGBE_LINK_SPEED_1GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case IXGBE_LINK_SPEED_2_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_2_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 
 	case IXGBE_LINK_SPEED_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 
 	case IXGBE_LINK_SPEED_10GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	}
 
@@ -4521,7 +4520,7 @@ ixgbe_dev_link_status_print(struct rte_eth_dev *dev)
 		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -4740,13 +4739,13 @@ ixgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		tx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -5044,8 +5043,8 @@ ixgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i += IXGBE_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IXGBE_4_BIT_MASK);
 		if (!mask)
@@ -5092,8 +5091,8 @@ ixgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i += IXGBE_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IXGBE_4_BIT_MASK);
 		if (!mask)
@@ -5255,22 +5254,22 @@ ixgbevf_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
 		     dev->data->port_id);
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/*
 	 * VF has no ability to enable/disable HW CRC
 	 * Keep the persistent behavior the same as Host PF
 	 */
 #ifndef RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
-		conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #else
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
 		PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #endif
 
@@ -5330,8 +5329,8 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
 	ixgbevf_set_vfta_all(dev, 1);
 
 	/* Set HW strip */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = ixgbevf_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -5568,10 +5567,10 @@ ixgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	int on = 0;
 
 	/* VF function only support hw strip feature, others are not support */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
-			on = !!(rxq->offloads &	DEV_RX_OFFLOAD_VLAN_STRIP);
+			on = !!(rxq->offloads &	RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 			ixgbevf_vlan_strip_queue_set(dev, i, on);
 		}
 	}
@@ -5702,12 +5701,12 @@ ixgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
 		return -ENOTSUP;
 
 	if (on) {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = ~0;
 			IXGBE_WRITE_REG(hw, IXGBE_UTA(i), ~0);
 		}
 	} else {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = 0;
 			IXGBE_WRITE_REG(hw, IXGBE_UTA(i), 0);
 		}
@@ -5721,15 +5720,15 @@ ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
 {
 	uint32_t new_val = orig_val;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
 		new_val |= IXGBE_VMOLR_AUPE;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
 		new_val |= IXGBE_VMOLR_ROMPE;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 		new_val |= IXGBE_VMOLR_ROPE;
-	if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 		new_val |= IXGBE_VMOLR_BAM;
-	if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 		new_val |= IXGBE_VMOLR_MPE;
 
 	return new_val;
@@ -6724,15 +6723,15 @@ ixgbe_start_timecounters(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 
 	switch (link.link_speed) {
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		incval = IXGBE_INCVAL_100;
 		shift = IXGBE_INCVAL_SHIFT_100;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		incval = IXGBE_INCVAL_1GB;
 		shift = IXGBE_INCVAL_SHIFT_1GB;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 	default:
 		incval = IXGBE_INCVAL_10GB;
 		shift = IXGBE_INCVAL_SHIFT_10GB;
@@ -7143,16 +7142,16 @@ ixgbe_reta_size_get(enum ixgbe_mac_type mac_type) {
 	case ixgbe_mac_X550:
 	case ixgbe_mac_X550EM_x:
 	case ixgbe_mac_X550EM_a:
-		return ETH_RSS_RETA_SIZE_512;
+		return RTE_ETH_RSS_RETA_SIZE_512;
 	case ixgbe_mac_X550_vf:
 	case ixgbe_mac_X550EM_x_vf:
 	case ixgbe_mac_X550EM_a_vf:
-		return ETH_RSS_RETA_SIZE_64;
+		return RTE_ETH_RSS_RETA_SIZE_64;
 	case ixgbe_mac_X540_vf:
 	case ixgbe_mac_82599_vf:
 		return 0;
 	default:
-		return ETH_RSS_RETA_SIZE_128;
+		return RTE_ETH_RSS_RETA_SIZE_128;
 	}
 }
 
@@ -7162,10 +7161,10 @@ ixgbe_reta_reg_get(enum ixgbe_mac_type mac_type, uint16_t reta_idx) {
 	case ixgbe_mac_X550:
 	case ixgbe_mac_X550EM_x:
 	case ixgbe_mac_X550EM_a:
-		if (reta_idx < ETH_RSS_RETA_SIZE_128)
+		if (reta_idx < RTE_ETH_RSS_RETA_SIZE_128)
 			return IXGBE_RETA(reta_idx >> 2);
 		else
-			return IXGBE_ERETA((reta_idx - ETH_RSS_RETA_SIZE_128) >> 2);
+			return IXGBE_ERETA((reta_idx - RTE_ETH_RSS_RETA_SIZE_128) >> 2);
 	case ixgbe_mac_X550_vf:
 	case ixgbe_mac_X550EM_x_vf:
 	case ixgbe_mac_X550EM_a_vf:
@@ -7221,7 +7220,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	uint8_t nb_tcs;
 	uint8_t i, j;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
 	else
 		dcb_info->nb_tcs = 1;
@@ -7232,7 +7231,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	if (dcb_config->vt_mode) { /* vt is enabled*/
 		struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
 		if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
 			for (j = 0; j < nb_tcs; j++) {
@@ -7256,9 +7255,9 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	} else { /* vt is disabled*/
 		struct rte_eth_dcb_rx_conf *rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
-		if (dcb_info->nb_tcs == ETH_4_TCS) {
+		if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7271,7 +7270,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 			dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
 			dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
 			dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
-		} else if (dcb_info->nb_tcs == ETH_8_TCS) {
+		} else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7524,7 +7523,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = ixgbe_e_tag_filter_add(dev, l2_tunnel);
 		break;
 	default:
@@ -7556,7 +7555,7 @@ ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 		return ret;
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = ixgbe_e_tag_filter_del(dev, l2_tunnel);
 		break;
 	default:
@@ -7653,12 +7652,12 @@ ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ixgbe_add_vxlan_port(hw, udp_tunnel->udp_port);
 		break;
 
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -EINVAL;
 		break;
@@ -7690,11 +7689,11 @@ ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ixgbe_del_vxlan_port(hw, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -EINVAL;
 		break;
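
For readers chasing this rename in application code, a minimal sketch of
reading the link with the new constant names; it assumes an already-started
port_id, and the helper name is illustrative, not part of this patch:

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	/* Non-blocking link query; returns 0 on success. */
	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;

	if (link.link_status == RTE_ETH_LINK_UP)
		printf("port %u: %u Mbps, %s-duplex\n", port_id,
		       link.link_speed,
		       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
		       "full" : "half");
	else
		printf("port %u: link down\n", port_id);
}
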
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 950fb2d2450c..876b670f2682 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -114,15 +114,15 @@
 #define IXGBE_FDIR_NVGRE_TUNNEL_TYPE    0x0
 
 #define IXGBE_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define IXGBE_VF_IRQ_ENABLE_MASK        3          /* vf irq enable mask */
 #define IXGBE_VF_MAXMSIVECTOR           1
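
The IXGBE_RSS_OFFLOAD_ALL mask above is now built from the RTE_ETH_RSS_*
bits. A hedged sketch of the matching application-side request, with the
wrapper name and the chosen hash fields being assumptions (key left NULL so
the PMD keeps its default):

#include <rte_ethdev.h>

static int
configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
		.rx_adv_conf.rss_conf = {
			.rss_key = NULL,	/* keep the PMD default key */
			.rss_hf = RTE_ETH_RSS_IPV4 |
				  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
				  RTE_ETH_RSS_NONFRAG_IPV4_UDP,
		},
	};

	/* rss_hf bits outside the PMD capabilities make this fail;
	 * IXGBE_RSS_OFFLOAD_ALL is the ixgbe-side filter. */
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}
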
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 27a49bbce5e7..7894047829a8 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -90,9 +90,9 @@ static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
 static uint32_t ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
 				 uint32_t key);
 static uint32_t atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc);
+		enum rte_eth_fdir_pballoc_type pballoc);
 static uint32_t atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc);
+		enum rte_eth_fdir_pballoc_type pballoc);
 static int fdir_write_perfect_filter_82599(struct ixgbe_hw *hw,
 			union ixgbe_atr_input *input, uint8_t queue,
 			uint32_t fdircmd, uint32_t fdirhash,
@@ -163,20 +163,20 @@ fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl)
  * flexbytes matching field, and drop queue (only for perfect matching mode).
  */
 static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf, uint32_t *fdirctrl)
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf, uint32_t *fdirctrl)
 {
 	*fdirctrl = 0;
 
 	switch (conf->pballoc) {
-	case RTE_FDIR_PBALLOC_64K:
+	case RTE_ETH_FDIR_PBALLOC_64K:
 		/* 8k - 1 signature filters */
 		*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_64K;
 		break;
-	case RTE_FDIR_PBALLOC_128K:
+	case RTE_ETH_FDIR_PBALLOC_128K:
 		/* 16k - 1 signature filters */
 		*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_128K;
 		break;
-	case RTE_FDIR_PBALLOC_256K:
+	case RTE_ETH_FDIR_PBALLOC_256K:
 		/* 32k - 1 signature filters */
 		*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_256K;
 		break;
@@ -807,13 +807,13 @@ ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
 
 static uint32_t
 atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		return ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				PERFECT_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		return ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				PERFECT_BUCKET_128KB_HASH_MASK;
@@ -850,15 +850,15 @@ ixgbe_fdir_check_cmd_complete(struct ixgbe_hw *hw, uint32_t *fdircmd)
  */
 static uint32_t
 atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
 	uint32_t bucket_hash, sig_hash;
 
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		bucket_hash = ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				SIG_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		bucket_hash = ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				SIG_BUCKET_128KB_HASH_MASK;
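
For completeness, the renamed pballoc knob as an application would set it
(illustrative values; note the RTE_FDIR_MODE_* names are not touched by
this hunk):

#include <rte_ethdev.h>

static const struct rte_eth_conf fdir_conf_example = {
	.fdir_conf = {
		.mode = RTE_FDIR_MODE_PERFECT,
		/* smallest packet-buffer allocation */
		.pballoc = RTE_ETH_FDIR_PBALLOC_64K,
	},
};
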
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 27322ab9038a..bdc9d4796c02 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -1259,7 +1259,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
 		return -rte_errno;
 	}
 
-	filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+	filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
 	/**
 	 * grp and e_cid_base are bit fields and only use 14 bits.
 	 * e-tag id is taken as little endian by HW.
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
index e45c5501e6bf..944c9f23809e 100644
--- a/drivers/net/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -392,7 +392,7 @@ ixgbe_crypto_create_session(void *device,
 	aead_xform = &conf->crypto_xform->aead;
 
 	if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 			ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -400,7 +400,7 @@ ixgbe_crypto_create_session(void *device,
 			return -ENOTSUP;
 		}
 	} else {
-		if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+		if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 			ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -633,11 +633,11 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	tx_offloads = dev->data->dev_conf.txmode.offloads;
 
 	/* sanity checks */
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
 		return -1;
 	}
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
 		return -1;
 	}
@@ -657,7 +657,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
 	IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
 		reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
 		if (reg != 0) {
@@ -665,7 +665,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 			return -1;
 		}
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 		IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL,
 				IXGBE_SECTXCTRL_STORE_FORWARD);
 		reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
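
The checks above gate inline IPsec on the renamed security offload flags; a
sketch of the application-side counterpart (the helper name is an
assumption):

#include <rte_ethdev.h>

static void
request_inline_ipsec(uint16_t port_id, struct rte_eth_conf *conf)
{
	struct rte_eth_dev_info info;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return;

	/* Only request what the port actually advertises. */
	if (info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_SECURITY)
		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
	if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SECURITY)
		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
}
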
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 295e5a39b245..9f1bd0a62ba4 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -104,15 +104,15 @@ int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
 	memset(uta_info, 0, sizeof(struct ixgbe_uta_info));
 	hw->mac.mc_filter_type = 0;
 
-	if (vf_num >= ETH_32_POOLS) {
+	if (vf_num >= RTE_ETH_32_POOLS) {
 		nb_queue = 2;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
-	} else if (vf_num >= ETH_16_POOLS) {
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+	} else if (vf_num >= RTE_ETH_16_POOLS) {
 		nb_queue = 4;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
 	} else {
 		nb_queue = 8;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
 	}
 
 	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -263,15 +263,15 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
 	gpie |= IXGBE_GPIE_MSIX_MODE | IXGBE_GPIE_PBA_SUPPORT;
 
 	switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		gcr_ext |= IXGBE_GCR_EXT_VT_MODE_64;
 		gpie |= IXGBE_GPIE_VTMODE_64;
 		break;
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		gcr_ext |= IXGBE_GCR_EXT_VT_MODE_32;
 		gpie |= IXGBE_GPIE_VTMODE_32;
 		break;
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		gcr_ext |= IXGBE_GCR_EXT_VT_MODE_16;
 		gpie |= IXGBE_GPIE_VTMODE_16;
 		break;
@@ -674,29 +674,29 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
 	/* Notify VF of number of DCB traffic classes */
 	eth_conf = &dev->data->dev_conf;
 	switch (eth_conf->txmode.mq_mode) {
-	case ETH_MQ_TX_NONE:
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_NONE:
+	case RTE_ETH_MQ_TX_DCB:
 		PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
 			", but its tx mode = %d\n", vf,
 			eth_conf->txmode.mq_mode);
 		return -1;
 
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
 		switch (vmdq_dcb_tx_conf->nb_queue_pools) {
-		case ETH_16_POOLS:
-			num_tcs = ETH_8_TCS;
+		case RTE_ETH_16_POOLS:
+			num_tcs = RTE_ETH_8_TCS;
 			break;
-		case ETH_32_POOLS:
-			num_tcs = ETH_4_TCS;
+		case RTE_ETH_32_POOLS:
+			num_tcs = RTE_ETH_4_TCS;
 			break;
 		default:
 			return -1;
 		}
 		break;
 
-	/* ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
-	case ETH_MQ_TX_VMDQ_ONLY:
+	/* RTE_ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
+	case RTE_ETH_MQ_TX_VMDQ_ONLY:
 		hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 		vmvir = IXGBE_READ_REG(hw, IXGBE_VMVIR(vf));
 		vlana = vmvir & IXGBE_VMVIR_VLANA_MASK;
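
The pool/TC pairing the PF mailbox code enforces (16 pools -> 8 TCs,
32 pools -> 4 TCs) looks like this from the application side (illustrative
values only):

#include <rte_ethdev.h>

static const struct rte_eth_conf vmdq_dcb_conf_example = {
	.txmode = { .mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB },
	.tx_adv_conf.vmdq_dcb_tx_conf = {
		/* 32 pools leaves each VF with RTE_ETH_4_TCS on ixgbe */
		.nb_queue_pools = RTE_ETH_32_POOLS,
	},
};
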
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index b263dfe1d574..9e5716f935a2 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2592,26 +2592,26 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM   |
-		DEV_TX_OFFLOAD_SCTP_CKSUM  |
-		DEV_TX_OFFLOAD_TCP_TSO     |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO     |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	if (hw->mac.type == ixgbe_mac_82599EB ||
 	    hw->mac.type == ixgbe_mac_X540)
-		tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 
 	if (hw->mac.type == ixgbe_mac_X550 ||
 	    hw->mac.type == ixgbe_mac_X550EM_x ||
 	    hw->mac.type == ixgbe_mac_X550EM_a)
-		tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
 #endif
 	return tx_offload_capa;
 }
@@ -2780,7 +2780,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_deferred_start = tx_conf->tx_deferred_start;
 #ifdef RTE_LIB_SECURITY
 	txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SECURITY);
+			RTE_ETH_TX_OFFLOAD_SECURITY);
 #endif
 
 	/*
@@ -3021,7 +3021,7 @@ ixgbe_get_rx_queue_offloads(struct rte_eth_dev *dev)
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (hw->mac.type != ixgbe_mac_82598EB)
-		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	return offloads;
 }
@@ -3032,19 +3032,19 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 	uint64_t offloads;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	offloads = DEV_RX_OFFLOAD_IPV4_CKSUM  |
-		   DEV_RX_OFFLOAD_UDP_CKSUM   |
-		   DEV_RX_OFFLOAD_TCP_CKSUM   |
-		   DEV_RX_OFFLOAD_KEEP_CRC    |
-		   DEV_RX_OFFLOAD_VLAN_FILTER |
-		   DEV_RX_OFFLOAD_SCATTER |
-		   DEV_RX_OFFLOAD_RSS_HASH;
+	offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+		   RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+		   RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		   RTE_ETH_RX_OFFLOAD_SCATTER |
+		   RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (hw->mac.type == ixgbe_mac_82598EB)
-		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	if (ixgbe_is_vf(dev) == 0)
-		offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
 	/*
 	 * RSC is only supported by 82599 and x540 PF devices in a non-SR-IOV
@@ -3054,20 +3054,20 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 	     hw->mac.type == ixgbe_mac_X540 ||
 	     hw->mac.type == ixgbe_mac_X550) &&
 	    !RTE_ETH_DEV_SRIOV(dev).active)
-		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+		offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 
 	if (hw->mac.type == ixgbe_mac_82599EB ||
 	    hw->mac.type == ixgbe_mac_X540)
-		offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
 
 	if (hw->mac.type == ixgbe_mac_X550 ||
 	    hw->mac.type == ixgbe_mac_X550EM_x ||
 	    hw->mac.type == ixgbe_mac_X550EM_a)
-		offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		offloads |= DEV_RX_OFFLOAD_SECURITY;
+		offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
 #endif
 
 	return offloads;
@@ -3122,7 +3122,7 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
 		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -3507,23 +3507,23 @@ ixgbe_hw_rss_hash_set(struct ixgbe_hw *hw, struct rte_eth_rss_conf *rss_conf)
 	/* Set configured hashing protocols in MRQC register */
 	rss_hf = rss_conf->rss_hf;
 	mrqc = IXGBE_MRQC_RSSEN; /* Enable RSS */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_TCP;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6;
-	if (rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_EX)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_TCP;
-	if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_UDP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_UDP;
-	if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP;
 	IXGBE_WRITE_REG(hw, mrqc_reg, mrqc);
 }
@@ -3605,23 +3605,23 @@ ixgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	}
 	rss_hf = 0;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP)
-		rss_hf |= ETH_RSS_IPV6_UDP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
 	rss_conf->rss_hf = rss_hf;
 	return 0;
 }
@@ -3697,12 +3697,12 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
 	num_pools = cfg->nb_queue_pools;
 	/* Check we have a valid number of pools */
-	if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+	if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
 		ixgbe_rss_disable(dev);
 		return;
 	}
 	/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
-	nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+	nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
 
 	/*
 	 * RXPBSIZE
@@ -3727,7 +3727,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
 	}
 	/* zero alloc all unused TCs */
-	for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		uint32_t rxpbsize = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(i));
 
 		rxpbsize &= (~(0x3FF << IXGBE_RXPBSIZE_SHIFT));
@@ -3736,7 +3736,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	}
 
 	/* MRQC: enable vmdq and dcb */
-	mrqc = (num_pools == ETH_16_POOLS) ?
+	mrqc = (num_pools == RTE_ETH_16_POOLS) ?
 		IXGBE_MRQC_VMDQRT8TCEN : IXGBE_MRQC_VMDQRT4TCEN;
 	IXGBE_WRITE_REG(hw, IXGBE_MRQC, mrqc);
 
@@ -3752,7 +3752,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 
 	/* RTRUP2TC: mapping user priorities to traffic classes (TCs) */
 	queue_mapping = 0;
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 		/*
 		 * mapping is done with 3 bits per priority,
 		 * so shift by i*3 each time
@@ -3776,7 +3776,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 
 	/* VFRE: pool enabling for receive - 16 or 32 */
 	IXGBE_WRITE_REG(hw, IXGBE_VFRE(0),
-			num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+			num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	/*
 	 * MPSAR - allow pools to read specific mac addresses
@@ -3858,7 +3858,7 @@ ixgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
 	if (hw->mac.type != ixgbe_mac_82598EB)
 		/*PF VF Transmit Enable*/
 		IXGBE_WRITE_REG(hw, IXGBE_VFTE(0),
-			vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+			vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	/*Configure general DCB TX parameters*/
 	ixgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3874,12 +3874,12 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
-	if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3889,7 +3889,7 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3907,12 +3907,12 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
-	if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3922,7 +3922,7 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3949,7 +3949,7 @@ ixgbe_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3976,7 +3976,7 @@ ixgbe_dcb_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -4145,7 +4145,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
 
 	switch (dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_VMDQ_DCB:
+	case RTE_ETH_MQ_RX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		if (hw->mac.type != ixgbe_mac_82598EB) {
 			config_dcb_rx = DCB_RX_CONFIG;
@@ -4158,8 +4158,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			ixgbe_vmdq_dcb_configure(dev);
 		}
 		break;
-	case ETH_MQ_RX_DCB:
-	case ETH_MQ_RX_DCB_RSS:
+	case RTE_ETH_MQ_RX_DCB:
+	case RTE_ETH_MQ_RX_DCB_RSS:
 		dcb_config->vt_mode = false;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -4172,7 +4172,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		break;
 	}
 	switch (dev->data->dev_conf.txmode.mq_mode) {
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/* get DCB and VT TX configuration parameters
@@ -4183,7 +4183,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		ixgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
 		break;
 
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_DCB:
 		dcb_config->vt_mode = false;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/*get DCB TX configuration parameters from rte_eth_conf*/
@@ -4199,15 +4199,15 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	nb_tcs = dcb_config->num_tcs.pfc_tcs;
 	/* Unpack map */
 	ixgbe_dcb_unpack_map_cee(dcb_config, IXGBE_DCB_RX_CONFIG, map);
-	if (nb_tcs == ETH_4_TCS) {
+	if (nb_tcs == RTE_ETH_4_TCS) {
 		/* Avoid un-configured priority mapping to TC0 */
 		uint8_t j = 4;
 		uint8_t mask = 0xFF;
 
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
 			mask = (uint8_t)(mask & (~(1 << map[i])));
 		for (i = 0; mask && (i < IXGBE_DCB_MAX_TRAFFIC_CLASS); i++) {
-			if ((mask & 0x1) && (j < ETH_DCB_NUM_USER_PRIORITIES))
+			if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
 				map[j++] = i;
 			mask >>= 1;
 		}
@@ -4257,9 +4257,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
 		}
 		/* zero alloc all unused TCs */
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
-		}
 	}
 	if (config_dcb_tx) {
 		/* Only support an equally distributed
@@ -4273,7 +4272,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), txpbthresh);
 		}
 		/* Clear unused TCs, if any, to zero buffer size*/
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), 0);
 			IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), 0);
 		}
@@ -4309,7 +4308,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	ixgbe_dcb_config_tc_stats_82599(hw, dcb_config);
 
 	/* Check if the PFC is supported */
-	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
 		for (i = 0; i < nb_tcs; i++) {
 			/*
@@ -4323,7 +4322,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			tc->pfc = ixgbe_dcb_pfc_enabled;
 		}
 		ixgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
-		if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+		if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
 			pfc_en &= 0x0F;
 		ret = ixgbe_dcb_config_pfc(hw, pfc_en, map);
 	}
@@ -4344,12 +4343,12 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	/* check support mq_mode for DCB */
-	if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
-	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
-	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS))
+	if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
 		return;
 
-	if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+	if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
 		return;
 
 	/** Configure DCB hardware **/
@@ -4405,7 +4404,7 @@ ixgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 
 	/* VFRE: pool enabling for receive - 64 */
 	IXGBE_WRITE_REG(hw, IXGBE_VFRE(0), UINT32_MAX);
-	if (num_pools == ETH_64_POOLS)
+	if (num_pools == RTE_ETH_64_POOLS)
 		IXGBE_WRITE_REG(hw, IXGBE_VFRE(1), UINT32_MAX);
 
 	/*
@@ -4526,11 +4525,11 @@ ixgbe_config_vf_rss(struct rte_eth_dev *dev)
 	mrqc = IXGBE_READ_REG(hw, IXGBE_MRQC);
 	mrqc &= ~IXGBE_MRQC_MRQE_MASK;
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		mrqc |= IXGBE_MRQC_VMDQRSS64EN;
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		mrqc |= IXGBE_MRQC_VMDQRSS32EN;
 		break;
 
@@ -4551,17 +4550,17 @@ ixgbe_config_vf_default(struct rte_eth_dev *dev)
 		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		IXGBE_WRITE_REG(hw, IXGBE_MRQC,
 			IXGBE_MRQC_VMDQEN);
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		IXGBE_WRITE_REG(hw, IXGBE_MRQC,
 			IXGBE_MRQC_VMDQRT4TCEN);
 		break;
 
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		IXGBE_WRITE_REG(hw, IXGBE_MRQC,
 			IXGBE_MRQC_VMDQRT8TCEN);
 		break;
@@ -4588,21 +4587,21 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * any DCB/RSS w/o VMDq multi-queue setting
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_DCB_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			ixgbe_rss_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
 			ixgbe_vmdq_dcb_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
 			ixgbe_vmdq_rx_hw_configure(dev);
 			break;
 
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_NONE:
 		default:
 			/* if mq_mode is none, disable rss mode.*/
 			ixgbe_rss_disable(dev);
@@ -4613,18 +4612,18 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * Support RSS together with SRIOV.
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			ixgbe_config_vf_rss(dev);
 			break;
-		case ETH_MQ_RX_VMDQ_DCB:
-		case ETH_MQ_RX_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_DCB:
 		/* In SRIOV, the configuration is the same as VMDq case */
 			ixgbe_vmdq_dcb_configure(dev);
 			break;
 		/* DCB/RSS together with SRIOV is not supported */
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
-		case ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
 			PMD_INIT_LOG(ERR,
 				"Could not support DCB/RSS with VMDq & SRIOV");
 			return -1;
@@ -4658,7 +4657,7 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV inactive scheme
 		 * any DCB w/o VMDq multi-queue setting
 		 */
-		if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+		if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
 			ixgbe_vmdq_tx_hw_configure(hw);
 		else {
 			mtqc = IXGBE_MTQC_64Q_1PB;
@@ -4671,13 +4670,13 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV active scheme
 		 * FIXME if support DCB together with VMDq & SRIOV
 		 */
-		case ETH_64_POOLS:
+		case RTE_ETH_64_POOLS:
 			mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_64VF;
 			break;
-		case ETH_32_POOLS:
+		case RTE_ETH_32_POOLS:
 			mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_32VF;
 			break;
-		case ETH_16_POOLS:
+		case RTE_ETH_16_POOLS:
 			mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_RT_ENA |
 				IXGBE_MTQC_8TC_8TQ;
 			break;
@@ -4885,7 +4884,7 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
 		rxq->rx_using_sse = rx_using_sse;
 #ifdef RTE_LIB_SECURITY
 		rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_SECURITY);
+				RTE_ETH_RX_OFFLOAD_SECURITY);
 #endif
 	}
 }
@@ -4913,10 +4912,10 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* Sanity check */
 	dev->dev_ops->dev_infos_get(dev, &dev_info);
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		rsc_capable = true;
 
-	if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
 				   "support it");
 		return -EINVAL;
@@ -4924,8 +4923,8 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* RSC global configuration (chapter 4.6.7.2.1 of 82599 Spec) */
 
-	if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
-	     (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+	     (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		/*
 		 * According to chapter of 4.6.7.2.1 of the Spec Rev.
 		 * 3.0 RSC configuration requires HW CRC stripping being
@@ -4939,7 +4938,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* RFCTL configuration  */
 	rfctl = IXGBE_READ_REG(hw, IXGBE_RFCTL);
-	if ((rsc_capable) && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if ((rsc_capable) && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		rfctl &= ~IXGBE_RFCTL_RSC_DIS;
 	else
 		rfctl |= IXGBE_RFCTL_RSC_DIS;
@@ -4948,7 +4947,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 	IXGBE_WRITE_REG(hw, IXGBE_RFCTL, rfctl);
 
 	/* If LRO hasn't been requested - we are done here. */
-	if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		return 0;
 
 	/* Set RDRXCTL.RSCACKC bit */
@@ -5070,7 +5069,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Configure CRC stripping, if any.
 	 */
 	hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		hlreg0 &= ~IXGBE_HLREG0_RXCRCSTRP;
 	else
 		hlreg0 |= IXGBE_HLREG0_RXCRCSTRP;
@@ -5107,7 +5106,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first .
 	 */
-	rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
@@ -5116,7 +5115,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 * call to configure.
 		 */
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -5158,11 +5157,11 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 		/* It adds dual VLAN length for supporting dual VLAN */
 		if (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
 			dev->data->scattered_rx = 1;
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		dev->data->scattered_rx = 1;
 
 	/*
@@ -5177,7 +5176,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 */
 	rxcsum = IXGBE_READ_REG(hw, IXGBE_RXCSUM);
 	rxcsum |= IXGBE_RXCSUM_PCSD;
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= IXGBE_RXCSUM_IPPCSE;
 	else
 		rxcsum &= ~IXGBE_RXCSUM_IPPCSE;
@@ -5187,7 +5186,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	if (hw->mac.type == ixgbe_mac_82599EB ||
 	    hw->mac.type == ixgbe_mac_X540) {
 		rdrxctl = IXGBE_READ_REG(hw, IXGBE_RDRXCTL);
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rdrxctl &= ~IXGBE_RDRXCTL_CRCSTRIP;
 		else
 			rdrxctl |= IXGBE_RDRXCTL_CRCSTRIP;
@@ -5393,9 +5392,9 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
 
 #ifdef RTE_LIB_SECURITY
 	if ((dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_SECURITY) ||
+			RTE_ETH_RX_OFFLOAD_SECURITY) ||
 		(dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SECURITY)) {
+			RTE_ETH_TX_OFFLOAD_SECURITY)) {
 		ret = ixgbe_crypto_enable_ipsec(dev);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR,
@@ -5683,7 +5682,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first .
 	 */
-	rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
@@ -5732,7 +5731,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 		buf_size = (uint16_t) ((srrctl & IXGBE_SRRCTL_BSIZEPKT_MASK) <<
 				       IXGBE_SRRCTL_BSIZEPKT_SHIFT);
 
-		if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
 		    /* It adds dual VLAN length for supporting dual VLAN */
 		    (frame_size + 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
 			if (!dev->data->scattered_rx)
@@ -5740,8 +5739,8 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 			dev->data->scattered_rx = 1;
 		}
 
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	/* Set RQPL for VF RSS according to max Rx queue */
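
The Rx init path above expects the requested offload bits to have been
validated against ixgbe_get_rx_port_offloads(); a minimal sketch of that
capability-masking pattern with the new names (helper name assumed):

#include <rte_ethdev.h>

static uint64_t
supported_rx_offloads(uint16_t port_id, uint64_t wanted)
{
	struct rte_eth_dev_info info;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return 0;

	/* e.g. wanted = RTE_ETH_RX_OFFLOAD_CHECKSUM |
	 *		 RTE_ETH_RX_OFFLOAD_TCP_LRO */
	return wanted & info.rx_offload_capa;
}
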
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index a1764f2b08af..668a5b9814f6 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -133,7 +133,7 @@ struct ixgbe_rx_queue {
 	uint8_t             rx_udp_csum_zero_err;
 	/** flags to set in mbuf when a vlan is detected. */
 	uint64_t            vlan_flags;
-	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
 	/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
 	struct rte_mbuf fake_mbuf;
 	/** hold packets to return to application */
@@ -227,7 +227,7 @@ struct ixgbe_tx_queue {
 	uint8_t             pthresh;       /**< Prefetch threshold register. */
 	uint8_t             hthresh;       /**< Host threshold register. */
 	uint8_t             wthresh;       /**< Write-back threshold reg. */
-	uint64_t offloads; /**< Tx offload flags of DEV_TX_OFFLOAD_* */
+	uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 	uint32_t            ctx_curr;      /**< Hardware context states. */
 	/** Hardware context0 history. */
 	struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 005e60668a8b..cd34d4098785 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -277,7 +277,7 @@ static inline int
 ixgbe_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
 {
 #ifndef RTE_LIBRTE_IEEE1588
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 
 	/* no fdir support */
 	if (fconf->mode != RTE_FDIR_MODE_NONE)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index ae03ea6e9db3..ac8976062fa7 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -119,14 +119,14 @@ ixgbe_tc_nb_get(struct rte_eth_dev *dev)
 	uint8_t nb_tcs = 0;
 
 	eth_conf = &dev->data->dev_conf;
-	if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+	if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 		nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
-	} else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	} else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
-		    ETH_32_POOLS)
-			nb_tcs = ETH_4_TCS;
+		    RTE_ETH_32_POOLS)
+			nb_tcs = RTE_ETH_4_TCS;
 		else
-			nb_tcs = ETH_8_TCS;
+			nb_tcs = RTE_ETH_8_TCS;
 	} else {
 		nb_tcs = 1;
 	}
@@ -375,10 +375,10 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 	if (vf_num) {
 		/* no DCB */
 		if (nb_tcs == 1) {
-			if (vf_num >= ETH_32_POOLS) {
+			if (vf_num >= RTE_ETH_32_POOLS) {
 				*nb = 2;
 				*base = vf_num * 2;
-			} else if (vf_num >= ETH_16_POOLS) {
+			} else if (vf_num >= RTE_ETH_16_POOLS) {
 				*nb = 4;
 				*base = vf_num * 4;
 			} else {
@@ -392,7 +392,7 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 		}
 	} else {
 		/* VT off */
-		if (nb_tcs == ETH_8_TCS) {
+		if (nb_tcs == RTE_ETH_8_TCS) {
 			switch (tc_node_no) {
 			case 0:
 				*base = 0;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index 9fa75984fb31..bd528ff346c7 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -58,20 +58,20 @@ ixgbe_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	/**< Maximum number of MAC addresses. */
 
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |	DEV_RX_OFFLOAD_UDP_CKSUM  |
-		DEV_RX_OFFLOAD_TCP_CKSUM;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	RTE_ETH_RX_OFFLOAD_UDP_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 	/**< Device RX offload capabilities. */
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	/**< Device TX offload capabilities. */
 
 	dev_info->speed_capa =
 		representor->pf_ethdev->data->dev_link.link_speed;
-	/**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+	/**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
 
 	dev_info->switch_info.name =
 		representor->pf_ethdev->device->name;
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.c b/drivers/net/ixgbe/rte_pmd_ixgbe.c
index cf089cd9aee5..9729f8575f53 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.c
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.c
@@ -303,10 +303,10 @@ rte_pmd_ixgbe_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
 	 */
 	if (hw->mac.type == ixgbe_mac_82598EB)
 		queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
-				  ETH_16_POOLS;
+				  RTE_ETH_16_POOLS;
 	else
 		queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
-				  ETH_64_POOLS;
+				  RTE_ETH_64_POOLS;
 
 	for (q = 0; q < queues_per_pool; q++)
 		(*dev->dev_ops->vlan_strip_queue_set)(dev,
@@ -736,14 +736,14 @@ rte_pmd_ixgbe_set_tc_bw_alloc(uint16_t port,
 	bw_conf = IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
 	eth_conf = &dev->data->dev_conf;
 
-	if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+	if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 		nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
-	} else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	} else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
-		    ETH_32_POOLS)
-			nb_tcs = ETH_4_TCS;
+		    RTE_ETH_32_POOLS)
+			nb_tcs = RTE_ETH_4_TCS;
 		else
-			nb_tcs = ETH_8_TCS;
+			nb_tcs = RTE_ETH_8_TCS;
 	} else {
 		nb_tcs = 1;
 	}
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.h b/drivers/net/ixgbe/rte_pmd_ixgbe.h
index 90fc8160b1f8..eef6f6661c74 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.h
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.h
@@ -285,8 +285,8 @@ int rte_pmd_ixgbe_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an,
 * @param rx_mask
 *    The RX mode mask, which is one or more of accepting Untagged Packets,
 *    packets that match the PFUTA table, Broadcast and Multicast Promiscuous.
-*    ETH_VMDQ_ACCEPT_UNTAG,ETH_VMDQ_ACCEPT_HASH_UC,
-*    ETH_VMDQ_ACCEPT_BROADCAST and ETH_VMDQ_ACCEPT_MULTICAST will be used
+*    RTE_ETH_VMDQ_ACCEPT_UNTAG, RTE_ETH_VMDQ_ACCEPT_HASH_UC,
+*    RTE_ETH_VMDQ_ACCEPT_BROADCAST and RTE_ETH_VMDQ_ACCEPT_MULTICAST will be used
 *    in rx_mode.
 * @param on
 *    1 - Enable a VF RX mode.
diff --git a/drivers/net/kni/rte_eth_kni.c b/drivers/net/kni/rte_eth_kni.c
index cb9f7c8e8200..c428caf44189 100644
--- a/drivers/net/kni/rte_eth_kni.c
+++ b/drivers/net/kni/rte_eth_kni.c
@@ -61,10 +61,10 @@ struct pmd_internals {
 };
 
 static const struct rte_eth_link pmd_link = {
-		.link_speed = ETH_SPEED_NUM_10G,
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_status = ETH_LINK_DOWN,
-		.link_autoneg = ETH_LINK_FIXED,
+		.link_speed = RTE_ETH_SPEED_NUM_10G,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_status = RTE_ETH_LINK_DOWN,
+		.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 static int is_kni_initialized;
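
For readers updating applications alongside this rename, a minimal sketch (illustrative only, not part of the diff) of polling link state through the renamed RTE_ETH_LINK_* constants; the port id is an assumption:

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_link_status(uint16_t port_id)
{
	struct rte_eth_link link;

	/* Non-blocking query; returns negative errno on failure. */
	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;
	if (link.link_status == RTE_ETH_LINK_UP)
		printf("port %u up, %u Mbps, %s-duplex\n", port_id,
		       link.link_speed,
		       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
		       "full" : "half");
	else
		printf("port %u down\n", port_id);
}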
 
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 0fc3f0ab66a9..90ffe31b9fda 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -384,15 +384,15 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
 	case PCI_SUBSYS_DEV_ID_CN2360_210SVPN3:
 	case PCI_SUBSYS_DEV_ID_CN2350_210SVPT:
 	case PCI_SUBSYS_DEV_ID_CN2360_210SVPT:
-		devinfo->speed_capa = ETH_LINK_SPEED_10G;
+		devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
 		break;
 	/* CN23xx 25G cards */
 	case PCI_SUBSYS_DEV_ID_CN2350_225:
 	case PCI_SUBSYS_DEV_ID_CN2360_225:
-		devinfo->speed_capa = ETH_LINK_SPEED_25G;
+		devinfo->speed_capa = RTE_ETH_LINK_SPEED_25G;
 		break;
 	default:
-		devinfo->speed_capa = ETH_LINK_SPEED_10G;
+		devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
 		lio_dev_err(lio_dev,
 			    "Unknown CN23XX subsystem device id. Setting 10G as default link speed.\n");
 		return -EINVAL;
@@ -406,27 +406,27 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
 
 	devinfo->max_mac_addrs = 1;
 
-	devinfo->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM		|
-				    DEV_RX_OFFLOAD_UDP_CKSUM		|
-				    DEV_RX_OFFLOAD_TCP_CKSUM		|
-				    DEV_RX_OFFLOAD_VLAN_STRIP		|
-				    DEV_RX_OFFLOAD_RSS_HASH);
-	devinfo->tx_offload_capa = (DEV_TX_OFFLOAD_IPV4_CKSUM		|
-				    DEV_TX_OFFLOAD_UDP_CKSUM		|
-				    DEV_TX_OFFLOAD_TCP_CKSUM		|
-				    DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM);
+	devinfo->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM		|
+				    RTE_ETH_RX_OFFLOAD_UDP_CKSUM		|
+				    RTE_ETH_RX_OFFLOAD_TCP_CKSUM		|
+				    RTE_ETH_RX_OFFLOAD_VLAN_STRIP		|
+				    RTE_ETH_RX_OFFLOAD_RSS_HASH);
+	devinfo->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
+				    RTE_ETH_TX_OFFLOAD_UDP_CKSUM		|
+				    RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
+				    RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM);
 
 	devinfo->rx_desc_lim = lio_rx_desc_lim;
 	devinfo->tx_desc_lim = lio_tx_desc_lim;
 
 	devinfo->reta_size = LIO_RSS_MAX_TABLE_SZ;
 	devinfo->hash_key_size = LIO_RSS_MAX_KEY_SZ;
-	devinfo->flow_type_rss_offloads = (ETH_RSS_IPV4			|
-					   ETH_RSS_NONFRAG_IPV4_TCP	|
-					   ETH_RSS_IPV6			|
-					   ETH_RSS_NONFRAG_IPV6_TCP	|
-					   ETH_RSS_IPV6_EX		|
-					   ETH_RSS_IPV6_TCP_EX);
+	devinfo->flow_type_rss_offloads = (RTE_ETH_RSS_IPV4			|
+					   RTE_ETH_RSS_NONFRAG_IPV4_TCP	|
+					   RTE_ETH_RSS_IPV6			|
+					   RTE_ETH_RSS_NONFRAG_IPV6_TCP	|
+					   RTE_ETH_RSS_IPV6_EX		|
+					   RTE_ETH_RSS_IPV6_TCP_EX);
 	return 0;
 }
 
@@ -519,10 +519,10 @@ lio_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
 	rss_param->param.flags &= ~LIO_RSS_PARAM_ITABLE_UNCHANGED;
 	rss_param->param.itablesize = LIO_RSS_MAX_TABLE_SZ;
 
-	for (i = 0; i < (reta_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask) & ((uint64_t)1 << j)) {
-				index = (i * RTE_RETA_GROUP_SIZE) + j;
+				index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
 				rss_state->itable[index] = reta_conf[i].reta[j];
 			}
 		}
@@ -562,12 +562,12 @@ lio_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	num = reta_size / RTE_RETA_GROUP_SIZE;
+	num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < num; i++) {
 		memcpy(reta_conf->reta,
-		       &rss_state->itable[i * RTE_RETA_GROUP_SIZE],
-		       RTE_RETA_GROUP_SIZE);
+		       &rss_state->itable[i * RTE_ETH_RETA_GROUP_SIZE],
+		       RTE_ETH_RETA_GROUP_SIZE);
 		reta_conf++;
 	}
 
@@ -595,17 +595,17 @@ lio_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
 		memcpy(hash_key, rss_state->hash_key, rss_state->hash_key_size);
 
 	if (rss_state->ip)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (rss_state->tcp_hash)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (rss_state->ipv6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (rss_state->ipv6_tcp_hash)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (rss_state->ipv6_ex)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (rss_state->ipv6_tcp_ex_hash)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 
 	rss_conf->rss_hf = rss_hf;
 
@@ -673,42 +673,42 @@ lio_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
 		if (rss_state->hash_disable)
 			return -EINVAL;
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
 			hashinfo |= LIO_RSS_HASH_IPV4;
 			rss_state->ip = 1;
 		} else {
 			rss_state->ip = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 			hashinfo |= LIO_RSS_HASH_TCP_IPV4;
 			rss_state->tcp_hash = 1;
 		} else {
 			rss_state->tcp_hash = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV6) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6) {
 			hashinfo |= LIO_RSS_HASH_IPV6;
 			rss_state->ipv6 = 1;
 		} else {
 			rss_state->ipv6 = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 			hashinfo |= LIO_RSS_HASH_TCP_IPV6;
 			rss_state->ipv6_tcp_hash = 1;
 		} else {
 			rss_state->ipv6_tcp_hash = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV6_EX) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX) {
 			hashinfo |= LIO_RSS_HASH_IPV6_EX;
 			rss_state->ipv6_ex = 1;
 		} else {
 			rss_state->ipv6_ex = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) {
 			hashinfo |= LIO_RSS_HASH_TCP_IPV6_EX;
 			rss_state->ipv6_tcp_ex_hash = 1;
 		} else {
@@ -757,7 +757,7 @@ lio_dev_udp_tunnel_add(struct rte_eth_dev *eth_dev,
 	if (udp_tnl == NULL)
 		return -EINVAL;
 
-	if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+	if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
 		lio_dev_err(lio_dev, "Unsupported tunnel type\n");
 		return -1;
 	}
@@ -814,7 +814,7 @@ lio_dev_udp_tunnel_del(struct rte_eth_dev *eth_dev,
 	if (udp_tnl == NULL)
 		return -EINVAL;
 
-	if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+	if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
 		lio_dev_err(lio_dev, "Unsupported tunnel type\n");
 		return -1;
 	}
@@ -912,10 +912,10 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
 
 	/* Initialize */
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	/* Return what we found */
 	if (lio_dev->linfo.link.s.link_up == 0) {
@@ -923,18 +923,18 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
 		return rte_eth_linkstatus_set(eth_dev, &link);
 	}
 
-	link.link_status = ETH_LINK_UP; /* Interface is up */
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP; /* Interface is up */
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	switch (lio_dev->linfo.link.s.speed) {
 	case LIO_LINK_SPEED_10000:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case LIO_LINK_SPEED_25000:
-		link.link_speed = ETH_SPEED_NUM_25G;
+		link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	default:
-		link.link_speed = ETH_SPEED_NUM_NONE;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	}
 
 	return rte_eth_linkstatus_set(eth_dev, &link);
@@ -1086,8 +1086,8 @@ lio_dev_rss_configure(struct rte_eth_dev *eth_dev)
 
 		q_idx = (uint8_t)((eth_dev->data->nb_rx_queues > 1) ?
 				  i % eth_dev->data->nb_rx_queues : 0);
-		conf_idx = i / RTE_RETA_GROUP_SIZE;
-		reta_idx = i % RTE_RETA_GROUP_SIZE;
+		conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
 		reta_conf[conf_idx].reta[reta_idx] = q_idx;
 		reta_conf[conf_idx].mask |= ((uint64_t)1 << reta_idx);
 	}
@@ -1103,10 +1103,10 @@ lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rss_conf rss_conf;
 
 	switch (eth_dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		lio_dev_rss_configure(eth_dev);
 		break;
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 	/* if mq_mode is none, disable rss mode. */
 	default:
 		memset(&rss_conf, 0, sizeof(rss_conf));
@@ -1484,7 +1484,7 @@ lio_dev_set_link_up(struct rte_eth_dev *eth_dev)
 	}
 
 	lio_dev->linfo.link.s.link_up = 1;
-	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -1505,11 +1505,11 @@ lio_dev_set_link_down(struct rte_eth_dev *eth_dev)
 	}
 
 	lio_dev->linfo.link.s.link_up = 0;
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	if (lio_send_rx_ctrl_cmd(eth_dev, 0)) {
 		lio_dev->linfo.link.s.link_up = 1;
-		eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+		eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 		lio_dev_err(lio_dev, "Unable to set Link Down\n");
 		return -1;
 	}
@@ -1721,9 +1721,9 @@ lio_dev_configure(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Inform firmware about change in number of queues to use.
 	 * Disable IO queues and reset registers for re-configuration.
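
The conf_idx/reta_idx arithmetic in the liquidio hunks above generalizes to any application touching the redirection table. A minimal sketch (illustrative only, not part of the diff) using the renamed RTE_ETH_RETA_GROUP_SIZE; the round-robin spread and the 512-entry bound are assumptions:

#include <errno.h>
#include <string.h>
#include <rte_ethdev.h>

static int
spread_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_rxq)
{
	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
						  RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	if (nb_rxq == 0 || reta_size > RTE_ETH_RSS_RETA_SIZE_512)
		return -EINVAL;
	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < reta_size; i++) {
		uint16_t conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;

		/* Round-robin the queues and mark the entry as valid. */
		reta_conf[conf_idx].reta[reta_idx] = i % nb_rxq;
		reta_conf[conf_idx].mask |= (uint64_t)1 << reta_idx;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
}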
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index 364e818d65c1..8533e39f6957 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -525,7 +525,7 @@ memif_disconnect(struct rte_eth_dev *dev)
 	int i;
 	int ret;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
 	pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTED;
 
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 980150293e86..9deb7a5f1360 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -55,10 +55,10 @@ static const char * const valid_arguments[] = {
 };
 
 static const struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_AUTONEG
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_AUTONEG
 };
 
 #define MEMIF_MP_SEND_REGION		"memif_mp_send_region"
@@ -199,7 +199,7 @@ memif_dev_info(struct rte_eth_dev *dev __rte_unused, struct rte_eth_dev_info *de
 	dev_info->max_rx_queues = ETH_MEMIF_MAX_NUM_Q_PAIRS;
 	dev_info->max_tx_queues = ETH_MEMIF_MAX_NUM_Q_PAIRS;
 	dev_info->min_rx_bufsize = 0;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -1219,7 +1219,7 @@ memif_connect(struct rte_eth_dev *dev)
 
 		pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
 		pmd->flags |= ETH_MEMIF_FLAG_CONNECTED;
-		dev->data->dev_link.link_status = ETH_LINK_UP;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	}
 	MIF_LOG(INFO, "Connected.");
 	return 0;
@@ -1381,10 +1381,10 @@ memif_link_update(struct rte_eth_dev *dev,
 
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		proc_private = dev->process_private;
-		if (dev->data->dev_link.link_status == ETH_LINK_UP &&
+		if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP &&
 				proc_private->regions_num == 0) {
 			memif_mp_request_regions(dev);
-		} else if (dev->data->dev_link.link_status == ETH_LINK_DOWN &&
+		} else if (dev->data->dev_link.link_status == RTE_ETH_LINK_DOWN &&
 				proc_private->regions_num > 0) {
 			memif_free_regions(dev);
 		}
diff --git a/drivers/net/mlx4/mlx4_ethdev.c b/drivers/net/mlx4/mlx4_ethdev.c
index 783ff94dce8d..d606ec8ca76d 100644
--- a/drivers/net/mlx4/mlx4_ethdev.c
+++ b/drivers/net/mlx4/mlx4_ethdev.c
@@ -657,11 +657,11 @@ mlx4_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 	info->if_index = priv->if_index;
 	info->hash_key_size = MLX4_RSS_HASH_KEY_SIZE;
 	info->speed_capa =
-			ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_10G |
-			ETH_LINK_SPEED_20G |
-			ETH_LINK_SPEED_40G |
-			ETH_LINK_SPEED_56G;
+			RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_20G |
+			RTE_ETH_LINK_SPEED_40G |
+			RTE_ETH_LINK_SPEED_56G;
 	info->flow_type_rss_offloads = mlx4_conv_rss_types(priv, 0, 1);
 
 	return 0;
@@ -821,13 +821,13 @@ mlx4_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 	}
 	link_speed = ethtool_cmd_speed(&edata);
 	if (link_speed == -1)
-		dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	else
 		dev_link.link_speed = link_speed;
 	dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
-				ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+				RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
 	dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				  ETH_LINK_SPEED_FIXED);
+				  RTE_ETH_LINK_SPEED_FIXED);
 	dev->data->dev_link = dev_link;
 	return 0;
 }
@@ -863,13 +863,13 @@ mlx4_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 	fc_conf->autoneg = ethpause.autoneg;
 	if (ethpause.rx_pause && ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (ethpause.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	ret = 0;
 out:
 	MLX4_ASSERT(ret >= 0);
@@ -899,13 +899,13 @@ mlx4_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	ifr.ifr_data = (void *)&ethpause;
 	ethpause.autoneg = fc_conf->autoneg;
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_RX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
 		ethpause.rx_pause = 1;
 	else
 		ethpause.rx_pause = 0;
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_TX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
 		ethpause.tx_pause = 1;
 	else
 		ethpause.tx_pause = 0;
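
The RTE_ETH_FC_* mapping above has a direct application-side mirror. A minimal sketch (illustrative only, not part of the diff) of switching a port to full flow control:

#include <rte_ethdev.h>

static int
enable_pause_frames(uint16_t port_id)
{
	struct rte_eth_fc_conf fc_conf;
	int ret;

	/* Read current settings so unrelated fields are preserved. */
	ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
	if (ret != 0)
		return ret;
	fc_conf.mode = RTE_ETH_FC_FULL; /* pause frames both ways */
	return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}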
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 71ea91b3fb82..2e1b6c87e983 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -109,21 +109,21 @@ mlx4_conv_rss_types(struct mlx4_priv *priv, uint64_t types, int verbs_to_dpdk)
 	};
 	static const uint64_t dpdk[] = {
 		[INNER] = 0,
-		[IPV4] = ETH_RSS_IPV4,
-		[IPV4_1] = ETH_RSS_FRAG_IPV4,
-		[IPV4_2] = ETH_RSS_NONFRAG_IPV4_OTHER,
-		[IPV6] = ETH_RSS_IPV6,
-		[IPV6_1] = ETH_RSS_FRAG_IPV6,
-		[IPV6_2] = ETH_RSS_NONFRAG_IPV6_OTHER,
-		[IPV6_3] = ETH_RSS_IPV6_EX,
+		[IPV4] = RTE_ETH_RSS_IPV4,
+		[IPV4_1] = RTE_ETH_RSS_FRAG_IPV4,
+		[IPV4_2] = RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+		[IPV6] = RTE_ETH_RSS_IPV6,
+		[IPV6_1] = RTE_ETH_RSS_FRAG_IPV6,
+		[IPV6_2] = RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+		[IPV6_3] = RTE_ETH_RSS_IPV6_EX,
 		[TCP] = 0,
 		[UDP] = 0,
-		[IPV4_TCP] = ETH_RSS_NONFRAG_IPV4_TCP,
-		[IPV4_UDP] = ETH_RSS_NONFRAG_IPV4_UDP,
-		[IPV6_TCP] = ETH_RSS_NONFRAG_IPV6_TCP,
-		[IPV6_TCP_1] = ETH_RSS_IPV6_TCP_EX,
-		[IPV6_UDP] = ETH_RSS_NONFRAG_IPV6_UDP,
-		[IPV6_UDP_1] = ETH_RSS_IPV6_UDP_EX,
+		[IPV4_TCP] = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+		[IPV4_UDP] = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+		[IPV6_TCP] = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+		[IPV6_TCP_1] = RTE_ETH_RSS_IPV6_TCP_EX,
+		[IPV6_UDP] = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+		[IPV6_UDP_1] = RTE_ETH_RSS_IPV6_UDP_EX,
 	};
 	static const uint64_t verbs[RTE_DIM(dpdk)] = {
 		[INNER] = IBV_RX_HASH_INNER,
@@ -1283,7 +1283,7 @@ mlx4_flow_internal_next_vlan(struct mlx4_priv *priv, uint16_t vlan)
  * - MAC flow rules are generated from @p dev->data->mac_addrs
  *   (@p priv->mac array).
  * - An additional flow rule for Ethernet broadcasts is also generated.
- * - All these are per-VLAN if @p DEV_RX_OFFLOAD_VLAN_FILTER
+ * - All these are per-VLAN if @p RTE_ETH_RX_OFFLOAD_VLAN_FILTER
  *   is enabled and VLAN filters are configured.
  *
  * @param priv
@@ -1358,7 +1358,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 	struct rte_ether_addr *rule_mac = &eth_spec.dst;
 	rte_be16_t *rule_vlan =
 		(ETH_DEV(priv)->data->dev_conf.rxmode.offloads &
-		 DEV_RX_OFFLOAD_VLAN_FILTER) &&
+		 RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 		!ETH_DEV(priv)->data->promiscuous ?
 		&vlan_spec.tci :
 		NULL;
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index d56009c41845..2aab0f60a7b5 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -118,7 +118,7 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
 static void
 mlx4_link_status_alarm(struct mlx4_priv *priv)
 {
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 
 	MLX4_ASSERT(priv->intr_alarm == 1);
@@ -183,7 +183,7 @@ mlx4_interrupt_handler(struct mlx4_priv *priv)
 	};
 	uint32_t caught[RTE_DIM(type)] = { 0 };
 	struct ibv_async_event event;
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 	unsigned int i;
 
@@ -280,7 +280,7 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
 int
 mlx4_intr_install(struct mlx4_priv *priv)
 {
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 	int rc;
 
@@ -386,7 +386,7 @@ mlx4_rx_intr_enable(struct rte_eth_dev *dev, uint16_t idx)
 int
 mlx4_rxq_intr_enable(struct mlx4_priv *priv)
 {
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 
 	if (intr_conf->rxq && mlx4_rx_intr_vec_enable(priv) < 0)
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index ee2d2b75e59a..781ee256df71 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -682,12 +682,12 @@ mlx4_rxq_detach(struct rxq *rxq)
 uint64_t
 mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
 {
-	uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
-			    DEV_RX_OFFLOAD_KEEP_CRC |
-			    DEV_RX_OFFLOAD_RSS_HASH;
+	uint64_t offloads = RTE_ETH_RX_OFFLOAD_SCATTER |
+			    RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+			    RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (priv->hw_csum)
-		offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 	return offloads;
 }
 
@@ -703,7 +703,7 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
 uint64_t
 mlx4_get_rx_port_offloads(struct mlx4_priv *priv)
 {
-	uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+	uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	(void)priv;
 	return offloads;
@@ -785,7 +785,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	}
 	/* By default, FCS (CRC) is stripped by hardware. */
 	crc_present = 0;
-	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		if (priv->hw_fcs_strip) {
 			crc_present = 1;
 		} else {
@@ -816,9 +816,9 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		.elts = elts,
 		/* Toggle Rx checksum offload if hardware supports it. */
 		.csum = priv->hw_csum &&
-			(offloads & DEV_RX_OFFLOAD_CHECKSUM),
+			(offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
 		.csum_l2tun = priv->hw_csum_l2tun &&
-			      (offloads & DEV_RX_OFFLOAD_CHECKSUM),
+			      (offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
 		.crc_present = crc_present,
 		.l2tun_offload = priv->hw_csum_l2tun,
 		.stats = {
@@ -832,7 +832,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
 	if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
 		;
-	} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+	} else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
 		uint32_t sges_n;
 
diff --git a/drivers/net/mlx4/mlx4_txq.c b/drivers/net/mlx4/mlx4_txq.c
index 7d8c4f2a2223..0db2e55befd3 100644
--- a/drivers/net/mlx4/mlx4_txq.c
+++ b/drivers/net/mlx4/mlx4_txq.c
@@ -273,20 +273,20 @@ mlx4_txq_fill_dv_obj_info(struct txq *txq, struct mlx4dv_obj *mlxdv)
 uint64_t
 mlx4_get_tx_port_offloads(struct mlx4_priv *priv)
 {
-	uint64_t offloads = DEV_TX_OFFLOAD_MULTI_SEGS;
+	uint64_t offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	if (priv->hw_csum) {
-		offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_UDP_CKSUM |
-			     DEV_TX_OFFLOAD_TCP_CKSUM);
+		offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
 	}
 	if (priv->tso)
-		offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+		offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	if (priv->hw_csum_l2tun) {
-		offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+		offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 		if (priv->tso)
-			offloads |= (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				     DEV_TX_OFFLOAD_GRE_TNL_TSO);
+			offloads |= (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
 	}
 	return offloads;
 }
@@ -394,12 +394,12 @@ mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		.elts_comp_cd_init =
 			RTE_MIN(MLX4_PMD_TX_PER_COMP_REQ, desc / 4),
 		.csum = priv->hw_csum &&
-			(offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-					   DEV_TX_OFFLOAD_UDP_CKSUM |
-					   DEV_TX_OFFLOAD_TCP_CKSUM)),
+			(offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+					   RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+					   RTE_ETH_TX_OFFLOAD_TCP_CKSUM)),
 		.csum_l2tun = priv->hw_csum_l2tun &&
 			      (offloads &
-			       DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM),
+			       RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM),
 		/* Enable Tx loopback for VF devices. */
 		.lb = !!priv->vf,
 		.bounce_buf = bounce_buf,
diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index f34133e2c641..79e27fe2d668 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -439,24 +439,24 @@ mlx5_link_update_unlocked_gset(struct rte_eth_dev *dev,
 	}
 	link_speed = ethtool_cmd_speed(&edata);
 	if (link_speed == -1)
-		dev_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		dev_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	else
 		dev_link.link_speed = link_speed;
 	priv->link_speed_capa = 0;
 	if (edata.supported & (SUPPORTED_1000baseT_Full |
 			       SUPPORTED_1000baseKX_Full))
-		priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (edata.supported & SUPPORTED_10000baseKR_Full)
-		priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (edata.supported & (SUPPORTED_40000baseKR4_Full |
 			       SUPPORTED_40000baseCR4_Full |
 			       SUPPORTED_40000baseSR4_Full |
 			       SUPPORTED_40000baseLR4_Full))
-		priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
-				ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+				RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
 	dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 	*link = dev_link;
 	return 0;
 }
@@ -545,45 +545,45 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
 		return ret;
 	}
 	dev_link.link_speed = (ecmd->speed == UINT32_MAX) ?
-				ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
+				RTE_ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
 	sc = ecmd->link_mode_masks[0] |
 		((uint64_t)ecmd->link_mode_masks[1] << 32);
 	priv->link_speed_capa = 0;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseT_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseKX_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKR_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseR_FEC_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseMLD2_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_20G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_20G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseCR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseSR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseLR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_56G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_56G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseCR_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseKR_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseSR_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_25G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_50G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_100G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
 
 	sc = ecmd->link_mode_masks[2] |
 		((uint64_t)ecmd->link_mode_masks[3] << 32);
@@ -591,11 +591,11 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
 		  MLX5_BITSHIFT
 		       (ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseDR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
 	dev_link.link_duplex = ((ecmd->duplex == DUPLEX_HALF) ?
-				ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+				RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
 	dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				  ETH_LINK_SPEED_FIXED);
+				  RTE_ETH_LINK_SPEED_FIXED);
 	*link = dev_link;
 	return 0;
 }
@@ -677,13 +677,13 @@ mlx5_dev_get_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 	fc_conf->autoneg = ethpause.autoneg;
 	if (ethpause.rx_pause && ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (ethpause.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	return 0;
 }
 
@@ -709,14 +709,14 @@ mlx5_dev_set_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	ifr.ifr_data = (void *)&ethpause;
 	ethpause.autoneg = fc_conf->autoneg;
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_RX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
 		ethpause.rx_pause = 1;
 	else
 		ethpause.rx_pause = 0;
 
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_TX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
 		ethpause.tx_pause = 1;
 	else
 		ethpause.tx_pause = 0;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index a823d26bebf9..d207ec053e07 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1350,8 +1350,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	 * Remove this check once DPDK supports larger/variable
 	 * indirection tables.
 	 */
-	if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
-		config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+	if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+		config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
 	DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
 		config->ind_table_max_size);
 	config->hw_vlan_strip = !!(sh->device_attr.raw_packet_caps &
@@ -1634,7 +1634,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	/*
 	 * If HW has bug working with tunnel packet decapsulation and
 	 * scatter FCS, and decapsulation is needed, clear the hw_fcs_strip
-	 * bit. Then DEV_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
+	 * bit. Then RTE_ETH_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
 	 */
 	if (config->hca_attr.scatter_fcs_w_decap_disable && config->decap_en)
 		config->hw_fcs_strip = 0;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index e28cc461b914..7727dfb4196c 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1488,10 +1488,10 @@ mlx5_udp_tunnel_port_add(struct rte_eth_dev *dev __rte_unused,
 			 struct rte_eth_udp_tunnel *udp_tunnel)
 {
 	MLX5_ASSERT(udp_tunnel != NULL);
-	if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN &&
+	if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN &&
 	    udp_tunnel->udp_port == 4789)
 		return 0;
-	if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN_GPE &&
+	if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN_GPE &&
 	    udp_tunnel->udp_port == 4790)
 		return 0;
 	return -ENOTSUP;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index a15f86616d49..ea17a86f4955 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1217,7 +1217,7 @@ TAILQ_HEAD(mlx5_legacy_flow_meters, mlx5_legacy_flow_meter);
 struct mlx5_flow_rss_desc {
 	uint32_t level;
 	uint32_t queue_num; /**< Number of entries in @p queue. */
-	uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+	uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
 	uint64_t hash_fields; /* Verbs Hash fields. */
 	uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
 	uint32_t key_len; /**< RSS hash key len. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index fe86bb40d351..12ddf4c7ff28 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -90,11 +90,11 @@
 #define MLX5_VPMD_DESCS_PER_LOOP      4
 
 /* Mask of RSS on source only or destination only. */
-#define MLX5_RSS_SRC_DST_ONLY (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | \
-			       ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define MLX5_RSS_SRC_DST_ONLY (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY | \
+			       RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
 
 /* Supported RSS */
-#define MLX5_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP | \
+#define MLX5_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP | \
 			    MLX5_RSS_SRC_DST_ONLY))
 
 /* Timeout in seconds to get a valid link status. */
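
MLX5_RSS_HF_MASK above is the usual pattern for bounding a requested rss_hf. A minimal sketch (illustrative only, not part of the diff) of the same check with the renamed flags; the supported set here is a made-up example:

#include <errno.h>
#include <stdint.h>
#include <rte_ethdev.h>

/* Hash types this hypothetical driver can compute. */
#define SUPPORTED_RSS_HF (RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP)

static int
check_rss_hf(uint64_t rss_hf)
{
	/* Reject any requested type outside the supported set. */
	if (rss_hf & ~(uint64_t)SUPPORTED_RSS_HF)
		return -ENOTSUP;
	return 0;
}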
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 82e2284d9866..f2b78c3cc69e 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -91,7 +91,7 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 	}
 
 	if ((dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
+			RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
 			rte_mbuf_dyn_tx_timestamp_register(NULL, NULL) != 0) {
 		DRV_LOG(ERR, "port %u cannot register Tx timestamp field/flag",
 			dev->data->port_id);
@@ -225,8 +225,8 @@ mlx5_set_default_params(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 	info->default_txportconf.ring_size = 256;
 	info->default_rxportconf.burst_size = MLX5_RX_DEFAULT_BURST;
 	info->default_txportconf.burst_size = MLX5_TX_DEFAULT_BURST;
-	if ((priv->link_speed_capa & ETH_LINK_SPEED_200G) |
-		(priv->link_speed_capa & ETH_LINK_SPEED_100G)) {
+	if ((priv->link_speed_capa & RTE_ETH_LINK_SPEED_200G) |
+		(priv->link_speed_capa & RTE_ETH_LINK_SPEED_100G)) {
 		info->default_rxportconf.nb_queues = 16;
 		info->default_txportconf.nb_queues = 16;
 		if (dev->data->nb_rx_queues > 2 ||
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index b4d0b7b5ef32..4309852523b2 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -98,7 +98,7 @@ struct mlx5_flow_expand_node {
 	uint64_t rss_types;
 	/**<
 	 * RSS types bit-field associated with this node
-	 * (see ETH_RSS_* definitions).
+	 * (see RTE_ETH_RSS_* definitions).
 	 */
 	uint64_t node_flags;
 	/**<
@@ -298,7 +298,7 @@ mlx5_flow_expand_rss_skip_explicit(const struct mlx5_flow_expand_node graph[],
  * @param[in] pattern
  *   User flow pattern.
  * @param[in] types
- *   RSS types to expand (see ETH_RSS_* definitions).
+ *   RSS types to expand (see RTE_ETH_RSS_* definitions).
  * @param[in] graph
  *   Input graph to expand @p pattern according to @p types.
  * @param[in] graph_root_index
@@ -560,8 +560,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 			 MLX5_EXPANSION_IPV4,
 			 MLX5_EXPANSION_IPV6),
 		.type = RTE_FLOW_ITEM_TYPE_IPV4,
-		.rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			ETH_RSS_NONFRAG_IPV4_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	},
 	[MLX5_EXPANSION_OUTER_IPV4_UDP] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -569,11 +569,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 						  MLX5_EXPANSION_MPLS,
 						  MLX5_EXPANSION_GTP),
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 	},
 	[MLX5_EXPANSION_OUTER_IPV4_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 	},
 	[MLX5_EXPANSION_OUTER_IPV6] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT
@@ -584,8 +584,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 			 MLX5_EXPANSION_GRE,
 			 MLX5_EXPANSION_NVGRE),
 		.type = RTE_FLOW_ITEM_TYPE_IPV6,
-		.rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-			ETH_RSS_NONFRAG_IPV6_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	},
 	[MLX5_EXPANSION_OUTER_IPV6_UDP] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -593,11 +593,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 						  MLX5_EXPANSION_MPLS,
 						  MLX5_EXPANSION_GTP),
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 	},
 	[MLX5_EXPANSION_OUTER_IPV6_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	},
 	[MLX5_EXPANSION_VXLAN] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_ETH,
@@ -659,32 +659,32 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4_UDP,
 						  MLX5_EXPANSION_IPV4_TCP),
 		.type = RTE_FLOW_ITEM_TYPE_IPV4,
-		.rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			ETH_RSS_NONFRAG_IPV4_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	},
 	[MLX5_EXPANSION_IPV4_UDP] = {
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 	},
 	[MLX5_EXPANSION_IPV4_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 	},
 	[MLX5_EXPANSION_IPV6] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV6_UDP,
 						  MLX5_EXPANSION_IPV6_TCP,
 						  MLX5_EXPANSION_IPV6_FRAG_EXT),
 		.type = RTE_FLOW_ITEM_TYPE_IPV6,
-		.rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-			ETH_RSS_NONFRAG_IPV6_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	},
 	[MLX5_EXPANSION_IPV6_UDP] = {
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 	},
 	[MLX5_EXPANSION_IPV6_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	},
 	[MLX5_EXPANSION_IPV6_FRAG_EXT] = {
 		.type = RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
@@ -1095,7 +1095,7 @@ mlx5_flow_item_acceptable(const struct rte_flow_item *item,
  * @param[in] tunnel
  *   1 when the hash field is for a tunnel item.
  * @param[in] layer_types
- *   ETH_RSS_* types.
+ *   RTE_ETH_RSS_* types.
  * @param[in] hash_fields
  *   Item hash fields.
  *
@@ -1648,14 +1648,14 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev,
 					  &rss->types,
 					  "some RSS protocols are not"
 					  " supported");
-	if ((rss->types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) &&
-	    !(rss->types & ETH_RSS_IP))
+	if ((rss->types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) &&
+	    !(rss->types & RTE_ETH_RSS_IP))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
 					  "L3 partial RSS requested but L3 RSS"
 					  " type not specified");
-	if ((rss->types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) &&
-	    !(rss->types & (ETH_RSS_UDP | ETH_RSS_TCP)))
+	if ((rss->types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) &&
+	    !(rss->types & (RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP)))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
 					  "L4 partial RSS requested but L4 RSS"
@@ -6411,8 +6411,8 @@ flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 		 * mlx5_flow_hashfields_adjust() in advance.
 		 */
 		rss_desc->level = rss->level;
-		/* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
-		rss_desc->types = !rss->types ? ETH_RSS_IP : rss->types;
+		/* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+		rss_desc->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
 	}
 	flow->dev_handles = 0;
 	if (rss && rss->types) {
@@ -7036,7 +7036,7 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev,
 	if (!priv->reta_idx_n || !priv->rxqs_n) {
 		return 0;
 	}
-	if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+	if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 		action_rss.types = 0;
 	for (i = 0; i != priv->reta_idx_n; ++i)
 		queue[i] = (*priv->reta_idx)[i];
@@ -8704,7 +8704,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
 				(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 				NULL, "invalid port configuration");
-		if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+		if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 			ctx->action_rss.types = 0;
 		for (i = 0; i != priv->reta_idx_n; ++i)
 			ctx->queue[i] = (*priv->reta_idx)[i];
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 5c68d4f7d742..ff85c1c013a5 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -328,18 +328,18 @@ enum mlx5_feature_name {
 
 /* Valid layer type for IPV4 RSS. */
 #define MLX5_IPV4_LAYER_TYPES \
-	(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
-	 ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
-	 ETH_RSS_NONFRAG_IPV4_OTHER)
+	(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+	 RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	 RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
 
 /* IBV hash source bits  for IPV4. */
 #define MLX5_IPV4_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV4 | IBV_RX_HASH_DST_IPV4)
 
 /* Valid layer type for IPV6 RSS. */
 #define MLX5_IPV6_LAYER_TYPES \
-	(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP | \
-	 ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_EX  | ETH_RSS_IPV6_TCP_EX | \
-	 ETH_RSS_IPV6_UDP_EX | ETH_RSS_NONFRAG_IPV6_OTHER)
+	(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	 RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_EX  | RTE_ETH_RSS_IPV6_TCP_EX | \
+	 RTE_ETH_RSS_IPV6_UDP_EX | RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
 
 /* IBV hash source bits  for IPV6. */
 #define MLX5_IPV6_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV6 | IBV_RX_HASH_DST_IPV6)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index e31d4d846825..759fe57f19d6 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10837,9 +10837,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 	if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV4)) ||
 	    (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV4))) {
 		if (rss_types & MLX5_IPV4_LAYER_TYPES) {
-			if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV4;
-			else if (rss_types & ETH_RSS_L3_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV4;
 			else
 				dev_flow->hash_fields |= MLX5_IPV4_IBV_RX_HASH;
@@ -10847,9 +10847,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 	} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV6)) ||
 		   (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV6))) {
 		if (rss_types & MLX5_IPV6_LAYER_TYPES) {
-			if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV6;
-			else if (rss_types & ETH_RSS_L3_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV6;
 			else
 				dev_flow->hash_fields |= MLX5_IPV6_IBV_RX_HASH;
@@ -10863,11 +10863,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 		return;
 	if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_UDP)) ||
 	    (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_UDP))) {
-		if (rss_types & ETH_RSS_UDP) {
-			if (rss_types & ETH_RSS_L4_SRC_ONLY)
+		if (rss_types & RTE_ETH_RSS_UDP) {
+			if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_SRC_PORT_UDP;
-			else if (rss_types & ETH_RSS_L4_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_DST_PORT_UDP;
 			else
@@ -10875,11 +10875,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 		}
 	} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_TCP)) ||
 		   (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_TCP))) {
-		if (rss_types & ETH_RSS_TCP) {
-			if (rss_types & ETH_RSS_L4_SRC_ONLY)
+		if (rss_types & RTE_ETH_RSS_TCP) {
+			if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_SRC_PORT_TCP;
-			else if (rss_types & ETH_RSS_L4_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_DST_PORT_TCP;
 			else
@@ -14418,9 +14418,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV4:
 		if (rss_types & MLX5_IPV4_LAYER_TYPES) {
 			*hash_field &= ~MLX5_RSS_HASH_IPV4;
-			if (rss_types & ETH_RSS_L3_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_IPV4;
-			else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_IPV4;
 			else
 				*hash_field |= MLX5_RSS_HASH_IPV4;
@@ -14429,9 +14429,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV6:
 		if (rss_types & MLX5_IPV6_LAYER_TYPES) {
 			*hash_field &= ~MLX5_RSS_HASH_IPV6;
-			if (rss_types & ETH_RSS_L3_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_IPV6;
-			else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_IPV6;
 			else
 				*hash_field |= MLX5_RSS_HASH_IPV6;
@@ -14440,11 +14440,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV4_UDP:
 		/* fall-through. */
 	case MLX5_RSS_HASH_IPV6_UDP:
-		if (rss_types & ETH_RSS_UDP) {
+		if (rss_types & RTE_ETH_RSS_UDP) {
 			*hash_field &= ~MLX5_UDP_IBV_RX_HASH;
-			if (rss_types & ETH_RSS_L4_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_PORT_UDP;
-			else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_PORT_UDP;
 			else
 				*hash_field |= MLX5_UDP_IBV_RX_HASH;
@@ -14453,11 +14453,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV4_TCP:
 		/* fall-through. */
 	case MLX5_RSS_HASH_IPV6_TCP:
-		if (rss_types & ETH_RSS_TCP) {
+		if (rss_types & RTE_ETH_RSS_TCP) {
 			*hash_field &= ~MLX5_TCP_IBV_RX_HASH;
-			if (rss_types & ETH_RSS_L4_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_PORT_TCP;
-			else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_PORT_TCP;
 			else
 				*hash_field |= MLX5_TCP_IBV_RX_HASH;
@@ -14605,8 +14605,8 @@ __flow_dv_action_rss_create(struct rte_eth_dev *dev,
 	origin = &shared_rss->origin;
 	origin->func = rss->func;
 	origin->level = rss->level;
-	/* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
-	origin->types = !rss->types ? ETH_RSS_IP : rss->types;
+	/* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+	origin->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
 	/* NULL RSS key indicates default RSS key. */
 	rss_key = !rss->key ? rss_hash_default_key : rss->key;
 	memcpy(shared_rss->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 1627c3905fa4..8a455cbf22f4 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1816,7 +1816,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
 			if (dev_flow->hash_fields != 0)
 				dev_flow->hash_fields |=
 					mlx5_flow_hashfields_adjust
-					(rss_desc, tunnel, ETH_RSS_TCP,
+					(rss_desc, tunnel, RTE_ETH_RSS_TCP,
 					 (IBV_RX_HASH_SRC_PORT_TCP |
 					  IBV_RX_HASH_DST_PORT_TCP));
 			item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
@@ -1829,7 +1829,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
 			if (dev_flow->hash_fields != 0)
 				dev_flow->hash_fields |=
 					mlx5_flow_hashfields_adjust
-					(rss_desc, tunnel, ETH_RSS_UDP,
+					(rss_desc, tunnel, RTE_ETH_RSS_UDP,
 					 (IBV_RX_HASH_SRC_PORT_UDP |
 					  IBV_RX_HASH_DST_PORT_UDP));
 			item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
index c32129cdc2b8..a4f690039e24 100644
--- a/drivers/net/mlx5/mlx5_rss.c
+++ b/drivers/net/mlx5/mlx5_rss.c
@@ -68,7 +68,7 @@ mlx5_rss_hash_update(struct rte_eth_dev *dev,
 		if (!(*priv->rxqs)[i])
 			continue;
 		(*priv->rxqs)[i]->rss_hash = !!rss_conf->rss_hf &&
-			!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS);
+			!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS);
 		++idx;
 	}
 	return 0;
@@ -170,8 +170,8 @@ mlx5_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 	/* Fill each entry of the table even if its bit is not set. */
 	for (idx = 0, i = 0; (i != reta_size); ++i) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		reta_conf[idx].reta[i % RTE_RETA_GROUP_SIZE] =
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
 			(*priv->reta_idx)[i];
 	}
 	return 0;
@@ -209,8 +209,8 @@ mlx5_dev_rss_reta_update(struct rte_eth_dev *dev,
 	if (ret)
 		return ret;
 	for (idx = 0, i = 0; (i != reta_size); ++i) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		pos = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (((reta_conf[idx].mask >> i) & 0x1) == 0)
 			continue;
 		MLX5_ASSERT(reta_conf[idx].reta[pos] < priv->rxqs_n);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index d8d7e481dea0..eb4dc3375248 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -333,22 +333,22 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_dev_config *config = &priv->config;
-	uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
-			     DEV_RX_OFFLOAD_TIMESTAMP |
-			     DEV_RX_OFFLOAD_RSS_HASH);
+	uint64_t offloads = (RTE_ETH_RX_OFFLOAD_SCATTER |
+			     RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+			     RTE_ETH_RX_OFFLOAD_RSS_HASH);
 
 	if (!config->mprq.enabled)
 		offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
 	if (config->hw_fcs_strip)
-		offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	if (config->hw_csum)
-		offloads |= (DEV_RX_OFFLOAD_IPV4_CKSUM |
-			     DEV_RX_OFFLOAD_UDP_CKSUM |
-			     DEV_RX_OFFLOAD_TCP_CKSUM);
+		offloads |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			     RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
 	if (config->hw_vlan_strip)
-		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	if (MLX5_LRO_SUPPORTED(dev))
-		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+		offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 	return offloads;
 }
 
@@ -362,7 +362,7 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
 uint64_t
 mlx5_get_rx_port_offloads(void)
 {
-	uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+	uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	return offloads;
 }
@@ -694,7 +694,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 				    dev->data->dev_conf.rxmode.offloads;
 
 		/* The offloads should be checked on rte_eth_dev layer. */
-		MLX5_ASSERT(offloads & DEV_RX_OFFLOAD_SCATTER);
+		MLX5_ASSERT(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
 		if (!(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
 			DRV_LOG(ERR, "port %u queue index %u split "
 				     "offload not configured",
@@ -1325,7 +1325,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	struct mlx5_dev_config *config = &priv->config;
 	uint64_t offloads = conf->offloads |
 			   dev->data->dev_conf.rxmode.offloads;
-	unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
+	unsigned int lro_on_queue = !!(offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO);
 	unsigned int max_rx_pktlen = lro_on_queue ?
 			dev->data->dev_conf.rxmode.max_lro_pkt_size :
 			dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
@@ -1428,7 +1428,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	} while (tail_len || !rte_is_power_of_2(tmpl->rxq.rxseg_n));
 	MLX5_ASSERT(tmpl->rxq.rxseg_n &&
 		    tmpl->rxq.rxseg_n <= MLX5_MAX_RXQ_NSEG);
-	if (tmpl->rxq.rxseg_n > 1 && !(offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	if (tmpl->rxq.rxseg_n > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
 			" configured and no enough mbuf space(%u) to contain "
 			"the maximum RX packet length(%u) with head-room(%u)",
@@ -1472,7 +1472,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 			config->mprq.stride_size_n : mprq_stride_size;
 		tmpl->rxq.strd_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT;
 		tmpl->rxq.strd_scatter_en =
-				!!(offloads & DEV_RX_OFFLOAD_SCATTER);
+				!!(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
 		tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
 				config->mprq.max_memcpy_len);
 		max_lro_size = RTE_MIN(max_rx_pktlen,
@@ -1487,7 +1487,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
 		tmpl->rxq.sges_n = 0;
 		max_lro_size = max_rx_pktlen;
-	} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+	} else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		unsigned int sges_n;
 
 		if (lro_on_queue && first_mb_free_size <
@@ -1548,9 +1548,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	}
 	mlx5_max_lro_msg_size_adjust(dev, idx, max_lro_size);
 	/* Toggle RX checksum offload if hardware supports it. */
-	tmpl->rxq.csum = !!(offloads & DEV_RX_OFFLOAD_CHECKSUM);
+	tmpl->rxq.csum = !!(offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM);
 	/* Configure Rx timestamp. */
-	tmpl->rxq.hw_timestamp = !!(offloads & DEV_RX_OFFLOAD_TIMESTAMP);
+	tmpl->rxq.hw_timestamp = !!(offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP);
 	tmpl->rxq.timestamp_rx_flag = 0;
 	if (tmpl->rxq.hw_timestamp && rte_mbuf_dyn_rx_timestamp_register(
 			&tmpl->rxq.timestamp_offset,
@@ -1559,11 +1559,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		goto error;
 	}
 	/* Configure VLAN stripping. */
-	tmpl->rxq.vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	tmpl->rxq.vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 	/* By default, FCS (CRC) is stripped by hardware. */
 	tmpl->rxq.crc_present = 0;
 	tmpl->rxq.lro = lro_on_queue;
-	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		if (config->hw_fcs_strip) {
 			/*
 			 * RQs used for LRO-enabled TIRs should not be
@@ -1593,7 +1593,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		tmpl->rxq.crc_present << 2);
 	/* Save port ID. */
 	tmpl->rxq.rss_hash = !!priv->rss_conf.rss_hf &&
-		(!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS));
+		(!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS));
 	tmpl->rxq.port_id = dev->data->port_id;
 	tmpl->priv = priv;
 	tmpl->rxq.mp = rx_seg[0].mp;
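
On the application side, the renamed Rx macros come together at configure time. A minimal sketch (illustrative only, not part of the diff); the queue counts are assumptions, and real code should additionally mask rss_hf against dev_info.flow_type_rss_offloads:

#include <rte_ethdev.h>

static int
configure_rss_port(uint16_t port_id)
{
	struct rte_eth_conf conf = {
		.rxmode = {
			.mq_mode = RTE_ETH_MQ_RX_RSS,
			.offloads = RTE_ETH_RX_OFFLOAD_RSS_HASH,
		},
		.rx_adv_conf.rss_conf = {
			.rss_key = NULL, /* keep the PMD default key */
			.rss_hf = RTE_ETH_RSS_IP,
		},
	};

	/* 4 Rx and 4 Tx queues, purely as an example. */
	return rte_eth_dev_configure(port_id, 4, 4, &conf);
}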
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.h b/drivers/net/mlx5/mlx5_rxtx_vec.h
index 93b4f517bb3e..65d91bdf67e2 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.h
@@ -16,10 +16,10 @@
 
 /* HW checksum offload capabilities of vectorized Tx. */
 #define MLX5_VEC_TX_CKSUM_OFFLOAD_CAP \
-	(DEV_TX_OFFLOAD_IPV4_CKSUM | \
-	 DEV_TX_OFFLOAD_UDP_CKSUM | \
-	 DEV_TX_OFFLOAD_TCP_CKSUM | \
-	 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+	(RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+	 RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+	 RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+	 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 
 /*
  * Compile time sanity check for vectorized functions.
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index df671379e46d..12aeba60348a 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -523,36 +523,36 @@ mlx5_select_tx_function(struct rte_eth_dev *dev)
 	unsigned int diff = 0, olx = 0, i, m;
 
 	MLX5_ASSERT(priv);
-	if (tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
 		/* We should support Multi-Segment Packets. */
 		olx |= MLX5_TXOFF_CONFIG_MULTI;
 	}
-	if (tx_offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-			   DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			   DEV_TX_OFFLOAD_GRE_TNL_TSO |
-			   DEV_TX_OFFLOAD_IP_TNL_TSO |
-			   DEV_TX_OFFLOAD_UDP_TNL_TSO)) {
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+			   RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO)) {
 		/* We should support TCP Send Offload. */
 		olx |= MLX5_TXOFF_CONFIG_TSO;
 	}
-	if (tx_offloads & (DEV_TX_OFFLOAD_IP_TNL_TSO |
-			   DEV_TX_OFFLOAD_UDP_TNL_TSO |
-			   DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 		/* We should support Software Parser for Tunnels. */
 		olx |= MLX5_TXOFF_CONFIG_SWP;
 	}
-	if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			   DEV_TX_OFFLOAD_UDP_CKSUM |
-			   DEV_TX_OFFLOAD_TCP_CKSUM |
-			   DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 		/* We should support IP/TCP/UDP Checksums. */
 		olx |= MLX5_TXOFF_CONFIG_CSUM;
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) {
 		/* We should support VLAN insertion. */
 		olx |= MLX5_TXOFF_CONFIG_VLAN;
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
 	    rte_mbuf_dynflag_lookup
 			(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL) >= 0 &&
 	    rte_mbuf_dynfield_lookup
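
mlx5_select_tx_function() folds the requested Tx offload bits down to a small set of
internal configuration flags and picks a burst routine accordingly. A hedged sketch
of that folding pattern follows; the CFG_* flags are hypothetical stand-ins for the
driver's MLX5_TXOFF_CONFIG_* bits:

#include <stdint.h>
#include <rte_bitops.h>
#include <rte_ethdev.h>

#define CFG_CSUM RTE_BIT32(0) /* stand-in for MLX5_TXOFF_CONFIG_CSUM */
#define CFG_TSO  RTE_BIT32(1) /* stand-in for MLX5_TXOFF_CONFIG_TSO */

static uint32_t
tx_offloads_to_cfg(uint64_t tx_offloads)
{
	uint32_t olx = 0;

	/* Any checksum request enables the checksum-capable routine. */
	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
			   RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM))
		olx |= CFG_CSUM;
	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)
		olx |= CFG_TSO;
	return olx;
}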
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 1f92250f5edd..02bb9307ae61 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -98,42 +98,42 @@ uint64_t
 mlx5_get_tx_port_offloads(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	uint64_t offloads = (DEV_TX_OFFLOAD_MULTI_SEGS |
-			     DEV_TX_OFFLOAD_VLAN_INSERT);
+	uint64_t offloads = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+			     RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
 	struct mlx5_dev_config *config = &priv->config;
 
 	if (config->hw_csum)
-		offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_UDP_CKSUM |
-			     DEV_TX_OFFLOAD_TCP_CKSUM);
+		offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
 	if (config->tso)
-		offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+		offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	if (config->tx_pp)
-		offloads |= DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP;
+		offloads |= RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP;
 	if (config->swp) {
 		if (config->swp & MLX5_SW_PARSING_CSUM_CAP)
-			offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+			offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 		if (config->swp & MLX5_SW_PARSING_TSO_CAP)
-			offloads |= (DEV_TX_OFFLOAD_IP_TNL_TSO |
-				     DEV_TX_OFFLOAD_UDP_TNL_TSO);
+			offloads |= (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 	}
 	if (config->tunnel_en) {
 		if (config->hw_csum)
-			offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+			offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 		if (config->tso) {
 			if (config->tunnel_en &
 				MLX5_TUNNELED_OFFLOADS_VXLAN_CAP)
-				offloads |= DEV_TX_OFFLOAD_VXLAN_TNL_TSO;
+				offloads |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
 			if (config->tunnel_en &
 				MLX5_TUNNELED_OFFLOADS_GRE_CAP)
-				offloads |= DEV_TX_OFFLOAD_GRE_TNL_TSO;
+				offloads |= RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO;
 			if (config->tunnel_en &
 				MLX5_TUNNELED_OFFLOADS_GENEVE_CAP)
-				offloads |= DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+				offloads |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
 		}
 	}
 	if (!config->mprq.enabled)
-		offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	return offloads;
 }
 
@@ -801,17 +801,17 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 	unsigned int inlen_mode; /* Minimal required Inline data. */
 	unsigned int txqs_inline; /* Min Tx queues to enable inline. */
 	uint64_t dev_txoff = priv->dev_data->dev_conf.txmode.offloads;
-	bool tso = txq_ctrl->txq.offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-					    DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-					    DEV_TX_OFFLOAD_GRE_TNL_TSO |
-					    DEV_TX_OFFLOAD_IP_TNL_TSO |
-					    DEV_TX_OFFLOAD_UDP_TNL_TSO);
+	bool tso = txq_ctrl->txq.offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+					    RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+					    RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+					    RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+					    RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 	bool vlan_inline;
 	unsigned int temp;
 
 	txq_ctrl->txq.fast_free =
-		!!((txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
-		   !(txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MULTI_SEGS) &&
+		!!((txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
+		   !(txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) &&
 		   !config->mprq.enabled);
 	if (config->txqs_inline == MLX5_ARG_UNSET)
 		txqs_inline =
@@ -870,7 +870,7 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 	 * tx_burst routine.
 	 */
 	txq_ctrl->txq.vlan_en = config->hw_vlan_insert;
-	vlan_inline = (dev_txoff & DEV_TX_OFFLOAD_VLAN_INSERT) &&
+	vlan_inline = (dev_txoff & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) &&
 		      !config->hw_vlan_insert;
 	/*
 	 * If there are few Tx queues it is prioritized
@@ -978,19 +978,19 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 						    MLX5_MAX_TSO_HEADER);
 		txq_ctrl->txq.tso_en = 1;
 	}
-	if (((DEV_TX_OFFLOAD_VXLAN_TNL_TSO & txq_ctrl->txq.offloads) &&
+	if (((RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO & txq_ctrl->txq.offloads) &&
 	    (config->tunnel_en & MLX5_TUNNELED_OFFLOADS_VXLAN_CAP)) |
-	   ((DEV_TX_OFFLOAD_GRE_TNL_TSO & txq_ctrl->txq.offloads) &&
+	   ((RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO & txq_ctrl->txq.offloads) &&
 	    (config->tunnel_en & MLX5_TUNNELED_OFFLOADS_GRE_CAP)) |
-	   ((DEV_TX_OFFLOAD_GENEVE_TNL_TSO & txq_ctrl->txq.offloads) &&
+	   ((RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO & txq_ctrl->txq.offloads) &&
 	    (config->tunnel_en & MLX5_TUNNELED_OFFLOADS_GENEVE_CAP)) |
 	   (config->swp  & MLX5_SW_PARSING_TSO_CAP))
 		txq_ctrl->txq.tunnel_en = 1;
-	txq_ctrl->txq.swp_en = (((DEV_TX_OFFLOAD_IP_TNL_TSO |
-				  DEV_TX_OFFLOAD_UDP_TNL_TSO) &
+	txq_ctrl->txq.swp_en = (((RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO) &
 				  txq_ctrl->txq.offloads) && (config->swp &
 				  MLX5_SW_PARSING_TSO_CAP)) |
-				((DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM &
+				((RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM &
 				 txq_ctrl->txq.offloads) && (config->swp &
 				 MLX5_SW_PARSING_CSUM_CAP));
 }
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 60f97f2d2d1f..07792fc5d94f 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -142,9 +142,9 @@ mlx5_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct mlx5_priv *priv = dev->data->dev_private;
 	unsigned int i;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		int hw_vlan_strip = !!(dev->data->dev_conf.rxmode.offloads &
-				       DEV_RX_OFFLOAD_VLAN_STRIP);
+				       RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 		if (!priv->config.hw_vlan_strip) {
 			DRV_LOG(ERR, "port %u VLAN stripping is not supported",
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 8937ec0d3037..7f7b545ca63a 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -485,8 +485,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	 * Remove this check once DPDK supports larger/variable
 	 * indirection tables.
 	 */
-	if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
-		config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+	if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+		config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
 	DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
 		config->ind_table_max_size);
 	if (config->hw_padding) {
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index 2a0288087357..10fe6d828ccd 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -114,7 +114,7 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
 	struct mvneta_priv *priv = dev->data->dev_private;
 	struct neta_ppio_params *ppio_params;
 
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE) {
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) {
 		MVNETA_LOG(INFO, "Unsupported RSS and rx multi queue mode %d",
 			dev->data->dev_conf.rxmode.mq_mode);
 		if (dev->data->nb_rx_queues > 1)
@@ -126,7 +126,7 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		priv->multiseg = 1;
 
 	ppio_params = &priv->ppio_params;
@@ -151,10 +151,10 @@ static int
 mvneta_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
 		   struct rte_eth_dev_info *info)
 {
-	info->speed_capa = ETH_LINK_SPEED_10M |
-			   ETH_LINK_SPEED_100M |
-			   ETH_LINK_SPEED_1G |
-			   ETH_LINK_SPEED_2_5G;
+	info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			   RTE_ETH_LINK_SPEED_100M |
+			   RTE_ETH_LINK_SPEED_1G |
+			   RTE_ETH_LINK_SPEED_2_5G;
 
 	info->max_rx_queues = MRVL_NETA_RXQ_MAX;
 	info->max_tx_queues = MRVL_NETA_TXQ_MAX;
@@ -503,28 +503,28 @@ mvneta_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 
 	switch (ethtool_cmd_speed(&edata)) {
 	case SPEED_10:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case SPEED_100:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case SPEED_1000:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case SPEED_2500:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	default:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	}
 
-	dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
-							 ETH_LINK_HALF_DUPLEX;
-	dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
-							   ETH_LINK_FIXED;
+	dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+							 RTE_ETH_LINK_HALF_DUPLEX;
+	dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+							   RTE_ETH_LINK_FIXED;
 
 	neta_ppio_get_link_state(priv->ppio, &link_up);
-	dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
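
mvneta_link_update() translates the kernel ethtool speed into the renamed
RTE_ETH_SPEED_NUM_* constants, which are conveniently defined as the speed in Mb/s.
A sketch of that mapping (not part of the patch), with the raw Mb/s values standing
in for the kernel SPEED_* macros:

#include <stdint.h>
#include <rte_ethdev.h>

static uint32_t
ethtool_speed_to_dpdk(uint32_t speed_mbps)
{
	switch (speed_mbps) {
	case 10:   return RTE_ETH_SPEED_NUM_10M;
	case 100:  return RTE_ETH_SPEED_NUM_100M;
	case 1000: return RTE_ETH_SPEED_NUM_1G;
	case 2500: return RTE_ETH_SPEED_NUM_2_5G;
	default:   return RTE_ETH_SPEED_NUM_NONE;
	}
}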
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index 6428f9ff7931..64aadcffd85a 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -54,14 +54,14 @@
 #define MRVL_NETA_MRU_TO_MTU(mru)	((mru) - MRVL_NETA_HDRS_LEN)
 
 /** Rx offloads capabilities */
-#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_CHECKSUM)
+#define MVNETA_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_CHECKSUM)
 
 /** Tx offloads capabilities */
-#define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				    DEV_TX_OFFLOAD_UDP_CKSUM  | \
-				    DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MVNETA_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+				    RTE_ETH_TX_OFFLOAD_UDP_CKSUM  | \
+				    RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 #define MVNETA_TX_OFFLOADS (MVNETA_TX_OFFLOAD_CHECKSUM | \
-			    DEV_TX_OFFLOAD_MULTI_SEGS)
+			    RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define MVNETA_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
 				PKT_TX_TCP_CKSUM | \
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index 9836bb071a82..62d8aa586dae 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -734,7 +734,7 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	rxq->priv = priv;
 	rxq->mp = mp;
 	rxq->cksum_enabled = dev->data->dev_conf.rxmode.offloads &
-			     DEV_RX_OFFLOAD_IPV4_CKSUM;
+			     RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 	rxq->queue_id = idx;
 	rxq->port_id = dev->data->port_id;
 	rxq->size = desc;
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index a6458d2ce9b5..d0746b0d1215 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -58,15 +58,15 @@
 #define MRVL_COOKIE_HIGH_ADDR_MASK 0xffffff0000000000
 
 /** Port Rx offload capabilities */
-#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
-			  DEV_RX_OFFLOAD_CHECKSUM)
+#define MRVL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+			  RTE_ETH_RX_OFFLOAD_CHECKSUM)
 
 /** Port Tx offloads capabilities */
-#define MRVL_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				  DEV_TX_OFFLOAD_UDP_CKSUM  | \
-				  DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MRVL_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM  | \
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 #define MRVL_TX_OFFLOADS (MRVL_TX_OFFLOAD_CHECKSUM | \
-			  DEV_TX_OFFLOAD_MULTI_SEGS)
+			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define MRVL_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
 			      PKT_TX_TCP_CKSUM | \
@@ -442,14 +442,14 @@ mrvl_configure_rss(struct mrvl_priv *priv, struct rte_eth_rss_conf *rss_conf)
 
 	if (rss_conf->rss_hf == 0) {
 		priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
-	} else if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+	} else if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
 		priv->ppio_params.inqs_params.hash_type =
 			PP2_PPIO_HASH_T_2_TUPLE;
-	} else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	} else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		priv->ppio_params.inqs_params.hash_type =
 			PP2_PPIO_HASH_T_5_TUPLE;
 		priv->rss_hf_tcp = 1;
-	} else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	} else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		priv->ppio_params.inqs_params.hash_type =
 			PP2_PPIO_HASH_T_5_TUPLE;
 		priv->rss_hf_tcp = 0;
@@ -483,8 +483,8 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE &&
-	    dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE &&
+	    dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		MRVL_LOG(INFO, "Unsupported rx multi queue mode %d",
 			dev->data->dev_conf.rxmode.mq_mode);
 		return -EINVAL;
@@ -502,7 +502,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		priv->multiseg = 1;
 
 	ret = mrvl_configure_rxqs(priv, dev->data->port_id,
@@ -524,7 +524,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		return ret;
 
 	if (dev->data->nb_rx_queues == 1 &&
-	    dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	    dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		MRVL_LOG(WARNING, "Disabling hash for 1 rx queue");
 		priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
 		priv->configured = 1;
@@ -623,7 +623,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
 	int ret;
 
 	if (!priv->ppio) {
-		dev->data->dev_link.link_status = ETH_LINK_UP;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 		return 0;
 	}
 
@@ -644,7 +644,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -664,14 +664,14 @@ mrvl_dev_set_link_down(struct rte_eth_dev *dev)
 	int ret;
 
 	if (!priv->ppio) {
-		dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 		return 0;
 	}
 	ret = pp2_ppio_disable(priv->ppio);
 	if (ret)
 		return ret;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
@@ -893,7 +893,7 @@ mrvl_dev_start(struct rte_eth_dev *dev)
 	if (dev->data->all_multicast == 1)
 		mrvl_allmulticast_enable(dev);
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		ret = mrvl_populate_vlan_table(dev, 1);
 		if (ret) {
 			MRVL_LOG(ERR, "Failed to populate VLAN table");
@@ -929,11 +929,11 @@ mrvl_dev_start(struct rte_eth_dev *dev)
 		priv->flow_ctrl = 0;
 	}
 
-	if (dev->data->dev_link.link_status == ETH_LINK_UP) {
+	if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
 		ret = mrvl_dev_set_link_up(dev);
 		if (ret) {
 			MRVL_LOG(ERR, "Failed to set link up");
-			dev->data->dev_link.link_status = ETH_LINK_DOWN;
+			dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 			goto out;
 		}
 	}
@@ -1202,30 +1202,30 @@ mrvl_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 
 	switch (ethtool_cmd_speed(&edata)) {
 	case SPEED_10:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case SPEED_100:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case SPEED_1000:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case SPEED_2500:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	case SPEED_10000:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	default:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	}
 
-	dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
-							 ETH_LINK_HALF_DUPLEX;
-	dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
-							   ETH_LINK_FIXED;
+	dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+							 RTE_ETH_LINK_HALF_DUPLEX;
+	dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+							   RTE_ETH_LINK_FIXED;
 	pp2_ppio_get_link_state(priv->ppio, &link_up);
-	dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -1709,11 +1709,11 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
 {
 	struct mrvl_priv *priv = dev->data->dev_private;
 
-	info->speed_capa = ETH_LINK_SPEED_10M |
-			   ETH_LINK_SPEED_100M |
-			   ETH_LINK_SPEED_1G |
-			   ETH_LINK_SPEED_2_5G |
-			   ETH_LINK_SPEED_10G;
+	info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			   RTE_ETH_LINK_SPEED_100M |
+			   RTE_ETH_LINK_SPEED_1G |
+			   RTE_ETH_LINK_SPEED_2_5G |
+			   RTE_ETH_LINK_SPEED_10G;
 
 	info->max_rx_queues = MRVL_PP2_RXQ_MAX;
 	info->max_tx_queues = MRVL_PP2_TXQ_MAX;
@@ -1733,9 +1733,9 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
 	info->tx_offload_capa = MRVL_TX_OFFLOADS;
 	info->tx_queue_offload_capa = MRVL_TX_OFFLOADS;
 
-	info->flow_type_rss_offloads = ETH_RSS_IPV4 |
-				       ETH_RSS_NONFRAG_IPV4_TCP |
-				       ETH_RSS_NONFRAG_IPV4_UDP;
+	info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+				       RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				       RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	/* By default packets are dropped if no descriptors are available */
 	info->default_rxconf.rx_drop_en = 1;
@@ -1864,13 +1864,13 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
 	int ret;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		MRVL_LOG(ERR, "VLAN stripping is not supported\n");
 		return -ENOTSUP;
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ret = mrvl_populate_vlan_table(dev, 1);
 		else
 			ret = mrvl_populate_vlan_table(dev, 0);
@@ -1879,7 +1879,7 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			return ret;
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
 		MRVL_LOG(ERR, "Extend VLAN not supported\n");
 		return -ENOTSUP;
 	}
@@ -2022,7 +2022,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 
 	rxq->priv = priv;
 	rxq->mp = mp;
-	rxq->cksum_enabled = offloads & DEV_RX_OFFLOAD_IPV4_CKSUM;
+	rxq->cksum_enabled = offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 	rxq->queue_id = idx;
 	rxq->port_id = dev->data->port_id;
 	mrvl_port_to_bpool_lookup[rxq->port_id] = priv->bpool;
@@ -2182,7 +2182,7 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		return ret;
 	}
 
-	fc_conf->mode = en ? RTE_FC_RX_PAUSE : RTE_FC_NONE;
+	fc_conf->mode = en ? RTE_ETH_FC_RX_PAUSE : RTE_ETH_FC_NONE;
 
 	ret = pp2_ppio_get_tx_pause(priv->ppio, &en);
 	if (ret) {
@@ -2191,10 +2191,10 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	if (en) {
-		if (fc_conf->mode == RTE_FC_NONE)
-			fc_conf->mode = RTE_FC_TX_PAUSE;
+		if (fc_conf->mode == RTE_ETH_FC_NONE)
+			fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		else
-			fc_conf->mode = RTE_FC_FULL;
+			fc_conf->mode = RTE_ETH_FC_FULL;
 	}
 
 	return 0;
@@ -2240,19 +2240,19 @@ mrvl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		rx_en = 1;
 		tx_en = 1;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		rx_en = 0;
 		tx_en = 1;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		rx_en = 1;
 		tx_en = 0;
 		break;
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		rx_en = 0;
 		tx_en = 0;
 		break;
@@ -2329,11 +2329,11 @@ mrvl_rss_hash_conf_get(struct rte_eth_dev *dev,
 	if (hash_type == PP2_PPIO_HASH_T_NONE)
 		rss_conf->rss_hf = 0;
 	else if (hash_type == PP2_PPIO_HASH_T_2_TUPLE)
-		rss_conf->rss_hf = ETH_RSS_IPV4;
+		rss_conf->rss_hf = RTE_ETH_RSS_IPV4;
 	else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && priv->rss_hf_tcp)
-		rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && !priv->rss_hf_tcp)
-		rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	return 0;
 }
@@ -3152,7 +3152,7 @@ mrvl_eth_dev_create(struct rte_vdev_device *vdev, const char *name)
 	eth_dev->dev_ops = &mrvl_ops;
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	rte_eth_dev_probing_finish(eth_dev);
 	return 0;
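
mrvl_flow_ctrl_get()/..._set() translate between the single RTE_ETH_FC_* mode and
the PPIO's independent Rx/Tx pause knobs. A minimal sketch of the composition half,
using plain booleans for the assumed hardware state:

#include <stdbool.h>
#include <rte_ethdev.h>

/* Compose the RTE_ETH_FC_* mode from independent pause state, mirroring
 * the logic in mrvl_flow_ctrl_get() above. */
static enum rte_eth_fc_mode
fc_mode_from_pause(bool rx_pause, bool tx_pause)
{
	if (rx_pause && tx_pause)
		return RTE_ETH_FC_FULL;
	if (tx_pause)
		return RTE_ETH_FC_TX_PAUSE;
	if (rx_pause)
		return RTE_ETH_FC_RX_PAUSE;
	return RTE_ETH_FC_NONE;
}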
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9e2a40597349..9c4ae80e7e16 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -40,16 +40,16 @@
 #include "hn_nvs.h"
 #include "ndis.h"
 
-#define HN_TX_OFFLOAD_CAPS (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-			    DEV_TX_OFFLOAD_TCP_CKSUM  | \
-			    DEV_TX_OFFLOAD_UDP_CKSUM  | \
-			    DEV_TX_OFFLOAD_TCP_TSO    | \
-			    DEV_TX_OFFLOAD_MULTI_SEGS | \
-			    DEV_TX_OFFLOAD_VLAN_INSERT)
+#define HN_TX_OFFLOAD_CAPS (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+			    RTE_ETH_TX_OFFLOAD_TCP_CKSUM  | \
+			    RTE_ETH_TX_OFFLOAD_UDP_CKSUM  | \
+			    RTE_ETH_TX_OFFLOAD_TCP_TSO    | \
+			    RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+			    RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 
-#define HN_RX_OFFLOAD_CAPS (DEV_RX_OFFLOAD_CHECKSUM | \
-			    DEV_RX_OFFLOAD_VLAN_STRIP | \
-			    DEV_RX_OFFLOAD_RSS_HASH)
+#define HN_RX_OFFLOAD_CAPS (RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+			    RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+			    RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define NETVSC_ARG_LATENCY "latency"
 #define NETVSC_ARG_RXBREAK "rx_copybreak"
@@ -238,21 +238,21 @@ hn_dev_link_update(struct rte_eth_dev *dev,
 	hn_rndis_get_linkspeed(hv);
 
 	link = (struct rte_eth_link) {
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_autoneg = ETH_LINK_SPEED_FIXED,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_autoneg = RTE_ETH_LINK_SPEED_FIXED,
 		.link_speed = hv->link_speed / 10000,
 	};
 
 	if (hv->link_status == NDIS_MEDIA_STATE_CONNECTED)
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 	else
-		link.link_status = ETH_LINK_DOWN;
+		link.link_status = RTE_ETH_LINK_DOWN;
 
 	if (old.link_status == link.link_status)
 		return 0;
 
 	PMD_INIT_LOG(DEBUG, "Port %d is %s", dev->data->port_id,
-		     (link.link_status == ETH_LINK_UP) ? "up" : "down");
+		     (link.link_status == RTE_ETH_LINK_UP) ? "up" : "down");
 
 	return rte_eth_linkstatus_set(dev, &link);
 }
@@ -263,14 +263,14 @@ static int hn_dev_info_get(struct rte_eth_dev *dev,
 	struct hn_data *hv = dev->data->dev_private;
 	int rc;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
 	dev_info->min_rx_bufsize = HN_MIN_RX_BUF_SIZE;
 	dev_info->max_rx_pktlen  = HN_MAX_XFER_LEN;
 	dev_info->max_mac_addrs  = 1;
 
 	dev_info->hash_key_size = NDIS_HASH_KEYSIZE_TOEPLITZ;
 	dev_info->flow_type_rss_offloads = hv->rss_offloads;
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 
 	dev_info->max_rx_queues = hv->max_queues;
 	dev_info->max_tx_queues = hv->max_queues;
@@ -306,8 +306,8 @@ static int hn_rss_reta_update(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < NDIS_HASH_INDCNT; i++) {
-		uint16_t idx = i / RTE_RETA_GROUP_SIZE;
-		uint16_t shift = i % RTE_RETA_GROUP_SIZE;
+		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint64_t mask = (uint64_t)1 << shift;
 
 		if (reta_conf[idx].mask & mask)
@@ -346,8 +346,8 @@ static int hn_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < NDIS_HASH_INDCNT; i++) {
-		uint16_t idx = i / RTE_RETA_GROUP_SIZE;
-		uint16_t shift = i % RTE_RETA_GROUP_SIZE;
+		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint64_t mask = (uint64_t)1 << shift;
 
 		if (reta_conf[idx].mask & mask)
@@ -362,17 +362,17 @@ static void hn_rss_hash_init(struct hn_data *hv,
 	/* Convert from DPDK RSS hash flags to NDIS hash flags */
 	hv->rss_hash = NDIS_HASH_FUNCTION_TOEPLITZ;
 
-	if (rss_conf->rss_hf & ETH_RSS_IPV4)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
 		hv->rss_hash |= NDIS_HASH_IPV4;
-	if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		hv->rss_hash |= NDIS_HASH_TCP_IPV4;
-	if (rss_conf->rss_hf & ETH_RSS_IPV6)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
 		hv->rss_hash |=  NDIS_HASH_IPV6;
-	if (rss_conf->rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX)
 		hv->rss_hash |=  NDIS_HASH_IPV6_EX;
-	if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		hv->rss_hash |= NDIS_HASH_TCP_IPV6;
-	if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		hv->rss_hash |= NDIS_HASH_TCP_IPV6_EX;
 
 	memcpy(hv->rss_key, rss_conf->rss_key ? : rss_default_key,
@@ -427,22 +427,22 @@ static int hn_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	rss_conf->rss_hf = 0;
 	if (hv->rss_hash & NDIS_HASH_IPV4)
-		rss_conf->rss_hf |= ETH_RSS_IPV4;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
 
 	if (hv->rss_hash & NDIS_HASH_TCP_IPV4)
-		rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 
 	if (hv->rss_hash & NDIS_HASH_IPV6)
-		rss_conf->rss_hf |= ETH_RSS_IPV6;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
 
 	if (hv->rss_hash & NDIS_HASH_IPV6_EX)
-		rss_conf->rss_hf |= ETH_RSS_IPV6_EX;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_EX;
 
 	if (hv->rss_hash & NDIS_HASH_TCP_IPV6)
-		rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 
 	if (hv->rss_hash & NDIS_HASH_TCP_IPV6_EX)
-		rss_conf->rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 
 	return 0;
 }
@@ -686,8 +686,8 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev_conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev_conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	unsupported = txmode->offloads & ~HN_TX_OFFLOAD_CAPS;
 	if (unsupported) {
@@ -705,7 +705,7 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	hv->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	hv->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 	err = hn_rndis_conf_offload(hv, txmode->offloads,
 				    rxmode->offloads);
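
hn_dev_link_update() fills struct rte_eth_link with the renamed constants. Note it
assigns RTE_ETH_LINK_SPEED_FIXED (a speed-capability flag) to link_autoneg, where
RTE_ETH_LINK_FIXED is the matching name; both evaluate to 0, so this rename-only
patch rightly keeps the behaviour. A sketch of the fill pattern, with speed_mbps
and up as assumed inputs:

#include <stdint.h>
#include <rte_ethdev.h>

static struct rte_eth_link
make_link(uint32_t speed_mbps, int up)
{
	struct rte_eth_link link = {
		.link_speed = speed_mbps,
		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
		.link_autoneg = RTE_ETH_LINK_FIXED, /* same value (0) as
						     * RTE_ETH_LINK_SPEED_FIXED */
		.link_status = up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN,
	};
	return link;
}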
diff --git a/drivers/net/netvsc/hn_rndis.c b/drivers/net/netvsc/hn_rndis.c
index 62ba39636cd8..1b63b27e0c3e 100644
--- a/drivers/net/netvsc/hn_rndis.c
+++ b/drivers/net/netvsc/hn_rndis.c
@@ -710,15 +710,15 @@ hn_rndis_query_rsscaps(struct hn_data *hv,
 
 	hv->rss_offloads = 0;
 	if (caps.ndis_caps & NDIS_RSS_CAP_IPV4)
-		hv->rss_offloads |= ETH_RSS_IPV4
-			| ETH_RSS_NONFRAG_IPV4_TCP
-			| ETH_RSS_NONFRAG_IPV4_UDP;
+		hv->rss_offloads |= RTE_ETH_RSS_IPV4
+			| RTE_ETH_RSS_NONFRAG_IPV4_TCP
+			| RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (caps.ndis_caps & NDIS_RSS_CAP_IPV6)
-		hv->rss_offloads |= ETH_RSS_IPV6
-			| ETH_RSS_NONFRAG_IPV6_TCP;
+		hv->rss_offloads |= RTE_ETH_RSS_IPV6
+			| RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (caps.ndis_caps & NDIS_RSS_CAP_IPV6_EX)
-		hv->rss_offloads |= ETH_RSS_IPV6_EX
-			| ETH_RSS_IPV6_TCP_EX;
+		hv->rss_offloads |= RTE_ETH_RSS_IPV6_EX
+			| RTE_ETH_RSS_IPV6_TCP_EX;
 
 	/* Commit! */
 	*rxr_cnt0 = rxr_cnt;
@@ -800,7 +800,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 		params.ndis_hdr.ndis_size = NDIS_OFFLOAD_PARAMS_SIZE;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_TCP4)
 			params.ndis_tcp4csum = NDIS_OFFLOAD_PARAM_TX;
 		else
@@ -812,7 +812,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) {
 		if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4)
 		    == NDIS_RXCSUM_CAP_TCP4)
 			params.ndis_tcp4csum |= NDIS_OFFLOAD_PARAM_RX;
@@ -826,7 +826,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4)
 			params.ndis_udp4csum = NDIS_OFFLOAD_PARAM_TX;
 		else
@@ -839,7 +839,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (rx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+	if (rx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4)
 			params.ndis_udp4csum |= NDIS_OFFLOAD_PARAM_RX;
 		else
@@ -851,21 +851,21 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
 		if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_IP4)
 		    == NDIS_TXCSUM_CAP_IP4)
 			params.ndis_ip4csum = NDIS_OFFLOAD_PARAM_TX;
 		else
 			goto unsupported;
 	}
-	if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
 			params.ndis_ip4csum |= NDIS_OFFLOAD_PARAM_RX;
 		else
 			goto unsupported;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		if (hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023)
 			params.ndis_lsov2_ip4 = NDIS_OFFLOAD_LSOV2_ON;
 		else
@@ -907,41 +907,41 @@ int hn_rndis_get_offload(struct hn_data *hv,
 		return error;
 	}
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-				    DEV_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				    RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_IP4)
 	    == HN_NDIS_TXCSUM_CAP_IP4)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_TCP4)
 	    == HN_NDIS_TXCSUM_CAP_TCP4 &&
 	    (hwcaps.ndis_csum.ndis_ip6_txcsum & HN_NDIS_TXCSUM_CAP_TCP6)
 	    == HN_NDIS_TXCSUM_CAP_TCP6)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4) &&
 	    (hwcaps.ndis_csum.ndis_ip6_txcsum & NDIS_TXCSUM_CAP_UDP6))
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_UDP_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
 
 	if ((hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023) &&
 	    (hwcaps.ndis_lsov2.ndis_ip6_opts & HN_NDIS_LSOV2_CAP_IP6)
 	    == HN_NDIS_LSOV2_CAP_IP6)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
-				    DEV_RX_OFFLOAD_RSS_HASH;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				    RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4) &&
 	    (hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_TCP6))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4) &&
 	    (hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_UDP6))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_UDP_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
 
 	return 0;
 }
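
Two things worth flagging in hn_rndis.c: the `rx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM`
test preserves a pre-existing Rx-vs-Tx flag mix-up from the old code — the right call
for a rename-only patch, but a candidate for a separate fix — and hn_rndis_query_rsscaps()
shows the usual translation from hardware capability bits to RTE_ETH_RSS_* flags.
A hedged sketch of that translation, with hypothetical MY_CAP_* bits standing in for
the NDIS_RSS_CAP_* values:

#include <stdint.h>
#include <rte_ethdev.h>

#define MY_CAP_IPV4 0x1 /* hypothetical, stands in for NDIS_RSS_CAP_IPV4 */
#define MY_CAP_IPV6 0x2 /* hypothetical, stands in for NDIS_RSS_CAP_IPV6 */

static uint64_t
hw_caps_to_rss_hf(uint32_t caps)
{
	uint64_t rss_hf = 0;

	if (caps & MY_CAP_IPV4)
		rss_hf |= RTE_ETH_RSS_IPV4 |
			  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
			  RTE_ETH_RSS_NONFRAG_IPV4_UDP;
	if (caps & MY_CAP_IPV6)
		rss_hf |= RTE_ETH_RSS_IPV6 |
			  RTE_ETH_RSS_NONFRAG_IPV6_TCP;
	return rss_hf;
}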
diff --git a/drivers/net/nfb/nfb_ethdev.c b/drivers/net/nfb/nfb_ethdev.c
index 99d93ebf4667..3c39937816a4 100644
--- a/drivers/net/nfb/nfb_ethdev.c
+++ b/drivers/net/nfb/nfb_ethdev.c
@@ -200,7 +200,7 @@ nfb_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_rx_pktlen = (uint32_t)-1;
 	dev_info->max_rx_queues = dev->data->nb_rx_queues;
 	dev_info->max_tx_queues = dev->data->nb_tx_queues;
-	dev_info->speed_capa = ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -268,26 +268,26 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
 
 	status.speed = MAC_SPEED_UNKNOWN;
 
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_status = ETH_LINK_DOWN;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_autoneg = ETH_LINK_SPEED_FIXED;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_SPEED_FIXED;
 
 	if (internals->rxmac[0] != NULL) {
 		nc_rxmac_read_status(internals->rxmac[0], &status);
 
 		switch (status.speed) {
 		case MAC_SPEED_10G:
-			link.link_speed = ETH_SPEED_NUM_10G;
+			link.link_speed = RTE_ETH_SPEED_NUM_10G;
 			break;
 		case MAC_SPEED_40G:
-			link.link_speed = ETH_SPEED_NUM_40G;
+			link.link_speed = RTE_ETH_SPEED_NUM_40G;
 			break;
 		case MAC_SPEED_100G:
-			link.link_speed = ETH_SPEED_NUM_100G;
+			link.link_speed = RTE_ETH_SPEED_NUM_100G;
 			break;
 		default:
-			link.link_speed = ETH_SPEED_NUM_NONE;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 			break;
 		}
 	}
@@ -296,7 +296,7 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
 		nc_rxmac_read_status(internals->rxmac[i], &status);
 
 		if (status.enabled && status.link_up) {
-			link.link_status = ETH_LINK_UP;
+			link.link_status = RTE_ETH_LINK_UP;
 			break;
 		}
 	}
diff --git a/drivers/net/nfb/nfb_rx.c b/drivers/net/nfb/nfb_rx.c
index 3ebb332ae46c..f76e2ba64621 100644
--- a/drivers/net/nfb/nfb_rx.c
+++ b/drivers/net/nfb/nfb_rx.c
@@ -42,7 +42,7 @@ nfb_check_timestamp(struct rte_devargs *devargs)
 	}
 	/* Timestamps are enabled when there is
 	 * key-value pair: enable_timestamp=1
-	 * TODO: timestamp should be enabled with DEV_RX_OFFLOAD_TIMESTAMP
+	 * TODO: timestamp should be enabled with RTE_ETH_RX_OFFLOAD_TIMESTAMP
 	 */
 	if (rte_kvargs_process(kvlist, TIMESTAMP_ARG,
 		timestamp_check_handler, NULL) < 0) {
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 0003fd54dde5..3ea697c54462 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -160,8 +160,8 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Checking TX mode */
 	if (txmode->mq_mode) {
@@ -170,7 +170,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	}
 
 	/* Checking RX mode */
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS &&
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS &&
 	    !(hw->cap & NFP_NET_CFG_CTRL_RSS)) {
 		PMD_INIT_LOG(INFO, "RSS not supported");
 		return -EINVAL;
@@ -359,19 +359,19 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
 		if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
 			ctrl |= NFP_NET_CFG_CTRL_RXCSUM;
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 		if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
 			ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
 	}
 
 	hw->mtu = dev->data->mtu;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
 
 	/* L2 broadcast */
@@ -383,13 +383,13 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 		ctrl |= NFP_NET_CFG_CTRL_L2MC;
 
 	/* TX checksum offload */
-	if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 		ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
 
 	/* LSO offload */
-	if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		if (hw->cap & NFP_NET_CFG_CTRL_LSO)
 			ctrl |= NFP_NET_CFG_CTRL_LSO;
 		else
@@ -397,7 +397,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	}
 
 	/* RX gather */
-	if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		ctrl |= NFP_NET_CFG_CTRL_GATHER;
 
 	return ctrl;
@@ -485,14 +485,14 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 	int ret;
 
 	static const uint32_t ls_to_ethtool[] = {
-		[NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = ETH_SPEED_NUM_NONE,
-		[NFP_NET_CFG_STS_LINK_RATE_UNKNOWN]     = ETH_SPEED_NUM_NONE,
-		[NFP_NET_CFG_STS_LINK_RATE_1G]          = ETH_SPEED_NUM_1G,
-		[NFP_NET_CFG_STS_LINK_RATE_10G]         = ETH_SPEED_NUM_10G,
-		[NFP_NET_CFG_STS_LINK_RATE_25G]         = ETH_SPEED_NUM_25G,
-		[NFP_NET_CFG_STS_LINK_RATE_40G]         = ETH_SPEED_NUM_40G,
-		[NFP_NET_CFG_STS_LINK_RATE_50G]         = ETH_SPEED_NUM_50G,
-		[NFP_NET_CFG_STS_LINK_RATE_100G]        = ETH_SPEED_NUM_100G,
+		[NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = RTE_ETH_SPEED_NUM_NONE,
+		[NFP_NET_CFG_STS_LINK_RATE_UNKNOWN]     = RTE_ETH_SPEED_NUM_NONE,
+		[NFP_NET_CFG_STS_LINK_RATE_1G]          = RTE_ETH_SPEED_NUM_1G,
+		[NFP_NET_CFG_STS_LINK_RATE_10G]         = RTE_ETH_SPEED_NUM_10G,
+		[NFP_NET_CFG_STS_LINK_RATE_25G]         = RTE_ETH_SPEED_NUM_25G,
+		[NFP_NET_CFG_STS_LINK_RATE_40G]         = RTE_ETH_SPEED_NUM_40G,
+		[NFP_NET_CFG_STS_LINK_RATE_50G]         = RTE_ETH_SPEED_NUM_50G,
+		[NFP_NET_CFG_STS_LINK_RATE_100G]        = RTE_ETH_SPEED_NUM_100G,
 	};
 
 	PMD_DRV_LOG(DEBUG, "Link update");
@@ -504,15 +504,15 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 	memset(&link, 0, sizeof(struct rte_eth_link));
 
 	if (nn_link_status & NFP_NET_CFG_STS_LINK)
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	nn_link_status = (nn_link_status >> NFP_NET_CFG_STS_LINK_RATE_SHIFT) &
 			 NFP_NET_CFG_STS_LINK_RATE_MASK;
 
 	if (nn_link_status >= RTE_DIM(ls_to_ethtool))
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	else
 		link.link_speed = ls_to_ethtool[nn_link_status];
 
@@ -701,26 +701,26 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = 1;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
-		dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+		dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM |
-					     DEV_RX_OFFLOAD_UDP_CKSUM |
-					     DEV_RX_OFFLOAD_TCP_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+					     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+					     RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)
-		dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_TXCSUM)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM |
-					     DEV_TX_OFFLOAD_UDP_CKSUM |
-					     DEV_TX_OFFLOAD_TCP_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+					     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+					     RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_LSO_ANY)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_GATHER)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -757,22 +757,22 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	};
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
-		dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
-						   ETH_RSS_NONFRAG_IPV4_TCP |
-						   ETH_RSS_NONFRAG_IPV4_UDP |
-						   ETH_RSS_IPV6 |
-						   ETH_RSS_NONFRAG_IPV6_TCP |
-						   ETH_RSS_NONFRAG_IPV6_UDP;
+		dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+						   RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+						   RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+						   RTE_ETH_RSS_IPV6 |
+						   RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+						   RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 		dev_info->reta_size = NFP_NET_CFG_RSS_ITBL_SZ;
 		dev_info->hash_key_size = NFP_NET_CFG_RSS_KEY_SZ;
 	}
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			       ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
-			       ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			       RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+			       RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -843,7 +843,7 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 	if (link.link_status)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 			    dev->data->port_id, link.link_speed,
-			    link.link_duplex == ETH_LINK_FULL_DUPLEX
+			    link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX
 			    ? "full-duplex" : "half-duplex");
 	else
 		PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -973,12 +973,12 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	new_ctrl = 0;
 
 	/* Enable vlan strip if it is not configured yet */
-	if ((mask & ETH_VLAN_STRIP_OFFLOAD) &&
+	if ((mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
 	    !(hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
 		new_ctrl = hw->ctrl | NFP_NET_CFG_CTRL_RXVLAN;
 
 	/* Disable vlan strip just if it is configured */
-	if (!(mask & ETH_VLAN_STRIP_OFFLOAD) &&
+	if (!(mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
 	    (hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
 		new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_RXVLAN;
 
@@ -1018,8 +1018,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 	 */
 	for (i = 0; i < reta_size; i += 4) {
 		/* Handling 4 RSS entries per loop */
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
 
 		if (!mask)
@@ -1099,8 +1099,8 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 	 */
 	for (i = 0; i < reta_size; i += 4) {
 		/* Handling 4 RSS entries per loop */
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
 
 		if (!mask)
@@ -1138,22 +1138,22 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 
 	rss_hf = rss_conf->rss_hf;
 
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_TCP;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_UDP;
 
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_TCP;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_UDP;
 
 	cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK;
@@ -1223,22 +1223,22 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 	cfg_rss_ctrl = nn_cfg_readl(hw, NFP_NET_CFG_RSS_CTRL);
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 	/* Propagate current RSS hash functions to caller */
 	rss_conf->rss_hf = rss_hf;
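
The nfp RETA update/query loops above use the standard reta_conf addressing: entry i
of the redirection table lives in group i / RTE_ETH_RETA_GROUP_SIZE at offset
i % RTE_ETH_RETA_GROUP_SIZE, and is only touched when the caller set the matching
mask bit. A sketch of a single-entry read under those rules:

#include <stdint.h>
#include <rte_ethdev.h>

static int
reta_entry_get(const struct rte_eth_rss_reta_entry64 *reta_conf,
	       uint16_t i, uint16_t *queue)
{
	uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
	uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

	if (!(reta_conf[idx].mask & (UINT64_C(1) << shift)))
		return -1; /* entry not selected by the caller */
	*queue = reta_conf[idx].reta[shift];
	return 0;
}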
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 1169ea77a8c7..e08e594b04fe 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -141,7 +141,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 62cb3536e0c9..817fe64dbceb 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -103,7 +103,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3b5c6615adfa..fc76b84b5b66 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -409,7 +409,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 	dev->data->dev_link.link_status = link_up;
 
 	link_speeds = &dev->data->dev_conf.link_speeds;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG)
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG)
 		negotiate = true;
 
 	err = hw->mac.get_link_capabilities(hw, &speed, &negotiate);
@@ -418,11 +418,11 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 
 	allowed_speeds = 0;
 	if (hw->mac.default_speeds & NGBE_LINK_SPEED_1GB_FULL)
-		allowed_speeds |= ETH_LINK_SPEED_1G;
+		allowed_speeds |= RTE_ETH_LINK_SPEED_1G;
 	if (hw->mac.default_speeds & NGBE_LINK_SPEED_100M_FULL)
-		allowed_speeds |= ETH_LINK_SPEED_100M;
+		allowed_speeds |= RTE_ETH_LINK_SPEED_100M;
 	if (hw->mac.default_speeds & NGBE_LINK_SPEED_10M_FULL)
-		allowed_speeds |= ETH_LINK_SPEED_10M;
+		allowed_speeds |= RTE_ETH_LINK_SPEED_10M;
 
 	if (*link_speeds & ~allowed_speeds) {
 		PMD_INIT_LOG(ERR, "Invalid link setting");
@@ -430,14 +430,14 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 	}
 
 	speed = 0x0;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		speed = hw->mac.default_speeds;
 	} else {
-		if (*link_speeds & ETH_LINK_SPEED_1G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed |= NGBE_LINK_SPEED_1GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_100M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed |= NGBE_LINK_SPEED_100M_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_10M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
 			speed |= NGBE_LINK_SPEED_10M_FULL;
 	}
 
@@ -653,8 +653,8 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->rx_desc_lim = rx_desc_lim;
 	dev_info->tx_desc_lim = tx_desc_lim;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_10M;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_10M;
 
 	/* Driver-preferred Rx/Tx parameters */
 	dev_info->default_rxportconf.burst_size = 32;
@@ -682,11 +682,11 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 	int wait = 1;
 
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			~ETH_LINK_SPEED_AUTONEG);
+			~RTE_ETH_LINK_SPEED_AUTONEG);
 
 	hw->mac.get_link_status = true;
 
@@ -699,8 +699,8 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 
 	err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
 	if (err != 0) {
-		link.link_speed = ETH_SPEED_NUM_NONE;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
@@ -708,27 +708,27 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 		return rte_eth_linkstatus_set(dev, &link);
 
 	intr->flags &= ~NGBE_FLAG_NEED_LINK_CONFIG;
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (link_speed) {
 	default:
 	case NGBE_LINK_SPEED_UNKNOWN:
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 
 	case NGBE_LINK_SPEED_10M_FULL:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		lan_speed = 0;
 		break;
 
 	case NGBE_LINK_SPEED_100M_FULL:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		lan_speed = 1;
 		break;
 
 	case NGBE_LINK_SPEED_1GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		lan_speed = 2;
 		break;
 	}
@@ -912,11 +912,11 @@ ngbe_dev_link_status_print(struct rte_eth_dev *dev)
 
 	rte_eth_linkstatus_get(dev, &link);
 
-	if (link.link_status == ETH_LINK_UP) {
+	if (link.link_status == RTE_ETH_LINK_UP) {
 		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned int)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -956,7 +956,7 @@ ngbe_dev_interrupt_action(struct rte_eth_dev *dev)
 		ngbe_dev_link_update(dev, 0);
 
 		/* likely to up */
-		if (link.link_status != ETH_LINK_UP)
+		if (link.link_status != RTE_ETH_LINK_UP)
 			/* handle it 1 sec later, wait it being stable */
 			timeout = NGBE_LINK_UP_CHECK_TIMEOUT;
 		/* likely to down */
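
ngbe_dev_start() validates dev_conf.link_speeds against what the MAC supports:
RTE_ETH_LINK_SPEED_AUTONEG is 0, so a zero mask means "let the device negotiate from
its defaults", and any requested bit outside the allowed mask is rejected. A minimal
sketch of that check (not part of the patch):

#include <stdint.h>
#include <rte_ethdev.h>

static int
check_link_speeds(uint32_t requested, uint32_t allowed)
{
	if (requested == RTE_ETH_LINK_SPEED_AUTONEG)
		return 0; /* autoneg: device picks from its default speeds */
	return (requested & ~allowed) ? -1 : 0;
}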
diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index 25b9e5b1ce1b..ca03469d0e6d 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -61,16 +61,16 @@ struct pmd_internals {
 	rte_spinlock_t rss_lock;
 
 	uint16_t reta_size;
-	struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_128 /
-			RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_128 /
+			RTE_ETH_RETA_GROUP_SIZE];
 
 	uint8_t rss_key[40];                /**< 40-byte hash key. */
 };
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(eth_null_logtype, NOTICE);
@@ -189,7 +189,7 @@ eth_dev_start(struct rte_eth_dev *dev)
 	if (dev == NULL)
 		return -EINVAL;
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -199,7 +199,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
 	if (dev == NULL)
 		return 0;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -391,9 +391,9 @@ eth_rss_reta_update(struct rte_eth_dev *dev,
 	rte_spinlock_lock(&internal->rss_lock);
 
 	/* Copy RETA table */
-	for (i = 0; i < (internal->reta_size / RTE_RETA_GROUP_SIZE); i++) {
+	for (i = 0; i < (internal->reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
 		internal->reta_conf[i].mask = reta_conf[i].mask;
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				internal->reta_conf[i].reta[j] = reta_conf[i].reta[j];
 	}
@@ -416,8 +416,8 @@ eth_rss_reta_query(struct rte_eth_dev *dev,
 	rte_spinlock_lock(&internal->rss_lock);
 
 	/* Copy RETA table */
-	for (i = 0; i < (internal->reta_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (internal->reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = internal->reta_conf[i].reta[j];
 	}
@@ -548,8 +548,8 @@ eth_dev_null_create(struct rte_vdev_device *dev, struct pmd_options *args)
 	internals->port_id = eth_dev->data->port_id;
 	rte_eth_random_addr(internals->eth_addr.addr_bytes);
 
-	internals->flow_type_rss_offloads =  ETH_RSS_PROTO_MASK;
-	internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_RETA_GROUP_SIZE;
+	internals->flow_type_rss_offloads =  RTE_ETH_RSS_PROTO_MASK;
+	internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_ETH_RETA_GROUP_SIZE;
 
 	rte_memcpy(internals->rss_key, default_rss_key, 40);
 
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index f578123ed00b..5b8cbec67b5d 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -158,7 +158,7 @@ octeontx_link_status_print(struct rte_eth_dev *eth_dev,
 		octeontx_log_info("Port %u: Link Up - speed %u Mbps - %s",
 			  (eth_dev->data->port_id),
 			  link->link_speed,
-			  link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+			  link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 			  "full-duplex" : "half-duplex");
 	else
 		octeontx_log_info("Port %d: Link Down",
@@ -171,38 +171,38 @@ octeontx_link_status_update(struct octeontx_nic *nic,
 {
 	memset(link, 0, sizeof(*link));
 
-	link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	switch (nic->speed) {
 	case OCTEONTX_LINK_SPEED_SGMII:
-		link->link_speed = ETH_SPEED_NUM_1G;
+		link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case OCTEONTX_LINK_SPEED_XAUI:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 
 	case OCTEONTX_LINK_SPEED_RXAUI:
 	case OCTEONTX_LINK_SPEED_10G_R:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case OCTEONTX_LINK_SPEED_QSGMII:
-		link->link_speed = ETH_SPEED_NUM_5G;
+		link->link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 	case OCTEONTX_LINK_SPEED_40G_R:
-		link->link_speed = ETH_SPEED_NUM_40G;
+		link->link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 
 	case OCTEONTX_LINK_SPEED_RESERVE1:
 	case OCTEONTX_LINK_SPEED_RESERVE2:
 	default:
-		link->link_speed = ETH_SPEED_NUM_NONE;
+		link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 		octeontx_log_err("incorrect link speed %d", nic->speed);
 		break;
 	}
 
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
-	link->link_autoneg = ETH_LINK_AUTONEG;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 }
 
 static void
@@ -355,20 +355,20 @@ octeontx_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
 	uint16_t flags = 0;
 
-	if (nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= OCCTX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (nic->tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= OCCTX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(nic->tx_offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= OCCTX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (nic->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= OCCTX_TX_MULTI_SEG_F;
 
 	return flags;
@@ -380,21 +380,21 @@ octeontx_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
 	uint16_t flags = 0;
 
-	if (nic->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
-			 DEV_RX_OFFLOAD_UDP_CKSUM))
+	if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			 RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= OCCTX_RX_OFFLOAD_CSUM_F;
 
-	if (nic->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= OCCTX_RX_OFFLOAD_CSUM_F;
 
-	if (nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		flags |= OCCTX_RX_MULTI_SEG_F;
 		eth_dev->data->scattered_rx = 1;
 		/* If scatter mode is enabled, TX should also be in multi
 		 * seg mode, else memory leak will occur
 		 */
-		nic->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		nic->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	}
 
 	return flags;
@@ -423,18 +423,18 @@ octeontx_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-		rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+		rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		octeontx_log_err("unsupported rx qmode %d", rxmode->mq_mode);
 		return -EINVAL;
 	}
 
-	if (!(txmode->offloads & DEV_TX_OFFLOAD_MT_LOCKFREE)) {
+	if (!(txmode->offloads & RTE_ETH_TX_OFFLOAD_MT_LOCKFREE)) {
 		PMD_INIT_LOG(NOTICE, "cant disable lockfree tx");
-		txmode->offloads |= DEV_TX_OFFLOAD_MT_LOCKFREE;
+		txmode->offloads |= RTE_ETH_TX_OFFLOAD_MT_LOCKFREE;
 	}
 
-	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		octeontx_log_err("setting link speed/duplex not supported");
 		return -EINVAL;
 	}
@@ -530,13 +530,13 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	 * when this feature has not been enabled before.
 	 */
 	if (data->dev_started && frame_size > buffsz &&
-	    !(nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	    !(nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		octeontx_log_err("Scatter mode is disabled");
 		return -EINVAL;
 	}
 
 	/* Check <seg size> * <max_seg>  >= max_frame */
-	if ((nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER)	&&
+	if ((nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	&&
 	    (frame_size > buffsz * OCCTX_RX_NB_SEG_MAX))
 		return -EINVAL;
 
@@ -571,7 +571,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
 
 	/* Setup scatter mode if needed by jumbo */
 	if (data->mtu > buffsz) {
-		nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+		nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 		nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
 		nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
 	}
@@ -843,10 +843,10 @@ octeontx_dev_info(struct rte_eth_dev *dev,
 	struct octeontx_nic *nic = octeontx_pmd_priv(dev);
 
 	/* Autonegotiation may be disabled */
-	dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
-	dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			ETH_LINK_SPEED_40G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_40G;
 
 	/* Min/Max MTU supported */
 	dev_info->min_rx_bufsize = OCCTX_MIN_FRS;
@@ -1356,7 +1356,7 @@ octeontx_create(struct rte_vdev_device *dev, int port, uint8_t evdev,
 	nic->ev_ports = 1;
 	nic->print_flag = -1;
 
-	data->dev_link.link_status = ETH_LINK_DOWN;
+	data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	data->dev_started = 0;
 	data->promiscuous = 0;
 	data->all_multicast = 0;
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index 3a02824e3948..c493fa7a03ed 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -55,23 +55,22 @@
 #define OCCTX_MAX_MTU		(OCCTX_MAX_FRS - OCCTX_L2_OVERHEAD)
 
 #define OCTEONTX_RX_OFFLOADS		(				   \
-					 DEV_RX_OFFLOAD_CHECKSUM	 | \
-					 DEV_RX_OFFLOAD_SCTP_CKSUM       | \
-					 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-					 DEV_RX_OFFLOAD_SCATTER	         | \
-					 DEV_RX_OFFLOAD_SCATTER		 | \
-					 DEV_RX_OFFLOAD_VLAN_FILTER)
+					 RTE_ETH_RX_OFFLOAD_CHECKSUM         | \
+					 RTE_ETH_RX_OFFLOAD_SCTP_CKSUM       | \
+					 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+					 RTE_ETH_RX_OFFLOAD_SCATTER          | \
+					 RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 
 #define OCTEONTX_TX_OFFLOADS		(				   \
-					 DEV_TX_OFFLOAD_MBUF_FAST_FREE	 | \
-					 DEV_TX_OFFLOAD_MT_LOCKFREE	 | \
-					 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-					 DEV_TX_OFFLOAD_OUTER_UDP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_IPV4_CKSUM	 | \
-					 DEV_TX_OFFLOAD_TCP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_UDP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_SCTP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_MULTI_SEGS)
+					 RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE   | \
+					 RTE_ETH_TX_OFFLOAD_MT_LOCKFREE      | \
+					 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+					 RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM  | \
+					 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM       | \
+					 RTE_ETH_TX_OFFLOAD_TCP_CKSUM        | \
+					 RTE_ETH_TX_OFFLOAD_UDP_CKSUM        | \
+					 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM       | \
+					 RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 static inline struct octeontx_nic *
 octeontx_pmd_priv(struct rte_eth_dev *dev)
diff --git a/drivers/net/octeontx/octeontx_ethdev_ops.c b/drivers/net/octeontx/octeontx_ethdev_ops.c
index dbe13ce3826b..6ec2b71b0672 100644
--- a/drivers/net/octeontx/octeontx_ethdev_ops.c
+++ b/drivers/net/octeontx/octeontx_ethdev_ops.c
@@ -43,20 +43,20 @@ octeontx_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	rxmode = &dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 			rc = octeontx_vlan_hw_filter(nic, true);
 			if (rc)
 				goto done;
 
-			nic->rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+			nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			nic->rx_offload_flags |= OCCTX_RX_VLAN_FLTR_F;
 		} else {
 			rc = octeontx_vlan_hw_filter(nic, false);
 			if (rc)
 				goto done;
 
-			nic->rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+			nic->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			nic->rx_offload_flags &= ~OCCTX_RX_VLAN_FLTR_F;
 		}
 	}
@@ -139,7 +139,7 @@ octeontx_dev_vlan_offload_init(struct rte_eth_dev *dev)
 
 	TAILQ_INIT(&nic->vlan_info.fltr_tbl);
 
-	rc = octeontx_dev_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+	rc = octeontx_dev_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
 	if (rc)
 		octeontx_log_err("Failed to set vlan offload rc=%d", rc);
 
@@ -219,13 +219,13 @@ octeontx_dev_flow_ctrl_get(struct rte_eth_dev *dev,
 		return rc;
 
 	if (conf.rx_pause && conf.tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (conf.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (conf.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	/* low_water & high_water values are in Bytes */
 	fc_conf->low_water = conf.low_water;
@@ -272,10 +272,10 @@ octeontx_dev_flow_ctrl_set(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-			(fc_conf->mode == RTE_FC_RX_PAUSE);
-	tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-			(fc_conf->mode == RTE_FC_TX_PAUSE);
+	rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+			(fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+	tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+			(fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
 
 	conf.high_water = fc_conf->high_water;
 	conf.low_water = fc_conf->low_water;
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 9c5d748e8575..72da8856bd86 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -21,7 +21,7 @@ nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
 
 	if (otx2_dev_is_vf(dev) ||
 	    dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG)
-		capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+		capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	return capa;
 }
@@ -33,10 +33,10 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
 
 	/* TSO not supported for earlier chip revisions */
 	if (otx2_dev_is_96xx_A0(dev) || otx2_dev_is_95xx_Ax(dev))
-		capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
-			  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			  DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-			  DEV_TX_OFFLOAD_GRE_TNL_TSO);
+		capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+			  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
 	return capa;
 }
 
@@ -66,8 +66,8 @@ nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
 	req->npa_func = otx2_npa_pf_func_get();
 	req->sso_func = otx2_sso_pf_func_get();
 	req->rx_cfg = BIT_ULL(35 /* DIS_APAD */);
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
-			 DEV_RX_OFFLOAD_UDP_CKSUM)) {
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			 RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
 		req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */);
 		req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */);
 	}
@@ -373,7 +373,7 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
 
 	aq->rq.sso_ena = 0;
 
-	if (rxq->offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		aq->rq.ipsech_ena = 1;
 
 	aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */
@@ -665,7 +665,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
 	 * These are needed in deriving raw clock value from tsc counter.
 	 * read_clock eth op returns raw clock value.
 	 */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
 	    otx2_ethdev_is_ptp_en(dev)) {
 		rc = otx2_nix_raw_clock_tsc_conv(dev);
 		if (rc) {
@@ -692,7 +692,7 @@ nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
 	 * Maximum three segments can be supported with W8, Choose
 	 * NIX_MAXSQESZ_W16 for multi segment offload.
 	 */
-	if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		return NIX_MAXSQESZ_W16;
 	else
 		return NIX_MAXSQESZ_W8;
@@ -707,29 +707,29 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rxmode *rxmode = &conf->rxmode;
 	uint16_t flags = 0;
 
-	if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
-			(dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+			(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		flags |= NIX_RX_OFFLOAD_RSS_F;
 
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
-			 DEV_RX_OFFLOAD_UDP_CKSUM))
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			 RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		flags |= NIX_RX_MULTI_SEG_F;
 
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
-				DEV_RX_OFFLOAD_QINQ_STRIP))
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				RTE_ETH_RX_OFFLOAD_QINQ_STRIP))
 		flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		flags |= NIX_RX_OFFLOAD_SECURITY_F;
 
 	if (!dev->ptype_disable)
@@ -768,43 +768,43 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
 			 offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
 
-	if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
-	    conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+	    conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
 
-	if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= NIX_TX_MULTI_SEG_F;
 
 	/* Enable Inner checksum for TSO */
-	if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+	if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		flags |= (NIX_TX_OFFLOAD_TSO_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
 	/* Enable Inner and Outer checksum for Tunnel TSO */
-	if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		    DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		    DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		flags |= (NIX_TX_OFFLOAD_TSO_F |
 			  NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
-	if (conf & DEV_TX_OFFLOAD_SECURITY)
+	if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
 		flags |= NIX_TX_OFFLOAD_SECURITY_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
 	return flags;
@@ -914,8 +914,8 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
 	buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
 
 	if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
-		dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
-		dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+		dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 		/* Setting up the rx[tx]_offload_flags due to change
 		 * in rx[tx]_offloads.
@@ -1848,21 +1848,21 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
 		goto fail_configure;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-	    rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+	    rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode);
 		goto fail_configure;
 	}
 
-	if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		otx2_err("Unsupported mq tx mode %d", txmode->mq_mode);
 		goto fail_configure;
 	}
 
 	if (otx2_dev_is_Ax(dev) &&
-	    (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
-	    ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
-	    (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+	    ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
 		otx2_err("Outer IP and SCTP checksum unsupported");
 		goto fail_configure;
 	}
@@ -2235,7 +2235,7 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
 	 * enabled in PF owning this VF
 	 */
 	memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info));
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
 	    otx2_ethdev_is_ptp_en(dev))
 		otx2_nix_timesync_enable(eth_dev);
 	else
@@ -2563,8 +2563,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
 	rc = otx2_eth_sec_ctx_create(eth_dev);
 	if (rc)
 		goto free_mac_addrs;
-	dev->tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
-	dev->rx_offload_capa |= DEV_RX_OFFLOAD_SECURITY;
+	dev->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
+	dev->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SECURITY;
 
 	/* Initialize rte-flow */
 	rc = otx2_flow_init(dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 4557a0ee1945..a5282c6c1231 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -117,43 +117,43 @@
 #define CQ_TIMER_THRESH_DEFAULT	0xAULL /* ~1usec i.e (0xA * 100nsec) */
 #define CQ_TIMER_THRESH_MAX     255
 
-#define NIX_RSS_L3_L4_SRC_DST  (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY \
-				| ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define NIX_RSS_L3_L4_SRC_DST  (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY \
+				| RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
 
-#define NIX_RSS_OFFLOAD		(ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
-				 ETH_RSS_TCP | ETH_RSS_SCTP | \
-				 ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD | \
-				 NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | \
-				 ETH_RSS_C_VLAN)
+#define NIX_RSS_OFFLOAD		(RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |\
+				 RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | \
+				 RTE_ETH_RSS_TUNNEL | RTE_ETH_RSS_L2_PAYLOAD | \
+				 NIX_RSS_L3_L4_SRC_DST | RTE_ETH_RSS_LEVEL_MASK | \
+				 RTE_ETH_RSS_C_VLAN)
 
 #define NIX_TX_OFFLOAD_CAPA ( \
-	DEV_TX_OFFLOAD_MBUF_FAST_FREE	| \
-	DEV_TX_OFFLOAD_MT_LOCKFREE	| \
-	DEV_TX_OFFLOAD_VLAN_INSERT	| \
-	DEV_TX_OFFLOAD_QINQ_INSERT	| \
-	DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM	| \
-	DEV_TX_OFFLOAD_OUTER_UDP_CKSUM	| \
-	DEV_TX_OFFLOAD_TCP_CKSUM	| \
-	DEV_TX_OFFLOAD_UDP_CKSUM	| \
-	DEV_TX_OFFLOAD_SCTP_CKSUM	| \
-	DEV_TX_OFFLOAD_TCP_TSO		| \
-	DEV_TX_OFFLOAD_VXLAN_TNL_TSO    | \
-	DEV_TX_OFFLOAD_GENEVE_TNL_TSO   | \
-	DEV_TX_OFFLOAD_GRE_TNL_TSO	| \
-	DEV_TX_OFFLOAD_MULTI_SEGS	| \
-	DEV_TX_OFFLOAD_IPV4_CKSUM)
+	RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE   | \
+	RTE_ETH_TX_OFFLOAD_MT_LOCKFREE      | \
+	RTE_ETH_TX_OFFLOAD_VLAN_INSERT      | \
+	RTE_ETH_TX_OFFLOAD_QINQ_INSERT      | \
+	RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+	RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM  | \
+	RTE_ETH_TX_OFFLOAD_TCP_CKSUM        | \
+	RTE_ETH_TX_OFFLOAD_UDP_CKSUM        | \
+	RTE_ETH_TX_OFFLOAD_SCTP_CKSUM       | \
+	RTE_ETH_TX_OFFLOAD_TCP_TSO          | \
+	RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO    | \
+	RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO   | \
+	RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO      | \
+	RTE_ETH_TX_OFFLOAD_MULTI_SEGS       | \
+	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 
 #define NIX_RX_OFFLOAD_CAPA ( \
-	DEV_RX_OFFLOAD_CHECKSUM		| \
-	DEV_RX_OFFLOAD_SCTP_CKSUM	| \
-	DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-	DEV_RX_OFFLOAD_SCATTER		| \
-	DEV_RX_OFFLOAD_OUTER_UDP_CKSUM	| \
-	DEV_RX_OFFLOAD_VLAN_STRIP	| \
-	DEV_RX_OFFLOAD_VLAN_FILTER	| \
-	DEV_RX_OFFLOAD_QINQ_STRIP	| \
-	DEV_RX_OFFLOAD_TIMESTAMP	| \
-	DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RX_OFFLOAD_CHECKSUM         | \
+	RTE_ETH_RX_OFFLOAD_SCTP_CKSUM       | \
+	RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+	RTE_ETH_RX_OFFLOAD_SCATTER          | \
+	RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM  | \
+	RTE_ETH_RX_OFFLOAD_VLAN_STRIP       | \
+	RTE_ETH_RX_OFFLOAD_VLAN_FILTER      | \
+	RTE_ETH_RX_OFFLOAD_QINQ_STRIP       | \
+	RTE_ETH_RX_OFFLOAD_TIMESTAMP        | \
+	RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define NIX_DEFAULT_RSS_CTX_GROUP  0
 #define NIX_DEFAULT_RSS_MCAM_IDX  -1
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index 83f905315b38..60bf6c3f5f05 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -49,12 +49,12 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
 
 	val = atoi(value);
 
-	if (val <= ETH_RSS_RETA_SIZE_64)
-		val = ETH_RSS_RETA_SIZE_64;
-	else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
-		val = ETH_RSS_RETA_SIZE_128;
-	else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
-		val = ETH_RSS_RETA_SIZE_256;
+	if (val <= RTE_ETH_RSS_RETA_SIZE_64)
+		val = RTE_ETH_RSS_RETA_SIZE_64;
+	else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
+		val = RTE_ETH_RSS_RETA_SIZE_128;
+	else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
+		val = RTE_ETH_RSS_RETA_SIZE_256;
 	else
 		val = NIX_RSS_RETA_SIZE;
 
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 22a8af5cba45..d5caaa326a5a 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -26,11 +26,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	 * when this feature has not been enabled before.
 	 */
 	if (data->dev_started && frame_size > buffsz &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER))
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER))
 		return -EINVAL;
 
 	/* Check <seg size> * <max_seg>  >= max_frame */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)	&&
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	&&
 	    (frame_size > buffsz * NIX_RX_NB_SEG_MAX))
 		return -EINVAL;
 
@@ -568,17 +568,17 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
 	};
 
 	/* Auto negotiation disabled */
-	devinfo->speed_capa = ETH_LINK_SPEED_FIXED;
+	devinfo->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
 	if (!otx2_dev_is_vf_or_sdp(dev) && !otx2_dev_is_lbk(dev)) {
-		devinfo->speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G;
+		devinfo->speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G;
 
 		/* 50G and 100G to be supported for board version C0
 		 * and above.
 		 */
 		if (!otx2_dev_is_Ax(dev))
-			devinfo->speed_capa |= ETH_LINK_SPEED_50G |
-					       ETH_LINK_SPEED_100G;
+			devinfo->speed_capa |= RTE_ETH_LINK_SPEED_50G |
+					       RTE_ETH_LINK_SPEED_100G;
 	}
 
 	devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.c b/drivers/net/octeontx2/otx2_ethdev_sec.c
index 7bd1ed6da043..4d40184de46d 100644
--- a/drivers/net/octeontx2/otx2_ethdev_sec.c
+++ b/drivers/net/octeontx2/otx2_ethdev_sec.c
@@ -869,8 +869,8 @@ otx2_eth_sec_init(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(sa_width < 32 || sa_width > 512 ||
 			 !RTE_IS_POWER_OF_2(sa_width));
 
-	if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+	if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
 		return 0;
 
 	if (rte_security_dynfield_register() < 0)
@@ -912,8 +912,8 @@ otx2_eth_sec_fini(struct rte_eth_dev *eth_dev)
 	uint16_t port = eth_dev->data->port_id;
 	char name[RTE_MEMZONE_NAMESIZE];
 
-	if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+	if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
 		return;
 
 	lookup_mem_sa_tbl_clear(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 6df0732189eb..1d0fe4e950d4 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -625,7 +625,7 @@ otx2_flow_create(struct rte_eth_dev *dev,
 		goto err_exit;
 	}
 
-	if (hw->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (hw->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		rc = flow_update_sec_tt(dev, actions);
 		if (rc != 0) {
 			rte_flow_error_set(error, EIO,
diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c
index 76bf48100183..071740de86a7 100644
--- a/drivers/net/octeontx2/otx2_flow_ctrl.c
+++ b/drivers/net/octeontx2/otx2_flow_ctrl.c
@@ -54,7 +54,7 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 	int rc;
 
 	if (otx2_dev_is_lbk(dev)) {
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		return 0;
 	}
 
@@ -66,13 +66,13 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 		goto done;
 
 	if (rsp->rx_pause && rsp->tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rsp->rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (rsp->tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 done:
 	return rc;
@@ -159,10 +159,10 @@ otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	if (fc_conf->mode == fc->mode)
 		return 0;
 
-	rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_RX_PAUSE);
-	tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_TX_PAUSE);
+	rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+	tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
 
 	/* Check if TX pause frame is already enabled or not */
 	if (fc->tx_pause ^ tx_pause) {
@@ -212,11 +212,11 @@ otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev)
 	/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
 	if (otx2_dev_is_Ax(dev) &&
 	    (dev->npc_flow.switch_header_type != OTX2_PRIV_FLAGS_HIGIG) &&
-	    (fc_conf.mode == RTE_FC_FULL || fc_conf.mode == RTE_FC_RX_PAUSE)) {
+	    (fc_conf.mode == RTE_ETH_FC_FULL || fc_conf.mode == RTE_ETH_FC_RX_PAUSE)) {
 		fc_conf.mode =
-				(fc_conf.mode == RTE_FC_FULL ||
-				fc_conf.mode == RTE_FC_TX_PAUSE) ?
-				RTE_FC_TX_PAUSE : RTE_FC_NONE;
+				(fc_conf.mode == RTE_ETH_FC_FULL ||
+				fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ?
+				RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
 	}
 
 	return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf);
@@ -234,7 +234,7 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
 		return 0;
 
 	memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
-	/* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+	/* Both Rx & Tx flow ctrl get enabled (RTE_ETH_FC_FULL) in HW
 	 * by AF driver, update those info in PMD structure.
 	 */
 	rc = otx2_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -242,10 +242,10 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
 		goto exit;
 
 	fc->mode = fc_conf.mode;
-	fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_RX_PAUSE);
-	fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_TX_PAUSE);
+	fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+	fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
 
 exit:
 	return rc;
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index 79b92fda8a4a..91267bbb8182 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -852,7 +852,7 @@ parse_rss_action(struct rte_eth_dev *dev,
 					  attr, "No support of RSS in egress");
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS)
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ACTION,
 					  act, "multi-queue mode is disabled");
@@ -1186,7 +1186,7 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev,
 		 *FLOW_KEY_ALG index. So, till we update the action with
 		 *flow_key_alg index, set the action to drop.
 		 */
-		if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+		if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 			flow->npc_action = NIX_RX_ACTIONOP_DROP;
 		else
 			flow->npc_action = NIX_RX_ACTIONOP_UCAST;
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
index 81dd6243b977..8f5d0eed92b6 100644
--- a/drivers/net/octeontx2/otx2_link.c
+++ b/drivers/net/octeontx2/otx2_link.c
@@ -41,7 +41,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
 		otx2_info("Port %d: Link Up - speed %u Mbps - %s",
 			  (int)(eth_dev->data->port_id),
 			  (uint32_t)link->link_speed,
-			  link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+			  link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 			  "full-duplex" : "half-duplex");
 	else
 		otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id));
@@ -92,7 +92,7 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
 
 	eth_link.link_status = link->link_up;
 	eth_link.link_speed = link->speed;
-	eth_link.link_autoneg = ETH_LINK_AUTONEG;
+	eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	eth_link.link_duplex = link->full_duplex;
 
 	otx2_dev->speed = link->speed;
@@ -111,10 +111,10 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
 static int
 lbk_link_update(struct rte_eth_link *link)
 {
-	link->link_status = ETH_LINK_UP;
-	link->link_speed = ETH_SPEED_NUM_100G;
-	link->link_autoneg = ETH_LINK_FIXED;
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_status = RTE_ETH_LINK_UP;
+	link->link_speed = RTE_ETH_SPEED_NUM_100G;
+	link->link_autoneg = RTE_ETH_LINK_FIXED;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	return 0;
 }
 
@@ -131,7 +131,7 @@ cgx_link_update(struct otx2_eth_dev *dev, struct rte_eth_link *link)
 
 	link->link_status = rsp->link_info.link_up;
 	link->link_speed = rsp->link_info.speed;
-	link->link_autoneg = ETH_LINK_AUTONEG;
+	link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	if (rsp->link_info.full_duplex)
 		link->link_duplex = rsp->link_info.full_duplex;
@@ -233,22 +233,22 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
 
 	/* 50G and 100G to be supported for board version C0 and above */
 	if (!otx2_dev_is_Ax(dev)) {
-		if (link_speeds & ETH_LINK_SPEED_100G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_100G)
 			link_speed = 100000;
-		if (link_speeds & ETH_LINK_SPEED_50G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_50G)
 			link_speed = 50000;
 	}
-	if (link_speeds & ETH_LINK_SPEED_40G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
 		link_speed = 40000;
-	if (link_speeds & ETH_LINK_SPEED_25G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
 		link_speed = 25000;
-	if (link_speeds & ETH_LINK_SPEED_20G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_20G)
 		link_speed = 20000;
-	if (link_speeds & ETH_LINK_SPEED_10G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 		link_speed = 10000;
-	if (link_speeds & ETH_LINK_SPEED_5G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_5G)
 		link_speed = 5000;
-	if (link_speeds & ETH_LINK_SPEED_1G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
 		link_speed = 1000;
 
 	return link_speed;
@@ -257,11 +257,11 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
 static inline uint8_t
 nix_parse_eth_link_duplex(uint32_t link_speeds)
 {
-	if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
-			(link_speeds & ETH_LINK_SPEED_100M_HD))
-		return ETH_LINK_HALF_DUPLEX;
+	if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+			(link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+		return RTE_ETH_LINK_HALF_DUPLEX;
 	else
-		return ETH_LINK_FULL_DUPLEX;
+		return RTE_ETH_LINK_FULL_DUPLEX;
 }
 
 int
@@ -279,7 +279,7 @@ otx2_apply_link_speed(struct rte_eth_dev *eth_dev)
 	cfg.speed = nix_parse_link_speeds(dev, conf->link_speeds);
 	if (cfg.speed != SPEED_NONE && cfg.speed != dev->speed) {
 		cfg.duplex = nix_parse_eth_link_duplex(conf->link_speeds);
-		cfg.an = (conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+		cfg.an = (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 		return cgx_change_mode(dev, &cfg);
 	}
diff --git a/drivers/net/octeontx2/otx2_mcast.c b/drivers/net/octeontx2/otx2_mcast.c
index f84aa1bf570c..b9c63ad3bc21 100644
--- a/drivers/net/octeontx2/otx2_mcast.c
+++ b/drivers/net/octeontx2/otx2_mcast.c
@@ -100,7 +100,7 @@ nix_hw_update_mc_addr_list(struct rte_eth_dev *eth_dev)
 
 		action = NIX_RX_ACTIONOP_UCAST;
 
-		if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+		if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 			action = NIX_RX_ACTIONOP_RSS;
 			action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
 		}
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
index 91e5c0f6bd11..abb213058792 100644
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -250,7 +250,7 @@ otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
 	/* System time should be already on by default */
 	nix_start_timecounters(eth_dev);
 
-	dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+	dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 	dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
@@ -287,7 +287,7 @@ otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
 	if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
 		return -EINVAL;
 
-	dev->rx_offloads &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+	dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F;
 	dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F;
 
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
index 7dbe5f69ae65..68cef1caa394 100644
--- a/drivers/net/octeontx2/otx2_rss.c
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -85,8 +85,8 @@ otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Copy RETA table */
-	for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask >> j) & 0x01)
 				rss->ind_tbl[idx] = reta_conf[i].reta[j];
 			idx++;
@@ -118,8 +118,8 @@ otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Copy RETA table */
-	for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = rss->ind_tbl[j];
 	}
@@ -178,23 +178,23 @@ rss_get_key(struct otx2_eth_dev *dev, uint8_t *key)
 }
 
 #define RSS_IPV4_ENABLE ( \
-			  ETH_RSS_IPV4 | \
-			  ETH_RSS_FRAG_IPV4 | \
-			  ETH_RSS_NONFRAG_IPV4_UDP | \
-			  ETH_RSS_NONFRAG_IPV4_TCP | \
-			  ETH_RSS_NONFRAG_IPV4_SCTP)
+			  RTE_ETH_RSS_IPV4 | \
+			  RTE_ETH_RSS_FRAG_IPV4 | \
+			  RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
 #define RSS_IPV6_ENABLE ( \
-			  ETH_RSS_IPV6 | \
-			  ETH_RSS_FRAG_IPV6 | \
-			  ETH_RSS_NONFRAG_IPV6_UDP | \
-			  ETH_RSS_NONFRAG_IPV6_TCP | \
-			  ETH_RSS_NONFRAG_IPV6_SCTP)
+			  RTE_ETH_RSS_IPV6 | \
+			  RTE_ETH_RSS_FRAG_IPV6 | \
+			  RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 #define RSS_IPV6_EX_ENABLE ( \
-			     ETH_RSS_IPV6_EX | \
-			     ETH_RSS_IPV6_TCP_EX | \
-			     ETH_RSS_IPV6_UDP_EX)
+			     RTE_ETH_RSS_IPV6_EX | \
+			     RTE_ETH_RSS_IPV6_TCP_EX | \
+			     RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define RSS_MAX_LEVELS   3
 
@@ -233,24 +233,24 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
 
 	dev->rss_info.nix_rss = ethdev_rss;
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
 	    dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) {
 		flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
 	}
 
-	if (ethdev_rss & ETH_RSS_C_VLAN)
+	if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
 
-	if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
 
-	if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
 
-	if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
 
-	if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
 
 	if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -259,34 +259,34 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
 	if (ethdev_rss & RSS_IPV6_ENABLE)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
 
-	if (ethdev_rss & ETH_RSS_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_TCP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_UDP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_SCTP)
+	if (ethdev_rss & RTE_ETH_RSS_SCTP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
 
 	if (ethdev_rss & RSS_IPV6_EX_ENABLE)
 		flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
 
-	if (ethdev_rss & ETH_RSS_PORT)
+	if (ethdev_rss & RTE_ETH_RSS_PORT)
 		flowkey_cfg |= FLOW_KEY_TYPE_PORT;
 
-	if (ethdev_rss & ETH_RSS_NVGRE)
+	if (ethdev_rss & RTE_ETH_RSS_NVGRE)
 		flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
 
-	if (ethdev_rss & ETH_RSS_VXLAN)
+	if (ethdev_rss & RTE_ETH_RSS_VXLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
 
-	if (ethdev_rss & ETH_RSS_GENEVE)
+	if (ethdev_rss & RTE_ETH_RSS_GENEVE)
 		flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
 
-	if (ethdev_rss & ETH_RSS_GTPU)
+	if (ethdev_rss & RTE_ETH_RSS_GTPU)
 		flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
 
 	return flowkey_cfg;
@@ -343,7 +343,7 @@ otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
 		otx2_nix_rss_set_key(dev, rss_conf->rss_key,
 				     (uint32_t)rss_conf->rss_key_len);
 
-	rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 	flowkey_cfg =
@@ -390,7 +390,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
 	int rc;
 
 	/* Skip further configuration if selected mode is not RSS */
-	if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS || !qcnt)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS || !qcnt)
 		return 0;
 
 	/* Update default RSS key and cfg */
@@ -408,7 +408,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
 	}
 
 	rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
-	rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 	flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, rss_hash_level);
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index ffeade5952dc..986902287b67 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -414,12 +414,12 @@ NIX_RX_FASTPATH_MODES
 	/* For PTP enabled, scalar rx function should be chosen as most of the
 	 * PTP apps are implemented to rx burst 1 pkt.
 	 */
-	if (dev->scalar_ena || dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (dev->scalar_ena || dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 		pick_rx_func(eth_dev, nix_eth_rx_burst);
 	else
 		pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
 
 	/* Copy multi seg version with no offload for tear down sequence */
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index ff299f00b913..c60190074926 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -1070,7 +1070,7 @@ NIX_TX_FASTPATH_MODES
 	else
 		pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
 
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
 
 	rte_mb();
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index f5161e17a16d..cce643b7b51d 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -50,7 +50,7 @@ nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
 
 	action = NIX_RX_ACTIONOP_UCAST;
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		action = NIX_RX_ACTIONOP_RSS;
 		action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
 	}
@@ -99,7 +99,7 @@ nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type,
 	 * Take offset from LA since in case of untagged packet,
 	 * lbptr is zero.
 	 */
-	if (type == ETH_VLAN_TYPE_OUTER) {
+	if (type == RTE_ETH_VLAN_TYPE_OUTER) {
 		vtag_action.act.vtag0_def = vtag_index;
 		vtag_action.act.vtag0_lid = NPC_LID_LA;
 		vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
@@ -413,7 +413,7 @@ nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
 		if (vlan->strip_on ||
 		    (vlan->qinq_on && !vlan->qinq_before_def)) {
 			if (eth_dev->data->dev_conf.rxmode.mq_mode ==
-								ETH_MQ_RX_RSS)
+								RTE_ETH_MQ_RX_RSS)
 				vlan->def_rx_mcam_ent.action |=
 							NIX_RX_ACTIONOP_RSS;
 			else
@@ -717,48 +717,48 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 
 	rxmode = &eth_dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
-			offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
+			offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			rc = nix_vlan_hw_strip(eth_dev, true);
 		} else {
-			offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+			offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			rc = nix_vlan_hw_strip(eth_dev, false);
 		}
 		if (rc)
 			goto done;
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
-			offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
+			offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			rc = nix_vlan_hw_filter(eth_dev, true, 0);
 		} else {
-			offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+			offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			rc = nix_vlan_hw_filter(eth_dev, false, 0);
 		}
 		if (rc)
 			goto done;
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) {
 		if (!dev->vlan_info.qinq_on) {
-			offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+			offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 			rc = otx2_nix_config_double_vlan(eth_dev, true);
 			if (rc)
 				goto done;
 		}
 	} else {
 		if (dev->vlan_info.qinq_on) {
-			offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+			offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 			rc = otx2_nix_config_double_vlan(eth_dev, false);
 			if (rc)
 				goto done;
 		}
 	}
 
-	if (offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
-			DEV_RX_OFFLOAD_QINQ_STRIP)) {
+	if (offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+			RTE_ETH_RX_OFFLOAD_QINQ_STRIP)) {
 		dev->rx_offloads |= offloads;
 		dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
 		otx2_eth_set_rx_function(eth_dev);
@@ -780,7 +780,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
 	tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox);
 
 	tpid_cfg->tpid = tpid;
-	if (type == ETH_VLAN_TYPE_OUTER)
+	if (type == RTE_ETH_VLAN_TYPE_OUTER)
 		tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER;
 	else
 		tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER;
@@ -789,7 +789,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
 	if (rc)
 		return rc;
 
-	if (type == ETH_VLAN_TYPE_OUTER)
+	if (type == RTE_ETH_VLAN_TYPE_OUTER)
 		dev->vlan_info.outer_vlan_tpid = tpid;
 	else
 		dev->vlan_info.inner_vlan_tpid = tpid;
@@ -864,7 +864,7 @@ otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev,       uint16_t vlan_id, int on)
 		vlan->outer_vlan_idx = 0;
 	}
 
-	rc = nix_vlan_handle_default_tx_entry(dev, ETH_VLAN_TYPE_OUTER,
+	rc = nix_vlan_handle_default_tx_entry(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					      vtag_index, on);
 	if (rc < 0) {
 		printf("Default tx entry failed with rc %d\n", rc);
@@ -986,12 +986,12 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
 	} else {
 		/* Reinstall all mcam entries now if filter offload is set */
 		if (eth_dev->data->dev_conf.rxmode.offloads &
-		    DEV_RX_OFFLOAD_VLAN_FILTER)
+		    RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			nix_vlan_reinstall_vlan_filters(eth_dev);
 	}
 
 	mask =
-	    ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+	    RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
 	rc = otx2_nix_vlan_offload_set(eth_dev, mask);
 	if (rc) {
 		otx2_err("Failed to set vlan offload rc=%d", rc);
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index 698d22e22685..74dc36a17648 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -33,14 +33,14 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
 
 	otx_epvf = OTX_EP_DEV(eth_dev);
 
-	devinfo->speed_capa = ETH_LINK_SPEED_10G;
+	devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
 	devinfo->max_rx_queues = otx_epvf->max_rx_queues;
 	devinfo->max_tx_queues = otx_epvf->max_tx_queues;
 
 	devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
 	devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
-	devinfo->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
-	devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+	devinfo->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
+	devinfo->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
 
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index aa4dcd33cc79..9338b30672ec 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -563,7 +563,7 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
 			struct otx_ep_buf_free_info *finfo;
 			int j, frags, num_sg;
 
-			if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+			if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
 				goto xmit_fail;
 
 			finfo = (struct otx_ep_buf_free_info *)rte_malloc(NULL,
@@ -697,7 +697,7 @@ otx2_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
 			struct otx_ep_buf_free_info *finfo;
 			int j, frags, num_sg;
 
-			if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+			if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
 				goto xmit_fail;
 
 			finfo = (struct otx_ep_buf_free_info *)
@@ -954,7 +954,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
 	droq_pkt->l4_len = hdr_lens.l4_len;
 
 	if (droq_pkt->nb_segs > 1 &&
-	    !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	    !(otx_ep->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		rte_pktmbuf_free(droq_pkt);
 		goto oq_read_fail;
 	}
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index d695c5eef7b0..ec29fd6bc53c 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -136,10 +136,10 @@ static const char *valid_arguments[] = {
 };
 
 static struct rte_eth_link pmd_link = {
-		.link_speed = ETH_SPEED_NUM_10G,
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_status = ETH_LINK_DOWN,
-		.link_autoneg = ETH_LINK_FIXED,
+		.link_speed = RTE_ETH_SPEED_NUM_10G,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_status = RTE_ETH_LINK_DOWN,
+		.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(eth_pcap_logtype, NOTICE);
@@ -659,7 +659,7 @@ eth_dev_start(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_tx_queues; i++)
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -714,7 +714,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_tx_queues; i++)
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index 4cc002ee8fab..047010e15ed0 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -22,15 +22,15 @@ struct pfe_vdev_init_params {
 static struct pfe *g_pfe;
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 /* Supported Tx offloads */
 static uint64_t dev_tx_offloads_sup =
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 /* TODO: make pfe_svr a runtime option.
  * Driver should be able to get the SVR
@@ -601,9 +601,9 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 	}
 
 	link.link_status = lstatus;
-	link.link_speed = ETH_LINK_SPEED_1G;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_speed = RTE_ETH_SPEED_NUM_1G;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	pfe_eth_atomic_write_link_status(dev, &link);
 
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 6667c2d7ab6d..511742c6a1b3 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -65,8 +65,8 @@ typedef u32 offsize_t;      /* In DWORDS !!! */
 struct eth_phy_cfg {
 /* 0 = autoneg, 1000/10000/20000/25000/40000/50000/100000 */
 	u32 speed;
-#define ETH_SPEED_AUTONEG   0
-#define ETH_SPEED_SMARTLINQ  0x8 /* deprecated - use link_modes field instead */
+#define RTE_ETH_SPEED_AUTONEG   0
+#define RTE_ETH_SPEED_SMARTLINQ  0x8 /* deprecated - use link_modes field instead */
 
 	u32 pause;      /* bitmask */
 #define ETH_PAUSE_NONE		0x0
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 27f6932dc74e..c907d7fd8312 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -342,9 +342,9 @@ qede_assign_rxtx_handlers(struct rte_eth_dev *dev, bool is_dummy)
 	}
 
 	use_tx_offload = !!(tx_offloads &
-			    (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
-			     DEV_TX_OFFLOAD_TCP_TSO | /* tso */
-			     DEV_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
+			    (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
+			     RTE_ETH_TX_OFFLOAD_TCP_TSO | /* tso */
+			     RTE_ETH_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
 
 	if (use_tx_offload) {
 		DP_INFO(edev, "Assigning qede_xmit_pkts\n");
@@ -1002,16 +1002,16 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
 	uint64_t rx_offloads = eth_dev->data->dev_conf.rxmode.offloads;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			(void)qede_vlan_stripping(eth_dev, 1);
 		else
 			(void)qede_vlan_stripping(eth_dev, 0);
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* VLAN filtering kicks in when a VLAN is added */
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 			qede_vlan_filter_set(eth_dev, 0, 1);
 		} else {
 			if (qdev->configured_vlans > 1) { /* Excluding VLAN0 */
@@ -1022,7 +1022,7 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 				 * enabled
 				 */
 				eth_dev->data->dev_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_VLAN_FILTER;
+						RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			} else {
 				qede_vlan_filter_set(eth_dev, 0, 0);
 			}
@@ -1069,11 +1069,11 @@ int qede_config_rss(struct rte_eth_dev *eth_dev)
 	/* Configure default RETA */
 	memset(reta_conf, 0, sizeof(reta_conf));
 	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++)
-		reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
 
 	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
-		id = i / RTE_RETA_GROUP_SIZE;
-		pos = i % RTE_RETA_GROUP_SIZE;
+		id = i / RTE_ETH_RETA_GROUP_SIZE;
+		pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		q = i % QEDE_RSS_COUNT(eth_dev);
 		reta_conf[id].reta[pos] = q;
 	}
@@ -1112,12 +1112,12 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
 	}
 
 	/* Configure TPA parameters */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		if (qede_enable_tpa(eth_dev, true))
 			return -EINVAL;
 		/* Enable scatter mode for LRO */
 		if (!eth_dev->data->scattered_rx)
-			rxmode->offloads |= DEV_RX_OFFLOAD_SCATTER;
+			rxmode->offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 	}
 
 	/* Start queues */
@@ -1132,7 +1132,7 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
 	 * Also, we would like to retain similar behavior in PF case, so we
 	 * don't do PF/VF specific check here.
 	 */
-	if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 		if (qede_config_rss(eth_dev))
 			goto err;
 
@@ -1272,8 +1272,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE(edev);
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* We need to have min 1 RX queue.There is no min check in
 	 * rte_eth_dev_configure(), so we are checking it here.
@@ -1291,8 +1291,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 		DP_NOTICE(edev, false,
 			  "Invalid devargs supplied, requested change will not take effect\n");
 
-	if (!(rxmode->mq_mode == ETH_MQ_RX_NONE ||
-	      rxmode->mq_mode == ETH_MQ_RX_RSS)) {
+	if (!(rxmode->mq_mode == RTE_ETH_MQ_RX_NONE ||
+	      rxmode->mq_mode == RTE_ETH_MQ_RX_RSS)) {
 		DP_ERR(edev, "Unsupported multi-queue mode\n");
 		return -ENOTSUP;
 	}
@@ -1312,7 +1312,7 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 			return -ENOMEM;
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		eth_dev->data->scattered_rx = 1;
 
 	if (qede_start_vport(qdev, eth_dev->data->mtu))
@@ -1321,8 +1321,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 	qdev->mtu = eth_dev->data->mtu;
 
 	/* Enable VLAN offloads by default */
-	ret = qede_vlan_offload_set(eth_dev, ETH_VLAN_STRIP_MASK  |
-					     ETH_VLAN_FILTER_MASK);
+	ret = qede_vlan_offload_set(eth_dev, RTE_ETH_VLAN_STRIP_MASK  |
+					     RTE_ETH_VLAN_FILTER_MASK);
 	if (ret)
 		return ret;
 
@@ -1385,34 +1385,34 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->reta_size = ECORE_RSS_IND_TABLE_SIZE;
 	dev_info->hash_key_size = ECORE_RSS_KEY_SIZE * sizeof(uint32_t);
 	dev_info->flow_type_rss_offloads = (uint64_t)QEDE_RSS_OFFLOAD_ALL;
-	dev_info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM	|
-				     DEV_RX_OFFLOAD_UDP_CKSUM	|
-				     DEV_RX_OFFLOAD_TCP_CKSUM	|
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				     DEV_RX_OFFLOAD_TCP_LRO	|
-				     DEV_RX_OFFLOAD_KEEP_CRC    |
-				     DEV_RX_OFFLOAD_SCATTER	|
-				     DEV_RX_OFFLOAD_VLAN_FILTER |
-				     DEV_RX_OFFLOAD_VLAN_STRIP  |
-				     DEV_RX_OFFLOAD_RSS_HASH);
+	dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM	|
+				     RTE_ETH_RX_OFFLOAD_UDP_CKSUM	|
+				     RTE_ETH_RX_OFFLOAD_TCP_CKSUM	|
+				     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     RTE_ETH_RX_OFFLOAD_TCP_LRO	|
+				     RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+				     RTE_ETH_RX_OFFLOAD_SCATTER	|
+				     RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				     RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+				     RTE_ETH_RX_OFFLOAD_RSS_HASH);
 	dev_info->rx_queue_offload_capa = 0;
 
 	/* TX offloads are on a per-packet basis, so it is applicable
 	 * to both at port and queue levels.
 	 */
-	dev_info->tx_offload_capa = (DEV_TX_OFFLOAD_VLAN_INSERT	|
-				     DEV_TX_OFFLOAD_IPV4_CKSUM	|
-				     DEV_TX_OFFLOAD_UDP_CKSUM	|
-				     DEV_TX_OFFLOAD_TCP_CKSUM	|
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				     DEV_TX_OFFLOAD_MULTI_SEGS  |
-				     DEV_TX_OFFLOAD_TCP_TSO	|
-				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO);
+	dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_VLAN_INSERT	|
+				     RTE_ETH_TX_OFFLOAD_IPV4_CKSUM	|
+				     RTE_ETH_TX_OFFLOAD_UDP_CKSUM	|
+				     RTE_ETH_TX_OFFLOAD_TCP_CKSUM	|
+				     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+				     RTE_ETH_TX_OFFLOAD_TCP_TSO	|
+				     RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO);
 	dev_info->tx_queue_offload_capa = dev_info->tx_offload_capa;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
-		.offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+		.offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
 	};
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1424,17 +1424,17 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
 	memset(&link, 0, sizeof(struct qed_link_output));
 	qdev->ops->common->get_link(edev, &link);
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G)
-		speed_cap |= ETH_LINK_SPEED_1G;
+		speed_cap |= RTE_ETH_LINK_SPEED_1G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G)
-		speed_cap |= ETH_LINK_SPEED_10G;
+		speed_cap |= RTE_ETH_LINK_SPEED_10G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G)
-		speed_cap |= ETH_LINK_SPEED_25G;
+		speed_cap |= RTE_ETH_LINK_SPEED_25G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G)
-		speed_cap |= ETH_LINK_SPEED_40G;
+		speed_cap |= RTE_ETH_LINK_SPEED_40G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G)
-		speed_cap |= ETH_LINK_SPEED_50G;
+		speed_cap |= RTE_ETH_LINK_SPEED_50G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G)
-		speed_cap |= ETH_LINK_SPEED_100G;
+		speed_cap |= RTE_ETH_LINK_SPEED_100G;
 	dev_info->speed_capa = speed_cap;
 
 	return 0;
@@ -1461,10 +1461,10 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
 	/* Link Mode */
 	switch (q_link.duplex) {
 	case QEDE_DUPLEX_HALF:
-		link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case QEDE_DUPLEX_FULL:
-		link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case QEDE_DUPLEX_UNKNOWN:
 	default:
@@ -1473,11 +1473,11 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
 	link.link_duplex = link_duplex;
 
 	/* Link Status */
-	link.link_status = q_link.link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	link.link_status = q_link.link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	/* AN */
 	link.link_autoneg = (q_link.supported_caps & QEDE_SUPPORTED_AUTONEG) ?
-			     ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+			     RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
 
 	DP_INFO(edev, "Link - Speed %u Mode %u AN %u Status %u\n",
 		link.link_speed, link.link_duplex,
@@ -2012,12 +2012,12 @@ static int qede_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Pause is assumed to be supported (SUPPORTED_Pause) */
-	if (fc_conf->mode == RTE_FC_FULL)
+	if (fc_conf->mode == RTE_ETH_FC_FULL)
 		params.pause_config |= (QED_LINK_PAUSE_TX_ENABLE |
 					QED_LINK_PAUSE_RX_ENABLE);
-	if (fc_conf->mode == RTE_FC_TX_PAUSE)
+	if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
 		params.pause_config |= QED_LINK_PAUSE_TX_ENABLE;
-	if (fc_conf->mode == RTE_FC_RX_PAUSE)
+	if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
 		params.pause_config |= QED_LINK_PAUSE_RX_ENABLE;
 
 	params.link_up = true;
@@ -2041,13 +2041,13 @@ static int qede_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 
 	if (current_link.pause_config & (QED_LINK_PAUSE_RX_ENABLE |
 					 QED_LINK_PAUSE_TX_ENABLE))
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (current_link.pause_config & QED_LINK_PAUSE_RX_ENABLE)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (current_link.pause_config & QED_LINK_PAUSE_TX_ENABLE)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -2088,14 +2088,14 @@ qede_dev_supported_ptypes_get(struct rte_eth_dev *eth_dev)
 static void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf)
 {
 	*rss_caps = 0;
-	*rss_caps |= (hf & ETH_RSS_IPV4)              ? ECORE_RSS_IPV4 : 0;
-	*rss_caps |= (hf & ETH_RSS_IPV6)              ? ECORE_RSS_IPV6 : 0;
-	*rss_caps |= (hf & ETH_RSS_IPV6_EX)           ? ECORE_RSS_IPV6 : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_TCP)  ? ECORE_RSS_IPV4_TCP : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_TCP)  ? ECORE_RSS_IPV6_TCP : 0;
-	*rss_caps |= (hf & ETH_RSS_IPV6_TCP_EX)       ? ECORE_RSS_IPV6_TCP : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_UDP)  ? ECORE_RSS_IPV4_UDP : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_UDP)  ? ECORE_RSS_IPV6_UDP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV4)              ? ECORE_RSS_IPV4 : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV6)              ? ECORE_RSS_IPV6 : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV6_EX)           ? ECORE_RSS_IPV6 : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)  ? ECORE_RSS_IPV4_TCP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)  ? ECORE_RSS_IPV6_TCP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV6_TCP_EX)       ? ECORE_RSS_IPV6_TCP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)  ? ECORE_RSS_IPV4_UDP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)  ? ECORE_RSS_IPV6_UDP : 0;
 }
 
 int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
@@ -2221,7 +2221,7 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 	uint8_t entry;
 	int rc = 0;
 
-	if (reta_size > ETH_RSS_RETA_SIZE_128) {
+	if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
 		DP_ERR(edev, "reta_size %d is not supported by hardware\n",
 		       reta_size);
 		return -EINVAL;
@@ -2245,8 +2245,8 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 
 	for_each_hwfn(edev, i) {
 		for (j = 0; j < reta_size; j++) {
-			idx = j / RTE_RETA_GROUP_SIZE;
-			shift = j % RTE_RETA_GROUP_SIZE;
+			idx = j / RTE_ETH_RETA_GROUP_SIZE;
+			shift = j % RTE_ETH_RETA_GROUP_SIZE;
 			if (reta_conf[idx].mask & (1ULL << shift)) {
 				entry = reta_conf[idx].reta[shift];
 				fid = entry * edev->num_hwfns + i;
@@ -2282,15 +2282,15 @@ static int qede_rss_reta_query(struct rte_eth_dev *eth_dev,
 	uint16_t i, idx, shift;
 	uint8_t entry;
 
-	if (reta_size > ETH_RSS_RETA_SIZE_128) {
+	if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
 		DP_ERR(edev, "reta_size %d is not supported\n",
 		       reta_size);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift)) {
 			entry = qdev->rss_ind_table[i];
 			reta_conf[idx].reta[shift] = entry;
@@ -2718,16 +2718,16 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 	adapter->ipgre.num_filters = 0;
 	if (is_vf) {
 		adapter->vxlan.enable = true;
-		adapter->vxlan.filter_type = ETH_TUNNEL_FILTER_IMAC |
-					     ETH_TUNNEL_FILTER_IVLAN;
+		adapter->vxlan.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+					     RTE_ETH_TUNNEL_FILTER_IVLAN;
 		adapter->vxlan.udp_port = QEDE_VXLAN_DEF_PORT;
 		adapter->geneve.enable = true;
-		adapter->geneve.filter_type = ETH_TUNNEL_FILTER_IMAC |
-					      ETH_TUNNEL_FILTER_IVLAN;
+		adapter->geneve.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+					      RTE_ETH_TUNNEL_FILTER_IVLAN;
 		adapter->geneve.udp_port = QEDE_GENEVE_DEF_PORT;
 		adapter->ipgre.enable = true;
-		adapter->ipgre.filter_type = ETH_TUNNEL_FILTER_IMAC |
-					     ETH_TUNNEL_FILTER_IVLAN;
+		adapter->ipgre.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+					     RTE_ETH_TUNNEL_FILTER_IVLAN;
 	} else {
 		adapter->vxlan.enable = false;
 		adapter->geneve.enable = false;
diff --git a/drivers/net/qede/qede_filter.c b/drivers/net/qede/qede_filter.c
index c756594bfc4b..440440423a32 100644
--- a/drivers/net/qede/qede_filter.c
+++ b/drivers/net/qede/qede_filter.c
@@ -20,97 +20,97 @@ const struct _qede_udp_tunn_types {
 	const char *string;
 } qede_tunn_types[] = {
 	{
-		ETH_TUNNEL_FILTER_OMAC,
+		RTE_ETH_TUNNEL_FILTER_OMAC,
 		ECORE_FILTER_MAC,
 		ECORE_TUNN_CLSS_MAC_VLAN,
 		"outer-mac"
 	},
 	{
-		ETH_TUNNEL_FILTER_TENID,
+		RTE_ETH_TUNNEL_FILTER_TENID,
 		ECORE_FILTER_VNI,
 		ECORE_TUNN_CLSS_MAC_VNI,
 		"vni"
 	},
 	{
-		ETH_TUNNEL_FILTER_IMAC,
+		RTE_ETH_TUNNEL_FILTER_IMAC,
 		ECORE_FILTER_INNER_MAC,
 		ECORE_TUNN_CLSS_INNER_MAC_VLAN,
 		"inner-mac"
 	},
 	{
-		ETH_TUNNEL_FILTER_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_IVLAN,
 		ECORE_FILTER_INNER_VLAN,
 		ECORE_TUNN_CLSS_INNER_MAC_VLAN,
 		"inner-vlan"
 	},
 	{
-		ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_TENID,
+		RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_TENID,
 		ECORE_FILTER_MAC_VNI_PAIR,
 		ECORE_TUNN_CLSS_MAC_VNI,
 		"outer-mac and vni"
 	},
 	{
-		ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_IMAC,
+		RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_IMAC,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"outer-mac and inner-mac"
 	},
 	{
-		ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"outer-mac and inner-vlan"
 	},
 	{
-		ETH_TUNNEL_FILTER_TENID | ETH_TUNNEL_FILTER_IMAC,
+		RTE_ETH_TUNNEL_FILTER_TENID | RTE_ETH_TUNNEL_FILTER_IMAC,
 		ECORE_FILTER_INNER_MAC_VNI_PAIR,
 		ECORE_TUNN_CLSS_INNER_MAC_VNI,
 		"vni and inner-mac",
 	},
 	{
-		ETH_TUNNEL_FILTER_TENID | ETH_TUNNEL_FILTER_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_TENID | RTE_ETH_TUNNEL_FILTER_IVLAN,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"vni and inner-vlan",
 	},
 	{
-		ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
 		ECORE_FILTER_INNER_PAIR,
 		ECORE_TUNN_CLSS_INNER_MAC_VLAN,
 		"inner-mac and inner-vlan",
 	},
 	{
-		ETH_TUNNEL_FILTER_OIP,
+		RTE_ETH_TUNNEL_FILTER_OIP,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"outer-IP"
 	},
 	{
-		ETH_TUNNEL_FILTER_IIP,
+		RTE_ETH_TUNNEL_FILTER_IIP,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"inner-IP"
 	},
 	{
-		RTE_TUNNEL_FILTER_IMAC_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"IMAC_IVLAN"
 	},
 	{
-		RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID,
+		RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"IMAC_IVLAN_TENID"
 	},
 	{
-		RTE_TUNNEL_FILTER_IMAC_TENID,
+		RTE_ETH_TUNNEL_FILTER_IMAC_TENID,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"IMAC_TENID"
 	},
 	{
-		RTE_TUNNEL_FILTER_OMAC_TENID_IMAC,
+		RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"OMAC_TENID_IMAC"
@@ -144,7 +144,7 @@ int qede_check_fdir_support(struct rte_eth_dev *eth_dev)
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct rte_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
 
 	/* check FDIR modes */
 	switch (fdir->mode) {
@@ -542,7 +542,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
 	memset(&tunn, 0, sizeof(tunn));
 
 	switch (tunnel_udp->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (qdev->vxlan.udp_port != tunnel_udp->udp_port) {
 			DP_ERR(edev, "UDP port %u doesn't exist\n",
 				tunnel_udp->udp_port);
@@ -570,7 +570,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
 					ECORE_TUNN_CLSS_MAC_VLAN, false);
 
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (qdev->geneve.udp_port != tunnel_udp->udp_port) {
 			DP_ERR(edev, "UDP port %u doesn't exist\n",
 				tunnel_udp->udp_port);
@@ -622,7 +622,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
 	memset(&tunn, 0, sizeof(tunn));
 
 	switch (tunnel_udp->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (qdev->vxlan.udp_port == tunnel_udp->udp_port) {
 			DP_INFO(edev,
 				"UDP port %u for VXLAN was already configured\n",
@@ -659,7 +659,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
 
 		qdev->vxlan.udp_port = udp_port;
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (qdev->geneve.udp_port == tunnel_udp->udp_port) {
 			DP_INFO(edev,
 				"UDP port %u for GENEVE was already configured\n",
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index c2263787b4ec..d585db8b61e8 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -249,7 +249,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
 	bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
 	/* cache align the mbuf size to simplify rx_buf_size calculation */
 	bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
-	if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)	||
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	||
 	    (max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) {
 		if (!dev->data->scattered_rx) {
 			DP_INFO(edev, "Forcing scatter-gather mode\n");
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index c9334448c887..15112b83f4f7 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -73,14 +73,14 @@
 #define QEDE_MAX_ETHER_HDR_LEN	(RTE_ETHER_HDR_LEN + QEDE_ETH_OVERHEAD)
 #define QEDE_ETH_MAX_LEN	(RTE_ETHER_MTU + QEDE_MAX_ETHER_HDR_LEN)
 
-#define QEDE_RSS_OFFLOAD_ALL    (ETH_RSS_IPV4			|\
-				 ETH_RSS_NONFRAG_IPV4_TCP	|\
-				 ETH_RSS_NONFRAG_IPV4_UDP	|\
-				 ETH_RSS_IPV6			|\
-				 ETH_RSS_NONFRAG_IPV6_TCP	|\
-				 ETH_RSS_NONFRAG_IPV6_UDP	|\
-				 ETH_RSS_VXLAN			|\
-				 ETH_RSS_GENEVE)
+#define QEDE_RSS_OFFLOAD_ALL    (RTE_ETH_RSS_IPV4			|\
+				 RTE_ETH_RSS_NONFRAG_IPV4_TCP	|\
+				 RTE_ETH_RSS_NONFRAG_IPV4_UDP	|\
+				 RTE_ETH_RSS_IPV6			|\
+				 RTE_ETH_RSS_NONFRAG_IPV6_TCP	|\
+				 RTE_ETH_RSS_NONFRAG_IPV6_UDP	|\
+				 RTE_ETH_RSS_VXLAN			|\
+				 RTE_ETH_RSS_GENEVE)
 
 #define QEDE_RXTX_MAX(qdev) \
 	(RTE_MAX(qdev->num_rx_queues, qdev->num_tx_queues))
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 0440019e07e1..db10f035dfcb 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -56,10 +56,10 @@ struct pmd_internals {
 };
 
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(eth_ring_logtype, NOTICE);
@@ -102,7 +102,7 @@ eth_dev_configure(struct rte_eth_dev *dev __rte_unused) { return 0; }
 static int
 eth_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -110,21 +110,21 @@ static int
 eth_dev_stop(struct rte_eth_dev *dev)
 {
 	dev->data->dev_started = 0;
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
 static int
 eth_dev_set_link_down(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
 static int
 eth_dev_set_link_up(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -163,8 +163,8 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_mac_addrs = 1;
 	dev_info->max_rx_pktlen = (uint32_t)-1;
 	dev_info->max_rx_queues = (uint16_t)internals->max_rx_queues;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	dev_info->max_tx_queues = (uint16_t)internals->max_tx_queues;
 	dev_info->min_rx_bufsize = 0;
 
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 431c42f508d0..9c1be10ac93d 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -106,13 +106,13 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
 {
 	uint32_t phy_caps = 0;
 
-	if (~speeds & ETH_LINK_SPEED_FIXED) {
+	if (~speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		phy_caps |= (1 << EFX_PHY_CAP_AN);
 		/*
 		 * If no speeds are specified in the mask, any supported
 		 * may be negotiated
 		 */
-		if (speeds == ETH_LINK_SPEED_AUTONEG)
+		if (speeds == RTE_ETH_LINK_SPEED_AUTONEG)
 			phy_caps |=
 				(1 << EFX_PHY_CAP_1000FDX) |
 				(1 << EFX_PHY_CAP_10000FDX) |
@@ -121,17 +121,17 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
 				(1 << EFX_PHY_CAP_50000FDX) |
 				(1 << EFX_PHY_CAP_100000FDX);
 	}
-	if (speeds & ETH_LINK_SPEED_1G)
+	if (speeds & RTE_ETH_LINK_SPEED_1G)
 		phy_caps |= (1 << EFX_PHY_CAP_1000FDX);
-	if (speeds & ETH_LINK_SPEED_10G)
+	if (speeds & RTE_ETH_LINK_SPEED_10G)
 		phy_caps |= (1 << EFX_PHY_CAP_10000FDX);
-	if (speeds & ETH_LINK_SPEED_25G)
+	if (speeds & RTE_ETH_LINK_SPEED_25G)
 		phy_caps |= (1 << EFX_PHY_CAP_25000FDX);
-	if (speeds & ETH_LINK_SPEED_40G)
+	if (speeds & RTE_ETH_LINK_SPEED_40G)
 		phy_caps |= (1 << EFX_PHY_CAP_40000FDX);
-	if (speeds & ETH_LINK_SPEED_50G)
+	if (speeds & RTE_ETH_LINK_SPEED_50G)
 		phy_caps |= (1 << EFX_PHY_CAP_50000FDX);
-	if (speeds & ETH_LINK_SPEED_100G)
+	if (speeds & RTE_ETH_LINK_SPEED_100G)
 		phy_caps |= (1 << EFX_PHY_CAP_100000FDX);
 
 	return phy_caps;
@@ -401,10 +401,10 @@ sfc_set_fw_subvariant(struct sfc_adapter *sa)
 			tx_offloads |= txq_info->offloads;
 	}
 
-	if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			   DEV_TX_OFFLOAD_TCP_CKSUM |
-			   DEV_TX_OFFLOAD_UDP_CKSUM |
-			   DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
 		req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_DEFAULT;
 	else
 		req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_NO_TX_CSUM;
@@ -899,7 +899,7 @@ sfc_attach(struct sfc_adapter *sa)
 	sa->priv.shared->tunnel_encaps =
 		encp->enc_tunnel_encapsulations_supported;
 
-	if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		sa->tso = encp->enc_fw_assisted_tso_v2_enabled ||
 			  encp->enc_tso_v3_enabled;
 		if (!sa->tso)
@@ -908,8 +908,8 @@ sfc_attach(struct sfc_adapter *sa)
 
 	if (sa->tso &&
 	    (sfc_dp_tx_offload_capa(sa->priv.dp_tx) &
-	     (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-	      DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
+	     (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+	      RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
 		sa->tso_encap = encp->enc_fw_assisted_tso_v2_encap_enabled ||
 				encp->enc_tso_v3_enabled;
 		if (!sa->tso_encap)
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index d958fd642fb1..eeb73a7530ef 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -979,11 +979,11 @@ struct sfc_dp_rx sfc_ef100_rx = {
 				  SFC_DP_RX_FEAT_INTR |
 				  SFC_DP_RX_FEAT_STATS,
 	.dev_offload_capa	= 0,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-				  DEV_RX_OFFLOAD_SCATTER |
-				  DEV_RX_OFFLOAD_RSS_HASH,
+	.queue_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				  RTE_ETH_RX_OFFLOAD_SCATTER |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
 	.get_dev_info		= sfc_ef100_rx_get_dev_info,
 	.qsize_up_rings		= sfc_ef100_rx_qsize_up_rings,
 	.qcreate		= sfc_ef100_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index e166fda888b1..67980a587fe4 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -971,16 +971,16 @@ struct sfc_dp_tx sfc_ef100_tx = {
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS |
 				  SFC_DP_TX_FEAT_STATS,
 	.dev_offload_capa	= 0,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_VLAN_INSERT |
-				  DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_MULTI_SEGS |
-				  DEV_TX_OFFLOAD_TCP_TSO |
-				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				  RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				  RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
 	.get_dev_info		= sfc_ef100_get_dev_info,
 	.qsize_up_rings		= sfc_ef100_tx_qsize_up_rings,
 	.qcreate		= sfc_ef100_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 991329e86f01..9ea207cca163 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -746,8 +746,8 @@ struct sfc_dp_rx sfc_ef10_essb_rx = {
 	},
 	.features		= SFC_DP_RX_FEAT_FLOW_FLAG |
 				  SFC_DP_RX_FEAT_FLOW_MARK,
-	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_RSS_HASH,
+	.dev_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
 	.queue_offload_capa	= 0,
 	.get_dev_info		= sfc_ef10_essb_rx_get_dev_info,
 	.pool_ops_supported	= sfc_ef10_essb_rx_pool_ops_supported,
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 49a7d4fb42fd..9aaabd30eee6 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -819,10 +819,10 @@ struct sfc_dp_rx sfc_ef10_rx = {
 	},
 	.features		= SFC_DP_RX_FEAT_MULTI_PROCESS |
 				  SFC_DP_RX_FEAT_INTR,
-	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_RX_OFFLOAD_RSS_HASH,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_SCATTER,
+	.dev_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
+	.queue_offload_capa	= RTE_ETH_RX_OFFLOAD_SCATTER,
 	.get_dev_info		= sfc_ef10_rx_get_dev_info,
 	.qsize_up_rings		= sfc_ef10_rx_qsize_up_rings,
 	.qcreate		= sfc_ef10_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index ed43adb4ca5c..e7da4608bcb0 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -958,9 +958,9 @@ sfc_ef10_tx_qcreate(uint16_t port_id, uint16_t queue_id,
 	if (txq->sw_ring == NULL)
 		goto fail_sw_ring_alloc;
 
-	if (info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-			      DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			      DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) {
+	if (info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+			      RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			      RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) {
 		txq->tsoh = rte_calloc_socket("sfc-ef10-txq-tsoh",
 					      info->txq_entries,
 					      SFC_TSOH_STD_LEN,
@@ -1125,14 +1125,14 @@ struct sfc_dp_tx sfc_ef10_tx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_EF10,
 	},
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
-	.dev_offload_capa	= DEV_TX_OFFLOAD_MULTI_SEGS,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_TSO |
-				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+	.dev_offload_capa	= RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
 	.get_dev_info		= sfc_ef10_get_dev_info,
 	.qsize_up_rings		= sfc_ef10_tx_qsize_up_rings,
 	.qcreate		= sfc_ef10_tx_qcreate,
@@ -1152,11 +1152,11 @@ struct sfc_dp_tx sfc_ef10_simple_tx = {
 		.type		= SFC_DP_TX,
 	},
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
-	.dev_offload_capa	= DEV_TX_OFFLOAD_MBUF_FAST_FREE,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM,
+	.dev_offload_capa	= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM,
 	.get_dev_info		= sfc_ef10_get_dev_info,
 	.qsize_up_rings		= sfc_ef10_tx_qsize_up_rings,
 	.qcreate		= sfc_ef10_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index f5986b610fff..833d833a0408 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -105,19 +105,19 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_vfs = sa->sriov.num_vfs;
 
 	/* Autonegotiation may be disabled */
-	dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_1000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_1G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_10000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_10G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_25000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_25G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_40000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_50000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_100000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
 
 	dev_info->max_rx_queues = sa->rxq_max;
 	dev_info->max_tx_queues = sa->txq_max;
@@ -145,8 +145,8 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->tx_offload_capa = sfc_tx_get_dev_offload_caps(sa) |
 				    dev_info->tx_queue_offload_capa;
 
-	if (dev_info->tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
-		txq_offloads_def |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	if (dev_info->tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+		txq_offloads_def |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->default_txconf.offloads |= txq_offloads_def;
 
@@ -989,16 +989,16 @@ sfc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	switch (link_fc) {
 	case 0:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		break;
 	case EFX_FCNTL_RESPOND:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case EFX_FCNTL_GENERATE:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case (EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE):
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	default:
 		sfc_err(sa, "%s: unexpected flow control value %#x",
@@ -1029,16 +1029,16 @@ sfc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		fcntl = 0;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		fcntl = EFX_FCNTL_RESPOND;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		fcntl = EFX_FCNTL_GENERATE;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		fcntl = EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE;
 		break;
 	default:
@@ -1313,7 +1313,7 @@ sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 	qinfo->conf.rx_deferred_start = rxq_info->deferred_start;
 	qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads;
 	if (rxq_info->type_flags & EFX_RXQ_FLAG_SCATTER) {
-		qinfo->conf.offloads |= DEV_RX_OFFLOAD_SCATTER;
+		qinfo->conf.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 		qinfo->scattered_rx = 1;
 	}
 	qinfo->nb_desc = rxq_info->entries;
@@ -1523,9 +1523,9 @@ static efx_tunnel_protocol_t
 sfc_tunnel_rte_type_to_efx_udp_proto(enum rte_eth_tunnel_type rte_type)
 {
 	switch (rte_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		return EFX_TUNNEL_PROTOCOL_VXLAN;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		return EFX_TUNNEL_PROTOCOL_GENEVE;
 	default:
 		return EFX_TUNNEL_NPROTOS;
@@ -1652,7 +1652,7 @@ sfc_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	/*
 	 * Mapping of hash configuration between RTE and EFX is not one-to-one,
-	 * hence, conversion is done here to derive a correct set of ETH_RSS
+	 * hence, conversion is done here to derive a correct set of RTE_ETH_RSS
 	 * flags which correspond to the active EFX configuration stored
 	 * locally in 'sfc_adapter' and kept up-to-date
 	 */
@@ -1778,8 +1778,8 @@ sfc_dev_rss_reta_query(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	for (entry = 0; entry < reta_size; entry++) {
-		int grp = entry / RTE_RETA_GROUP_SIZE;
-		int grp_idx = entry % RTE_RETA_GROUP_SIZE;
+		int grp = entry / RTE_ETH_RETA_GROUP_SIZE;
+		int grp_idx = entry % RTE_ETH_RETA_GROUP_SIZE;
 
 		if ((reta_conf[grp].mask >> grp_idx) & 1)
 			reta_conf[grp].reta[grp_idx] = rss->tbl[entry];
@@ -1828,10 +1828,10 @@ sfc_dev_rss_reta_update(struct rte_eth_dev *dev,
 	rte_memcpy(rss_tbl_new, rss->tbl, sizeof(rss->tbl));
 
 	for (entry = 0; entry < reta_size; entry++) {
-		int grp_idx = entry % RTE_RETA_GROUP_SIZE;
+		int grp_idx = entry % RTE_ETH_RETA_GROUP_SIZE;
 		struct rte_eth_rss_reta_entry64 *grp;
 
-		grp = &reta_conf[entry / RTE_RETA_GROUP_SIZE];
+		grp = &reta_conf[entry / RTE_ETH_RETA_GROUP_SIZE];
 
 		if (grp->mask & (1ull << grp_idx)) {
 			if (grp->reta[grp_idx] >= rss->channels) {
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 8096af56739f..be2dfe778a0d 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -392,7 +392,7 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = NULL;
 	const struct rte_flow_item_vlan *mask = NULL;
 	const struct rte_flow_item_vlan supp_mask = {
-		.tci = rte_cpu_to_be_16(ETH_VLAN_ID_MAX),
+		.tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
 		.inner_type = RTE_BE16(0xffff),
 	};
 
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index 5320d8903dac..27b02b1119fb 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -573,66 +573,66 @@ sfc_port_link_mode_to_info(efx_link_mode_t link_mode,
 
 	memset(link_info, 0, sizeof(*link_info));
 	if ((link_mode == EFX_LINK_DOWN) || (link_mode == EFX_LINK_UNKNOWN))
-		link_info->link_status = ETH_LINK_DOWN;
+		link_info->link_status = RTE_ETH_LINK_DOWN;
 	else
-		link_info->link_status = ETH_LINK_UP;
+		link_info->link_status = RTE_ETH_LINK_UP;
 
 	switch (link_mode) {
 	case EFX_LINK_10HDX:
-		link_info->link_speed  = ETH_SPEED_NUM_10M;
-		link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_10M;
+		link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case EFX_LINK_10FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_10M;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_10M;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_100HDX:
-		link_info->link_speed  = ETH_SPEED_NUM_100M;
-		link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_100M;
+		link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case EFX_LINK_100FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_100M;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_100M;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_1000HDX:
-		link_info->link_speed  = ETH_SPEED_NUM_1G;
-		link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_1G;
+		link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case EFX_LINK_1000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_1G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_1G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_10000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_10G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_10G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_25000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_25G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_25G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_40000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_40G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_40G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_50000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_50G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_50G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_100000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_100G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_100G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	default:
 		SFC_ASSERT(B_FALSE);
 		/* FALLTHROUGH */
 	case EFX_LINK_UNKNOWN:
 	case EFX_LINK_DOWN:
-		link_info->link_speed  = ETH_SPEED_NUM_NONE;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_NONE;
 		link_info->link_duplex = 0;
 		break;
 	}
 
-	link_info->link_autoneg = ETH_LINK_AUTONEG;
+	link_info->link_autoneg = RTE_ETH_LINK_AUTONEG;
 }
 
 int
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index 2500b14cb006..9d88d554c1ba 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -405,7 +405,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
 	}
 
 	switch (conf->rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		if (nb_rx_queues != 1) {
 			sfcr_err(sr, "Rx RSS is not supported with %u queues",
 				 nb_rx_queues);
@@ -420,7 +420,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
 			ret = -EINVAL;
 		}
 		break;
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 		break;
 	default:
 		sfcr_err(sr, "Rx mode MQ modes other than RSS not supported");
@@ -428,7 +428,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
 		break;
 	}
 
-	if (conf->txmode.mq_mode != ETH_MQ_TX_NONE) {
+	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
 		sfcr_err(sr, "Tx mode MQ modes not supported");
 		ret = -EINVAL;
 	}
@@ -553,8 +553,8 @@ sfc_repr_dev_link_update(struct rte_eth_dev *dev,
 		sfc_port_link_mode_to_info(EFX_LINK_UNKNOWN, &link);
 	} else {
 		memset(&link, 0, sizeof(link));
-		link.link_status = ETH_LINK_UP;
-		link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		link.link_status = RTE_ETH_LINK_UP;
+		link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index c60ef17a922a..23df27c8f45a 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -648,9 +648,9 @@ struct sfc_dp_rx sfc_efx_rx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_RX_EFX,
 	},
 	.features		= SFC_DP_RX_FEAT_INTR,
-	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_RSS_HASH,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_SCATTER,
+	.dev_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
+	.queue_offload_capa	= RTE_ETH_RX_OFFLOAD_SCATTER,
 	.qsize_up_rings		= sfc_efx_rx_qsize_up_rings,
 	.qcreate		= sfc_efx_rx_qcreate,
 	.qdestroy		= sfc_efx_rx_qdestroy,
@@ -931,7 +931,7 @@ sfc_rx_get_offload_mask(struct sfc_adapter *sa)
 	uint64_t no_caps = 0;
 
 	if (encp->enc_tunnel_encapsulations_supported == 0)
-		no_caps |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		no_caps |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 	return ~no_caps;
 }
@@ -1140,7 +1140,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 
 	if (!sfc_rx_check_scatter(sa->port.pdu, buf_size,
 				  encp->enc_rx_prefix_size,
-				  (offloads & DEV_RX_OFFLOAD_SCATTER),
+				  (offloads & RTE_ETH_RX_OFFLOAD_SCATTER),
 				  encp->enc_rx_scatter_max,
 				  &error)) {
 		sfc_err(sa, "RxQ %d (internal %u) MTU check failed: %s",
@@ -1166,15 +1166,15 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 		rxq_info->type = EFX_RXQ_TYPE_DEFAULT;
 
 	rxq_info->type_flags |=
-		(offloads & DEV_RX_OFFLOAD_SCATTER) ?
+		(offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ?
 		EFX_RXQ_FLAG_SCATTER : EFX_RXQ_FLAG_NONE;
 
 	if ((encp->enc_tunnel_encapsulations_supported != 0) &&
 	    (sfc_dp_rx_offload_capa(sa->priv.dp_rx) &
-	     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+	     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
 		rxq_info->type_flags |= EFX_RXQ_FLAG_INNER_CLASSES;
 
-	if (offloads & DEV_RX_OFFLOAD_RSS_HASH)
+	if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)
 		rxq_info->type_flags |= EFX_RXQ_FLAG_RSS_HASH;
 
 	if ((sa->negotiated_rx_metadata & RTE_ETH_RX_METADATA_USER_FLAG) != 0)
@@ -1211,7 +1211,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 	rxq_info->refill_mb_pool = mb_pool;
 
 	if (rss->hash_support == EFX_RX_HASH_AVAILABLE && rss->channels > 0 &&
-	    (offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	    (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		rxq_info->rxq_flags = SFC_RXQ_FLAG_RSS_HASH;
 	else
 		rxq_info->rxq_flags = 0;
@@ -1313,19 +1313,19 @@ sfc_rx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
  * Mapping between RTE RSS hash functions and their EFX counterparts.
  */
 static const struct sfc_rss_hf_rte_to_efx sfc_rss_hf_map[] = {
-	{ ETH_RSS_NONFRAG_IPV4_TCP,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 	  EFX_RX_HASH(IPV4_TCP, 4TUPLE) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 	  EFX_RX_HASH(IPV4_UDP, 4TUPLE) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX,
 	  EFX_RX_HASH(IPV6_TCP, 4TUPLE) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX,
 	  EFX_RX_HASH(IPV6_UDP, 4TUPLE) },
-	{ ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER,
+	{ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	  EFX_RX_HASH(IPV4_TCP, 2TUPLE) | EFX_RX_HASH(IPV4_UDP, 2TUPLE) |
 	  EFX_RX_HASH(IPV4, 2TUPLE) },
-	{ ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER |
-	  ETH_RSS_IPV6_EX,
+	{ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+	  RTE_ETH_RSS_IPV6_EX,
 	  EFX_RX_HASH(IPV6_TCP, 2TUPLE) | EFX_RX_HASH(IPV6_UDP, 2TUPLE) |
 	  EFX_RX_HASH(IPV6, 2TUPLE) }
 };
@@ -1645,10 +1645,10 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
 	int rc = 0;
 
 	switch (rxmode->mq_mode) {
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 		/* No special checks are required */
 		break;
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		if (rss->context_type == EFX_RX_SCALE_UNAVAILABLE) {
 			sfc_err(sa, "RSS is not available");
 			rc = EINVAL;
@@ -1665,16 +1665,16 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
 	 * so unsupported offloads cannot be added as the result of
 	 * below check.
 	 */
-	if ((rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM) !=
-	    (offloads_supported & DEV_RX_OFFLOAD_CHECKSUM)) {
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM) !=
+	    (offloads_supported & RTE_ETH_RX_OFFLOAD_CHECKSUM)) {
 		sfc_warn(sa, "Rx checksum offloads cannot be disabled - always on (IPv4/TCP/UDP)");
-		rxmode->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 	}
 
-	if ((offloads_supported & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
-	    (~rxmode->offloads & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+	if ((offloads_supported & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+	    (~rxmode->offloads & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 		sfc_warn(sa, "Rx outer IPv4 checksum offload cannot be disabled - always on");
-		rxmode->offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 	}
 
 	return rc;
@@ -1820,7 +1820,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	}
 
 configure_rss:
-	rss->channels = (dev_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) ?
+	rss->channels = (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) ?
 			 MIN(sas->ethdev_rxq_count, EFX_MAXRSS) : 0;
 
 	if (rss->channels > 0) {
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 13392cdd5a09..0273788c20ce 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -54,23 +54,23 @@ sfc_tx_get_offload_mask(struct sfc_adapter *sa)
 	uint64_t no_caps = 0;
 
 	if (!encp->enc_hw_tx_insert_vlan_enabled)
-		no_caps |= DEV_TX_OFFLOAD_VLAN_INSERT;
+		no_caps |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if (!encp->enc_tunnel_encapsulations_supported)
-		no_caps |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+		no_caps |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 	if (!sa->tso)
-		no_caps |= DEV_TX_OFFLOAD_TCP_TSO;
+		no_caps |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (!sa->tso_encap ||
 	    (encp->enc_tunnel_encapsulations_supported &
 	     (1u << EFX_TUNNEL_PROTOCOL_VXLAN)) == 0)
-		no_caps |= DEV_TX_OFFLOAD_VXLAN_TNL_TSO;
+		no_caps |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
 
 	if (!sa->tso_encap ||
 	    (encp->enc_tunnel_encapsulations_supported &
 	     (1u << EFX_TUNNEL_PROTOCOL_GENEVE)) == 0)
-		no_caps |= DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+		no_caps |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
 
 	return ~no_caps;
 }
@@ -114,8 +114,8 @@ sfc_tx_qcheck_conf(struct sfc_adapter *sa, unsigned int txq_max_fill_level,
 	}
 
 	/* We either perform both TCP and UDP offload, or no offload at all */
-	if (((offloads & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) !=
-	    ((offloads & DEV_TX_OFFLOAD_UDP_CKSUM) == 0)) {
+	if (((offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) !=
+	    ((offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0)) {
 		sfc_err(sa, "TCP and UDP offloads can't be set independently");
 		rc = EINVAL;
 	}
@@ -309,7 +309,7 @@ sfc_tx_check_mode(struct sfc_adapter *sa, const struct rte_eth_txmode *txmode)
 	int rc = 0;
 
 	switch (txmode->mq_mode) {
-	case ETH_MQ_TX_NONE:
+	case RTE_ETH_MQ_TX_NONE:
 		break;
 	default:
 		sfc_err(sa, "Tx multi-queue mode %u not supported",
@@ -529,23 +529,23 @@ sfc_tx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 	if (rc != 0)
 		goto fail_ev_qstart;
 
-	if (txq_info->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+	if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		flags |= EFX_TXQ_CKSUM_IPV4;
 
-	if (txq_info->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+	if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 		flags |= EFX_TXQ_CKSUM_INNER_IPV4;
 
-	if ((txq_info->offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
-	    (txq_info->offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+	if ((txq_info->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+	    (txq_info->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
 		flags |= EFX_TXQ_CKSUM_TCPUDP;
 
-		if (offloads_supported & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+		if (offloads_supported & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 			flags |= EFX_TXQ_CKSUM_INNER_TCPUDP;
 	}
 
-	if (txq_info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+	if (txq_info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
 		flags |= EFX_TXQ_FATSOV2;
 
 	rc = efx_tx_qcreate(sa->nic, txq->hw_index, 0, &txq->mem,
@@ -876,9 +876,9 @@ sfc_efx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		/*
 		 * Here VLAN TCI is expected to be zero if no
-		 * DEV_TX_OFFLOAD_VLAN_INSERT capability is advertised;
+		 * RTE_ETH_TX_OFFLOAD_VLAN_INSERT capability is advertised;
 		 * if the calling app ignores the absence of
-		 * DEV_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
+		 * RTE_ETH_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
 		 * TX_ERROR will occur
 		 */
 		pkt_descs += sfc_efx_tx_maybe_insert_tag(txq, m_seg, &pend);
@@ -1242,13 +1242,13 @@ struct sfc_dp_tx sfc_efx_tx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_TX_EFX,
 	},
 	.features		= 0,
-	.dev_offload_capa	= DEV_TX_OFFLOAD_VLAN_INSERT |
-				  DEV_TX_OFFLOAD_MULTI_SEGS,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_TSO,
+	.dev_offload_capa	= RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				  RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_TSO,
 	.qsize_up_rings		= sfc_efx_tx_qsize_up_rings,
 	.qcreate		= sfc_efx_tx_qcreate,
 	.qdestroy		= sfc_efx_tx_qdestroy,
diff --git a/drivers/net/softnic/rte_eth_softnic.c b/drivers/net/softnic/rte_eth_softnic.c
index b3b55b9035b1..3ef33818a9e0 100644
--- a/drivers/net/softnic/rte_eth_softnic.c
+++ b/drivers/net/softnic/rte_eth_softnic.c
@@ -173,7 +173,7 @@ pmd_dev_start(struct rte_eth_dev *dev)
 		return status;
 
 	/* Link UP */
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -184,7 +184,7 @@ pmd_dev_stop(struct rte_eth_dev *dev)
 	struct pmd_internals *p = dev->data->dev_private;
 
 	/* Link DOWN */
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	/* Firmware */
 	softnic_pipeline_disable_all(p);
@@ -386,10 +386,10 @@ pmd_ethdev_register(struct rte_vdev_device *vdev,
 
 	/* dev->data */
 	dev->data->dev_private = dev_private;
-	dev->data->dev_link.link_speed = ETH_SPEED_NUM_100G;
-	dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+	dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	dev->data->mac_addrs = &eth_addr;
 	dev->data->promiscuous = 1;
 	dev->data->numa_node = params->cpu_id;
diff --git a/drivers/net/szedata2/rte_eth_szedata2.c b/drivers/net/szedata2/rte_eth_szedata2.c
index 3c6a285e3c5e..6a084e3e1b1b 100644
--- a/drivers/net/szedata2/rte_eth_szedata2.c
+++ b/drivers/net/szedata2/rte_eth_szedata2.c
@@ -1042,7 +1042,7 @@ static int
 eth_dev_configure(struct rte_eth_dev *dev)
 {
 	struct rte_eth_dev_data *data = dev->data;
-	if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		dev->rx_pkt_burst = eth_szedata2_rx_scattered;
 		data->scattered_rx = 1;
 	} else {
@@ -1064,11 +1064,11 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_rx_queues = internals->max_rx_queues;
 	dev_info->max_tx_queues = internals->max_tx_queues;
 	dev_info->min_rx_bufsize = 0;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
 	dev_info->tx_offload_capa = 0;
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->tx_queue_offload_capa = 0;
-	dev_info->speed_capa = ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -1202,10 +1202,10 @@ eth_link_update(struct rte_eth_dev *dev,
 
 	memset(&link, 0, sizeof(link));
 
-	link.link_speed = ETH_SPEED_NUM_100G;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_status = ETH_LINK_UP;
-	link.link_autoneg = ETH_LINK_FIXED;
+	link.link_speed = RTE_ETH_SPEED_NUM_100G;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 
 	rte_eth_linkstatus_set(dev, &link);
 	return 0;
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index e4f1ad45219e..5d5350d78e03 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -70,16 +70,16 @@
 
 #define TAP_IOV_DEFAULT_MAX 1024
 
-#define TAP_RX_OFFLOAD (DEV_RX_OFFLOAD_SCATTER |	\
-			DEV_RX_OFFLOAD_IPV4_CKSUM |	\
-			DEV_RX_OFFLOAD_UDP_CKSUM |	\
-			DEV_RX_OFFLOAD_TCP_CKSUM)
+#define TAP_RX_OFFLOAD (RTE_ETH_RX_OFFLOAD_SCATTER |	\
+			RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	\
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
 
-#define TAP_TX_OFFLOAD (DEV_TX_OFFLOAD_MULTI_SEGS |	\
-			DEV_TX_OFFLOAD_IPV4_CKSUM |	\
-			DEV_TX_OFFLOAD_UDP_CKSUM |	\
-			DEV_TX_OFFLOAD_TCP_CKSUM |	\
-			DEV_TX_OFFLOAD_TCP_TSO)
+#define TAP_TX_OFFLOAD (RTE_ETH_TX_OFFLOAD_MULTI_SEGS |	\
+			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |	\
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |	\
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM |	\
+			RTE_ETH_TX_OFFLOAD_TCP_TSO)
 
 static int tap_devices_count;
 
@@ -97,10 +97,10 @@ static const char *valid_arguments[] = {
 static volatile uint32_t tap_trigger;	/* Rx trigger */
 
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 static void
@@ -433,7 +433,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 		len = readv(process_private->rxq_fds[rxq->queue_id],
 			*rxq->iovecs,
-			1 + (rxq->rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ?
+			1 + (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ?
 			     rxq->nb_rx_desc : 1));
 		if (len < (int)sizeof(struct tun_pi))
 			break;
@@ -489,7 +489,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		seg->next = NULL;
 		mbuf->packet_type = rte_net_get_ptype(mbuf, NULL,
 						      RTE_PTYPE_ALL_MASK);
-		if (rxq->rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+		if (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 			tap_verify_csum(mbuf);
 
 		/* account for the receive frame */
@@ -866,7 +866,7 @@ tap_link_set_down(struct rte_eth_dev *dev)
 	struct pmd_internals *pmd = dev->data->dev_private;
 	struct ifreq ifr = { .ifr_flags = IFF_UP };
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 0, LOCAL_ONLY);
 }
 
@@ -876,7 +876,7 @@ tap_link_set_up(struct rte_eth_dev *dev)
 	struct pmd_internals *pmd = dev->data->dev_private;
 	struct ifreq ifr = { .ifr_flags = IFF_UP };
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 1, LOCAL_AND_REMOTE);
 }
 
@@ -956,30 +956,30 @@ tap_dev_speed_capa(void)
 	uint32_t speed = pmd_link.link_speed;
 	uint32_t capa = 0;
 
-	if (speed >= ETH_SPEED_NUM_10M)
-		capa |= ETH_LINK_SPEED_10M;
-	if (speed >= ETH_SPEED_NUM_100M)
-		capa |= ETH_LINK_SPEED_100M;
-	if (speed >= ETH_SPEED_NUM_1G)
-		capa |= ETH_LINK_SPEED_1G;
-	if (speed >= ETH_SPEED_NUM_5G)
-		capa |= ETH_LINK_SPEED_2_5G;
-	if (speed >= ETH_SPEED_NUM_5G)
-		capa |= ETH_LINK_SPEED_5G;
-	if (speed >= ETH_SPEED_NUM_10G)
-		capa |= ETH_LINK_SPEED_10G;
-	if (speed >= ETH_SPEED_NUM_20G)
-		capa |= ETH_LINK_SPEED_20G;
-	if (speed >= ETH_SPEED_NUM_25G)
-		capa |= ETH_LINK_SPEED_25G;
-	if (speed >= ETH_SPEED_NUM_40G)
-		capa |= ETH_LINK_SPEED_40G;
-	if (speed >= ETH_SPEED_NUM_50G)
-		capa |= ETH_LINK_SPEED_50G;
-	if (speed >= ETH_SPEED_NUM_56G)
-		capa |= ETH_LINK_SPEED_56G;
-	if (speed >= ETH_SPEED_NUM_100G)
-		capa |= ETH_LINK_SPEED_100G;
+	if (speed >= RTE_ETH_SPEED_NUM_10M)
+		capa |= RTE_ETH_LINK_SPEED_10M;
+	if (speed >= RTE_ETH_SPEED_NUM_100M)
+		capa |= RTE_ETH_LINK_SPEED_100M;
+	if (speed >= RTE_ETH_SPEED_NUM_1G)
+		capa |= RTE_ETH_LINK_SPEED_1G;
+	if (speed >= RTE_ETH_SPEED_NUM_5G)
+		capa |= RTE_ETH_LINK_SPEED_2_5G;
+	if (speed >= RTE_ETH_SPEED_NUM_5G)
+		capa |= RTE_ETH_LINK_SPEED_5G;
+	if (speed >= RTE_ETH_SPEED_NUM_10G)
+		capa |= RTE_ETH_LINK_SPEED_10G;
+	if (speed >= RTE_ETH_SPEED_NUM_20G)
+		capa |= RTE_ETH_LINK_SPEED_20G;
+	if (speed >= RTE_ETH_SPEED_NUM_25G)
+		capa |= RTE_ETH_LINK_SPEED_25G;
+	if (speed >= RTE_ETH_SPEED_NUM_40G)
+		capa |= RTE_ETH_LINK_SPEED_40G;
+	if (speed >= RTE_ETH_SPEED_NUM_50G)
+		capa |= RTE_ETH_LINK_SPEED_50G;
+	if (speed >= RTE_ETH_SPEED_NUM_56G)
+		capa |= RTE_ETH_LINK_SPEED_56G;
+	if (speed >= RTE_ETH_SPEED_NUM_100G)
+		capa |= RTE_ETH_LINK_SPEED_100G;
 
 	return capa;
 }
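
Side note on tap_dev_speed_capa() above: the mapping is cumulative (every
RTE_ETH_LINK_SPEED_* bit at or below the current link speed gets set), and
the 2.5G entry tests RTE_ETH_SPEED_NUM_5G, carried over verbatim from the
old code since this patch only renames. A condensed sketch of the same
pattern, with a hypothetical helper name that is not part of the driver:

#include <rte_common.h>
#include <rte_ethdev.h>

static uint32_t
speed_to_capa(uint32_t speed)
{
	static const struct { uint32_t num; uint32_t capa; } map[] = {
		{ RTE_ETH_SPEED_NUM_10M,  RTE_ETH_LINK_SPEED_10M  },
		{ RTE_ETH_SPEED_NUM_100M, RTE_ETH_LINK_SPEED_100M },
		{ RTE_ETH_SPEED_NUM_1G,   RTE_ETH_LINK_SPEED_1G   },
		{ RTE_ETH_SPEED_NUM_10G,  RTE_ETH_LINK_SPEED_10G  },
	};
	uint32_t capa = 0;
	unsigned int i;

	/* set every capability bit at or below the given speed */
	for (i = 0; i < RTE_DIM(map); i++)
		if (speed >= map[i].num)
			capa |= map[i].capa;
	return capa;
}
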
@@ -1196,15 +1196,15 @@ tap_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 		tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, REMOTE_ONLY);
 		if (!(ifr.ifr_flags & IFF_UP) ||
 		    !(ifr.ifr_flags & IFF_RUNNING)) {
-			dev_link->link_status = ETH_LINK_DOWN;
+			dev_link->link_status = RTE_ETH_LINK_DOWN;
 			return 0;
 		}
 	}
 	tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, LOCAL_ONLY);
 	dev_link->link_status =
 		((ifr.ifr_flags & IFF_UP) && (ifr.ifr_flags & IFF_RUNNING) ?
-		 ETH_LINK_UP :
-		 ETH_LINK_DOWN);
+		 RTE_ETH_LINK_UP :
+		 RTE_ETH_LINK_DOWN);
 	return 0;
 }
 
@@ -1391,7 +1391,7 @@ tap_gso_ctx_setup(struct rte_gso_ctx *gso_ctx, struct rte_eth_dev *dev)
 	int ret;
 
 	/* initialize GSO context */
-	gso_types = DEV_TX_OFFLOAD_TCP_TSO;
+	gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	if (!pmd->gso_ctx_mp) {
 		/*
 		 * Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE
@@ -1606,9 +1606,9 @@ tap_tx_queue_setup(struct rte_eth_dev *dev,
 
 	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
 	txq->csum = !!(offloads &
-			(DEV_TX_OFFLOAD_IPV4_CKSUM |
-			 DEV_TX_OFFLOAD_UDP_CKSUM |
-			 DEV_TX_OFFLOAD_TCP_CKSUM));
+			(RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			 RTE_ETH_TX_OFFLOAD_TCP_CKSUM));
 
 	ret = tap_setup_queue(dev, internals, tx_queue_id, 0);
 	if (ret == -1)
@@ -1760,7 +1760,7 @@ static int
 tap_flow_ctrl_get(struct rte_eth_dev *dev __rte_unused,
 		  struct rte_eth_fc_conf *fc_conf)
 {
-	fc_conf->mode = RTE_FC_NONE;
+	fc_conf->mode = RTE_ETH_FC_NONE;
 	return 0;
 }
 
@@ -1768,7 +1768,7 @@ static int
 tap_flow_ctrl_set(struct rte_eth_dev *dev __rte_unused,
 		  struct rte_eth_fc_conf *fc_conf)
 {
-	if (fc_conf->mode != RTE_FC_NONE)
+	if (fc_conf->mode != RTE_ETH_FC_NONE)
 		return -ENOTSUP;
 	return 0;
 }
@@ -2262,7 +2262,7 @@ rte_pmd_tun_probe(struct rte_vdev_device *dev)
 			}
 		}
 	}
-	pmd_link.link_speed = ETH_SPEED_NUM_10G;
+	pmd_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 
 	TAP_LOG(DEBUG, "Initializing pmd_tun for %s", name);
 
@@ -2436,7 +2436,7 @@ rte_pmd_tap_probe(struct rte_vdev_device *dev)
 		return 0;
 	}
 
-	speed = ETH_SPEED_NUM_10G;
+	speed = RTE_ETH_SPEED_NUM_10G;
 
 	/* use tap%d which causes kernel to choose next available */
 	strlcpy(tap_name, DEFAULT_TAP_NAME "%d", RTE_ETH_NAME_MAX_LEN);
diff --git a/drivers/net/tap/tap_rss.h b/drivers/net/tap/tap_rss.h
index 176e7180bdaa..48c151cf6b68 100644
--- a/drivers/net/tap/tap_rss.h
+++ b/drivers/net/tap/tap_rss.h
@@ -13,7 +13,7 @@
 #define TAP_RSS_HASH_KEY_SIZE 40
 
 /* Supported RSS */
-#define TAP_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP))
+#define TAP_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP))
 
 /* hashed fields for RSS */
 enum hash_field {
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 328d6d56d921..38a2ddc633b5 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -61,14 +61,14 @@ nicvf_link_status_update(struct nicvf *nic,
 {
 	memset(link, 0, sizeof(*link));
 
-	link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	if (nic->duplex == NICVF_HALF_DUPLEX)
-		link->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	else if (nic->duplex == NICVF_FULL_DUPLEX)
-		link->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link->link_speed = nic->speed;
-	link->link_autoneg = ETH_LINK_AUTONEG;
+	link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 }
 
 static void
@@ -134,7 +134,7 @@ nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		/* rte_eth_link_get() might need to wait up to 9 seconds */
 		for (i = 0; i < MAX_CHECK_TIME; i++) {
 			nicvf_link_status_update(nic, &link);
-			if (link.link_status == ETH_LINK_UP)
+			if (link.link_status == RTE_ETH_LINK_UP)
 				break;
 			rte_delay_ms(CHECK_INTERVAL);
 		}
@@ -390,35 +390,35 @@ nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
 {
 	uint64_t nic_rss = 0;
 
-	if (ethdev_rss & ETH_RSS_IPV4)
+	if (ethdev_rss & RTE_ETH_RSS_IPV4)
 		nic_rss |= RSS_IP_ENA;
 
-	if (ethdev_rss & ETH_RSS_IPV6)
+	if (ethdev_rss & RTE_ETH_RSS_IPV6)
 		nic_rss |= RSS_IP_ENA;
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
 
-	if (ethdev_rss & ETH_RSS_PORT)
+	if (ethdev_rss & RTE_ETH_RSS_PORT)
 		nic_rss |= RSS_L2_EXTENDED_HASH_ENA;
 
 	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
-		if (ethdev_rss & ETH_RSS_VXLAN)
+		if (ethdev_rss & RTE_ETH_RSS_VXLAN)
 			nic_rss |= RSS_TUN_VXLAN_ENA;
 
-		if (ethdev_rss & ETH_RSS_GENEVE)
+		if (ethdev_rss & RTE_ETH_RSS_GENEVE)
 			nic_rss |= RSS_TUN_GENEVE_ENA;
 
-		if (ethdev_rss & ETH_RSS_NVGRE)
+		if (ethdev_rss & RTE_ETH_RSS_NVGRE)
 			nic_rss |= RSS_TUN_NVGRE_ENA;
 	}
 
@@ -431,28 +431,28 @@ nicvf_rss_nic_to_ethdev(struct nicvf *nic,  uint64_t nic_rss)
 	uint64_t ethdev_rss = 0;
 
 	if (nic_rss & RSS_IP_ENA)
-		ethdev_rss |= (ETH_RSS_IPV4 | ETH_RSS_IPV6);
+		ethdev_rss |= (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6);
 
 	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_TCP_ENA))
-		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_TCP |
-				ETH_RSS_NONFRAG_IPV6_TCP);
+		ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP);
 
 	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_UDP_ENA))
-		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_UDP |
-				ETH_RSS_NONFRAG_IPV6_UDP);
+		ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP);
 
 	if (nic_rss & RSS_L2_EXTENDED_HASH_ENA)
-		ethdev_rss |= ETH_RSS_PORT;
+		ethdev_rss |= RTE_ETH_RSS_PORT;
 
 	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
 		if (nic_rss & RSS_TUN_VXLAN_ENA)
-			ethdev_rss |= ETH_RSS_VXLAN;
+			ethdev_rss |= RTE_ETH_RSS_VXLAN;
 
 		if (nic_rss & RSS_TUN_GENEVE_ENA)
-			ethdev_rss |= ETH_RSS_GENEVE;
+			ethdev_rss |= RTE_ETH_RSS_GENEVE;
 
 		if (nic_rss & RSS_TUN_NVGRE_ENA)
-			ethdev_rss |= ETH_RSS_NVGRE;
+			ethdev_rss |= RTE_ETH_RSS_NVGRE;
 	}
 	return ethdev_rss;
 }
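
For context, the nicvf_rss_ethdev_to_nic()/nicvf_rss_nic_to_ethdev() pair
above translates between the generic RTE_ETH_RSS_* flags and the hardware
enable bits. A minimal application-side sketch of requesting such hash
fields, assuming a single Rx/Tx queue pair and a hypothetical helper name:

#include <rte_ethdev.h>

static int
enable_ip_udp_tcp_rss(uint16_t port_id, const struct rte_eth_conf *base)
{
	struct rte_eth_conf conf = *base;

	/* hash fields the driver maps to RSS_IP/UDP/TCP enable bits */
	conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
	conf.rx_adv_conf.rss_conf.rss_hf =
		RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP |
		RTE_ETH_RSS_NONFRAG_IPV4_TCP;

	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}
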
@@ -479,8 +479,8 @@ nicvf_dev_reta_query(struct rte_eth_dev *dev,
 		return ret;
 
 	/* Copy RETA table */
-	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = tbl[j];
 	}
@@ -509,8 +509,8 @@ nicvf_dev_reta_update(struct rte_eth_dev *dev,
 		return ret;
 
 	/* Copy RETA table */
-	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				tbl[j] = reta_conf[i].reta[j];
 	}
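
The loop above uses the standard RETA group indexing: entry i lives in
group i / RTE_ETH_RETA_GROUP_SIZE at offset i % RTE_ETH_RETA_GROUP_SIZE,
guarded by the per-group mask. A minimal caller-side sketch (hypothetical
helper name, assumes reta_size <= RTE_ETH_RSS_RETA_SIZE_128):

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

static int
spread_reta(uint16_t port_id, uint16_t nb_rxq, uint16_t reta_size)
{
	struct rte_eth_rss_reta_entry64 conf[RTE_ETH_RSS_RETA_SIZE_128 /
					     RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	memset(conf, 0, sizeof(conf));
	for (i = 0; i < reta_size; i++) {
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

		conf[idx].mask |= UINT64_C(1) << shift; /* mark entry valid */
		conf[idx].reta[shift] = i % nb_rxq;     /* round-robin spread */
	}
	return rte_eth_dev_rss_reta_update(port_id, conf, reta_size);
}
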
@@ -807,9 +807,9 @@ nicvf_configure_rss(struct rte_eth_dev *dev)
 		    dev->data->nb_rx_queues,
 		    dev->data->dev_conf.lpbk_mode, rsshf);
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
 		ret = nicvf_rss_term(nic);
-	else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+	else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 		ret = nicvf_rss_config(nic, dev->data->nb_rx_queues, rsshf);
 	if (ret)
 		PMD_INIT_LOG(ERR, "Failed to configure RSS %d", ret);
@@ -870,7 +870,7 @@ nicvf_set_tx_function(struct rte_eth_dev *dev)
 
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		txq = dev->data->tx_queues[i];
-		if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
 			multiseg = true;
 			break;
 		}
@@ -992,7 +992,7 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
 	txq->offloads = offloads;
 
-	is_single_pool = !!(offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE);
+	is_single_pool = !!(offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE);
 
 	/* Choose optimum free threshold value for multipool case */
 	if (!is_single_pool) {
@@ -1382,11 +1382,11 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	PMD_INIT_FUNC_TRACE();
 
 	/* Autonegotiation may be disabled */
-	dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
-	dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
-				 ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+				 RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
 	if (nicvf_hw_version(nic) != PCI_SUB_DEVICE_ID_CN81XX_NICVF)
-		dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
 
 	dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU;
 	dev_info->max_rx_pktlen = NIC_HW_MAX_MTU + RTE_ETHER_HDR_LEN;
@@ -1415,10 +1415,10 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = NICVF_DEFAULT_TX_FREE_THRESH,
-		.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE |
-			DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM   |
-			DEV_TX_OFFLOAD_UDP_CKSUM          |
-			DEV_TX_OFFLOAD_TCP_CKSUM,
+		.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+			RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM   |
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM          |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM,
 	};
 
 	return 0;
@@ -1582,8 +1582,8 @@ nicvf_vf_start(struct rte_eth_dev *dev, struct nicvf *nic, uint32_t rbdrsz)
 		     nic->rbdr->tail, nb_rbdr_desc, nic->vf_id);
 
 	/* Configure VLAN Strip */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	ret = nicvf_vlan_offload_config(dev, mask);
 
 	/* Based on the packet type(IPv4 or IPv6), the nicvf HW aligns L3 data
@@ -1711,7 +1711,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
 	/* Setup scatter mode if needed by jumbo */
 	if (dev->data->mtu + (uint32_t)NIC_HW_L2_OVERHEAD + 2 * VLAN_TAG_SIZE > buffsz)
 		dev->data->scattered_rx = 1;
-	if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
+	if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) != 0)
 		dev->data->scattered_rx = 1;
 
 	/* Setup MTU */
@@ -1896,8 +1896,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (!rte_eal_has_hugepages()) {
 		PMD_INIT_LOG(INFO, "Huge page is not configured");
@@ -1909,8 +1909,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-		rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+		rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		PMD_INIT_LOG(INFO, "Unsupported rx qmode %d", rxmode->mq_mode);
 		return -EINVAL;
 	}
@@ -1920,7 +1920,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(INFO, "Setting link speed/duplex not supported");
 		return -EINVAL;
 	}
@@ -1955,7 +1955,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		nic->offload_cksum = 1;
 
 	PMD_INIT_LOG(DEBUG, "Configured ethdev port%d hwcap=0x%" PRIx64,
@@ -2032,8 +2032,8 @@ nicvf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	struct nicvf *nic = nicvf_pmd_priv(dev);
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			nicvf_vlan_hw_strip(nic, true);
 		else
 			nicvf_vlan_hw_strip(nic, false);
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index 5d38750d6313..cb474e26b81e 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -16,32 +16,32 @@
 #define NICVF_UNKNOWN_DUPLEX		0xff
 
 #define NICVF_RSS_OFFLOAD_PASS1 ( \
-	ETH_RSS_PORT | \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_PORT | \
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define NICVF_RSS_OFFLOAD_TUNNEL ( \
-	ETH_RSS_VXLAN | \
-	ETH_RSS_GENEVE | \
-	ETH_RSS_NVGRE)
+	RTE_ETH_RSS_VXLAN | \
+	RTE_ETH_RSS_GENEVE | \
+	RTE_ETH_RSS_NVGRE)
 
 #define NICVF_TX_OFFLOAD_CAPA ( \
-	DEV_TX_OFFLOAD_IPV4_CKSUM       | \
-	DEV_TX_OFFLOAD_UDP_CKSUM        | \
-	DEV_TX_OFFLOAD_TCP_CKSUM        | \
-	DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-	DEV_TX_OFFLOAD_MBUF_FAST_FREE   | \
-	DEV_TX_OFFLOAD_MULTI_SEGS)
+	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM       | \
+	RTE_ETH_TX_OFFLOAD_UDP_CKSUM        | \
+	RTE_ETH_TX_OFFLOAD_TCP_CKSUM        | \
+	RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+	RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE   | \
+	RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define NICVF_RX_OFFLOAD_CAPA ( \
-	DEV_RX_OFFLOAD_CHECKSUM    | \
-	DEV_RX_OFFLOAD_VLAN_STRIP  | \
-	DEV_RX_OFFLOAD_SCATTER     | \
-	DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RX_OFFLOAD_CHECKSUM    | \
+	RTE_ETH_RX_OFFLOAD_VLAN_STRIP  | \
+	RTE_ETH_RX_OFFLOAD_SCATTER     | \
+	RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define NICVF_DEFAULT_RX_FREE_THRESH    224
 #define NICVF_DEFAULT_TX_FREE_THRESH    224
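
These NICVF_*_OFFLOAD_CAPA masks are what the driver reports through
rte_eth_dev_info_get(), so an application can gate its configuration on
them. A minimal sketch with a hypothetical helper name:

#include <rte_ethdev.h>

static int
rx_offload_supported(uint16_t port_id, uint64_t offload)
{
	struct rte_eth_dev_info info;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;
	/* e.g. offload = RTE_ETH_RX_OFFLOAD_CHECKSUM */
	return (info.rx_offload_capa & offload) == offload;
}
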
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 7b46ffb68635..0b0f9db7cb2a 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -998,7 +998,7 @@ txgbe_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 	rxbal = rd32(hw, TXGBE_RXBAL(rxq->reg_idx));
 	rxbah = rd32(hw, TXGBE_RXBAH(rxq->reg_idx));
 	rxcfg = rd32(hw, TXGBE_RXCFG(rxq->reg_idx));
-	if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 		restart = (rxcfg & TXGBE_RXCFG_ENA) &&
 			!(rxcfg & TXGBE_RXCFG_VLAN);
 		rxcfg |= TXGBE_RXCFG_VLAN;
@@ -1033,7 +1033,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 	vlan_ext = (portctrl & TXGBE_PORTCTL_VLANEXT);
 	qinq = vlan_ext && (portctrl & TXGBE_PORTCTL_QINQ);
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
+	case RTE_ETH_VLAN_TYPE_INNER:
 		if (vlan_ext) {
 			wr32m(hw, TXGBE_VLANCTL,
 				TXGBE_VLANCTL_TPID_MASK,
@@ -1053,7 +1053,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 				TXGBE_TAGTPID_LSB(tpid));
 		}
 		break;
-	case ETH_VLAN_TYPE_OUTER:
+	case RTE_ETH_VLAN_TYPE_OUTER:
 		if (vlan_ext) {
 			/* Only the high 16-bits is valid */
 			wr32m(hw, TXGBE_EXTAG,
@@ -1138,10 +1138,10 @@ txgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 
 	if (on) {
 		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
-		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
 		rxq->vlan_flags = PKT_RX_VLAN;
-		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
 
@@ -1240,7 +1240,7 @@ txgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
 
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			txgbe_vlan_strip_queue_set(dev, i, 1);
 		else
 			txgbe_vlan_strip_queue_set(dev, i, 0);
@@ -1254,17 +1254,17 @@ txgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	struct txgbe_rx_queue *rxq;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		rxmode = &dev->data->dev_conf.rxmode;
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 		else
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 	}
 }
@@ -1275,25 +1275,25 @@ txgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	rxmode = &dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_STRIP_MASK)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK)
 		txgbe_vlan_hw_strip_config(dev);
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			txgbe_vlan_hw_filter_enable(dev);
 		else
 			txgbe_vlan_hw_filter_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			txgbe_vlan_hw_extend_enable(dev);
 		else
 			txgbe_vlan_hw_extend_disable(dev);
 	}
 
-	if (mask & ETH_QINQ_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+	if (mask & RTE_ETH_QINQ_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
 			txgbe_qinq_hw_strip_enable(dev);
 		else
 			txgbe_qinq_hw_strip_disable(dev);
@@ -1331,10 +1331,10 @@ txgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
 	switch (nb_rx_q) {
 	case 1:
 	case 2:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
 		break;
 	case 4:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
 		break;
 	default:
 		return -EINVAL;
@@ -1357,18 +1357,18 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
 		/* check multi-queue mode */
 		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
 			break;
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
 			/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
 			PMD_INIT_LOG(ERR, "SRIOV active,"
 					" unsupported mq_mode rx %d.",
 					dev_conf->rxmode.mq_mode);
 			return -EINVAL;
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
 			if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
 				if (txgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
 					PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -1378,13 +1378,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 					return -EINVAL;
 				}
 			break;
-		case ETH_MQ_RX_VMDQ_ONLY:
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_NONE:
 			/* if nothing mq mode configure, use default scheme */
 			dev->data->dev_conf.rxmode.mq_mode =
-				ETH_MQ_RX_VMDQ_ONLY;
+				RTE_ETH_MQ_RX_VMDQ_ONLY;
 			break;
-		default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+		default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB*/
 			/* SRIOV only works in VMDq enable mode */
 			PMD_INIT_LOG(ERR, "SRIOV is active,"
 					" wrong mq_mode rx %d.",
@@ -1393,13 +1393,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 		}
 
 		switch (dev_conf->txmode.mq_mode) {
-		case ETH_MQ_TX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+		case RTE_ETH_MQ_TX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+			dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
 			break;
-		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
+		default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
 			dev->data->dev_conf.txmode.mq_mode =
-				ETH_MQ_TX_VMDQ_ONLY;
+				RTE_ETH_MQ_TX_VMDQ_ONLY;
 			break;
 		}
 
@@ -1414,13 +1414,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 			return -EINVAL;
 		}
 	} else {
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
 			PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
 					  " not supported.");
 			return -EINVAL;
 		}
 		/* check configuration for vmdb+dcb mode */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_conf *conf;
 
 			if (nb_rx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1429,15 +1429,15 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools must be %d or %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_tx_conf *conf;
 
 			if (nb_tx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1446,39 +1446,39 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools != %d and"
 						" nb_queue_pools != %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
 
 		/* For DCB mode check our configuration before we go further */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
 			const struct rte_eth_dcb_rx_conf *conf;
 
 			conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
 
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 			const struct rte_eth_dcb_tx_conf *conf;
 
 			conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
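
For reference, a configuration that satisfies the nb_tcs check above would
look roughly as follows; an untested sketch with a hypothetical helper name:

#include <rte_ethdev.h>

static void
dcb_rx_4tc(struct rte_eth_conf *conf)
{
	uint8_t i;

	conf->rxmode.mq_mode = RTE_ETH_MQ_RX_DCB;
	conf->rx_adv_conf.dcb_rx_conf.nb_tcs = RTE_ETH_4_TCS;
	/* map the 8 user priorities onto the 4 traffic classes */
	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
		conf->rx_adv_conf.dcb_rx_conf.dcb_tc[i] = i % RTE_ETH_4_TCS;
}
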
@@ -1495,8 +1495,8 @@ txgbe_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multiple queue mode checking */
 	ret  = txgbe_check_mq_mode(dev);
@@ -1694,15 +1694,15 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 		goto error;
 	}
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = txgbe_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
 		goto error;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
 		/* Enable vlan filtering for VMDq */
 		txgbe_vmdq_vlan_hw_filter_enable(dev);
 	}
@@ -1763,8 +1763,8 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 	if (err)
 		goto error;
 
-	allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_10G;
+	allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G;
 
 	link_speeds = &dev->data->dev_conf.link_speeds;
 	if (((*link_speeds) >> 1) & ~(allowed_speeds >> 1)) {
@@ -1773,20 +1773,20 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 	}
 
 	speed = 0x0;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		speed = (TXGBE_LINK_SPEED_100M_FULL |
 			 TXGBE_LINK_SPEED_1GB_FULL |
 			 TXGBE_LINK_SPEED_10GB_FULL);
 	} else {
-		if (*link_speeds & ETH_LINK_SPEED_10G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
 			speed |= TXGBE_LINK_SPEED_10GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
 			speed |= TXGBE_LINK_SPEED_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_2_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
 			speed |= TXGBE_LINK_SPEED_2_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_1G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed |= TXGBE_LINK_SPEED_1GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_100M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed |= TXGBE_LINK_SPEED_100M_FULL;
 	}
 
@@ -2601,7 +2601,7 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
-	dev_info->max_vmdq_pools = ETH_64_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->vmdq_queue_num = dev_info->max_rx_queues;
 	dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
@@ -2634,11 +2634,11 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->tx_desc_lim = tx_desc_lim;
 
 	dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
-	dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
 
 	/* Driver-preferred Rx/Tx parameters */
 	dev_info->default_rxportconf.burst_size = 32;
@@ -2695,11 +2695,11 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	int wait = 1;
 
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_AUTONEG);
 
 	hw->mac.get_link_status = true;
 
@@ -2713,8 +2713,8 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
 
 	if (err != 0) {
-		link.link_speed = ETH_SPEED_NUM_100M;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
@@ -2733,34 +2733,34 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	}
 
 	intr->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (link_speed) {
 	default:
 	case TXGBE_LINK_SPEED_UNKNOWN:
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	case TXGBE_LINK_SPEED_100M_FULL:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	case TXGBE_LINK_SPEED_1GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case TXGBE_LINK_SPEED_2_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_2_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 
 	case TXGBE_LINK_SPEED_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 
 	case TXGBE_LINK_SPEED_10GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	}
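
The link structure filled in above is what applications read back;
rte_eth_link_speed_to_str() is the stock ethdev helper for printing the
RTE_ETH_SPEED_NUM_* values. A minimal sketch, hypothetical function name:

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;
	printf("link %s, %s, %s-duplex\n",
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
	       rte_eth_link_speed_to_str(link.link_speed),
	       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
			"full" : "half");
}
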
 
@@ -2990,7 +2990,7 @@ txgbe_dev_link_status_print(struct rte_eth_dev *dev)
 		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned int)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3221,13 +3221,13 @@ txgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		tx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
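
The rx_pause/tx_pause pair above decodes to the four RTE_ETH_FC_* modes:
both set is FULL, one set is the corresponding *_PAUSE, neither is NONE.
A caller-side sketch with a hypothetical helper name:

#include <rte_ethdev.h>

static const char *
fc_mode_str(uint16_t port_id)
{
	struct rte_eth_fc_conf fc;

	if (rte_eth_dev_flow_ctrl_get(port_id, &fc) != 0)
		return "unknown";
	switch (fc.mode) {
	case RTE_ETH_FC_FULL:     return "rx+tx pause";
	case RTE_ETH_FC_RX_PAUSE: return "rx pause";
	case RTE_ETH_FC_TX_PAUSE: return "tx pause";
	default:                  return "none";
	}
}
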
@@ -3359,16 +3359,16 @@ txgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 		return -ENOTSUP;
 	}
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += 4) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
 		if (!mask)
 			continue;
@@ -3400,16 +3400,16 @@ txgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += 4) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
 		if (!mask)
 			continue;
@@ -3576,12 +3576,12 @@ txgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
 		return -ENOTSUP;
 
 	if (on) {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = ~0;
 			wr32(hw, TXGBE_UCADDRTBL(i), ~0);
 		}
 	} else {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = 0;
 			wr32(hw, TXGBE_UCADDRTBL(i), 0);
 		}
@@ -3605,15 +3605,15 @@ txgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
 {
 	uint32_t new_val = orig_val;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
 		new_val |= TXGBE_POOLETHCTL_UTA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
 		new_val |= TXGBE_POOLETHCTL_MCHA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 		new_val |= TXGBE_POOLETHCTL_UCHA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 		new_val |= TXGBE_POOLETHCTL_BCA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 		new_val |= TXGBE_POOLETHCTL_MCP;
 
 	return new_val;
@@ -4264,15 +4264,15 @@ txgbe_start_timecounters(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 
 	switch (link.link_speed) {
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		incval = TXGBE_INCVAL_100;
 		shift = TXGBE_INCVAL_SHIFT_100;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		incval = TXGBE_INCVAL_1GB;
 		shift = TXGBE_INCVAL_SHIFT_1GB;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 	default:
 		incval = TXGBE_INCVAL_10GB;
 		shift = TXGBE_INCVAL_SHIFT_10GB;
@@ -4628,7 +4628,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	uint8_t nb_tcs;
 	uint8_t i, j;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
 	else
 		dcb_info->nb_tcs = 1;
@@ -4639,7 +4639,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	if (dcb_config->vt_mode) { /* vt is enabled */
 		struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
 		if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
 			for (j = 0; j < nb_tcs; j++) {
@@ -4663,9 +4663,9 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	} else { /* vt is disabled */
 		struct rte_eth_dcb_rx_conf *rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
-		if (dcb_info->nb_tcs == ETH_4_TCS) {
+		if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4678,7 +4678,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 			dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
 			dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
 			dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
-		} else if (dcb_info->nb_tcs == ETH_8_TCS) {
+		} else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4908,7 +4908,7 @@ txgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = txgbe_e_tag_filter_add(dev, l2_tunnel);
 		break;
 	default:
@@ -4939,7 +4939,7 @@ txgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 		return ret;
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = txgbe_e_tag_filter_del(dev, l2_tunnel);
 		break;
 	default:
@@ -4979,7 +4979,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
 			ret = -EINVAL;
@@ -4987,7 +4987,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_VXLANPORT, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add Geneve port 0 is not allowed.");
 			ret = -EINVAL;
@@ -4995,7 +4995,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_GENEVEPORT, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add Teredo port 0 is not allowed.");
 			ret = -EINVAL;
@@ -5003,7 +5003,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_TEREDOPORT, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
 			ret = -EINVAL;
@@ -5035,7 +5035,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORT);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5045,7 +5045,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_VXLANPORT, 0);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		cur_port = (uint16_t)rd32(hw, TXGBE_GENEVEPORT);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5055,7 +5055,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_GENEVEPORT, 0);
 		break;
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		cur_port = (uint16_t)rd32(hw, TXGBE_TEREDOPORT);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5065,7 +5065,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_TEREDOPORT, 0);
 		break;
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORTGPE);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
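
For context, the tunnel-port add/del paths above are driven by the generic
ethdev call. A minimal sketch registering the IANA-default VXLAN port
(4789 chosen here only as an example, helper name hypothetical):

#include <rte_ethdev.h>

static int
add_vxlan_port(uint16_t port_id)
{
	struct rte_eth_udp_tunnel tunnel = {
		.udp_port = 4789, /* IANA-assigned VXLAN port */
		.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN,
	};

	return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
}
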
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index fd65d89ffe7d..8304b68292da 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -60,15 +60,15 @@
 #define TXGBE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
 
 #define TXGBE_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define TXGBE_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
 #define TXGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 43dc0ed39b75..283b52e8f3db 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -486,14 +486,14 @@ txgbevf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
-	dev_info->max_vmdq_pools = ETH_64_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
 				     dev_info->rx_queue_offload_capa);
 	dev_info->tx_queue_offload_capa = txgbe_get_tx_queue_offloads(dev);
 	dev_info->tx_offload_capa = txgbe_get_tx_port_offloads(dev);
 	dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -574,22 +574,22 @@ txgbevf_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
 		     dev->data->port_id);
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/*
 	 * VF has no ability to enable/disable HW CRC
 	 * Keep the persistent behavior the same as Host PF
 	 */
 #ifndef RTE_LIBRTE_TXGBE_PF_DISABLE_STRIP_CRC
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
-		conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #else
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
 		PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #endif
 
@@ -647,8 +647,8 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
 	txgbevf_set_vfta_all(dev, 1);
 
 	/* Set HW strip */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = txgbevf_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -891,10 +891,10 @@ txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	int on = 0;
 
 	/* VF function only support hw strip feature, others are not support */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
-			on = !!(rxq->offloads &	DEV_RX_OFFLOAD_VLAN_STRIP);
+			on = !!(rxq->offloads &	RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 			txgbevf_vlan_strip_queue_set(dev, i, on);
 		}
 	}
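
The per-queue strip flag handled above is reachable from applications
through the stock per-queue ethdev call; a minimal sketch, hypothetical
helper name:

#include <rte_ethdev.h>

static int
strip_vlan_all_queues(uint16_t port_id, uint16_t nb_rxq, int on)
{
	uint16_t q;
	int ret;

	for (q = 0; q < nb_rxq; q++) {
		ret = rte_eth_dev_set_vlan_strip_on_queue(port_id, q, on);
		if (ret != 0)
			return ret;
	}
	return 0;
}
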
diff --git a/drivers/net/txgbe/txgbe_fdir.c b/drivers/net/txgbe/txgbe_fdir.c
index 8abb86228608..e303d87176ed 100644
--- a/drivers/net/txgbe/txgbe_fdir.c
+++ b/drivers/net/txgbe/txgbe_fdir.c
@@ -102,22 +102,22 @@ txgbe_fdir_enable(struct txgbe_hw *hw, uint32_t fdirctrl)
  * flexbytes matching field, and drop queue (only for perfect matching mode).
  */
 static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf,
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf,
 		     uint32_t *fdirctrl, uint32_t *flex)
 {
 	*fdirctrl = 0;
 	*flex = 0;
 
 	switch (conf->pballoc) {
-	case RTE_FDIR_PBALLOC_64K:
+	case RTE_ETH_FDIR_PBALLOC_64K:
 		/* 8k - 1 signature filters */
 		*fdirctrl |= TXGBE_FDIRCTL_BUF_64K;
 		break;
-	case RTE_FDIR_PBALLOC_128K:
+	case RTE_ETH_FDIR_PBALLOC_128K:
 		/* 16k - 1 signature filters */
 		*fdirctrl |= TXGBE_FDIRCTL_BUF_128K;
 		break;
-	case RTE_FDIR_PBALLOC_256K:
+	case RTE_ETH_FDIR_PBALLOC_256K:
 		/* 32k - 1 signature filters */
 		*fdirctrl |= TXGBE_FDIRCTL_BUF_256K;
 		break;
@@ -521,15 +521,15 @@ txgbe_atr_compute_hash(struct txgbe_atr_input *atr_input,
 
 static uint32_t
 atr_compute_perfect_hash(struct txgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
 	uint32_t bucket_hash;
 
 	bucket_hash = txgbe_atr_compute_hash(input,
 				TXGBE_ATR_BUCKET_HASH_KEY);
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		bucket_hash &= PERFECT_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		bucket_hash &= PERFECT_BUCKET_128KB_HASH_MASK;
 	else
 		bucket_hash &= PERFECT_BUCKET_64KB_HASH_MASK;
@@ -564,15 +564,15 @@ txgbe_fdir_check_cmd_complete(struct txgbe_hw *hw, uint32_t *fdircmd)
  */
 static uint32_t
 atr_compute_signature_hash(struct txgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
 	uint32_t bucket_hash, sig_hash;
 
 	bucket_hash = txgbe_atr_compute_hash(input,
 				TXGBE_ATR_BUCKET_HASH_KEY);
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		bucket_hash &= SIG_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		bucket_hash &= SIG_BUCKET_128KB_HASH_MASK;
 	else
 		bucket_hash &= SIG_BUCKET_64KB_HASH_MASK;
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index eae400b14176..6d7fd1842843 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -1215,7 +1215,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
 		return -rte_errno;
 	}
 
-	filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+	filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
 	/**
 	 * grp and e_cid_base are bit fields and only use 14 bits.
 	 * e-tag id is taken as little endian by HW.
diff --git a/drivers/net/txgbe/txgbe_ipsec.c b/drivers/net/txgbe/txgbe_ipsec.c
index ccd747973ba2..445733f3ba46 100644
--- a/drivers/net/txgbe/txgbe_ipsec.c
+++ b/drivers/net/txgbe/txgbe_ipsec.c
@@ -372,7 +372,7 @@ txgbe_crypto_create_session(void *device,
 	aead_xform = &conf->crypto_xform->aead;
 
 	if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 			ic_session->op = TXGBE_OP_AUTHENTICATED_DECRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -380,7 +380,7 @@ txgbe_crypto_create_session(void *device,
 			return -ENOTSUP;
 		}
 	} else {
-		if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+		if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 			ic_session->op = TXGBE_OP_AUTHENTICATED_ENCRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -611,11 +611,11 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	tx_offloads = dev->data->dev_conf.txmode.offloads;
 
 	/* sanity checks */
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
 		return -1;
 	}
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
 		return -1;
 	}
@@ -634,7 +634,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	reg |= TXGBE_SECRXCTL_CRCSTRIP;
 	wr32(hw, TXGBE_SECRXCTL, reg);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		wr32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA, 0);
 		reg = rd32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA);
 		if (reg != 0) {
@@ -642,7 +642,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 			return -1;
 		}
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 		wr32(hw, TXGBE_SECTXCTL, TXGBE_SECTXCTL_STFWD);
 		reg = rd32(hw, TXGBE_SECTXCTL);
 		if (reg != TXGBE_SECTXCTL_STFWD) {
diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c
index a48972b1a381..30be2873307a 100644
--- a/drivers/net/txgbe/txgbe_pf.c
+++ b/drivers/net/txgbe/txgbe_pf.c
@@ -101,15 +101,15 @@ int txgbe_pf_host_init(struct rte_eth_dev *eth_dev)
 	memset(uta_info, 0, sizeof(struct txgbe_uta_info));
 	hw->mac.mc_filter_type = 0;
 
-	if (vf_num >= ETH_32_POOLS) {
+	if (vf_num >= RTE_ETH_32_POOLS) {
 		nb_queue = 2;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
-	} else if (vf_num >= ETH_16_POOLS) {
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+	} else if (vf_num >= RTE_ETH_16_POOLS) {
 		nb_queue = 4;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
 	} else {
 		nb_queue = 8;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
 	}
 
 	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
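
The branch above encodes a fixed pool/queue trade-off: more VFs means more
pools with fewer queues per pool. A condensed restatement as a hypothetical
helper, mirroring txgbe_pf_host_init() only:

#include <stdint.h>
#include <rte_ethdev.h>

static void
vf_pool_layout(uint16_t vf_num, uint16_t *pools, uint16_t *q_per_pool)
{
	if (vf_num >= RTE_ETH_32_POOLS) {
		*pools = RTE_ETH_64_POOLS;
		*q_per_pool = 2;
	} else if (vf_num >= RTE_ETH_16_POOLS) {
		*pools = RTE_ETH_32_POOLS;
		*q_per_pool = 4;
	} else {
		*pools = RTE_ETH_16_POOLS;
		*q_per_pool = 8;
	}
}
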
@@ -256,13 +256,13 @@ int txgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
 	gcr_ext &= ~TXGBE_PORTCTL_NUMVT_MASK;
 
 	switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		gcr_ext |= TXGBE_PORTCTL_NUMVT_64;
 		break;
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		gcr_ext |= TXGBE_PORTCTL_NUMVT_32;
 		break;
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		gcr_ext |= TXGBE_PORTCTL_NUMVT_16;
 		break;
 	}
@@ -611,29 +611,29 @@ txgbe_get_vf_queues(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf)
 	/* Notify VF of number of DCB traffic classes */
 	eth_conf = &eth_dev->data->dev_conf;
 	switch (eth_conf->txmode.mq_mode) {
-	case ETH_MQ_TX_NONE:
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_NONE:
+	case RTE_ETH_MQ_TX_DCB:
 		PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
 			", but its tx mode = %d\n", vf,
 			eth_conf->txmode.mq_mode);
 		return -1;
 
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
 		switch (vmdq_dcb_tx_conf->nb_queue_pools) {
-		case ETH_16_POOLS:
-			num_tcs = ETH_8_TCS;
+		case RTE_ETH_16_POOLS:
+			num_tcs = RTE_ETH_8_TCS;
 			break;
-		case ETH_32_POOLS:
-			num_tcs = ETH_4_TCS;
+		case RTE_ETH_32_POOLS:
+			num_tcs = RTE_ETH_4_TCS;
 			break;
 		default:
 			return -1;
 		}
 		break;
 
-	/* ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
-	case ETH_MQ_TX_VMDQ_ONLY:
+	/* RTE_ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
+	case RTE_ETH_MQ_TX_VMDQ_ONLY:
 		hw = TXGBE_DEV_HW(eth_dev);
 		vmvir = rd32(hw, TXGBE_POOLTAG(vf));
 		vlana = vmvir & TXGBE_POOLTAG_ACT_MASK;
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 7e18dcce0a86..1204dc5499a5 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1960,7 +1960,7 @@ txgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
 uint64_t
 txgbe_get_rx_queue_offloads(struct rte_eth_dev *dev __rte_unused)
 {
-	return DEV_RX_OFFLOAD_VLAN_STRIP;
+	return RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 }
 
 uint64_t
@@ -1970,34 +1970,34 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
 	struct rte_eth_dev_sriov *sriov = &RTE_ETH_DEV_SRIOV(dev);
 
-	offloads = DEV_RX_OFFLOAD_IPV4_CKSUM  |
-		   DEV_RX_OFFLOAD_UDP_CKSUM   |
-		   DEV_RX_OFFLOAD_TCP_CKSUM   |
-		   DEV_RX_OFFLOAD_KEEP_CRC    |
-		   DEV_RX_OFFLOAD_VLAN_FILTER |
-		   DEV_RX_OFFLOAD_RSS_HASH |
-		   DEV_RX_OFFLOAD_SCATTER;
+	offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+		   RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+		   RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		   RTE_ETH_RX_OFFLOAD_RSS_HASH |
+		   RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	if (!txgbe_is_vf(dev))
-		offloads |= (DEV_RX_OFFLOAD_VLAN_FILTER |
-			     DEV_RX_OFFLOAD_QINQ_STRIP |
-			     DEV_RX_OFFLOAD_VLAN_EXTEND);
+		offloads |= (RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+			     RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+			     RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
 
 	/*
 	 * RSC is only supported by PF devices in a non-SR-IOV
 	 * mode.
 	 */
 	if (hw->mac.type == txgbe_mac_raptor && !sriov->active)
-		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+		offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 
 	if (hw->mac.type == txgbe_mac_raptor)
-		offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
 
-	offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+	offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		offloads |= DEV_RX_OFFLOAD_SECURITY;
+		offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
 #endif
 
 	return offloads;
@@ -2222,32 +2222,32 @@ txgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
 	uint64_t tx_offload_capa;
 
 	tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM   |
-		DEV_TX_OFFLOAD_SCTP_CKSUM  |
-		DEV_TX_OFFLOAD_TCP_TSO     |
-		DEV_TX_OFFLOAD_UDP_TSO	   |
-		DEV_TX_OFFLOAD_UDP_TNL_TSO	|
-		DEV_TX_OFFLOAD_IP_TNL_TSO	|
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO	|
-		DEV_TX_OFFLOAD_GRE_TNL_TSO	|
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO	|
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO	|
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO     |
+		RTE_ETH_TX_OFFLOAD_UDP_TSO	   |
+		RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_IP_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	if (!txgbe_is_vf(dev))
-		tx_offload_capa |= DEV_TX_OFFLOAD_QINQ_INSERT;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
 
-	tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+	tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 
-	tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-			   DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+	tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
 #endif
 	return tx_offload_capa;
 }
@@ -2349,7 +2349,7 @@ txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_deferred_start = tx_conf->tx_deferred_start;
 #ifdef RTE_LIB_SECURITY
 	txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SECURITY);
+			RTE_ETH_TX_OFFLOAD_SECURITY);
 #endif
 
 	/* Modification to set tail pointer for virtual function
@@ -2599,7 +2599,7 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
 		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -2900,20 +2900,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 	if (hw->mac.type == txgbe_mac_raptor_vf) {
 		mrqc = rd32(hw, TXGBE_VFPLCFG);
 		mrqc &= ~TXGBE_VFPLCFG_RSSMASK;
-		if (rss_hf & ETH_RSS_IPV4)
+		if (rss_hf & RTE_ETH_RSS_IPV4)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV4TCP;
-		if (rss_hf & ETH_RSS_IPV6 ||
-		    rss_hf & ETH_RSS_IPV6_EX)
+		if (rss_hf & RTE_ETH_RSS_IPV6 ||
+		    rss_hf & RTE_ETH_RSS_IPV6_EX)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV6;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
-		    rss_hf & ETH_RSS_IPV6_TCP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV6TCP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV4UDP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
-		    rss_hf & ETH_RSS_IPV6_UDP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV6UDP;
 
 		if (rss_hf)
@@ -2930,20 +2930,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 	} else {
 		mrqc = rd32(hw, TXGBE_RACTL);
 		mrqc &= ~TXGBE_RACTL_RSSMASK;
-		if (rss_hf & ETH_RSS_IPV4)
+		if (rss_hf & RTE_ETH_RSS_IPV4)
 			mrqc |= TXGBE_RACTL_RSSIPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			mrqc |= TXGBE_RACTL_RSSIPV4TCP;
-		if (rss_hf & ETH_RSS_IPV6 ||
-		    rss_hf & ETH_RSS_IPV6_EX)
+		if (rss_hf & RTE_ETH_RSS_IPV6 ||
+		    rss_hf & RTE_ETH_RSS_IPV6_EX)
 			mrqc |= TXGBE_RACTL_RSSIPV6;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
-		    rss_hf & ETH_RSS_IPV6_TCP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 			mrqc |= TXGBE_RACTL_RSSIPV6TCP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 			mrqc |= TXGBE_RACTL_RSSIPV4UDP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
-		    rss_hf & ETH_RSS_IPV6_UDP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 			mrqc |= TXGBE_RACTL_RSSIPV6UDP;
 
 		if (rss_hf)
@@ -2984,39 +2984,39 @@ txgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	if (hw->mac.type == txgbe_mac_raptor_vf) {
 		mrqc = rd32(hw, TXGBE_VFPLCFG);
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV4)
-			rss_hf |= ETH_RSS_IPV4;
+			rss_hf |= RTE_ETH_RSS_IPV4;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV4TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV6)
-			rss_hf |= ETH_RSS_IPV6 |
-				  ETH_RSS_IPV6_EX;
+			rss_hf |= RTE_ETH_RSS_IPV6 |
+				  RTE_ETH_RSS_IPV6_EX;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV6TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
-				  ETH_RSS_IPV6_TCP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				  RTE_ETH_RSS_IPV6_TCP_EX;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV4UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV6UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
-				  ETH_RSS_IPV6_UDP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				  RTE_ETH_RSS_IPV6_UDP_EX;
 		if (!(mrqc & TXGBE_VFPLCFG_RSSENA))
 			rss_hf = 0;
 	} else {
 		mrqc = rd32(hw, TXGBE_RACTL);
 		if (mrqc & TXGBE_RACTL_RSSIPV4)
-			rss_hf |= ETH_RSS_IPV4;
+			rss_hf |= RTE_ETH_RSS_IPV4;
 		if (mrqc & TXGBE_RACTL_RSSIPV4TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		if (mrqc & TXGBE_RACTL_RSSIPV6)
-			rss_hf |= ETH_RSS_IPV6 |
-				  ETH_RSS_IPV6_EX;
+			rss_hf |= RTE_ETH_RSS_IPV6 |
+				  RTE_ETH_RSS_IPV6_EX;
 		if (mrqc & TXGBE_RACTL_RSSIPV6TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
-				  ETH_RSS_IPV6_TCP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				  RTE_ETH_RSS_IPV6_TCP_EX;
 		if (mrqc & TXGBE_RACTL_RSSIPV4UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 		if (mrqc & TXGBE_RACTL_RSSIPV6UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
-				  ETH_RSS_IPV6_UDP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				  RTE_ETH_RSS_IPV6_UDP_EX;
 		if (!(mrqc & TXGBE_RACTL_RSSENA))
 			rss_hf = 0;
 	}
@@ -3046,7 +3046,7 @@ txgbe_rss_configure(struct rte_eth_dev *dev)
 	 */
 	if (adapter->rss_reta_updated == 0) {
 		reta = 0;
-		for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+		for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
 			if (j == dev->data->nb_rx_queues)
 				j = 0;
 			reta = (reta >> 8) | LS32(j, 24, 0xFF);
@@ -3083,12 +3083,12 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
 	num_pools = cfg->nb_queue_pools;
 	/* Check we have a valid number of pools */
-	if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+	if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
 		txgbe_rss_disable(dev);
 		return;
 	}
 	/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
-	nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+	nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
 
 	/*
 	 * split rx buffer up into sections, each for 1 traffic class
@@ -3103,7 +3103,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
 	}
 	/* zero alloc all unused TCs */
-	for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		uint32_t rxpbsize = rd32(hw, TXGBE_PBRXSIZE(i));
 
 		rxpbsize &= (~(0x3FF << 10));
@@ -3111,7 +3111,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
 	}
 
-	if (num_pools == ETH_16_POOLS) {
+	if (num_pools == RTE_ETH_16_POOLS) {
 		mrqc = TXGBE_PORTCTL_NUMTC_8;
 		mrqc |= TXGBE_PORTCTL_NUMVT_16;
 	} else {
@@ -3130,7 +3130,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	wr32(hw, TXGBE_POOLCTL, vt_ctl);
 
 	queue_mapping = 0;
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 		/*
 		 * mapping is done with 3 bits per priority,
 		 * so shift by i*3 each time
@@ -3151,7 +3151,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		wr32(hw, TXGBE_VLANTBL(i), 0xFFFFFFFF);
 
 	wr32(hw, TXGBE_POOLRXENA(0),
-			num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+			num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	wr32(hw, TXGBE_ETHADDRIDX, 0);
 	wr32(hw, TXGBE_ETHADDRASSL, 0xFFFFFFFF);
@@ -3221,7 +3221,7 @@ txgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
 	/*PF VF Transmit Enable*/
 	wr32(hw, TXGBE_POOLTXENA(0),
 		vmdq_tx_conf->nb_queue_pools ==
-				ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+				RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	/*Configure general DCB TX parameters*/
 	txgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3237,12 +3237,12 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
-	if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3252,7 +3252,7 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3270,12 +3270,12 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
-	if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3285,7 +3285,7 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3312,7 +3312,7 @@ txgbe_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3339,7 +3339,7 @@ txgbe_dcb_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3475,7 +3475,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(dev);
 
 	switch (dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_VMDQ_DCB:
+	case RTE_ETH_MQ_RX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/*
@@ -3486,8 +3486,8 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		/*Configure general VMDQ and DCB RX parameters*/
 		txgbe_vmdq_dcb_configure(dev);
 		break;
-	case ETH_MQ_RX_DCB:
-	case ETH_MQ_RX_DCB_RSS:
+	case RTE_ETH_MQ_RX_DCB:
+	case RTE_ETH_MQ_RX_DCB_RSS:
 		dcb_config->vt_mode = false;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -3500,7 +3500,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		break;
 	}
 	switch (dev->data->dev_conf.txmode.mq_mode) {
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/* get DCB and VT TX configuration parameters
@@ -3511,7 +3511,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		txgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
 		break;
 
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_DCB:
 		dcb_config->vt_mode = false;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/* get DCB TX configuration parameters from rte_eth_conf */
@@ -3527,15 +3527,15 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	nb_tcs = dcb_config->num_tcs.pfc_tcs;
 	/* Unpack map */
 	txgbe_dcb_unpack_map_cee(dcb_config, TXGBE_DCB_RX_CONFIG, map);
-	if (nb_tcs == ETH_4_TCS) {
+	if (nb_tcs == RTE_ETH_4_TCS) {
 		/* Avoid un-configured priority mapping to TC0 */
 		uint8_t j = 4;
 		uint8_t mask = 0xFF;
 
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
 			mask = (uint8_t)(mask & (~(1 << map[i])));
 		for (i = 0; mask && (i < TXGBE_DCB_TC_MAX); i++) {
-			if ((mask & 0x1) && j < ETH_DCB_NUM_USER_PRIORITIES)
+			if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
 				map[j++] = i;
 			mask >>= 1;
 		}
@@ -3576,7 +3576,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
 
 		/* zero alloc all unused TCs */
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			wr32(hw, TXGBE_PBRXSIZE(i), 0);
 	}
 	if (config_dcb_tx) {
@@ -3592,7 +3592,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			wr32(hw, TXGBE_PBTXDMATH(i), txpbthresh);
 		}
 		/* Clear unused TCs, if any, to zero buffer size*/
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			wr32(hw, TXGBE_PBTXSIZE(i), 0);
 			wr32(hw, TXGBE_PBTXDMATH(i), 0);
 		}
@@ -3634,7 +3634,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	txgbe_dcb_config_tc_stats_raptor(hw, dcb_config);
 
 	/* Check if the PFC is supported */
-	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
 		for (i = 0; i < nb_tcs; i++) {
 			/* If the TC count is 8,
@@ -3648,7 +3648,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			tc->pfc = txgbe_dcb_pfc_enabled;
 		}
 		txgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
-		if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+		if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
 			pfc_en &= 0x0F;
 		ret = txgbe_dcb_config_pfc(hw, pfc_en, map);
 	}
@@ -3719,12 +3719,12 @@ void txgbe_configure_dcb(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	/* check support mq_mode for DCB */
-	if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB &&
-	    dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB &&
-	    dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS)
+	if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
 		return;
 
-	if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+	if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
 		return;
 
 	/** Configure DCB hardware **/
@@ -3780,7 +3780,7 @@ txgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 
 	/* pool enabling for receive - 64 */
 	wr32(hw, TXGBE_POOLRXENA(0), UINT32_MAX);
-	if (num_pools == ETH_64_POOLS)
+	if (num_pools == RTE_ETH_64_POOLS)
 		wr32(hw, TXGBE_POOLRXENA(1), UINT32_MAX);
 
 	/*
@@ -3904,11 +3904,11 @@ txgbe_config_vf_rss(struct rte_eth_dev *dev)
 	mrqc = rd32(hw, TXGBE_PORTCTL);
 	mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_64;
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_32;
 		break;
 
@@ -3931,15 +3931,15 @@ txgbe_config_vf_default(struct rte_eth_dev *dev)
 	mrqc = rd32(hw, TXGBE_PORTCTL);
 	mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_64;
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_32;
 		break;
 
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_16;
 		break;
 	default:
@@ -3962,21 +3962,21 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * any DCB/RSS w/o VMDq multi-queue setting
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_DCB_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			txgbe_rss_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
 			txgbe_vmdq_dcb_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
 			txgbe_vmdq_rx_hw_configure(dev);
 			break;
 
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_NONE:
 		default:
 			/* if mq_mode is none, disable rss mode.*/
 			txgbe_rss_disable(dev);
@@ -3987,18 +3987,18 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * Support RSS together with SRIOV.
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			txgbe_config_vf_rss(dev);
 			break;
-		case ETH_MQ_RX_VMDQ_DCB:
-		case ETH_MQ_RX_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_DCB:
 		/* In SRIOV, the configuration is the same as VMDq case */
 			txgbe_vmdq_dcb_configure(dev);
 			break;
 		/* DCB/RSS together with SRIOV is not supported */
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
-		case ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
 			PMD_INIT_LOG(ERR,
 				"Could not support DCB/RSS with VMDq & SRIOV");
 			return -1;
@@ -4028,7 +4028,7 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV inactive scheme
 		 * any DCB w/o VMDq multi-queue setting
 		 */
-		if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+		if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
 			txgbe_vmdq_tx_hw_configure(hw);
 		else
 			wr32m(hw, TXGBE_PORTCTL, TXGBE_PORTCTL_NUMVT_MASK, 0);
@@ -4038,13 +4038,13 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV active scheme
 		 * FIXME if support DCB together with VMDq & SRIOV
 		 */
-		case ETH_64_POOLS:
+		case RTE_ETH_64_POOLS:
 			mtqc = TXGBE_PORTCTL_NUMVT_64;
 			break;
-		case ETH_32_POOLS:
+		case RTE_ETH_32_POOLS:
 			mtqc = TXGBE_PORTCTL_NUMVT_32;
 			break;
-		case ETH_16_POOLS:
+		case RTE_ETH_16_POOLS:
 			mtqc = TXGBE_PORTCTL_NUMVT_16;
 			break;
 		default:
@@ -4107,10 +4107,10 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* Sanity check */
 	dev->dev_ops->dev_infos_get(dev, &dev_info);
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		rsc_capable = true;
 
-	if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
 				   "support it");
 		return -EINVAL;
@@ -4118,22 +4118,22 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* RSC global configuration */
 
-	if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
-	     (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+	     (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		PMD_INIT_LOG(CRIT, "LRO can't be enabled when HW CRC "
 				    "is disabled");
 		return -EINVAL;
 	}
 
 	rfctl = rd32(hw, TXGBE_PSRCTL);
-	if (rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if (rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		rfctl &= ~TXGBE_PSRCTL_RSCDIA;
 	else
 		rfctl |= TXGBE_PSRCTL_RSCDIA;
 	wr32(hw, TXGBE_PSRCTL, rfctl);
 
 	/* If LRO hasn't been requested - we are done here. */
-	if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		return 0;
 
 	/* Set PSRCTL.RSCACK bit */
@@ -4273,7 +4273,7 @@ txgbe_set_rx_function(struct rte_eth_dev *dev)
 		struct txgbe_rx_queue *rxq = dev->data->rx_queues[i];
 
 		rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_SECURITY);
+				RTE_ETH_RX_OFFLOAD_SECURITY);
 	}
 #endif
 }
@@ -4316,7 +4316,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Configure CRC stripping, if any.
 	 */
 	hlreg0 = rd32(hw, TXGBE_SECRXCTL);
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		hlreg0 &= ~TXGBE_SECRXCTL_CRCSTRIP;
 	else
 		hlreg0 |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4344,7 +4344,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first .
 	 */
-	rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -4354,7 +4354,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 * call to configure.
 		 */
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -4391,11 +4391,11 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 		if (dev->data->mtu + TXGBE_ETH_OVERHEAD +
 				2 * TXGBE_VLAN_TAG_SIZE > buf_size)
 			dev->data->scattered_rx = 1;
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		dev->data->scattered_rx = 1;
 
 	/*
@@ -4410,7 +4410,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 */
 	rxcsum = rd32(hw, TXGBE_PSRCTL);
 	rxcsum |= TXGBE_PSRCTL_PCSD;
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= TXGBE_PSRCTL_L4CSUM;
 	else
 		rxcsum &= ~TXGBE_PSRCTL_L4CSUM;
@@ -4419,7 +4419,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 
 	if (hw->mac.type == txgbe_mac_raptor) {
 		rdrxctl = rd32(hw, TXGBE_SECRXCTL);
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rdrxctl &= ~TXGBE_SECRXCTL_CRCSTRIP;
 		else
 			rdrxctl |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4542,8 +4542,8 @@ txgbe_dev_rxtx_start(struct rte_eth_dev *dev)
 		txgbe_setup_loopback_link_raptor(hw);
 
 #ifdef RTE_LIB_SECURITY
-	if ((dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) ||
-	    (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_SECURITY)) {
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) ||
+	    (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY)) {
 		ret = txgbe_crypto_enable_ipsec(dev);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR,
@@ -4851,7 +4851,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first .
 	 */
-	rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	/* Set PSR type for VF RSS according to max Rx queue */
 	psrtype = TXGBE_VFPLCFG_PSRL4HDR |
@@ -4903,7 +4903,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
 		 */
 		wr32(hw, TXGBE_RXCFG(i), srrctl);
 
-		if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
 		    /* It adds dual VLAN length for supporting dual VLAN */
 		    (dev->data->mtu + TXGBE_ETH_OVERHEAD +
 				2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
@@ -4912,8 +4912,8 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
 			dev->data->scattered_rx = 1;
 		}
 
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	/*
@@ -5084,7 +5084,7 @@ txgbe_config_rss_filter(struct rte_eth_dev *dev,
 	 * little-endian order.
 	 */
 	reta = 0;
-	for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+	for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
 		if (j == conf->conf.queue_num)
 			j = 0;
 		reta = (reta >> 8) | LS32(conf->conf.queue[j], 24, 0xFF);
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index b96f58a3f848..27d4c842c0e7 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -309,7 +309,7 @@ struct txgbe_rx_queue {
 	uint8_t             rx_deferred_start; /**< not in global dev start. */
 	/** flags to set in mbuf when a vlan is detected. */
 	uint64_t            vlan_flags;
-	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
 	/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
 	struct rte_mbuf fake_mbuf;
 	/** hold packets to return to application */
@@ -392,7 +392,7 @@ struct txgbe_tx_queue {
 	uint8_t             pthresh;       /**< Prefetch threshold register. */
 	uint8_t             hthresh;       /**< Host threshold register. */
 	uint8_t             wthresh;       /**< Write-back threshold reg. */
-	uint64_t            offloads; /* Tx offload flags of DEV_TX_OFFLOAD_* */
+	uint64_t            offloads; /* Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 	uint32_t            ctx_curr;      /**< Hardware context states. */
 	/** Hardware context0 history. */
 	struct txgbe_ctx_info ctx_cache[TXGBE_CTX_NUM];
diff --git a/drivers/net/txgbe/txgbe_tm.c b/drivers/net/txgbe/txgbe_tm.c
index 3abe3959eb1a..3171be73d05d 100644
--- a/drivers/net/txgbe/txgbe_tm.c
+++ b/drivers/net/txgbe/txgbe_tm.c
@@ -118,14 +118,14 @@ txgbe_tc_nb_get(struct rte_eth_dev *dev)
 	uint8_t nb_tcs = 0;
 
 	eth_conf = &dev->data->dev_conf;
-	if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+	if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 		nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
-	} else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	} else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
-		    ETH_32_POOLS)
-			nb_tcs = ETH_4_TCS;
+		    RTE_ETH_32_POOLS)
+			nb_tcs = RTE_ETH_4_TCS;
 		else
-			nb_tcs = ETH_8_TCS;
+			nb_tcs = RTE_ETH_8_TCS;
 	} else {
 		nb_tcs = 1;
 	}
@@ -364,10 +364,10 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 	if (vf_num) {
 		/* no DCB */
 		if (nb_tcs == 1) {
-			if (vf_num >= ETH_32_POOLS) {
+			if (vf_num >= RTE_ETH_32_POOLS) {
 				*nb = 2;
 				*base = vf_num * 2;
-			} else if (vf_num >= ETH_16_POOLS) {
+			} else if (vf_num >= RTE_ETH_16_POOLS) {
 				*nb = 4;
 				*base = vf_num * 4;
 			} else {
@@ -381,7 +381,7 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 		}
 	} else {
 		/* VT off */
-		if (nb_tcs == ETH_8_TCS) {
+		if (nb_tcs == RTE_ETH_8_TCS) {
 			switch (tc_node_no) {
 			case 0:
 				*base = 0;
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a7935a716de9..27f81a5cafc5 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -125,8 +125,8 @@ static pthread_mutex_t internal_list_lock = PTHREAD_MUTEX_INITIALIZER;
 
 static struct rte_eth_link pmd_link = {
 		.link_speed = 10000,
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_status = ETH_LINK_DOWN
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_status = RTE_ETH_LINK_DOWN
 };
 
 struct rte_vhost_vring_state {
@@ -823,7 +823,7 @@ new_device(int vid)
 
 	rte_vhost_get_mtu(vid, &eth_dev->data->mtu);
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	rte_atomic32_set(&internal->dev_attached, 1);
 	update_queuing_status(eth_dev);
@@ -858,7 +858,7 @@ destroy_device(int vid)
 	rte_atomic32_set(&internal->dev_attached, 0);
 	update_queuing_status(eth_dev);
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	if (eth_dev->data->rx_queues && eth_dev->data->tx_queues) {
 		for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1124,7 +1124,7 @@ eth_dev_configure(struct rte_eth_dev *dev)
 	if (vhost_driver_setup(dev) < 0)
 		return -1;
 
-	internal->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	internal->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 	return 0;
 }
@@ -1273,9 +1273,9 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_tx_queues = internal->max_queues;
 	dev_info->min_rx_bufsize = 0;
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-				DEV_TX_OFFLOAD_VLAN_INSERT;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	return 0;
 }
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 047d3f43a3cf..74ede2aeccc1 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -712,7 +712,7 @@ int
 virtio_dev_close(struct rte_eth_dev *dev)
 {
 	struct virtio_hw *hw = dev->data->dev_private;
-	struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+	struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
 
 	PMD_INIT_LOG(DEBUG, "virtio_dev_close");
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1771,7 +1771,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 		     hw->mac_addr[0], hw->mac_addr[1], hw->mac_addr[2],
 		     hw->mac_addr[3], hw->mac_addr[4], hw->mac_addr[5]);
 
-	if (hw->speed == ETH_SPEED_NUM_UNKNOWN) {
+	if (hw->speed == RTE_ETH_SPEED_NUM_UNKNOWN) {
 		if (virtio_with_feature(hw, VIRTIO_NET_F_SPEED_DUPLEX)) {
 			config = &local_config;
 			virtio_read_dev_config(hw,
@@ -1785,7 +1785,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 		}
 	}
 	if (hw->duplex == DUPLEX_UNKNOWN)
-		hw->duplex = ETH_LINK_FULL_DUPLEX;
+		hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	PMD_INIT_LOG(DEBUG, "link speed = %d, duplex = %d",
 		hw->speed, hw->duplex);
 	if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ)) {
@@ -1884,7 +1884,7 @@ int
 eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
 {
 	struct virtio_hw *hw = eth_dev->data->dev_private;
-	uint32_t speed = ETH_SPEED_NUM_UNKNOWN;
+	uint32_t speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	int vectorized = 0;
 	int ret;
 
@@ -1955,22 +1955,22 @@ static uint32_t
 virtio_dev_speed_capa_get(uint32_t speed)
 {
 	switch (speed) {
-	case ETH_SPEED_NUM_10G:
-		return ETH_LINK_SPEED_10G;
-	case ETH_SPEED_NUM_20G:
-		return ETH_LINK_SPEED_20G;
-	case ETH_SPEED_NUM_25G:
-		return ETH_LINK_SPEED_25G;
-	case ETH_SPEED_NUM_40G:
-		return ETH_LINK_SPEED_40G;
-	case ETH_SPEED_NUM_50G:
-		return ETH_LINK_SPEED_50G;
-	case ETH_SPEED_NUM_56G:
-		return ETH_LINK_SPEED_56G;
-	case ETH_SPEED_NUM_100G:
-		return ETH_LINK_SPEED_100G;
-	case ETH_SPEED_NUM_200G:
-		return ETH_LINK_SPEED_200G;
+	case RTE_ETH_SPEED_NUM_10G:
+		return RTE_ETH_LINK_SPEED_10G;
+	case RTE_ETH_SPEED_NUM_20G:
+		return RTE_ETH_LINK_SPEED_20G;
+	case RTE_ETH_SPEED_NUM_25G:
+		return RTE_ETH_LINK_SPEED_25G;
+	case RTE_ETH_SPEED_NUM_40G:
+		return RTE_ETH_LINK_SPEED_40G;
+	case RTE_ETH_SPEED_NUM_50G:
+		return RTE_ETH_LINK_SPEED_50G;
+	case RTE_ETH_SPEED_NUM_56G:
+		return RTE_ETH_LINK_SPEED_56G;
+	case RTE_ETH_SPEED_NUM_100G:
+		return RTE_ETH_LINK_SPEED_100G;
+	case RTE_ETH_SPEED_NUM_200G:
+		return RTE_ETH_LINK_SPEED_200G;
 	default:
 		return 0;
 	}
@@ -2086,14 +2086,14 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "configure");
 	req_features = VIRTIO_PMD_DEFAULT_GUEST_FEATURES;
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) {
 		PMD_DRV_LOG(ERR,
 			"Unsupported Rx multi queue mode %d",
 			rxmode->mq_mode);
 		return -EINVAL;
 	}
 
-	if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		PMD_DRV_LOG(ERR,
 			"Unsupported Tx multi queue mode %d",
 			txmode->mq_mode);
@@ -2111,20 +2111,20 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 
 	hw->max_rx_pkt_len = ether_hdr_len + rxmode->mtu;
 
-	if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-			   DEV_RX_OFFLOAD_TCP_CKSUM))
+	if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
 		req_features |= (1ULL << VIRTIO_NET_F_GUEST_CSUM);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		req_features |=
 			(1ULL << VIRTIO_NET_F_GUEST_TSO4) |
 			(1ULL << VIRTIO_NET_F_GUEST_TSO6);
 
-	if (tx_offloads & (DEV_TX_OFFLOAD_UDP_CKSUM |
-			   DEV_TX_OFFLOAD_TCP_CKSUM))
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM))
 		req_features |= (1ULL << VIRTIO_NET_F_CSUM);
 
-	if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		req_features |=
 			(1ULL << VIRTIO_NET_F_HOST_TSO4) |
 			(1ULL << VIRTIO_NET_F_HOST_TSO6);
@@ -2136,15 +2136,15 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 			return ret;
 	}
 
-	if ((rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-			    DEV_RX_OFFLOAD_TCP_CKSUM)) &&
+	if ((rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			    RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) &&
 		!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_CSUM)) {
 		PMD_DRV_LOG(ERR,
 			"rx checksum not available on this host");
 		return -ENOTSUP;
 	}
 
-	if ((rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) &&
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
 		(!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO4) ||
 		 !virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO6))) {
 		PMD_DRV_LOG(ERR,
@@ -2156,12 +2156,12 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 	if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
 		virtio_dev_cq_start(dev);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 		hw->vlan_strip = 1;
 
-	hw->rx_ol_scatter = (rx_offloads & DEV_RX_OFFLOAD_SCATTER);
+	hw->rx_ol_scatter = (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
 
-	if ((rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 			!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
 		PMD_DRV_LOG(ERR,
 			    "vlan filtering not available on this host");
@@ -2214,7 +2214,7 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 				hw->use_vec_rx = 0;
 			}
 
-			if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+			if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 				PMD_DRV_LOG(INFO,
 					"disabled packed ring vectorized rx for TCP_LRO enabled");
 				hw->use_vec_rx = 0;
@@ -2241,10 +2241,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 				hw->use_vec_rx = 0;
 			}
 
-			if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-					   DEV_RX_OFFLOAD_TCP_CKSUM |
-					   DEV_RX_OFFLOAD_TCP_LRO |
-					   DEV_RX_OFFLOAD_VLAN_STRIP)) {
+			if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+					   RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+					   RTE_ETH_RX_OFFLOAD_TCP_LRO |
+					   RTE_ETH_RX_OFFLOAD_VLAN_STRIP)) {
 				PMD_DRV_LOG(INFO,
 					"disabled split ring vectorized rx for offloading enabled");
 				hw->use_vec_rx = 0;
@@ -2437,7 +2437,7 @@ virtio_dev_stop(struct rte_eth_dev *dev)
 {
 	struct virtio_hw *hw = dev->data->dev_private;
 	struct rte_eth_link link;
-	struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+	struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
 
 	PMD_INIT_LOG(DEBUG, "stop");
 	dev->data->dev_started = 0;
@@ -2478,28 +2478,28 @@ virtio_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complet
 	memset(&link, 0, sizeof(link));
 	link.link_duplex = hw->duplex;
 	link.link_speed  = hw->speed;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	if (!hw->started) {
-		link.link_status = ETH_LINK_DOWN;
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	} else if (virtio_with_feature(hw, VIRTIO_NET_F_STATUS)) {
 		PMD_INIT_LOG(DEBUG, "Get link status from hw");
 		virtio_read_dev_config(hw,
 				offsetof(struct virtio_net_config, status),
 				&status, sizeof(status));
 		if ((status & VIRTIO_NET_S_LINK_UP) == 0) {
-			link.link_status = ETH_LINK_DOWN;
-			link.link_speed = ETH_SPEED_NUM_NONE;
+			link.link_status = RTE_ETH_LINK_DOWN;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 			PMD_INIT_LOG(DEBUG, "Port %d is down",
 				     dev->data->port_id);
 		} else {
-			link.link_status = ETH_LINK_UP;
+			link.link_status = RTE_ETH_LINK_UP;
 			PMD_INIT_LOG(DEBUG, "Port %d is up",
 				     dev->data->port_id);
 		}
 	} else {
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -2512,8 +2512,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct virtio_hw *hw = dev->data->dev_private;
 	uint64_t offloads = rxmode->offloads;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if ((offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if ((offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 				!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
 
 			PMD_DRV_LOG(NOTICE,
@@ -2523,8 +2523,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		}
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK)
-		hw->vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	if (mask & RTE_ETH_VLAN_STRIP_MASK)
+		hw->vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 	return 0;
 }
@@ -2546,32 +2546,32 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = hw->max_mtu;
 
 	host_features = VIRTIO_OPS(hw)->get_features(hw);
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	if (host_features & (1ULL << VIRTIO_NET_F_MRG_RXBUF))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SCATTER;
 	if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
 		dev_info->rx_offload_capa |=
-			DEV_RX_OFFLOAD_TCP_CKSUM |
-			DEV_RX_OFFLOAD_UDP_CKSUM;
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
 	}
 	if (host_features & (1ULL << VIRTIO_NET_F_CTRL_VLAN))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_FILTER;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	tso_mask = (1ULL << VIRTIO_NET_F_GUEST_TSO4) |
 		(1ULL << VIRTIO_NET_F_GUEST_TSO6);
 	if ((host_features & tso_mask) == tso_mask)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_LRO;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-				    DEV_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				    RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	if (host_features & (1ULL << VIRTIO_NET_F_CSUM)) {
 		dev_info->tx_offload_capa |=
-			DEV_TX_OFFLOAD_UDP_CKSUM |
-			DEV_TX_OFFLOAD_TCP_CKSUM;
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 	}
 	tso_mask = (1ULL << VIRTIO_NET_F_HOST_TSO4) |
 		(1ULL << VIRTIO_NET_F_HOST_TSO6);
 	if ((host_features & tso_mask) == tso_mask)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (host_features & (1ULL << VIRTIO_F_RING_PACKED)) {
 		/*
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index a19895af1f17..26d9edf5319c 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -41,20 +41,20 @@
 #define	VMXNET3_TX_MAX_SEG	UINT8_MAX
 
 #define VMXNET3_TX_OFFLOAD_CAP		\
-	(DEV_TX_OFFLOAD_VLAN_INSERT |	\
-	 DEV_TX_OFFLOAD_TCP_CKSUM |	\
-	 DEV_TX_OFFLOAD_UDP_CKSUM |	\
-	 DEV_TX_OFFLOAD_TCP_TSO |	\
-	 DEV_TX_OFFLOAD_MULTI_SEGS)
+	(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |	\
+	 RTE_ETH_TX_OFFLOAD_TCP_CKSUM |	\
+	 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |	\
+	 RTE_ETH_TX_OFFLOAD_TCP_TSO |	\
+	 RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define VMXNET3_RX_OFFLOAD_CAP		\
-	(DEV_RX_OFFLOAD_VLAN_STRIP |	\
-	 DEV_RX_OFFLOAD_VLAN_FILTER |   \
-	 DEV_RX_OFFLOAD_SCATTER |	\
-	 DEV_RX_OFFLOAD_UDP_CKSUM |	\
-	 DEV_RX_OFFLOAD_TCP_CKSUM |	\
-	 DEV_RX_OFFLOAD_TCP_LRO |	\
-	 DEV_RX_OFFLOAD_RSS_HASH)
+	(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |	\
+	 RTE_ETH_RX_OFFLOAD_VLAN_FILTER |   \
+	 RTE_ETH_RX_OFFLOAD_SCATTER |	\
+	 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+	 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |	\
+	 RTE_ETH_RX_OFFLOAD_TCP_LRO |	\
+	 RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 int vmxnet3_segs_dynfield_offset = -1;
 
@@ -398,9 +398,9 @@ eth_vmxnet3_dev_init(struct rte_eth_dev *eth_dev)
 
 	/* set the initial link status */
 	memset(&link, 0, sizeof(link));
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_speed = ETH_SPEED_NUM_10G;
-	link.link_autoneg = ETH_LINK_FIXED;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 	rte_eth_linkstatus_set(eth_dev, &link);
 
 	return 0;
@@ -486,8 +486,8 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (dev->data->nb_tx_queues > VMXNET3_MAX_TX_QUEUES ||
 	    dev->data->nb_rx_queues > VMXNET3_MAX_RX_QUEUES) {
@@ -547,7 +547,7 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
 	hw->queueDescPA = mz->iova;
 	hw->queue_desc_len = (uint16_t)size;
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		/* Allocate memory structure for UPT1_RSSConf and configure */
 		mz = gpa_zone_reserve(dev, sizeof(struct VMXNET3_RSSConf),
 				      "rss_conf", rte_socket_id(),
@@ -843,15 +843,15 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
 	devRead->rxFilterConf.rxMode = 0;
 
 	/* Setting up feature flags */
-	if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		devRead->misc.uptFeatures |= VMXNET3_F_RXCSUM;
 
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		devRead->misc.uptFeatures |= VMXNET3_F_LRO;
 		devRead->misc.maxNumRxSG = 0;
 	}
 
-	if (port_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (port_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		ret = vmxnet3_rss_configure(dev);
 		if (ret != VMXNET3_SUCCESS)
 			return ret;
@@ -863,7 +863,7 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
 	}
 
 	ret = vmxnet3_dev_vlan_offload_set(dev,
-			ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+			RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
 	if (ret)
 		return ret;
 
@@ -930,7 +930,7 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
 	}
 
 	if (VMXNET3_VERSION_GE_4(hw) &&
-	    dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	    dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		/* Check for additional RSS  */
 		ret = vmxnet3_v4_rss_configure(dev);
 		if (ret != VMXNET3_SUCCESS) {
@@ -1039,9 +1039,9 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
 
 	/* Clear recorded link status */
 	memset(&link, 0, sizeof(link));
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_speed = ETH_SPEED_NUM_10G;
-	link.link_autoneg = ETH_LINK_FIXED;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 	rte_eth_linkstatus_set(dev, &link);
 
 	hw->adapter_stopped = 1;
@@ -1365,7 +1365,7 @@ vmxnet3_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
 	dev_info->min_mtu = VMXNET3_MIN_MTU;
 	dev_info->max_mtu = VMXNET3_MAX_MTU;
-	dev_info->speed_capa = ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
 	dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
 
 	dev_info->flow_type_rss_offloads = VMXNET3_RSS_OFFLOAD_ALL;
@@ -1447,10 +1447,10 @@ __vmxnet3_dev_link_update(struct rte_eth_dev *dev,
 	ret = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
 
 	if (ret & 0x1)
-		link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_speed = ETH_SPEED_NUM_10G;
-	link.link_autoneg = ETH_LINK_FIXED;
+		link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 
 	return rte_eth_linkstatus_set(dev, &link);
 }
@@ -1503,7 +1503,7 @@ vmxnet3_dev_promiscuous_disable(struct rte_eth_dev *dev)
 	uint32_t *vf_table = hw->shared->devRead.rxFilterConf.vfTable;
 	uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 		memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
 	else
 		memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
@@ -1573,8 +1573,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	uint32_t *vf_table = devRead->rxFilterConf.vfTable;
 	uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			devRead->misc.uptFeatures |= UPT1_F_RXVLAN;
 		else
 			devRead->misc.uptFeatures &= ~UPT1_F_RXVLAN;
@@ -1583,8 +1583,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 				       VMXNET3_CMD_UPDATE_FEATURE);
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
 		else
 			memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.h b/drivers/net/vmxnet3/vmxnet3_ethdev.h
index 8950175460f0..ef858ac9512f 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.h
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.h
@@ -32,18 +32,18 @@
 				VMXNET3_MAX_RX_QUEUES + 1)
 
 #define VMXNET3_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 
 #define VMXNET3_V4_RSS_MASK ( \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define VMXNET3_MANDATORY_V4_RSS ( \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP)
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 
 /* RSS configuration structure - shared with device through GPA */
 typedef struct VMXNET3_RSSConf {
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index b01c4c01f9c9..870100fa4f11 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -1326,13 +1326,13 @@ vmxnet3_v4_rss_configure(struct rte_eth_dev *dev)
 	rss_hf = port_rss_conf->rss_hf &
 		(VMXNET3_V4_RSS_MASK | VMXNET3_RSS_OFFLOAD_ALL);
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP6;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP6;
 
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
@@ -1389,13 +1389,13 @@ vmxnet3_rss_configure(struct rte_eth_dev *dev)
 	/* loading hashType */
 	dev_rss_conf->hashType = 0;
 	rss_hf = port_rss_conf->rss_hf & VMXNET3_RSS_OFFLOAD_ALL;
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV4;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV6;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV6;
 
 	return VMXNET3_SUCCESS;
diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
index 68e3c13730ad..a9fef2297842 100644
--- a/examples/bbdev_app/main.c
+++ b/examples/bbdev_app/main.c
@@ -71,11 +71,11 @@ mbuf_input(struct rte_mbuf *mbuf)
 
 static const struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -328,7 +328,7 @@ check_port_link_status(uint16_t port_id)
 
 		if (link_get_err >= 0 && link.link_status) {
 			const char *dp = (link.link_duplex ==
-				ETH_LINK_FULL_DUPLEX) ?
+				RTE_ETH_LINK_FULL_DUPLEX) ?
 				"full-duplex" : "half-duplex";
 			printf("\nPort %u Link Up - speed %s - %s\n",
 				port_id,
diff --git a/examples/bond/main.c b/examples/bond/main.c
index 6352a715c0d9..3f41d8e5965d 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -115,17 +115,17 @@ static struct rte_mempool *mbuf_pool;
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -149,9 +149,9 @@ slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
 			"Error during getting device (port %u) info: %s\n",
 			portid, strerror(-retval));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
@@ -241,9 +241,9 @@ bond_port_init(struct rte_mempool *mbuf_pool)
 			"Error during getting device (port %u) info: %s\n",
 			BOND_PORT, strerror(-retval));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	retval = rte_eth_dev_configure(BOND_PORT, 1, 1, &local_port_conf);
 	if (retval != 0)
 		rte_exit(EXIT_FAILURE, "port %u: configuration failed (res=%d)\n",
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index 8c4a8feec0c2..c681e237ea46 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -80,15 +80,15 @@ struct app_stats prev_app_stats;
 
 static const struct rte_eth_conf port_conf_default = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
-			.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
-				ETH_RSS_TCP | ETH_RSS_SCTP,
+			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+				RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
 		}
 	},
 };
@@ -126,9 +126,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
diff --git a/examples/ethtool/ethtool-app/main.c b/examples/ethtool/ethtool-app/main.c
index 1bc675962bf3..cdd9e9b60bd8 100644
--- a/examples/ethtool/ethtool-app/main.c
+++ b/examples/ethtool/ethtool-app/main.c
@@ -98,7 +98,7 @@ static void setup_ports(struct app_config *app_cfg, int cnt_ports)
 	int ret;
 
 	memset(&cfg_port, 0, sizeof(cfg_port));
-	cfg_port.txmode.mq_mode = ETH_MQ_TX_NONE;
+	cfg_port.txmode.mq_mode = RTE_ETH_MQ_TX_NONE;
 
 	for (idx_port = 0; idx_port < cnt_ports; idx_port++) {
 		struct app_port *ptr_port = &app_cfg->ports[idx_port];
diff --git a/examples/ethtool/lib/rte_ethtool.c b/examples/ethtool/lib/rte_ethtool.c
index 413251630709..e7cdf8d5775b 100644
--- a/examples/ethtool/lib/rte_ethtool.c
+++ b/examples/ethtool/lib/rte_ethtool.c
@@ -233,13 +233,13 @@ rte_ethtool_get_pauseparam(uint16_t port_id,
 	pause_param->tx_pause = 0;
 	pause_param->rx_pause = 0;
 	switch (fc_conf.mode) {
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		pause_param->rx_pause = 1;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		pause_param->tx_pause = 1;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		pause_param->rx_pause = 1;
 		pause_param->tx_pause = 1;
 	default:
@@ -277,14 +277,14 @@ rte_ethtool_set_pauseparam(uint16_t port_id,
 
 	if (pause_param->tx_pause) {
 		if (pause_param->rx_pause)
-			fc_conf.mode = RTE_FC_FULL;
+			fc_conf.mode = RTE_ETH_FC_FULL;
 		else
-			fc_conf.mode = RTE_FC_TX_PAUSE;
+			fc_conf.mode = RTE_ETH_FC_TX_PAUSE;
 	} else {
 		if (pause_param->rx_pause)
-			fc_conf.mode = RTE_FC_RX_PAUSE;
+			fc_conf.mode = RTE_ETH_FC_RX_PAUSE;
 		else
-			fc_conf.mode = RTE_FC_NONE;
+			fc_conf.mode = RTE_ETH_FC_NONE;
 	}
 
 	status = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
@@ -398,12 +398,12 @@ rte_ethtool_net_set_rx_mode(uint16_t port_id)
 	for (vf = 0; vf < num_vfs; vf++) {
 #ifdef RTE_NET_IXGBE
 		rte_pmd_ixgbe_set_vf_rxmode(port_id, vf,
-			ETH_VMDQ_ACCEPT_UNTAG, 0);
+			RTE_ETH_VMDQ_ACCEPT_UNTAG, 0);
 #endif
 	}
 
 	/* Enable Rx vlan filter, VF unspport status is discard */
-	ret = rte_eth_dev_set_vlan_offload(port_id, ETH_VLAN_FILTER_MASK);
+	ret = rte_eth_dev_set_vlan_offload(port_id, RTE_ETH_VLAN_FILTER_MASK);
 	if (ret != 0)
 		return ret;
 
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index e26be8edf28f..193a16463449 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -283,13 +283,13 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 	struct rte_eth_rxconf rx_conf;
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
-				.rss_hf = ETH_RSS_IP |
-					  ETH_RSS_TCP |
-					  ETH_RSS_UDP,
+				.rss_hf = RTE_ETH_RSS_IP |
+					  RTE_ETH_RSS_TCP |
+					  RTE_ETH_RSS_UDP,
 			}
 		}
 	};
@@ -311,12 +311,12 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_RSS_HASH)
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_RSS_HASH)
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	rx_conf = dev_info.default_rxconf;
 	rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index 476b147bdfcc..1b841d46ad93 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -614,13 +614,13 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 	struct rte_eth_rxconf rx_conf;
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
-				.rss_hf = ETH_RSS_IP |
-					  ETH_RSS_TCP |
-					  ETH_RSS_UDP,
+				.rss_hf = RTE_ETH_RSS_IP |
+					  RTE_ETH_RSS_TCP |
+					  RTE_ETH_RSS_UDP,
 			}
 		}
 	};
@@ -642,9 +642,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	rx_conf = dev_info.default_rxconf;
 	rx_conf.offloads = port_conf.rxmode.offloads;
 
diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
index 8a43f6ac0f92..6185b340600c 100644
--- a/examples/flow_classify/flow_classify.c
+++ b/examples/flow_classify/flow_classify.c
@@ -212,9 +212,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/flow_filtering/main.c b/examples/flow_filtering/main.c
index dd8a33d036ee..bfc1949c8428 100644
--- a/examples/flow_filtering/main.c
+++ b/examples/flow_filtering/main.c
@@ -113,7 +113,7 @@ assert_link_status(void)
 	memset(&link, 0, sizeof(link));
 	do {
 		link_get_err = rte_eth_link_get(port_id, &link);
-		if (link_get_err == 0 && link.link_status == ETH_LINK_UP)
+		if (link_get_err == 0 && link.link_status == RTE_ETH_LINK_UP)
 			break;
 		rte_delay_ms(CHECK_INTERVAL);
 	} while (--rep_cnt);
@@ -121,7 +121,7 @@ assert_link_status(void)
 	if (link_get_err < 0)
 		rte_exit(EXIT_FAILURE, ":: error: link get is failing: %s\n",
 			 rte_strerror(-link_get_err));
-	if (link.link_status == ETH_LINK_DOWN)
+	if (link.link_status == RTE_ETH_LINK_DOWN)
 		rte_exit(EXIT_FAILURE, ":: error: link is still down\n");
 }
 
@@ -138,12 +138,12 @@ init_port(void)
 		},
 		.txmode = {
 			.offloads =
-				DEV_TX_OFFLOAD_VLAN_INSERT |
-				DEV_TX_OFFLOAD_IPV4_CKSUM  |
-				DEV_TX_OFFLOAD_UDP_CKSUM   |
-				DEV_TX_OFFLOAD_TCP_CKSUM   |
-				DEV_TX_OFFLOAD_SCTP_CKSUM  |
-				DEV_TX_OFFLOAD_TCP_TSO,
+				RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_TCP_TSO,
 		},
 	};
 	struct rte_eth_txconf txq_conf;
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index ccfee585f850..b1aa2767a0af 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -819,12 +819,12 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
 	/* Configuring port to use RSS for multiple RX queues. 8< */
 	static const struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_PROTO_MASK,
+				.rss_hf = RTE_ETH_RSS_PROTO_MASK,
 			}
 		}
 	};
@@ -852,9 +852,9 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
 
 	local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(portid, nb_queues, 1, &local_port_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Cannot configure device:"
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 8644454a9aef..0307709f2b4a 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -149,13 +149,13 @@ static struct rte_eth_conf port_conf = {
 		.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
 			RTE_ETHER_CRC_LEN,
 		.split_hdr_size = 0,
-		.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
-			     DEV_RX_OFFLOAD_SCATTER),
+		.offloads = (RTE_ETH_RX_OFFLOAD_CHECKSUM |
+			     RTE_ETH_RX_OFFLOAD_SCATTER),
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_MULTI_SEGS),
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
 	},
 };
 
@@ -624,7 +624,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 9ba02e687adb..0290767af473 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -45,7 +45,7 @@ link_next(struct link *link)
 static struct rte_eth_conf port_conf_default = {
 	.link_speeds = 0,
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
 		.split_hdr_size = 0, /* Header split buffer size */
 	},
@@ -57,12 +57,12 @@ static struct rte_eth_conf port_conf_default = {
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
 
-#define RETA_CONF_SIZE     (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE     (RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE)
 
 static int
 rss_setup(uint16_t port_id,
@@ -77,11 +77,11 @@ rss_setup(uint16_t port_id,
 	memset(reta_conf, 0, sizeof(reta_conf));
 
 	for (i = 0; i < reta_size; i++)
-		reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
 
 	for (i = 0; i < reta_size; i++) {
-		uint32_t reta_id = i / RTE_RETA_GROUP_SIZE;
-		uint32_t reta_pos = i % RTE_RETA_GROUP_SIZE;
+		uint32_t reta_id = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint32_t reta_pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint32_t rss_qs_pos = i % rss->n_queues;
 
 		reta_conf[reta_id].reta[reta_pos] =
@@ -139,7 +139,7 @@ link_create(const char *name, struct link_params *params)
 	rss = params->rx.rss;
 	if (rss) {
 		if ((port_info.reta_size == 0) ||
-			(port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+			(port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
 			return NULL;
 
 		if ((rss->n_queues == 0) ||
@@ -157,9 +157,9 @@ link_create(const char *name, struct link_params *params)
 	/* Port */
 	memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
 	if (rss) {
-		port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+		port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
 		port_conf.rx_adv_conf.rss_conf.rss_hf =
-			(ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+			(RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
 			port_info.flow_type_rss_offloads;
 	}
 
@@ -267,5 +267,5 @@ link_is_up(const char *name)
 	if (rte_eth_link_get(link->port_id, &link_params) < 0)
 		return 0;
 
-	return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+	return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
 }
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 4f0e12e62447..a9f9bd477007 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -161,22 +161,22 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_RSS,
+		.mq_mode        = RTE_ETH_MQ_RX_RSS,
 		.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
 			RTE_ETHER_CRC_LEN,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_MULTI_SEGS),
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
 	},
 };
 
@@ -738,7 +738,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -1096,9 +1096,9 @@ main(int argc, char **argv)
 		n_tx_queue = nb_lcores;
 		if (n_tx_queue > MAX_TX_QUEUE_PER_PORT)
 			n_tx_queue = MAX_TX_QUEUE_PER_PORT;
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 5f5ec260f315..feddd84d1551 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -234,19 +234,19 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode	= ETH_MQ_RX_RSS,
+		.mq_mode	= RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
-				ETH_RSS_TCP | ETH_RSS_SCTP,
+			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+				RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -1455,10 +1455,10 @@ print_usage(const char *prgname)
 		"               \"parallel\" : Parallel\n"
 		"  --" CMD_LINE_OPT_RX_OFFLOAD
 		": bitmask of the RX HW offload capabilities to enable/use\n"
-		"                         (DEV_RX_OFFLOAD_*)\n"
+		"                         (RTE_ETH_RX_OFFLOAD_*)\n"
 		"  --" CMD_LINE_OPT_TX_OFFLOAD
 		": bitmask of the TX HW offload capabilities to enable/use\n"
-		"                         (DEV_TX_OFFLOAD_*)\n"
+		"                         (RTE_ETH_TX_OFFLOAD_*)\n"
 		"  --" CMD_LINE_OPT_REASSEMBLE " NUM"
 		": max number of entries in reassemble(fragment) table\n"
 		"    (zero (default value) disables reassembly)\n"
@@ -1909,7 +1909,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2212,8 +2212,8 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 	local_port_conf.rxmode.mtu = mtu_size;
 
 	if (multi_seg_required()) {
-		local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
-		local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		local_port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+		local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	}
 
 	local_port_conf.rxmode.offloads |= req_rx_offloads;
@@ -2236,12 +2236,12 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 			portid, local_port_conf.txmode.offloads,
 			dev_info.tx_offload_capa);
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM)
-		local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
+		local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 
 	printf("port %u configurng rx_offloads=0x%" PRIx64
 		", tx_offloads=0x%" PRIx64 "\n",
@@ -2299,7 +2299,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 		/* Pre-populate pkt offloads based on capabilities */
 		qconf->outbound.ipv4_offloads = PKT_TX_IPV4;
 		qconf->outbound.ipv6_offloads = PKT_TX_IPV6;
-		if (local_port_conf.txmode.offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+		if (local_port_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 			qconf->outbound.ipv4_offloads |= PKT_TX_IP_CKSUM;
 
 		tx_queueid++;
@@ -2660,7 +2660,7 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads)
 	struct rte_flow *flow;
 	int ret;
 
-	if (!(rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+	if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
 		return;
 
 	/* Add the default rte_flow to enable SECURITY for all ESP packets */
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 17a28556c971..5cdd794f017f 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -986,7 +986,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
 
 	if (inbound) {
 		if ((dev_info.rx_offload_capa &
-				DEV_RX_OFFLOAD_SECURITY) == 0) {
+				RTE_ETH_RX_OFFLOAD_SECURITY) == 0) {
 			RTE_LOG(WARNING, PORT,
 				"hardware RX IPSec offload is not supported\n");
 			return -EINVAL;
@@ -994,7 +994,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
 
 	} else { /* outbound */
 		if ((dev_info.tx_offload_capa &
-				DEV_TX_OFFLOAD_SECURITY) == 0) {
+				RTE_ETH_TX_OFFLOAD_SECURITY) == 0) {
 			RTE_LOG(WARNING, PORT,
 				"hardware TX IPSec offload is not supported\n");
 			return -EINVAL;
@@ -1628,7 +1628,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
 				rule_type ==
 				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
 				&& rule->portid == port_id)
-			*rx_offloads |= DEV_RX_OFFLOAD_SECURITY;
+			*rx_offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
 	}
 
 	/* Check for outbound rules that use offloads and use this port */
@@ -1639,7 +1639,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
 				rule_type ==
 				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
 				&& rule->portid == port_id)
-			*tx_offloads |= DEV_TX_OFFLOAD_SECURITY;
+			*tx_offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
 	}
 	return 0;
 }
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 87538dccc879..32670f80bc2b 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -115,8 +115,8 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
 	},
 };
 
@@ -620,7 +620,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/kni/main.c b/examples/kni/main.c
index 1790ec024072..f780be712ec0 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -95,7 +95,7 @@ static struct kni_port_params *kni_port_params_array[RTE_MAX_ETHPORTS];
 /* Options for configuring ethernet port */
 static struct rte_eth_conf port_conf = {
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -608,9 +608,9 @@ init_port(uint16_t port)
 			"Error during getting device (port %u) info: %s\n",
 			port, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(port, 1, 1, &local_port_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Could not configure port%u (%d)\n",
@@ -688,7 +688,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index c646f1748ca7..42c04abbbb34 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -216,11 +216,11 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -1808,7 +1808,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2632,9 +2632,9 @@ initialize_ports(struct l2fwd_crypto_options *options)
 			return retval;
 		}
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		retval = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (retval < 0) {
 			printf("Cannot configure device: err=%d, port=%u\n",
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index 9040be5ed9b6..cf3d1b8aaf40 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -14,7 +14,7 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 			.split_hdr_size = 0,
 		},
 		.txmode = {
-			.mq_mode = ETH_MQ_TX_NONE,
+			.mq_mode = RTE_ETH_MQ_TX_NONE,
 		},
 	};
 	uint16_t nb_ports_available = 0;
@@ -22,9 +22,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 	int ret;
 
 	if (rsrc->event_mode) {
-		port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+		port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
 		port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
-		port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
+		port_conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;
 	}
 
 	/* Initialise each port */
@@ -60,9 +60,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 				local_port_conf.rx_adv_conf.rss_conf.rss_hf);
 		}
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure RX and TX queue. 8< */
 		ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 1db89f2bd139..9806204b81d1 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -395,7 +395,7 @@ check_all_ports_link_status(struct l2fwd_resources *rsrc,
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c
index 06280321b1f2..092ea0189c7f 100644
--- a/examples/l2fwd-jobstats/main.c
+++ b/examples/l2fwd-jobstats/main.c
@@ -94,7 +94,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -726,7 +726,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -869,9 +869,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure the RX and TX queues. 8< */
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/l2fwd-keepalive/main.c b/examples/l2fwd-keepalive/main.c
index 07271affb4a9..78e43f9c091e 100644
--- a/examples/l2fwd-keepalive/main.c
+++ b/examples/l2fwd-keepalive/main.c
@@ -83,7 +83,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -478,7 +478,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -650,9 +650,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
 			rte_exit(EXIT_FAILURE,
diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index f3deeba0a665..3edabd1dd19b 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -95,7 +95,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -606,7 +606,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -792,9 +792,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure the number of queues for a port. */
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 1890c88a5b01..fea414ae5929 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -124,19 +124,19 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode	= ETH_MQ_RX_RSS,
+		.mq_mode	= RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
-				ETH_RSS_TCP | ETH_RSS_SCTP,
+			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+				RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -1936,7 +1936,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2004,7 +2004,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -2088,9 +2088,9 @@ main(int argc, char **argv)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index 05385807e83e..7f00c65609ed 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -111,17 +111,17 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -607,7 +607,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* Clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -731,7 +731,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -828,9 +828,9 @@ main(int argc, char **argv)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 6aa1b66ecfcc..5a4359a368b5 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -250,18 +250,18 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_RSS,
+		.mq_mode        = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_UDP,
+			.rss_hf = RTE_ETH_RSS_UDP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	}
 };
 
@@ -2197,7 +2197,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2510,7 +2510,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -2638,9 +2638,9 @@ main(int argc, char **argv)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 961860ea18ef..7c7613a83aad 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -75,9 +75,9 @@ l3fwd_eth_dev_port_setup(struct rte_eth_conf *port_conf)
 			rte_panic("Error during getting device (port %u) info:"
 				  "%s\n", port_id, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+						RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 						dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index f27c76bb7a73..51cbf81f1afa 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -120,18 +120,18 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -903,7 +903,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -988,7 +988,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -1053,15 +1053,15 @@ l3fwd_poll_resource_setup(void)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
 
 		if (dev_info.max_rx_queues == 1)
-			local_port_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+			local_port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
 
 		if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
 				port_conf.rx_adv_conf.rss_conf.rss_hf) {
diff --git a/examples/link_status_interrupt/main.c b/examples/link_status_interrupt/main.c
index e4542df11f87..8714acddd110 100644
--- a/examples/link_status_interrupt/main.c
+++ b/examples/link_status_interrupt/main.c
@@ -83,7 +83,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.intr_conf = {
 		.lsc = 1, /**< lsc interrupt feature enabled */
@@ -147,7 +147,7 @@ print_stats(void)
 			   link_get_err < 0 ? "0" :
 			   rte_eth_link_speed_to_str(link.link_speed),
 			   link_get_err < 0 ? "Link get failed" :
-			   (link.link_duplex == ETH_LINK_FULL_DUPLEX ? \
+			   (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex"),
 			   port_statistics[portid].tx,
 			   port_statistics[portid].rx,
@@ -507,7 +507,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -634,9 +634,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure RX and TX queues. 8< */
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/multi_process/client_server_mp/mp_server/init.c b/examples/multi_process/client_server_mp/mp_server/init.c
index 1ad71ca7ec5f..23307073c904 100644
--- a/examples/multi_process/client_server_mp/mp_server/init.c
+++ b/examples/multi_process/client_server_mp/mp_server/init.c
@@ -94,7 +94,7 @@ init_port(uint16_t port_num)
 	/* for port configuration all features are off by default */
 	const struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS
+			.mq_mode = RTE_ETH_MQ_RX_RSS
 		}
 	};
 	const uint16_t rx_rings = 1, tx_rings = num_clients;
@@ -213,7 +213,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/multi_process/symmetric_mp/main.c b/examples/multi_process/symmetric_mp/main.c
index 01dc3acf34d5..85955375f1bf 100644
--- a/examples/multi_process/symmetric_mp/main.c
+++ b/examples/multi_process/symmetric_mp/main.c
@@ -176,18 +176,18 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
 {
 	struct rte_eth_conf port_conf = {
 			.rxmode = {
-				.mq_mode	= ETH_MQ_RX_RSS,
+				.mq_mode	= RTE_ETH_MQ_RX_RSS,
 				.split_hdr_size = 0,
-				.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+				.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 			},
 			.rx_adv_conf = {
 				.rss_conf = {
 					.rss_key = NULL,
-					.rss_hf = ETH_RSS_IP,
+					.rss_hf = RTE_ETH_RSS_IP,
 				},
 			},
 			.txmode = {
-				.mq_mode = ETH_MQ_TX_NONE,
+				.mq_mode = RTE_ETH_MQ_TX_NONE,
 			}
 	};
 	const uint16_t rx_rings = num_queues, tx_rings = num_queues;
@@ -218,9 +218,9 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
 
 	info.default_rxconf.rx_drop_en = 1;
 
-	if (info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
 	port_conf.rx_adv_conf.rss_conf.rss_hf &= info.flow_type_rss_offloads;
@@ -392,7 +392,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
index e9a388710647..f110fc129f55 100644
--- a/examples/ntb/ntb_fwd.c
+++ b/examples/ntb/ntb_fwd.c
@@ -89,17 +89,17 @@ static uint16_t pkt_burst = NTB_DFLT_PKT_BURST;
 
 static struct rte_eth_conf eth_port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c
index 4f6982bc1289..b01ac60fd196 100644
--- a/examples/packet_ordering/main.c
+++ b/examples/packet_ordering/main.c
@@ -294,9 +294,9 @@ configure_eth_port(uint16_t port_id)
 		return ret;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(port_id, rxRings, txRings, &port_conf);
 	if (ret != 0)
 		return ret;
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 5de5df997ee9..baeee9298d57 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -307,18 +307,18 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_TCP,
+			.rss_hf = RTE_ETH_RSS_TCP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -3441,7 +3441,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -3494,7 +3494,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -3593,9 +3593,9 @@ main(int argc, char **argv)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 4f20dfc4be06..569207a79d62 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -133,7 +133,7 @@ mempool_find(struct obj *obj, const char *name)
 static struct rte_eth_conf port_conf_default = {
 	.link_speeds = 0,
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
 		.split_hdr_size = 0, /* Header split buffer size */
 	},
@@ -145,12 +145,12 @@ static struct rte_eth_conf port_conf_default = {
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
 
-#define RETA_CONF_SIZE     (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE     (RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE)
 
 static int
 rss_setup(uint16_t port_id,
@@ -165,11 +165,11 @@ rss_setup(uint16_t port_id,
 	memset(reta_conf, 0, sizeof(reta_conf));
 
 	for (i = 0; i < reta_size; i++)
-		reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
 
 	for (i = 0; i < reta_size; i++) {
-		uint32_t reta_id = i / RTE_RETA_GROUP_SIZE;
-		uint32_t reta_pos = i % RTE_RETA_GROUP_SIZE;
+		uint32_t reta_id = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint32_t reta_pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint32_t rss_qs_pos = i % rss->n_queues;
 
 		reta_conf[reta_id].reta[reta_pos] =
@@ -227,7 +227,7 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
 	rss = params->rx.rss;
 	if (rss) {
 		if ((port_info.reta_size == 0) ||
-			(port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+			(port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
 			return NULL;
 
 		if ((rss->n_queues == 0) ||
@@ -245,9 +245,9 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
 	/* Port */
 	memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
 	if (rss) {
-		port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+		port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
 		port_conf.rx_adv_conf.rss_conf.rss_hf =
-			(ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+			(RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
 			port_info.flow_type_rss_offloads;
 	}
 
@@ -356,7 +356,7 @@ link_is_up(struct obj *obj, const char *name)
 	if (rte_eth_link_get(link->port_id, &link_params) < 0)
 		return 0;
 
-	return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+	return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
 }
 
 struct link *
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 229a277032cb..979d9eb9e9d0 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -193,14 +193,14 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	/* Force full Tx path in the driver, required for IEEE1588 */
-	port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index c32d2e12e633..743bae2da50a 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -51,18 +51,18 @@ static struct rte_mempool *pool = NULL;
  ***/
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode	= ETH_MQ_RX_RSS,
+		.mq_mode	= RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_DCB_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -332,8 +332,8 @@ main(int argc, char **argv)
 			"Error during getting device (port %u) info: %s\n",
 			port_rx, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
-		conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+		conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
 	if (conf.rx_adv_conf.rss_conf.rss_hf !=
@@ -378,8 +378,8 @@ main(int argc, char **argv)
 			"Error during getting device (port %u) info: %s\n",
 			port_tx, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
-		conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+		conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
 	if (conf.rx_adv_conf.rss_conf.rss_hf !=
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 1367569c65db..9b34e4a76b1b 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -60,7 +60,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_DCB_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -105,9 +105,9 @@ app_init_port(uint16_t portid, struct rte_mempool *mp)
 			"Error during getting device (port %u) info: %s\n",
 			portid, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE,
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index 6845c396b8d9..1903d8b095a1 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -141,17 +141,17 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	if (hw_timestamping) {
-		if (!(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)) {
+		if (!(dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
 			printf("\nERROR: Port %u does not support hardware timestamping\n"
 					, port);
 			return -1;
 		}
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 		rte_mbuf_dyn_rx_timestamp_register(&hwts_dynfield_offset, NULL);
 		if (hwts_dynfield_offset < 0) {
 			printf("ERROR: Failed to register timestamp field\n");
diff --git a/examples/server_node_efd/server/init.c b/examples/server_node_efd/server/init.c
index 9ebd88bac20e..074fee5b26b2 100644
--- a/examples/server_node_efd/server/init.c
+++ b/examples/server_node_efd/server/init.c
@@ -96,7 +96,7 @@ init_port(uint16_t port_num)
 	/* for port configuration all features are off by default */
 	struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 	};
 	const uint16_t rx_rings = 1, tx_rings = num_nodes;
@@ -115,9 +115,9 @@ init_port(uint16_t port_num)
 	if (retval != 0)
 		return retval;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/*
 	 * Standard DPDK port initialisation - config port, then set up
@@ -277,7 +277,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index fd7207aee758..16435ee3ccc2 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -49,9 +49,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 999809e6ed41..49c134a3042f 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -110,23 +110,23 @@ static int nb_sockets;
 /* empty vmdq configuration structure. Filled in programmatically */
 static struct rte_eth_conf vmdq_conf_default = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
+		.mq_mode        = RTE_ETH_MQ_RX_VMDQ_ONLY,
 		.split_hdr_size = 0,
 		/*
 		 * VLAN strip is necessary for 1G NICs such as the I350;
 		 * it fixes a bug where IPv4 forwarding in the guest cannot
 		 * forward packets from one virtio device to another.
 		 */
-		.offloads = DEV_RX_OFFLOAD_VLAN_STRIP,
+		.offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP,
 	},
 
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_TCP_CKSUM |
-			     DEV_TX_OFFLOAD_VLAN_INSERT |
-			     DEV_TX_OFFLOAD_MULTI_SEGS |
-			     DEV_TX_OFFLOAD_TCP_TSO),
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+			     RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+			     RTE_ETH_TX_OFFLOAD_TCP_TSO),
 	},
 	.rx_adv_conf = {
 		/*
@@ -134,7 +134,7 @@ static struct rte_eth_conf vmdq_conf_default = {
 		 * appropriate values
 		 */
 		.vmdq_rx_conf = {
-			.nb_queue_pools = ETH_8_POOLS,
+			.nb_queue_pools = RTE_ETH_8_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -291,9 +291,9 @@ port_init(uint16_t port)
 		return -1;
 
 	rx_rings = (uint16_t)dev_info.max_rx_queues;
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	/* Configure ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
 	if (retval != 0) {
@@ -557,8 +557,8 @@ us_vhost_parse_args(int argc, char **argv)
 		case 'P':
 			promiscuous = 1;
 			vmdq_conf_default.rx_adv_conf.vmdq_rx_conf.rx_mode =
-				ETH_VMDQ_ACCEPT_BROADCAST |
-				ETH_VMDQ_ACCEPT_MULTICAST;
+				RTE_ETH_VMDQ_ACCEPT_BROADCAST |
+				RTE_ETH_VMDQ_ACCEPT_MULTICAST;
 			break;
 
 		case OPT_VM2VM_NUM:
diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
index e19d79a40802..b159291d77ce 100644
--- a/examples/vm_power_manager/main.c
+++ b/examples/vm_power_manager/main.c
@@ -73,9 +73,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
@@ -270,7 +270,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 		       /* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index ee7f4324e141..1f336082e5c1 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -66,12 +66,12 @@ static uint8_t rss_enable;
 /* empty vmdq configuration structure. Filled in programmatically */
 static const struct rte_eth_conf vmdq_conf_default = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
+		.mq_mode        = RTE_ETH_MQ_RX_VMDQ_ONLY,
 		.split_hdr_size = 0,
 	},
 
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.rx_adv_conf = {
 		/*
@@ -79,7 +79,7 @@ static const struct rte_eth_conf vmdq_conf_default = {
 		 * appropriate values
 		 */
 		.vmdq_rx_conf = {
-			.nb_queue_pools = ETH_8_POOLS,
+			.nb_queue_pools = RTE_ETH_8_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -157,11 +157,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t num_pools)
 	(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_rx_conf, &conf,
 		   sizeof(eth_conf->rx_adv_conf.vmdq_rx_conf)));
 	if (rss_enable) {
-		eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
-		eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
-							ETH_RSS_UDP |
-							ETH_RSS_TCP |
-							ETH_RSS_SCTP;
+		eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
+		eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+							RTE_ETH_RSS_UDP |
+							RTE_ETH_RSS_TCP |
+							RTE_ETH_RSS_SCTP;
 	}
 	return 0;
 }
@@ -259,9 +259,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	retval = rte_eth_dev_configure(port, rxRings, txRings, &port_conf);
 	if (retval != 0)
 		return retval;
diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c
index 14c20e6a8b26..1a19f1799bd2 100644
--- a/examples/vmdq_dcb/main.c
+++ b/examples/vmdq_dcb/main.c
@@ -60,8 +60,8 @@ static uint16_t ports[RTE_MAX_ETHPORTS];
 static unsigned num_ports;
 
 /* number of pools (if the user does not specify any, 32 by default) */
-static enum rte_eth_nb_pools num_pools = ETH_32_POOLS;
-static enum rte_eth_nb_tcs   num_tcs   = ETH_4_TCS;
+static enum rte_eth_nb_pools num_pools = RTE_ETH_32_POOLS;
+static enum rte_eth_nb_tcs   num_tcs   = RTE_ETH_4_TCS;
 static uint16_t num_queues, num_vmdq_queues;
 static uint16_t vmdq_pool_base, vmdq_queue_base;
 static uint8_t rss_enable;
@@ -69,11 +69,11 @@ static uint8_t rss_enable;
 /* Empty vmdq+dcb configuration structure. Filled in programmatically. 8< */
 static const struct rte_eth_conf vmdq_dcb_conf_default = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_VMDQ_DCB,
+		.mq_mode        = RTE_ETH_MQ_RX_VMDQ_DCB,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_VMDQ_DCB,
+		.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB,
 	},
 	/*
 	 * should be overridden separately in code with
@@ -81,7 +81,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 	 */
 	.rx_adv_conf = {
 		.vmdq_dcb_conf = {
-			.nb_queue_pools = ETH_32_POOLS,
+			.nb_queue_pools = RTE_ETH_32_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -89,12 +89,12 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 			.dcb_tc = {0},
 		},
 		.dcb_rx_conf = {
-				.nb_tcs = ETH_4_TCS,
+				.nb_tcs = RTE_ETH_4_TCS,
 				/** Traffic class each UP mapped to. */
 				.dcb_tc = {0},
 		},
 		.vmdq_rx_conf = {
-			.nb_queue_pools = ETH_32_POOLS,
+			.nb_queue_pools = RTE_ETH_32_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -103,7 +103,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 	},
 	.tx_adv_conf = {
 		.vmdq_dcb_tx_conf = {
-			.nb_queue_pools = ETH_32_POOLS,
+			.nb_queue_pools = RTE_ETH_32_POOLS,
 			.dcb_tc = {0},
 		},
 	},
@@ -157,7 +157,7 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
 		conf.pool_map[i].pools = 1UL << i;
 		vmdq_conf.pool_map[i].pools = 1UL << i;
 	}
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		conf.dcb_tc[i] = i % num_tcs;
 		dcb_conf.dcb_tc[i] = i % num_tcs;
 		tx_conf.dcb_tc[i] = i % num_tcs;
@@ -173,11 +173,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
 	(void)(rte_memcpy(&eth_conf->tx_adv_conf.vmdq_dcb_tx_conf, &tx_conf,
 			  sizeof(tx_conf)));
 	if (rss_enable) {
-		eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
-		eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
-							ETH_RSS_UDP |
-							ETH_RSS_TCP |
-							ETH_RSS_SCTP;
+		eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
+		eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+							RTE_ETH_RSS_UDP |
+							RTE_ETH_RSS_TCP |
+							RTE_ETH_RSS_SCTP;
 	}
 	return 0;
 }
@@ -271,9 +271,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
 	port_conf.rx_adv_conf.rss_conf.rss_hf &=
@@ -382,9 +382,9 @@ vmdq_parse_num_pools(const char *q_arg)
 	if (n != 16 && n != 32)
 		return -1;
 	if (n == 16)
-		num_pools = ETH_16_POOLS;
+		num_pools = RTE_ETH_16_POOLS;
 	else
-		num_pools = ETH_32_POOLS;
+		num_pools = RTE_ETH_32_POOLS;
 
 	return 0;
 }
@@ -404,9 +404,9 @@ vmdq_parse_num_tcs(const char *q_arg)
 	if (n != 4 && n != 8)
 		return -1;
 	if (n == 4)
-		num_tcs = ETH_4_TCS;
+		num_tcs = RTE_ETH_4_TCS;
 	else
-		num_tcs = ETH_8_TCS;
+		num_tcs = RTE_ETH_8_TCS;
 
 	return 0;
 }
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 0174ba03d7f3..c134b878684e 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -116,7 +116,7 @@ struct rte_eth_dev_data {
 			/**< Device Ethernet link address.
 			 *   @see rte_eth_dev_release_port()
 			 */
-	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+	uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
 			/**< Bitmap associating MAC addresses to pools. */
 	struct rte_ether_addr *hash_mac_addrs;
 			/**< Device Ethernet MAC addresses of hash filtering.
@@ -1657,23 +1657,23 @@ struct rte_eth_syn_filter {
 /**
  * filter type of tunneling packet
  */
-#define ETH_TUNNEL_FILTER_OMAC  0x01 /**< filter by outer MAC addr */
-#define ETH_TUNNEL_FILTER_OIP   0x02 /**< filter by outer IP Addr */
-#define ETH_TUNNEL_FILTER_TENID 0x04 /**< filter by tenant ID */
-#define ETH_TUNNEL_FILTER_IMAC  0x08 /**< filter by inner MAC addr */
-#define ETH_TUNNEL_FILTER_IVLAN 0x10 /**< filter by inner VLAN ID */
-#define ETH_TUNNEL_FILTER_IIP   0x20 /**< filter by inner IP addr */
-
-#define RTE_TUNNEL_FILTER_IMAC_IVLAN (ETH_TUNNEL_FILTER_IMAC | \
-					ETH_TUNNEL_FILTER_IVLAN)
-#define RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID (ETH_TUNNEL_FILTER_IMAC | \
-					ETH_TUNNEL_FILTER_IVLAN | \
-					ETH_TUNNEL_FILTER_TENID)
-#define RTE_TUNNEL_FILTER_IMAC_TENID (ETH_TUNNEL_FILTER_IMAC | \
-					ETH_TUNNEL_FILTER_TENID)
-#define RTE_TUNNEL_FILTER_OMAC_TENID_IMAC (ETH_TUNNEL_FILTER_OMAC | \
-					ETH_TUNNEL_FILTER_TENID | \
-					ETH_TUNNEL_FILTER_IMAC)
+#define RTE_ETH_TUNNEL_FILTER_OMAC  0x01 /**< filter by outer MAC addr */
+#define RTE_ETH_TUNNEL_FILTER_OIP   0x02 /**< filter by outer IP Addr */
+#define RTE_ETH_TUNNEL_FILTER_TENID 0x04 /**< filter by tenant ID */
+#define RTE_ETH_TUNNEL_FILTER_IMAC  0x08 /**< filter by inner MAC addr */
+#define RTE_ETH_TUNNEL_FILTER_IVLAN 0x10 /**< filter by inner VLAN ID */
+#define RTE_ETH_TUNNEL_FILTER_IIP   0x20 /**< filter by inner IP addr */
+
+#define RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN (RTE_ETH_TUNNEL_FILTER_IMAC | \
+					  RTE_ETH_TUNNEL_FILTER_IVLAN)
+#define RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID (RTE_ETH_TUNNEL_FILTER_IMAC | \
+						RTE_ETH_TUNNEL_FILTER_IVLAN | \
+						RTE_ETH_TUNNEL_FILTER_TENID)
+#define RTE_ETH_TUNNEL_FILTER_IMAC_TENID (RTE_ETH_TUNNEL_FILTER_IMAC | \
+					  RTE_ETH_TUNNEL_FILTER_TENID)
+#define RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC (RTE_ETH_TUNNEL_FILTER_OMAC | \
+					       RTE_ETH_TUNNEL_FILTER_TENID | \
+					       RTE_ETH_TUNNEL_FILTER_IMAC)
 
 /**
  *  Select IPv4 or IPv6 for tunnel filters.
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 1f18aa916cca..7fd916c070e9 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -101,9 +101,6 @@ static const struct rte_eth_xstats_name_off eth_dev_txq_stats_strings[] = {
 #define RTE_NB_TXQ_STATS RTE_DIM(eth_dev_txq_stats_strings)
 
 #define RTE_RX_OFFLOAD_BIT2STR(_name)	\
-	{ DEV_RX_OFFLOAD_##_name, #_name }
-
-#define RTE_ETH_RX_OFFLOAD_BIT2STR(_name)	\
 	{ RTE_ETH_RX_OFFLOAD_##_name, #_name }
 
 static const struct {
@@ -128,14 +125,14 @@ static const struct {
 	RTE_RX_OFFLOAD_BIT2STR(SCTP_CKSUM),
 	RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
 	RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
-	RTE_ETH_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
+	RTE_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
 };
 
 #undef RTE_RX_OFFLOAD_BIT2STR
 #undef RTE_ETH_RX_OFFLOAD_BIT2STR
 
 #define RTE_TX_OFFLOAD_BIT2STR(_name)	\
-	{ DEV_TX_OFFLOAD_##_name, #_name }
+	{ RTE_ETH_TX_OFFLOAD_##_name, #_name }
 
 static const struct {
 	uint64_t offload;
@@ -1173,32 +1170,32 @@ uint32_t
 rte_eth_speed_bitflag(uint32_t speed, int duplex)
 {
 	switch (speed) {
-	case ETH_SPEED_NUM_10M:
-		return duplex ? ETH_LINK_SPEED_10M : ETH_LINK_SPEED_10M_HD;
-	case ETH_SPEED_NUM_100M:
-		return duplex ? ETH_LINK_SPEED_100M : ETH_LINK_SPEED_100M_HD;
-	case ETH_SPEED_NUM_1G:
-		return ETH_LINK_SPEED_1G;
-	case ETH_SPEED_NUM_2_5G:
-		return ETH_LINK_SPEED_2_5G;
-	case ETH_SPEED_NUM_5G:
-		return ETH_LINK_SPEED_5G;
-	case ETH_SPEED_NUM_10G:
-		return ETH_LINK_SPEED_10G;
-	case ETH_SPEED_NUM_20G:
-		return ETH_LINK_SPEED_20G;
-	case ETH_SPEED_NUM_25G:
-		return ETH_LINK_SPEED_25G;
-	case ETH_SPEED_NUM_40G:
-		return ETH_LINK_SPEED_40G;
-	case ETH_SPEED_NUM_50G:
-		return ETH_LINK_SPEED_50G;
-	case ETH_SPEED_NUM_56G:
-		return ETH_LINK_SPEED_56G;
-	case ETH_SPEED_NUM_100G:
-		return ETH_LINK_SPEED_100G;
-	case ETH_SPEED_NUM_200G:
-		return ETH_LINK_SPEED_200G;
+	case RTE_ETH_SPEED_NUM_10M:
+		return duplex ? RTE_ETH_LINK_SPEED_10M : RTE_ETH_LINK_SPEED_10M_HD;
+	case RTE_ETH_SPEED_NUM_100M:
+		return duplex ? RTE_ETH_LINK_SPEED_100M : RTE_ETH_LINK_SPEED_100M_HD;
+	case RTE_ETH_SPEED_NUM_1G:
+		return RTE_ETH_LINK_SPEED_1G;
+	case RTE_ETH_SPEED_NUM_2_5G:
+		return RTE_ETH_LINK_SPEED_2_5G;
+	case RTE_ETH_SPEED_NUM_5G:
+		return RTE_ETH_LINK_SPEED_5G;
+	case RTE_ETH_SPEED_NUM_10G:
+		return RTE_ETH_LINK_SPEED_10G;
+	case RTE_ETH_SPEED_NUM_20G:
+		return RTE_ETH_LINK_SPEED_20G;
+	case RTE_ETH_SPEED_NUM_25G:
+		return RTE_ETH_LINK_SPEED_25G;
+	case RTE_ETH_SPEED_NUM_40G:
+		return RTE_ETH_LINK_SPEED_40G;
+	case RTE_ETH_SPEED_NUM_50G:
+		return RTE_ETH_LINK_SPEED_50G;
+	case RTE_ETH_SPEED_NUM_56G:
+		return RTE_ETH_LINK_SPEED_56G;
+	case RTE_ETH_SPEED_NUM_100G:
+		return RTE_ETH_LINK_SPEED_100G;
+	case RTE_ETH_SPEED_NUM_200G:
+		return RTE_ETH_LINK_SPEED_200G;
 	default:
 		return 0;
 	}
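
The mapping itself is unchanged, only the spellings move under the RTE_
prefix. A short usage sketch of this helper with the new names:

    /* translate a numeric speed plus duplex into a capability flag */
    uint32_t flag = rte_eth_speed_bitflag(RTE_ETH_SPEED_NUM_10G,
            RTE_ETH_LINK_FULL_DUPLEX);
    /* flag == RTE_ETH_LINK_SPEED_10G */
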
@@ -1503,7 +1500,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 * If LRO is enabled, check that the maximum aggregated packet
 	 * size is supported by the configured device.
 	 */
-	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		uint32_t max_rx_pktlen;
 		uint32_t overhead_len;
 
@@ -1560,12 +1557,12 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	}
 
 	/* Check if Rx RSS distribution is disabled but RSS hash is enabled. */
-	if (((dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) == 0) &&
-	    (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
+	    (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		RTE_ETHDEV_LOG(ERR,
 			"Ethdev port_id=%u config invalid Rx mq_mode without RSS but %s offload is requested\n",
 			port_id,
-			rte_eth_dev_rx_offload_name(DEV_RX_OFFLOAD_RSS_HASH));
+			rte_eth_dev_rx_offload_name(RTE_ETH_RX_OFFLOAD_RSS_HASH));
 		ret = -EINVAL;
 		goto rollback;
 	}
@@ -2180,7 +2177,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	 * size is supported by the configured device.
 	 */
 	/* Get the real Ethernet overhead length */
-	if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (local_conf.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		uint32_t overhead_len;
 		uint32_t max_rx_pktlen;
 		int ret;
@@ -2760,21 +2757,21 @@ const char *
 rte_eth_link_speed_to_str(uint32_t link_speed)
 {
 	switch (link_speed) {
-	case ETH_SPEED_NUM_NONE: return "None";
-	case ETH_SPEED_NUM_10M:  return "10 Mbps";
-	case ETH_SPEED_NUM_100M: return "100 Mbps";
-	case ETH_SPEED_NUM_1G:   return "1 Gbps";
-	case ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
-	case ETH_SPEED_NUM_5G:   return "5 Gbps";
-	case ETH_SPEED_NUM_10G:  return "10 Gbps";
-	case ETH_SPEED_NUM_20G:  return "20 Gbps";
-	case ETH_SPEED_NUM_25G:  return "25 Gbps";
-	case ETH_SPEED_NUM_40G:  return "40 Gbps";
-	case ETH_SPEED_NUM_50G:  return "50 Gbps";
-	case ETH_SPEED_NUM_56G:  return "56 Gbps";
-	case ETH_SPEED_NUM_100G: return "100 Gbps";
-	case ETH_SPEED_NUM_200G: return "200 Gbps";
-	case ETH_SPEED_NUM_UNKNOWN: return "Unknown";
+	case RTE_ETH_SPEED_NUM_NONE: return "None";
+	case RTE_ETH_SPEED_NUM_10M:  return "10 Mbps";
+	case RTE_ETH_SPEED_NUM_100M: return "100 Mbps";
+	case RTE_ETH_SPEED_NUM_1G:   return "1 Gbps";
+	case RTE_ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
+	case RTE_ETH_SPEED_NUM_5G:   return "5 Gbps";
+	case RTE_ETH_SPEED_NUM_10G:  return "10 Gbps";
+	case RTE_ETH_SPEED_NUM_20G:  return "20 Gbps";
+	case RTE_ETH_SPEED_NUM_25G:  return "25 Gbps";
+	case RTE_ETH_SPEED_NUM_40G:  return "40 Gbps";
+	case RTE_ETH_SPEED_NUM_50G:  return "50 Gbps";
+	case RTE_ETH_SPEED_NUM_56G:  return "56 Gbps";
+	case RTE_ETH_SPEED_NUM_100G: return "100 Gbps";
+	case RTE_ETH_SPEED_NUM_200G: return "200 Gbps";
+	case RTE_ETH_SPEED_NUM_UNKNOWN: return "Unknown";
 	default: return "Invalid";
 	}
 }
@@ -2798,14 +2795,14 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
 		return -EINVAL;
 	}
 
-	if (eth_link->link_status == ETH_LINK_DOWN)
+	if (eth_link->link_status == RTE_ETH_LINK_DOWN)
 		return snprintf(str, len, "Link down");
 	else
 		return snprintf(str, len, "Link up at %s %s %s",
 			rte_eth_link_speed_to_str(eth_link->link_speed),
-			(eth_link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+			(eth_link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 			"FDX" : "HDX",
-			(eth_link->link_autoneg == ETH_LINK_AUTONEG) ?
+			(eth_link->link_autoneg == RTE_ETH_LINK_AUTONEG) ?
 			"Autoneg" : "Fixed");
 }
 
@@ -3712,7 +3709,7 @@ rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on)
 	dev = &rte_eth_devices[port_id];
 
 	if (!(dev->data->dev_conf.rxmode.offloads &
-	      DEV_RX_OFFLOAD_VLAN_FILTER)) {
+	      RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
 		RTE_ETHDEV_LOG(ERR, "Port %u: vlan-filtering disabled\n",
 			port_id);
 		return -ENOSYS;
@@ -3799,44 +3796,44 @@ rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask)
 	dev_offloads = orig_offloads;
 
 	/* check which option changed by application */
-	cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	cur = !!(offload_mask & RTE_ETH_VLAN_STRIP_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
-		mask |= ETH_VLAN_STRIP_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+		mask |= RTE_ETH_VLAN_STRIP_MASK;
 	}
 
-	cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+	cur = !!(offload_mask & RTE_ETH_VLAN_FILTER_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
-		mask |= ETH_VLAN_FILTER_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+		mask |= RTE_ETH_VLAN_FILTER_MASK;
 	}
 
-	cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND);
+	cur = !!(offload_mask & RTE_ETH_VLAN_EXTEND_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
-		mask |= ETH_VLAN_EXTEND_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
+		mask |= RTE_ETH_VLAN_EXTEND_MASK;
 	}
 
-	cur = !!(offload_mask & ETH_QINQ_STRIP_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP);
+	cur = !!(offload_mask & RTE_ETH_QINQ_STRIP_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
-		mask |= ETH_QINQ_STRIP_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
+		mask |= RTE_ETH_QINQ_STRIP_MASK;
 	}
 
 	/*no change*/
@@ -3881,17 +3878,17 @@ rte_eth_dev_get_vlan_offload(uint16_t port_id)
 	dev = &rte_eth_devices[port_id];
 	dev_offloads = &dev->data->dev_conf.rxmode.offloads;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-		ret |= ETH_VLAN_STRIP_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+		ret |= RTE_ETH_VLAN_STRIP_OFFLOAD;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
-		ret |= ETH_VLAN_FILTER_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+		ret |= RTE_ETH_VLAN_FILTER_OFFLOAD;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
-		ret |= ETH_VLAN_EXTEND_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
+		ret |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
-		ret |= ETH_QINQ_STRIP_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+		ret |= RTE_ETH_QINQ_STRIP_OFFLOAD;
 
 	return ret;
 }
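
The get/set pair composes as before; a typical read-modify-write with
the new names (port_id is an assumed valid port, return codes
abbreviated):

    int mask = rte_eth_dev_get_vlan_offload(port_id);

    if (mask >= 0) {
        mask |= RTE_ETH_VLAN_STRIP_OFFLOAD;  /* enable VLAN stripping */
        mask &= ~RTE_ETH_QINQ_STRIP_OFFLOAD; /* keep QinQ strip off */
        (void)rte_eth_dev_set_vlan_offload(port_id, mask);
    }
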
@@ -3968,7 +3965,7 @@ rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (pfc_conf->priority > (ETH_DCB_NUM_USER_PRIORITIES - 1)) {
+	if (pfc_conf->priority > (RTE_ETH_DCB_NUM_USER_PRIORITIES - 1)) {
 		RTE_ETHDEV_LOG(ERR, "Invalid priority, only 0-7 allowed\n");
 		return -EINVAL;
 	}
@@ -3986,7 +3983,7 @@ eth_check_reta_mask(struct rte_eth_rss_reta_entry64 *reta_conf,
 {
 	uint16_t i, num;
 
-	num = (reta_size + RTE_RETA_GROUP_SIZE - 1) / RTE_RETA_GROUP_SIZE;
+	num = (reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) / RTE_ETH_RETA_GROUP_SIZE;
 	for (i = 0; i < num; i++) {
 		if (reta_conf[i].mask)
 			return 0;
@@ -4008,8 +4005,8 @@ eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & RTE_BIT64(shift)) &&
 			(reta_conf[idx].reta[shift] >= max_rxq)) {
 			RTE_ETHDEV_LOG(ERR,
@@ -4165,7 +4162,7 @@ rte_eth_dev_udp_tunnel_port_add(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+	if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
 		RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
 		return -EINVAL;
 	}
@@ -4191,7 +4188,7 @@ rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+	if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
 		RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
 		return -EINVAL;
 	}
@@ -4332,8 +4329,8 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr,
 			port_id);
 		return -EINVAL;
 	}
-	if (pool >= ETH_64_POOLS) {
-		RTE_ETHDEV_LOG(ERR, "Pool id must be 0-%d\n", ETH_64_POOLS - 1);
+	if (pool >= RTE_ETH_64_POOLS) {
+		RTE_ETHDEV_LOG(ERR, "Pool id must be 0-%d\n", RTE_ETH_64_POOLS - 1);
 		return -EINVAL;
 	}
 
@@ -6242,7 +6239,7 @@ eth_dev_handle_port_link_status(const char *cmd __rte_unused,
 	rte_tel_data_add_dict_string(d, status_str, "UP");
 	rte_tel_data_add_dict_u64(d, "speed", link.link_speed);
 	rte_tel_data_add_dict_string(d, "duplex",
-			(link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+			(link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 				"full-duplex" : "half-duplex");
 	return 0;
 }
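
Reading and printing link state is unchanged in substance; a minimal
sketch with the renamed constants (port_id assumed valid, <stdio.h> and
error handling elided):

    struct rte_eth_link link;
    char text[RTE_ETH_LINK_MAX_STR_LEN];

    if (rte_eth_link_get_nowait(port_id, &link) == 0 &&
            link.link_status == RTE_ETH_LINK_UP) {
        rte_eth_link_to_str(text, sizeof(text), &link);
        printf("port %u: %s\n", port_id, text);
    }
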
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 014270d31672..9f0addee116c 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -250,7 +250,7 @@ void rte_eth_iterator_cleanup(struct rte_dev_iterator *iter);
  * field is not supported, its value is 0.
  * All byte-related statistics do not include Ethernet FCS regardless
  * of whether these bytes have been delivered to the application
- * (see DEV_RX_OFFLOAD_KEEP_CRC).
+ * (see RTE_ETH_RX_OFFLOAD_KEEP_CRC).
  */
 struct rte_eth_stats {
 	uint64_t ipackets;  /**< Total number of successfully received packets. */
@@ -280,43 +280,75 @@ struct rte_eth_stats {
 /**@{@name Link speed capabilities
  * Device supported speeds bitmap flags
  */
-#define ETH_LINK_SPEED_AUTONEG 0             /**< Autonegotiate (all speeds) */
-#define ETH_LINK_SPEED_FIXED   RTE_BIT32(0)  /**< Disable autoneg (fixed speed) */
-#define ETH_LINK_SPEED_10M_HD  RTE_BIT32(1)  /**<  10 Mbps half-duplex */
-#define ETH_LINK_SPEED_10M     RTE_BIT32(2)  /**<  10 Mbps full-duplex */
-#define ETH_LINK_SPEED_100M_HD RTE_BIT32(3)  /**< 100 Mbps half-duplex */
-#define ETH_LINK_SPEED_100M    RTE_BIT32(4)  /**< 100 Mbps full-duplex */
-#define ETH_LINK_SPEED_1G      RTE_BIT32(5)  /**<   1 Gbps */
-#define ETH_LINK_SPEED_2_5G    RTE_BIT32(6)  /**< 2.5 Gbps */
-#define ETH_LINK_SPEED_5G      RTE_BIT32(7)  /**<   5 Gbps */
-#define ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**<  10 Gbps */
-#define ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**<  20 Gbps */
-#define ETH_LINK_SPEED_25G     RTE_BIT32(10) /**<  25 Gbps */
-#define ETH_LINK_SPEED_40G     RTE_BIT32(11) /**<  40 Gbps */
-#define ETH_LINK_SPEED_50G     RTE_BIT32(12) /**<  50 Gbps */
-#define ETH_LINK_SPEED_56G     RTE_BIT32(13) /**<  56 Gbps */
-#define ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
-#define ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
+#define RTE_ETH_LINK_SPEED_AUTONEG 0             /**< Autonegotiate (all speeds) */
+#define ETH_LINK_SPEED_AUTONEG	RTE_ETH_LINK_SPEED_AUTONEG
+#define RTE_ETH_LINK_SPEED_FIXED   RTE_BIT32(0)  /**< Disable autoneg (fixed speed) */
+#define ETH_LINK_SPEED_FIXED	RTE_ETH_LINK_SPEED_FIXED
+#define RTE_ETH_LINK_SPEED_10M_HD  RTE_BIT32(1)  /**<  10 Mbps half-duplex */
+#define ETH_LINK_SPEED_10M_HD	RTE_ETH_LINK_SPEED_10M_HD
+#define RTE_ETH_LINK_SPEED_10M     RTE_BIT32(2)  /**<  10 Mbps full-duplex */
+#define ETH_LINK_SPEED_10M	RTE_ETH_LINK_SPEED_10M
+#define RTE_ETH_LINK_SPEED_100M_HD RTE_BIT32(3)  /**< 100 Mbps half-duplex */
+#define ETH_LINK_SPEED_100M_HD	RTE_ETH_LINK_SPEED_100M_HD
+#define RTE_ETH_LINK_SPEED_100M    RTE_BIT32(4)  /**< 100 Mbps full-duplex */
+#define ETH_LINK_SPEED_100M	RTE_ETH_LINK_SPEED_100M
+#define RTE_ETH_LINK_SPEED_1G      RTE_BIT32(5)  /**<   1 Gbps */
+#define ETH_LINK_SPEED_1G	RTE_ETH_LINK_SPEED_1G
+#define RTE_ETH_LINK_SPEED_2_5G    RTE_BIT32(6)  /**< 2.5 Gbps */
+#define ETH_LINK_SPEED_2_5G	RTE_ETH_LINK_SPEED_2_5G
+#define RTE_ETH_LINK_SPEED_5G      RTE_BIT32(7)  /**<   5 Gbps */
+#define ETH_LINK_SPEED_5G	RTE_ETH_LINK_SPEED_5G
+#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**<  10 Gbps */
+#define ETH_LINK_SPEED_10G	RTE_ETH_LINK_SPEED_10G
+#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**<  20 Gbps */
+#define ETH_LINK_SPEED_20G	RTE_ETH_LINK_SPEED_20G
+#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**<  25 Gbps */
+#define ETH_LINK_SPEED_25G	RTE_ETH_LINK_SPEED_25G
+#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**<  40 Gbps */
+#define ETH_LINK_SPEED_40G	RTE_ETH_LINK_SPEED_40G
+#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**<  50 Gbps */
+#define ETH_LINK_SPEED_50G	RTE_ETH_LINK_SPEED_50G
+#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**<  56 Gbps */
+#define ETH_LINK_SPEED_56G	RTE_ETH_LINK_SPEED_56G
+#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
+#define ETH_LINK_SPEED_100G	RTE_ETH_LINK_SPEED_100G
+#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
+#define ETH_LINK_SPEED_200G	RTE_ETH_LINK_SPEED_200G
 /**@}*/
 
 /**@{@name Link speed
  * Ethernet numeric link speeds in Mbps
  */
-#define ETH_SPEED_NUM_NONE         0 /**< Not defined */
-#define ETH_SPEED_NUM_10M         10 /**<  10 Mbps */
-#define ETH_SPEED_NUM_100M       100 /**< 100 Mbps */
-#define ETH_SPEED_NUM_1G        1000 /**<   1 Gbps */
-#define ETH_SPEED_NUM_2_5G      2500 /**< 2.5 Gbps */
-#define ETH_SPEED_NUM_5G        5000 /**<   5 Gbps */
-#define ETH_SPEED_NUM_10G      10000 /**<  10 Gbps */
-#define ETH_SPEED_NUM_20G      20000 /**<  20 Gbps */
-#define ETH_SPEED_NUM_25G      25000 /**<  25 Gbps */
-#define ETH_SPEED_NUM_40G      40000 /**<  40 Gbps */
-#define ETH_SPEED_NUM_50G      50000 /**<  50 Gbps */
-#define ETH_SPEED_NUM_56G      56000 /**<  56 Gbps */
-#define ETH_SPEED_NUM_100G    100000 /**< 100 Gbps */
-#define ETH_SPEED_NUM_200G    200000 /**< 200 Gbps */
-#define ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define RTE_ETH_SPEED_NUM_NONE         0 /**< Not defined */
+#define ETH_SPEED_NUM_NONE	RTE_ETH_SPEED_NUM_NONE
+#define RTE_ETH_SPEED_NUM_10M         10 /**<  10 Mbps */
+#define ETH_SPEED_NUM_10M	RTE_ETH_SPEED_NUM_10M
+#define RTE_ETH_SPEED_NUM_100M       100 /**< 100 Mbps */
+#define ETH_SPEED_NUM_100M	RTE_ETH_SPEED_NUM_100M
+#define RTE_ETH_SPEED_NUM_1G        1000 /**<   1 Gbps */
+#define ETH_SPEED_NUM_1G	RTE_ETH_SPEED_NUM_1G
+#define RTE_ETH_SPEED_NUM_2_5G      2500 /**< 2.5 Gbps */
+#define ETH_SPEED_NUM_2_5G	RTE_ETH_SPEED_NUM_2_5G
+#define RTE_ETH_SPEED_NUM_5G        5000 /**<   5 Gbps */
+#define ETH_SPEED_NUM_5G	RTE_ETH_SPEED_NUM_5G
+#define RTE_ETH_SPEED_NUM_10G      10000 /**<  10 Gbps */
+#define ETH_SPEED_NUM_10G	RTE_ETH_SPEED_NUM_10G
+#define RTE_ETH_SPEED_NUM_20G      20000 /**<  20 Gbps */
+#define ETH_SPEED_NUM_20G	RTE_ETH_SPEED_NUM_20G
+#define RTE_ETH_SPEED_NUM_25G      25000 /**<  25 Gbps */
+#define ETH_SPEED_NUM_25G	RTE_ETH_SPEED_NUM_25G
+#define RTE_ETH_SPEED_NUM_40G      40000 /**<  40 Gbps */
+#define ETH_SPEED_NUM_40G	RTE_ETH_SPEED_NUM_40G
+#define RTE_ETH_SPEED_NUM_50G      50000 /**<  50 Gbps */
+#define ETH_SPEED_NUM_50G	RTE_ETH_SPEED_NUM_50G
+#define RTE_ETH_SPEED_NUM_56G      56000 /**<  56 Gbps */
+#define ETH_SPEED_NUM_56G	RTE_ETH_SPEED_NUM_56G
+#define RTE_ETH_SPEED_NUM_100G    100000 /**< 100 Gbps */
+#define ETH_SPEED_NUM_100G	RTE_ETH_SPEED_NUM_100G
+#define RTE_ETH_SPEED_NUM_200G    200000 /**< 200 Gbps */
+#define ETH_SPEED_NUM_200G	RTE_ETH_SPEED_NUM_200G
+#define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define ETH_SPEED_NUM_UNKNOWN	RTE_ETH_SPEED_NUM_UNKNOWN
 /**@}*/
 
 /**
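
Note the two groups stay distinct namespaces after the rename:
rte_eth_conf.link_speeds takes the RTE_ETH_LINK_SPEED_* bitmap while
rte_eth_link reports RTE_ETH_SPEED_NUM_* values. A sketch of forcing a
fixed 10G link, device support permitting:

    struct rte_eth_conf conf = {0};

    /* request 10 Gbps with autonegotiation disabled */
    conf.link_speeds = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_FIXED;
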
@@ -324,21 +356,27 @@ struct rte_eth_stats {
  */
 __extension__
 struct rte_eth_link {
-	uint32_t link_speed;        /**< ETH_SPEED_NUM_ */
-	uint16_t link_duplex  : 1;  /**< ETH_LINK_[HALF/FULL]_DUPLEX */
-	uint16_t link_autoneg : 1;  /**< ETH_LINK_[AUTONEG/FIXED] */
-	uint16_t link_status  : 1;  /**< ETH_LINK_[DOWN/UP] */
+	uint32_t link_speed;        /**< RTE_ETH_SPEED_NUM_ */
+	uint16_t link_duplex  : 1;  /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+	uint16_t link_autoneg : 1;  /**< RTE_ETH_LINK_[AUTONEG/FIXED] */
+	uint16_t link_status  : 1;  /**< RTE_ETH_LINK_[DOWN/UP] */
 } __rte_aligned(8);      /**< aligned for atomic64 read/write */
 
 /**@{@name Link negotiation
  * Constants used in link management.
  */
-#define ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
-#define ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
-#define ETH_LINK_DOWN        0 /**< Link is down (see link_status). */
-#define ETH_LINK_UP          1 /**< Link is up (see link_status). */
-#define ETH_LINK_FIXED       0 /**< No autonegotiation (see link_autoneg). */
-#define ETH_LINK_AUTONEG     1 /**< Autonegotiated (see link_autoneg). */
+#define RTE_ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
+#define ETH_LINK_HALF_DUPLEX	RTE_ETH_LINK_HALF_DUPLEX
+#define RTE_ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
+#define ETH_LINK_FULL_DUPLEX	RTE_ETH_LINK_FULL_DUPLEX
+#define RTE_ETH_LINK_DOWN        0 /**< Link is down (see link_status). */
+#define ETH_LINK_DOWN		RTE_ETH_LINK_DOWN
+#define RTE_ETH_LINK_UP          1 /**< Link is up (see link_status). */
+#define ETH_LINK_UP		RTE_ETH_LINK_UP
+#define RTE_ETH_LINK_FIXED       0 /**< No autonegotiation (see link_autoneg). */
+#define ETH_LINK_FIXED		RTE_ETH_LINK_FIXED
+#define RTE_ETH_LINK_AUTONEG     1 /**< Autonegotiated (see link_autoneg). */
+#define ETH_LINK_AUTONEG	RTE_ETH_LINK_AUTONEG
 #define RTE_ETH_LINK_MAX_STR_LEN 40 /**< Max length of default link string. */
 /**@}*/
 
@@ -355,9 +393,12 @@ struct rte_eth_thresh {
 /**@{@name Multi-queue mode
  * @see rte_eth_conf.rxmode.mq_mode.
  */
-#define ETH_MQ_RX_RSS_FLAG  0x1 /**< Enable RSS. @see rte_eth_rss_conf */
-#define ETH_MQ_RX_DCB_FLAG  0x2 /**< Enable DCB. */
-#define ETH_MQ_RX_VMDQ_FLAG 0x4 /**< Enable VMDq. */
+#define RTE_ETH_MQ_RX_RSS_FLAG  0x1 /**< Enable RSS. @see rte_eth_rss_conf */
+#define ETH_MQ_RX_RSS_FLAG	RTE_ETH_MQ_RX_RSS_FLAG
+#define RTE_ETH_MQ_RX_DCB_FLAG  0x2 /**< Enable DCB. */
+#define ETH_MQ_RX_DCB_FLAG	RTE_ETH_MQ_RX_DCB_FLAG
+#define RTE_ETH_MQ_RX_VMDQ_FLAG 0x4 /**< Enable VMDq. */
+#define ETH_MQ_RX_VMDQ_FLAG	RTE_ETH_MQ_RX_VMDQ_FLAG
 /**@}*/
 
 /**
@@ -366,50 +407,49 @@ struct rte_eth_thresh {
  */
 enum rte_eth_rx_mq_mode {
 	/** None of DCB,RSS or VMDQ mode */
-	ETH_MQ_RX_NONE = 0,
+	RTE_ETH_MQ_RX_NONE = 0,
 
 	/** For RX side, only RSS is on */
-	ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
+	RTE_ETH_MQ_RX_RSS = RTE_ETH_MQ_RX_RSS_FLAG,
 	/** For RX side,only DCB is on. */
-	ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
+	RTE_ETH_MQ_RX_DCB = RTE_ETH_MQ_RX_DCB_FLAG,
 	/** Both DCB and RSS enable */
-	ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG,
+	RTE_ETH_MQ_RX_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
 
 	/** Only VMDQ, no RSS nor DCB */
-	ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_ONLY = RTE_ETH_MQ_RX_VMDQ_FLAG,
 	/** RSS mode with VMDQ */
-	ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG,
 	/** Use VMDQ+DCB to route traffic to queues */
-	ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_DCB = RTE_ETH_MQ_RX_VMDQ_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
 	/** Enable both VMDQ and DCB in VMDq */
-	ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG |
-				 ETH_MQ_RX_VMDQ_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG |
+				     RTE_ETH_MQ_RX_VMDQ_FLAG,
 };
 
-/**
- * for rx mq mode backward compatible
- */
-#define ETH_RSS                       ETH_MQ_RX_RSS
-#define VMDQ_DCB                      ETH_MQ_RX_VMDQ_DCB
-#define ETH_DCB_RX                    ETH_MQ_RX_DCB
+#define ETH_MQ_RX_NONE		RTE_ETH_MQ_RX_NONE
+#define ETH_MQ_RX_RSS		RTE_ETH_MQ_RX_RSS
+#define ETH_MQ_RX_DCB		RTE_ETH_MQ_RX_DCB
+#define ETH_MQ_RX_DCB_RSS	RTE_ETH_MQ_RX_DCB_RSS
+#define ETH_MQ_RX_VMDQ_ONLY	RTE_ETH_MQ_RX_VMDQ_ONLY
+#define ETH_MQ_RX_VMDQ_RSS	RTE_ETH_MQ_RX_VMDQ_RSS
+#define ETH_MQ_RX_VMDQ_DCB	RTE_ETH_MQ_RX_VMDQ_DCB
+#define ETH_MQ_RX_VMDQ_DCB_RSS	RTE_ETH_MQ_RX_VMDQ_DCB_RSS
 
 /**
  * A set of values to identify what method is to be used to transmit
  * packets using multi-TCs.
  */
 enum rte_eth_tx_mq_mode {
-	ETH_MQ_TX_NONE    = 0,  /**< It is in neither DCB nor VT mode. */
-	ETH_MQ_TX_DCB,          /**< For TX side,only DCB is on. */
-	ETH_MQ_TX_VMDQ_DCB,	/**< For TX side,both DCB and VT is on. */
-	ETH_MQ_TX_VMDQ_ONLY,    /**< Only VT on, no DCB */
+	RTE_ETH_MQ_TX_NONE    = 0,  /**< It is in neither DCB nor VT mode. */
+	RTE_ETH_MQ_TX_DCB,          /**< For TX side, only DCB is on. */
+	RTE_ETH_MQ_TX_VMDQ_DCB,	/**< For TX side, both DCB and VT are on. */
+	RTE_ETH_MQ_TX_VMDQ_ONLY,    /**< Only VT on, no DCB */
 };
-
-/**
- * for tx mq mode backward compatible
- */
-#define ETH_DCB_NONE                ETH_MQ_TX_NONE
-#define ETH_VMDQ_DCB_TX             ETH_MQ_TX_VMDQ_DCB
-#define ETH_DCB_TX                  ETH_MQ_TX_DCB
+#define ETH_MQ_TX_NONE		RTE_ETH_MQ_TX_NONE
+#define ETH_MQ_TX_DCB		RTE_ETH_MQ_TX_DCB
+#define ETH_MQ_TX_VMDQ_DCB	RTE_ETH_MQ_TX_VMDQ_DCB
+#define ETH_MQ_TX_VMDQ_ONLY	RTE_ETH_MQ_TX_VMDQ_ONLY
 
 /**
  * A structure used to configure the RX features of an Ethernet port.
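
The flag/enum relationship is preserved: RTE_ETH_MQ_RX_DCB_RSS is still
the OR of the RSS and DCB flags, so tests like
(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) keep working. A minimal RSS-only
configuration with the new names (the rss_hf bits are defined further
down in this header):

    struct rte_eth_conf conf = {
        .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
        .rx_adv_conf.rss_conf = {
            .rss_key = NULL, /* let the PMD choose a default key */
            .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP,
        },
    };
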
@@ -422,7 +462,7 @@ struct rte_eth_rxmode {
 	uint32_t max_lro_pkt_size;
 	uint16_t split_hdr_size;  /**< hdr buf size (header_split enabled).*/
 	/**
-	 * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Per-port Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
 	 * Only offloads set on rx_offload_capa field on rte_eth_dev_info
 	 * structure are allowed to be set.
 	 */
@@ -437,12 +477,17 @@ struct rte_eth_rxmode {
  * Note that single VLAN is treated the same as inner VLAN.
  */
 enum rte_vlan_type {
-	ETH_VLAN_TYPE_UNKNOWN = 0,
-	ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
-	ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
-	ETH_VLAN_TYPE_MAX,
+	RTE_ETH_VLAN_TYPE_UNKNOWN = 0,
+	RTE_ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
+	RTE_ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
+	RTE_ETH_VLAN_TYPE_MAX,
 };
 
+#define ETH_VLAN_TYPE_UNKNOWN	RTE_ETH_VLAN_TYPE_UNKNOWN
+#define ETH_VLAN_TYPE_INNER	RTE_ETH_VLAN_TYPE_INNER
+#define ETH_VLAN_TYPE_OUTER	RTE_ETH_VLAN_TYPE_OUTER
+#define ETH_VLAN_TYPE_MAX	RTE_ETH_VLAN_TYPE_MAX
+
 /**
  * A structure used to describe a vlan filter.
  * If the bit corresponding to a VID is set, such VID is on.
@@ -513,38 +558,70 @@ struct rte_eth_rss_conf {
  * Below macros are defined for RSS offload types, they can be used to
  * fill rte_eth_rss_conf.rss_hf or rte_flow_action_rss.types.
  */
-#define ETH_RSS_IPV4               RTE_BIT64(2)
-#define ETH_RSS_FRAG_IPV4          RTE_BIT64(3)
-#define ETH_RSS_NONFRAG_IPV4_TCP   RTE_BIT64(4)
-#define ETH_RSS_NONFRAG_IPV4_UDP   RTE_BIT64(5)
-#define ETH_RSS_NONFRAG_IPV4_SCTP  RTE_BIT64(6)
-#define ETH_RSS_NONFRAG_IPV4_OTHER RTE_BIT64(7)
-#define ETH_RSS_IPV6               RTE_BIT64(8)
-#define ETH_RSS_FRAG_IPV6          RTE_BIT64(9)
-#define ETH_RSS_NONFRAG_IPV6_TCP   RTE_BIT64(10)
-#define ETH_RSS_NONFRAG_IPV6_UDP   RTE_BIT64(11)
-#define ETH_RSS_NONFRAG_IPV6_SCTP  RTE_BIT64(12)
-#define ETH_RSS_NONFRAG_IPV6_OTHER RTE_BIT64(13)
-#define ETH_RSS_L2_PAYLOAD         RTE_BIT64(14)
-#define ETH_RSS_IPV6_EX            RTE_BIT64(15)
-#define ETH_RSS_IPV6_TCP_EX        RTE_BIT64(16)
-#define ETH_RSS_IPV6_UDP_EX        RTE_BIT64(17)
-#define ETH_RSS_PORT               RTE_BIT64(18)
-#define ETH_RSS_VXLAN              RTE_BIT64(19)
-#define ETH_RSS_GENEVE             RTE_BIT64(20)
-#define ETH_RSS_NVGRE              RTE_BIT64(21)
-#define ETH_RSS_GTPU               RTE_BIT64(23)
-#define ETH_RSS_ETH                RTE_BIT64(24)
-#define ETH_RSS_S_VLAN             RTE_BIT64(25)
-#define ETH_RSS_C_VLAN             RTE_BIT64(26)
-#define ETH_RSS_ESP                RTE_BIT64(27)
-#define ETH_RSS_AH                 RTE_BIT64(28)
-#define ETH_RSS_L2TPV3             RTE_BIT64(29)
-#define ETH_RSS_PFCP               RTE_BIT64(30)
-#define ETH_RSS_PPPOE              RTE_BIT64(31)
-#define ETH_RSS_ECPRI              RTE_BIT64(32)
-#define ETH_RSS_MPLS               RTE_BIT64(33)
-#define ETH_RSS_IPV4_CHKSUM        RTE_BIT64(34)
+#define RTE_ETH_RSS_IPV4               RTE_BIT64(2)
+#define ETH_RSS_IPV4                   RTE_ETH_RSS_IPV4
+#define RTE_ETH_RSS_FRAG_IPV4          RTE_BIT64(3)
+#define ETH_RSS_FRAG_IPV4              RTE_ETH_RSS_FRAG_IPV4
+#define RTE_ETH_RSS_NONFRAG_IPV4_TCP   RTE_BIT64(4)
+#define ETH_RSS_NONFRAG_IPV4_TCP       RTE_ETH_RSS_NONFRAG_IPV4_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV4_UDP   RTE_BIT64(5)
+#define ETH_RSS_NONFRAG_IPV4_UDP       RTE_ETH_RSS_NONFRAG_IPV4_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV4_SCTP  RTE_BIT64(6)
+#define ETH_RSS_NONFRAG_IPV4_SCTP      RTE_ETH_RSS_NONFRAG_IPV4_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV4_OTHER RTE_BIT64(7)
+#define ETH_RSS_NONFRAG_IPV4_OTHER     RTE_ETH_RSS_NONFRAG_IPV4_OTHER
+#define RTE_ETH_RSS_IPV6               RTE_BIT64(8)
+#define ETH_RSS_IPV6                   RTE_ETH_RSS_IPV6
+#define RTE_ETH_RSS_FRAG_IPV6          RTE_BIT64(9)
+#define ETH_RSS_FRAG_IPV6              RTE_ETH_RSS_FRAG_IPV6
+#define RTE_ETH_RSS_NONFRAG_IPV6_TCP   RTE_BIT64(10)
+#define ETH_RSS_NONFRAG_IPV6_TCP       RTE_ETH_RSS_NONFRAG_IPV6_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV6_UDP   RTE_BIT64(11)
+#define ETH_RSS_NONFRAG_IPV6_UDP       RTE_ETH_RSS_NONFRAG_IPV6_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV6_SCTP  RTE_BIT64(12)
+#define ETH_RSS_NONFRAG_IPV6_SCTP      RTE_ETH_RSS_NONFRAG_IPV6_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV6_OTHER RTE_BIT64(13)
+#define ETH_RSS_NONFRAG_IPV6_OTHER     RTE_ETH_RSS_NONFRAG_IPV6_OTHER
+#define RTE_ETH_RSS_L2_PAYLOAD         RTE_BIT64(14)
+#define ETH_RSS_L2_PAYLOAD             RTE_ETH_RSS_L2_PAYLOAD
+#define RTE_ETH_RSS_IPV6_EX            RTE_BIT64(15)
+#define ETH_RSS_IPV6_EX                RTE_ETH_RSS_IPV6_EX
+#define RTE_ETH_RSS_IPV6_TCP_EX        RTE_BIT64(16)
+#define ETH_RSS_IPV6_TCP_EX            RTE_ETH_RSS_IPV6_TCP_EX
+#define RTE_ETH_RSS_IPV6_UDP_EX        RTE_BIT64(17)
+#define ETH_RSS_IPV6_UDP_EX            RTE_ETH_RSS_IPV6_UDP_EX
+#define RTE_ETH_RSS_PORT               RTE_BIT64(18)
+#define ETH_RSS_PORT                   RTE_ETH_RSS_PORT
+#define RTE_ETH_RSS_VXLAN              RTE_BIT64(19)
+#define ETH_RSS_VXLAN                  RTE_ETH_RSS_VXLAN
+#define RTE_ETH_RSS_GENEVE             RTE_BIT64(20)
+#define ETH_RSS_GENEVE                 RTE_ETH_RSS_GENEVE
+#define RTE_ETH_RSS_NVGRE              RTE_BIT64(21)
+#define ETH_RSS_NVGRE                  RTE_ETH_RSS_NVGRE
+#define RTE_ETH_RSS_GTPU               RTE_BIT64(23)
+#define ETH_RSS_GTPU                   RTE_ETH_RSS_GTPU
+#define RTE_ETH_RSS_ETH                RTE_BIT64(24)
+#define ETH_RSS_ETH                    RTE_ETH_RSS_ETH
+#define RTE_ETH_RSS_S_VLAN             RTE_BIT64(25)
+#define ETH_RSS_S_VLAN                 RTE_ETH_RSS_S_VLAN
+#define RTE_ETH_RSS_C_VLAN             RTE_BIT64(26)
+#define ETH_RSS_C_VLAN                 RTE_ETH_RSS_C_VLAN
+#define RTE_ETH_RSS_ESP                RTE_BIT64(27)
+#define ETH_RSS_ESP                    RTE_ETH_RSS_ESP
+#define RTE_ETH_RSS_AH                 RTE_BIT64(28)
+#define ETH_RSS_AH                     RTE_ETH_RSS_AH
+#define RTE_ETH_RSS_L2TPV3             RTE_BIT64(29)
+#define ETH_RSS_L2TPV3                 RTE_ETH_RSS_L2TPV3
+#define RTE_ETH_RSS_PFCP               RTE_BIT64(30)
+#define ETH_RSS_PFCP                   RTE_ETH_RSS_PFCP
+#define RTE_ETH_RSS_PPPOE              RTE_BIT64(31)
+#define ETH_RSS_PPPOE                  RTE_ETH_RSS_PPPOE
+#define RTE_ETH_RSS_ECPRI              RTE_BIT64(32)
+#define ETH_RSS_ECPRI                  RTE_ETH_RSS_ECPRI
+#define RTE_ETH_RSS_MPLS               RTE_BIT64(33)
+#define ETH_RSS_MPLS                   RTE_ETH_RSS_MPLS
+#define RTE_ETH_RSS_IPV4_CHKSUM        RTE_BIT64(34)
+#define ETH_RSS_IPV4_CHKSUM            RTE_ETH_RSS_IPV4_CHKSUM
 
 /**
  * The ETH_RSS_L4_CHKSUM works on checksum field of any L4 header.
@@ -553,34 +630,41 @@ struct rte_eth_rss_conf {
  * checksum type for constructing the use of RSS offload bits.
  *
  * Due to above reason, some old APIs (and configuration) don't support
- * ETH_RSS_L4_CHKSUM. The rte_flow RSS API supports it.
+ * RTE_ETH_RSS_L4_CHKSUM. The rte_flow RSS API supports it.
  *
  * For the case that checksum is not used in an UDP header,
  * it takes the reserved value 0 as input for the hash function.
  */
-#define ETH_RSS_L4_CHKSUM          RTE_BIT64(35)
+#define RTE_ETH_RSS_L4_CHKSUM          RTE_BIT64(35)
+#define ETH_RSS_L4_CHKSUM              RTE_ETH_RSS_L4_CHKSUM
 
 /*
- * We use the following macros to combine with above ETH_RSS_* for
+ * We use the following macros to combine with above RTE_ETH_RSS_* for
  * more specific input set selection. These bits are defined starting
  * from the high end of the 64 bits.
- * Note: If we use above ETH_RSS_* without SRC/DST_ONLY, it represents
+ * Note: If we use above RTE_ETH_RSS_* without SRC/DST_ONLY, it represents
  * both SRC and DST are taken into account. If SRC_ONLY and DST_ONLY of
  * the same level are used simultaneously, it is the same case as none of
  * them are added.
  */
-#define ETH_RSS_L3_SRC_ONLY        RTE_BIT64(63)
-#define ETH_RSS_L3_DST_ONLY        RTE_BIT64(62)
-#define ETH_RSS_L4_SRC_ONLY        RTE_BIT64(61)
-#define ETH_RSS_L4_DST_ONLY        RTE_BIT64(60)
-#define ETH_RSS_L2_SRC_ONLY        RTE_BIT64(59)
-#define ETH_RSS_L2_DST_ONLY        RTE_BIT64(58)
+#define RTE_ETH_RSS_L3_SRC_ONLY        RTE_BIT64(63)
+#define ETH_RSS_L3_SRC_ONLY            RTE_ETH_RSS_L3_SRC_ONLY
+#define RTE_ETH_RSS_L3_DST_ONLY        RTE_BIT64(62)
+#define ETH_RSS_L3_DST_ONLY            RTE_ETH_RSS_L3_DST_ONLY
+#define RTE_ETH_RSS_L4_SRC_ONLY        RTE_BIT64(61)
+#define ETH_RSS_L4_SRC_ONLY            RTE_ETH_RSS_L4_SRC_ONLY
+#define RTE_ETH_RSS_L4_DST_ONLY        RTE_BIT64(60)
+#define ETH_RSS_L4_DST_ONLY            RTE_ETH_RSS_L4_DST_ONLY
+#define RTE_ETH_RSS_L2_SRC_ONLY        RTE_BIT64(59)
+#define ETH_RSS_L2_SRC_ONLY            RTE_ETH_RSS_L2_SRC_ONLY
+#define RTE_ETH_RSS_L2_DST_ONLY        RTE_BIT64(58)
+#define ETH_RSS_L2_DST_ONLY            RTE_ETH_RSS_L2_DST_ONLY
 
 /*
  * Only select IPV6 address prefix as RSS input set according to
- * https://tools.ietf.org/html/rfc6052
- * Must be combined with ETH_RSS_IPV6, ETH_RSS_NONFRAG_IPV6_UDP,
- * ETH_RSS_NONFRAG_IPV6_TCP, ETH_RSS_NONFRAG_IPV6_SCTP.
+ * https://tools.ietf.org/html/rfc6052
+ * Must be combined with RTE_ETH_RSS_IPV6, RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ * RTE_ETH_RSS_NONFRAG_IPV6_TCP, RTE_ETH_RSS_NONFRAG_IPV6_SCTP.
  */
 #define RTE_ETH_RSS_L3_PRE32	   RTE_BIT64(57)
 #define RTE_ETH_RSS_L3_PRE40	   RTE_BIT64(56)
@@ -602,22 +686,27 @@ struct rte_eth_rss_conf {
  * It basically stands for the innermost encapsulation level RSS
  * can be performed on according to PMD and device capabilities.
  */
-#define ETH_RSS_LEVEL_PMD_DEFAULT       (0ULL << 50)
+#define RTE_ETH_RSS_LEVEL_PMD_DEFAULT       (0ULL << 50)
+#define ETH_RSS_LEVEL_PMD_DEFAULT	RTE_ETH_RSS_LEVEL_PMD_DEFAULT
 
 /**
  * level 1, requests RSS to be performed on the outermost packet
  * encapsulation level.
  */
-#define ETH_RSS_LEVEL_OUTERMOST         (1ULL << 50)
+#define RTE_ETH_RSS_LEVEL_OUTERMOST         (1ULL << 50)
+#define ETH_RSS_LEVEL_OUTERMOST	RTE_ETH_RSS_LEVEL_OUTERMOST
 
 /**
  * level 2, requests RSS to be performed on the specified inner packet
  * encapsulation level, from outermost to innermost (lower to higher values).
  */
-#define ETH_RSS_LEVEL_INNERMOST         (2ULL << 50)
-#define ETH_RSS_LEVEL_MASK              (3ULL << 50)
+#define RTE_ETH_RSS_LEVEL_INNERMOST         (2ULL << 50)
+#define ETH_RSS_LEVEL_INNERMOST	RTE_ETH_RSS_LEVEL_INNERMOST
+#define RTE_ETH_RSS_LEVEL_MASK              (3ULL << 50)
+#define ETH_RSS_LEVEL_MASK	RTE_ETH_RSS_LEVEL_MASK
 
-#define ETH_RSS_LEVEL(rss_hf) ((rss_hf & ETH_RSS_LEVEL_MASK) >> 50)
+#define RTE_ETH_RSS_LEVEL(rss_hf) (((rss_hf) & RTE_ETH_RSS_LEVEL_MASK) >> 50)
+#define ETH_RSS_LEVEL(rss_hf)	RTE_ETH_RSS_LEVEL(rss_hf)
 
 /**
  * For input set change of hash filter, if SRC_ONLY and DST_ONLY of
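
Encoding and extracting the encapsulation level works the same way under
the new names; a short sketch:

    /* hash on the inner headers of tunnelled traffic */
    uint64_t rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_LEVEL_INNERMOST;

    /* RTE_ETH_RSS_LEVEL(rss_hf) evaluates to 2 here */
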
@@ -632,219 +721,312 @@ struct rte_eth_rss_conf {
 static inline uint64_t
 rte_eth_rss_hf_refine(uint64_t rss_hf)
 {
-	if ((rss_hf & ETH_RSS_L3_SRC_ONLY) && (rss_hf & ETH_RSS_L3_DST_ONLY))
-		rss_hf &= ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+	if ((rss_hf & RTE_ETH_RSS_L3_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L3_DST_ONLY))
+		rss_hf &= ~(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
 
-	if ((rss_hf & ETH_RSS_L4_SRC_ONLY) && (rss_hf & ETH_RSS_L4_DST_ONLY))
-		rss_hf &= ~(ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+	if ((rss_hf & RTE_ETH_RSS_L4_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L4_DST_ONLY))
+		rss_hf &= ~(RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
 
 	return rss_hf;
 }
 
-#define ETH_RSS_IPV6_PRE32 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE32 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32	RTE_ETH_RSS_IPV6_PRE32
 
-#define ETH_RSS_IPV6_PRE40 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE40 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40	RTE_ETH_RSS_IPV6_PRE40
 
-#define ETH_RSS_IPV6_PRE48 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE48 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48	RTE_ETH_RSS_IPV6_PRE48
 
-#define ETH_RSS_IPV6_PRE56 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE56 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56	RTE_ETH_RSS_IPV6_PRE56
 
-#define ETH_RSS_IPV6_PRE64 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE64 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64	RTE_ETH_RSS_IPV6_PRE64
 
-#define ETH_RSS_IPV6_PRE96 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE96 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96	RTE_ETH_RSS_IPV6_PRE96
 
-#define ETH_RSS_IPV6_PRE32_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE32_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_UDP	RTE_ETH_RSS_IPV6_PRE32_UDP
 
-#define ETH_RSS_IPV6_PRE40_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE40_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_UDP	RTE_ETH_RSS_IPV6_PRE40_UDP
 
-#define ETH_RSS_IPV6_PRE48_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE48_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_UDP	RTE_ETH_RSS_IPV6_PRE48_UDP
 
-#define ETH_RSS_IPV6_PRE56_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE56_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_UDP	RTE_ETH_RSS_IPV6_PRE56_UDP
 
-#define ETH_RSS_IPV6_PRE64_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE64_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_UDP	RTE_ETH_RSS_IPV6_PRE64_UDP
 
-#define ETH_RSS_IPV6_PRE96_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE96_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_UDP	RTE_ETH_RSS_IPV6_PRE96_UDP
 
-#define ETH_RSS_IPV6_PRE32_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE32_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_TCP	RTE_ETH_RSS_IPV6_PRE32_TCP
 
-#define ETH_RSS_IPV6_PRE40_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE40_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_TCP	RTE_ETH_RSS_IPV6_PRE40_TCP
 
-#define ETH_RSS_IPV6_PRE48_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE48_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_TCP	RTE_ETH_RSS_IPV6_PRE48_TCP
 
-#define ETH_RSS_IPV6_PRE56_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE56_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_TCP	RTE_ETH_RSS_IPV6_PRE56_TCP
 
-#define ETH_RSS_IPV6_PRE64_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE64_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_TCP	RTE_ETH_RSS_IPV6_PRE64_TCP
 
-#define ETH_RSS_IPV6_PRE96_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE96_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_TCP	RTE_ETH_RSS_IPV6_PRE96_TCP
 
-#define ETH_RSS_IPV6_PRE32_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE32_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_SCTP	RTE_ETH_RSS_IPV6_PRE32_SCTP
 
-#define ETH_RSS_IPV6_PRE40_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE40_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_SCTP	RTE_ETH_RSS_IPV6_PRE40_SCTP
 
-#define ETH_RSS_IPV6_PRE48_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE48_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_SCTP	RTE_ETH_RSS_IPV6_PRE48_SCTP
 
-#define ETH_RSS_IPV6_PRE56_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE56_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_SCTP	RTE_ETH_RSS_IPV6_PRE56_SCTP
 
-#define ETH_RSS_IPV6_PRE64_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE64_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_SCTP	RTE_ETH_RSS_IPV6_PRE64_SCTP
 
-#define ETH_RSS_IPV6_PRE96_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE96_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE96)
-
-#define ETH_RSS_IP ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_IPV6_EX)
-
-#define ETH_RSS_UDP ( \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_UDP_EX)
-
-#define ETH_RSS_TCP ( \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_IPV6_TCP_EX)
-
-#define ETH_RSS_SCTP ( \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP)
-
-#define ETH_RSS_TUNNEL ( \
-	ETH_RSS_VXLAN  | \
-	ETH_RSS_GENEVE | \
-	ETH_RSS_NVGRE)
-
-#define ETH_RSS_VLAN ( \
-	ETH_RSS_S_VLAN  | \
-	ETH_RSS_C_VLAN)
+#define ETH_RSS_IPV6_PRE96_SCTP	RTE_ETH_RSS_IPV6_PRE96_SCTP
+
+#define RTE_ETH_RSS_IP ( \
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_IPV6_EX)
+#define ETH_RSS_IP	RTE_ETH_RSS_IP
+
+#define RTE_ETH_RSS_UDP ( \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
+#define ETH_RSS_UDP	RTE_ETH_RSS_UDP
+
+#define RTE_ETH_RSS_TCP ( \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_IPV6_TCP_EX)
+#define ETH_RSS_TCP	RTE_ETH_RSS_TCP
+
+#define RTE_ETH_RSS_SCTP ( \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+#define ETH_RSS_SCTP	RTE_ETH_RSS_SCTP
+
+#define RTE_ETH_RSS_TUNNEL ( \
+	RTE_ETH_RSS_VXLAN  | \
+	RTE_ETH_RSS_GENEVE | \
+	RTE_ETH_RSS_NVGRE)
+#define ETH_RSS_TUNNEL	RTE_ETH_RSS_TUNNEL
+
+#define RTE_ETH_RSS_VLAN ( \
+	RTE_ETH_RSS_S_VLAN  | \
+	RTE_ETH_RSS_C_VLAN)
+#define ETH_RSS_VLAN	RTE_ETH_RSS_VLAN
 
 /**< Mask of valid RSS hash protocols */
-#define ETH_RSS_PROTO_MASK ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX | \
-	ETH_RSS_PORT  | \
-	ETH_RSS_VXLAN | \
-	ETH_RSS_GENEVE | \
-	ETH_RSS_NVGRE | \
-	ETH_RSS_MPLS)
+#define RTE_ETH_RSS_PROTO_MASK ( \
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L2_PAYLOAD | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX | \
+	RTE_ETH_RSS_PORT  | \
+	RTE_ETH_RSS_VXLAN | \
+	RTE_ETH_RSS_GENEVE | \
+	RTE_ETH_RSS_NVGRE | \
+	RTE_ETH_RSS_MPLS)
+#define ETH_RSS_PROTO_MASK	RTE_ETH_RSS_PROTO_MASK
 
 /*
  * Definitions used for redirection table entry size.
  * Some RSS RETA sizes may not be supported by some drivers, check the
  * documentation or the description of relevant functions for more details.
  */
-#define ETH_RSS_RETA_SIZE_64  64
-#define ETH_RSS_RETA_SIZE_128 128
-#define ETH_RSS_RETA_SIZE_256 256
-#define ETH_RSS_RETA_SIZE_512 512
-#define RTE_RETA_GROUP_SIZE   64
+#define RTE_ETH_RSS_RETA_SIZE_64  64
+#define ETH_RSS_RETA_SIZE_64	RTE_ETH_RSS_RETA_SIZE_64
+#define RTE_ETH_RSS_RETA_SIZE_128 128
+#define ETH_RSS_RETA_SIZE_128	RTE_ETH_RSS_RETA_SIZE_128
+#define RTE_ETH_RSS_RETA_SIZE_256 256
+#define ETH_RSS_RETA_SIZE_256	RTE_ETH_RSS_RETA_SIZE_256
+#define RTE_ETH_RSS_RETA_SIZE_512 512
+#define ETH_RSS_RETA_SIZE_512	RTE_ETH_RSS_RETA_SIZE_512
+#define RTE_ETH_RETA_GROUP_SIZE   64
+#define RTE_RETA_GROUP_SIZE	RTE_ETH_RETA_GROUP_SIZE
 
 /**@{@name VMDq and DCB maximums */
-#define ETH_VMDQ_MAX_VLAN_FILTERS   64 /**< Maximum nb. of VMDQ vlan filters. */
-#define ETH_DCB_NUM_USER_PRIORITIES 8  /**< Maximum nb. of DCB priorities. */
-#define ETH_VMDQ_DCB_NUM_QUEUES     128 /**< Maximum nb. of VMDQ DCB queues. */
-#define ETH_DCB_NUM_QUEUES          128 /**< Maximum nb. of DCB queues. */
+#define RTE_ETH_VMDQ_MAX_VLAN_FILTERS   64 /**< Maximum nb. of VMDQ vlan filters. */
+#define ETH_VMDQ_MAX_VLAN_FILTERS	RTE_ETH_VMDQ_MAX_VLAN_FILTERS
+#define RTE_ETH_DCB_NUM_USER_PRIORITIES 8  /**< Maximum nb. of DCB priorities. */
+#define ETH_DCB_NUM_USER_PRIORITIES	RTE_ETH_DCB_NUM_USER_PRIORITIES
+#define RTE_ETH_VMDQ_DCB_NUM_QUEUES     128 /**< Maximum nb. of VMDQ DCB queues. */
+#define ETH_VMDQ_DCB_NUM_QUEUES	RTE_ETH_VMDQ_DCB_NUM_QUEUES
+#define RTE_ETH_DCB_NUM_QUEUES          128 /**< Maximum nb. of DCB queues. */
+#define ETH_DCB_NUM_QUEUES	RTE_ETH_DCB_NUM_QUEUES
 /**@}*/
 
 /**@{@name DCB capabilities */
-#define ETH_DCB_PG_SUPPORT      0x00000001 /**< Priority Group(ETS) support. */
-#define ETH_DCB_PFC_SUPPORT     0x00000002 /**< Priority Flow Control support. */
+#define RTE_ETH_DCB_PG_SUPPORT      0x00000001 /**< Priority Group(ETS) support. */
+#define ETH_DCB_PG_SUPPORT	RTE_ETH_DCB_PG_SUPPORT
+#define RTE_ETH_DCB_PFC_SUPPORT     0x00000002 /**< Priority Flow Control support. */
+#define ETH_DCB_PFC_SUPPORT	RTE_ETH_DCB_PFC_SUPPORT
 /**@}*/
 
 /**@{@name VLAN offload bits */
-#define ETH_VLAN_STRIP_OFFLOAD   0x0001 /**< VLAN Strip  On/Off */
-#define ETH_VLAN_FILTER_OFFLOAD  0x0002 /**< VLAN Filter On/Off */
-#define ETH_VLAN_EXTEND_OFFLOAD  0x0004 /**< VLAN Extend On/Off */
-#define ETH_QINQ_STRIP_OFFLOAD   0x0008 /**< QINQ Strip On/Off */
-
-#define ETH_VLAN_STRIP_MASK   0x0001 /**< VLAN Strip  setting mask */
-#define ETH_VLAN_FILTER_MASK  0x0002 /**< VLAN Filter  setting mask*/
-#define ETH_VLAN_EXTEND_MASK  0x0004 /**< VLAN Extend  setting mask*/
-#define ETH_QINQ_STRIP_MASK   0x0008 /**< QINQ Strip  setting mask */
-#define ETH_VLAN_ID_MAX       0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define RTE_ETH_VLAN_STRIP_OFFLOAD   0x0001 /**< VLAN Strip  On/Off */
+#define ETH_VLAN_STRIP_OFFLOAD	RTE_ETH_VLAN_STRIP_OFFLOAD
+#define RTE_ETH_VLAN_FILTER_OFFLOAD  0x0002 /**< VLAN Filter On/Off */
+#define ETH_VLAN_FILTER_OFFLOAD	RTE_ETH_VLAN_FILTER_OFFLOAD
+#define RTE_ETH_VLAN_EXTEND_OFFLOAD  0x0004 /**< VLAN Extend On/Off */
+#define ETH_VLAN_EXTEND_OFFLOAD	RTE_ETH_VLAN_EXTEND_OFFLOAD
+#define RTE_ETH_QINQ_STRIP_OFFLOAD   0x0008 /**< QINQ Strip On/Off */
+#define ETH_QINQ_STRIP_OFFLOAD	RTE_ETH_QINQ_STRIP_OFFLOAD
+
+#define RTE_ETH_VLAN_STRIP_MASK   0x0001 /**< VLAN Strip setting mask */
+#define ETH_VLAN_STRIP_MASK	RTE_ETH_VLAN_STRIP_MASK
+#define RTE_ETH_VLAN_FILTER_MASK  0x0002 /**< VLAN Filter setting mask */
+#define ETH_VLAN_FILTER_MASK	RTE_ETH_VLAN_FILTER_MASK
+#define RTE_ETH_VLAN_EXTEND_MASK  0x0004 /**< VLAN Extend setting mask */
+#define ETH_VLAN_EXTEND_MASK	RTE_ETH_VLAN_EXTEND_MASK
+#define RTE_ETH_QINQ_STRIP_MASK   0x0008 /**< QINQ Strip setting mask */
+#define ETH_QINQ_STRIP_MASK	RTE_ETH_QINQ_STRIP_MASK
+#define RTE_ETH_VLAN_ID_MAX       0x0FFF /**< VLAN ID is in lower 12 bits */
+#define ETH_VLAN_ID_MAX		RTE_ETH_VLAN_ID_MAX
 /**@}*/
 
 /* Definitions used for receive MAC address   */
-#define ETH_NUM_RECEIVE_MAC_ADDR  128 /**< Maximum nb. of receive mac addr. */
+#define RTE_ETH_NUM_RECEIVE_MAC_ADDR  128 /**< Maximum nb. of receive mac addr. */
+#define ETH_NUM_RECEIVE_MAC_ADDR	RTE_ETH_NUM_RECEIVE_MAC_ADDR
 
 /* Definitions used for unicast hash  */
-#define ETH_VMDQ_NUM_UC_HASH_ARRAY  128 /**< Maximum nb. of UC hash array. */
+#define RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY  128 /**< Maximum nb. of UC hash array. */
+#define ETH_VMDQ_NUM_UC_HASH_ARRAY	RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY
 
 /**@{@name VMDq Rx mode
  * @see rte_eth_vmdq_rx_conf.rx_mode
  */
-#define ETH_VMDQ_ACCEPT_UNTAG   0x0001 /**< accept untagged packets. */
-#define ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table . */
-#define ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
-#define ETH_VMDQ_ACCEPT_BROADCAST   0x0008 /**< accept broadcast packets. */
-#define ETH_VMDQ_ACCEPT_MULTICAST   0x0010 /**< multicast promiscuous. */
+#define RTE_ETH_VMDQ_ACCEPT_UNTAG   0x0001 /**< accept untagged packets. */
+#define ETH_VMDQ_ACCEPT_UNTAG	RTE_ETH_VMDQ_ACCEPT_UNTAG
+#define RTE_ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table. */
+#define ETH_VMDQ_ACCEPT_HASH_MC	RTE_ETH_VMDQ_ACCEPT_HASH_MC
+#define RTE_ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
+#define ETH_VMDQ_ACCEPT_HASH_UC	RTE_ETH_VMDQ_ACCEPT_HASH_UC
+#define RTE_ETH_VMDQ_ACCEPT_BROADCAST   0x0008 /**< accept broadcast packets. */
+#define ETH_VMDQ_ACCEPT_BROADCAST	RTE_ETH_VMDQ_ACCEPT_BROADCAST
+#define RTE_ETH_VMDQ_ACCEPT_MULTICAST   0x0010 /**< multicast promiscuous. */
+#define ETH_VMDQ_ACCEPT_MULTICAST	RTE_ETH_VMDQ_ACCEPT_MULTICAST
 /**@}*/
 
+/** Maximum nb. of vlan per mirror rule */
+#define RTE_ETH_MIRROR_MAX_VLANS       64
+#define ETH_MIRROR_MAX_VLANS	RTE_ETH_MIRROR_MAX_VLANS
+
+#define RTE_ETH_MIRROR_VIRTUAL_POOL_UP     0x01  /**< Virtual Pool uplink Mirroring. */
+#define ETH_MIRROR_VIRTUAL_POOL_UP	RTE_ETH_MIRROR_VIRTUAL_POOL_UP
+#define RTE_ETH_MIRROR_UPLINK_PORT         0x02  /**< Uplink Port Mirroring. */
+#define ETH_MIRROR_UPLINK_PORT	RTE_ETH_MIRROR_UPLINK_PORT
+#define RTE_ETH_MIRROR_DOWNLINK_PORT       0x04  /**< Downlink Port Mirroring. */
+#define ETH_MIRROR_DOWNLINK_PORT	RTE_ETH_MIRROR_DOWNLINK_PORT
+#define RTE_ETH_MIRROR_VLAN                0x08  /**< VLAN Mirroring. */
+#define ETH_MIRROR_VLAN		RTE_ETH_MIRROR_VLAN
+#define RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN   0x10  /**< Virtual Pool downlink Mirroring. */
+#define ETH_MIRROR_VIRTUAL_POOL_DOWN	RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN
+
+/**
+ * A structure used to configure VLAN traffic mirror of an Ethernet port.
+ */
+struct rte_eth_vlan_mirror {
+	uint64_t vlan_mask; /**< mask for valid VLAN ID. */
+	/** VLAN ID list for vlan mirroring. */
+	uint16_t vlan_id[RTE_ETH_MIRROR_MAX_VLANS];
+};
+
+/**
+ * A structure used to configure traffic mirror of an Ethernet port.
+ */
+struct rte_eth_mirror_conf {
+	uint8_t rule_type; /**< Mirroring rule type */
+	uint8_t dst_pool;  /**< Destination pool for this mirror rule. */
+	uint64_t pool_mask; /**< Bitmap of pool for pool mirroring */
+	/** VLAN ID setting for VLAN mirroring. */
+	struct rte_eth_vlan_mirror vlan;
+};
+
 /**
  * A structure used to configure 64 entries of Redirection Table of the
  * Receive Side Scaling (RSS) feature of an Ethernet port. To configure
@@ -854,7 +1036,7 @@ rte_eth_rss_hf_refine(uint64_t rss_hf)
 struct rte_eth_rss_reta_entry64 {
 	uint64_t mask;
 	/**< Mask bits indicate which entries need to be updated/queried. */
-	uint16_t reta[RTE_RETA_GROUP_SIZE];
+	uint16_t reta[RTE_ETH_RETA_GROUP_SIZE];
 	/**< Group of 64 redirection table entries. */
 };
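
Indexing is unchanged: entry i lives in group i / RTE_ETH_RETA_GROUP_SIZE
at offset i % RTE_ETH_RETA_GROUP_SIZE, exactly as eth_check_reta_entry()
does above. A sketch of programming a 128-entry table (port_id and
nb_rx_queues are assumed locals, error handling elided):

    struct rte_eth_rss_reta_entry64 reta_conf[128 / RTE_ETH_RETA_GROUP_SIZE];
    uint16_t i;

    memset(reta_conf, 0, sizeof(reta_conf));
    for (i = 0; i < 128; i++) {
        uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
        uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

        reta_conf[idx].mask |= RTE_BIT64(shift);
        reta_conf[idx].reta[shift] = i % nb_rx_queues;
    }
    (void)rte_eth_dev_rss_reta_update(port_id, reta_conf, 128);
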
 
@@ -863,38 +1045,44 @@ struct rte_eth_rss_reta_entry64 {
  * in DCB configurations
  */
 enum rte_eth_nb_tcs {
-	ETH_4_TCS = 4, /**< 4 TCs with DCB. */
-	ETH_8_TCS = 8  /**< 8 TCs with DCB. */
+	RTE_ETH_4_TCS = 4, /**< 4 TCs with DCB. */
+	RTE_ETH_8_TCS = 8  /**< 8 TCs with DCB. */
 };
+#define ETH_4_TCS RTE_ETH_4_TCS
+#define ETH_8_TCS RTE_ETH_8_TCS
 
 /**
  * This enum indicates the possible number of queue pools
  * in VMDQ configurations.
  */
 enum rte_eth_nb_pools {
-	ETH_8_POOLS = 8,    /**< 8 VMDq pools. */
-	ETH_16_POOLS = 16,  /**< 16 VMDq pools. */
-	ETH_32_POOLS = 32,  /**< 32 VMDq pools. */
-	ETH_64_POOLS = 64   /**< 64 VMDq pools. */
+	RTE_ETH_8_POOLS = 8,    /**< 8 VMDq pools. */
+	RTE_ETH_16_POOLS = 16,  /**< 16 VMDq pools. */
+	RTE_ETH_32_POOLS = 32,  /**< 32 VMDq pools. */
+	RTE_ETH_64_POOLS = 64   /**< 64 VMDq pools. */
 };
+#define ETH_8_POOLS	RTE_ETH_8_POOLS
+#define ETH_16_POOLS	RTE_ETH_16_POOLS
+#define ETH_32_POOLS	RTE_ETH_32_POOLS
+#define ETH_64_POOLS	RTE_ETH_64_POOLS
 
 /* This structure may be extended in future. */
 struct rte_eth_dcb_rx_conf {
 	enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs */
 	/** Traffic class each UP mapped to. */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_vmdq_dcb_tx_conf {
 	enum rte_eth_nb_pools nb_queue_pools; /**< With DCB, 16 or 32 pools. */
 	/** Traffic class each UP mapped to. */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_dcb_tx_conf {
 	enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs. */
 	/** Traffic class each UP mapped to. */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_vmdq_tx_conf {
@@ -920,8 +1108,8 @@ struct rte_eth_vmdq_dcb_conf {
 	struct {
 		uint16_t vlan_id; /**< The vlan id of the received frame */
 		uint64_t pools;   /**< Bitmask of pools for packet rx */
-	} pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	} pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 	/**< Selects a queue in a pool */
 };
 
@@ -932,7 +1120,7 @@ struct rte_eth_vmdq_dcb_conf {
  * Using this feature, packets are routed to a pool of queues. By default,
  * the pool selection is based on the MAC address, the vlan id in the
  * vlan tag as specified in the pool_map array.
- * Passing the ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool
+ * Passing the RTE_ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool
  * selection using only the MAC address. MAC address to pool mapping is done
  * using the rte_eth_dev_mac_addr_add function, with the pool parameter
  * corresponding to the pool id.
@@ -953,7 +1141,7 @@ struct rte_eth_vmdq_rx_conf {
 	struct {
 		uint16_t vlan_id; /**< The vlan id of the received frame */
 		uint64_t pools;   /**< Bitmask of pools for packet rx */
-	} pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
+	} pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
 };
 
 /**
@@ -962,7 +1150,7 @@ struct rte_eth_vmdq_rx_conf {
 struct rte_eth_txmode {
 	enum rte_eth_tx_mq_mode mq_mode; /**< TX multi-queues mode. */
 	/**
-	 * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+	 * Per-port Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
 	 * Only offloads set on tx_offload_capa field on rte_eth_dev_info
 	 * structure are allowed to be set.
 	 */
@@ -1046,7 +1234,7 @@ struct rte_eth_rxconf {
 	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
 	uint16_t rx_nseg; /**< Number of descriptions in rx_seg array. */
 	/**
-	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Per-queue Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
 	 * Only offloads set on rx_queue_offload_capa or rx_offload_capa
 	 * fields on rte_eth_dev_info structure are allowed to be set.
 	 */
@@ -1075,7 +1263,7 @@ struct rte_eth_txconf {
 
 	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
 	/**
-	 * Per-queue Tx offloads to be set  using DEV_TX_OFFLOAD_* flags.
+	 * Per-queue Tx offloads to be set  using RTE_ETH_TX_OFFLOAD_* flags.
 	 * Only offloads set on tx_queue_offload_capa or tx_offload_capa
 	 * fields on rte_eth_dev_info structure are allowed to be set.
 	 */
@@ -1186,12 +1374,17 @@ struct rte_eth_desc_lim {
  * This enum indicates the flow control mode
  */
 enum rte_eth_fc_mode {
-	RTE_FC_NONE = 0, /**< Disable flow control. */
-	RTE_FC_RX_PAUSE, /**< RX pause frame, enable flowctrl on TX side. */
-	RTE_FC_TX_PAUSE, /**< TX pause frame, enable flowctrl on RX side. */
-	RTE_FC_FULL      /**< Enable flow control on both side. */
+	RTE_ETH_FC_NONE = 0, /**< Disable flow control. */
+	RTE_ETH_FC_RX_PAUSE, /**< RX pause frame, enable flowctrl on TX side. */
+	RTE_ETH_FC_TX_PAUSE, /**< TX pause frame, enable flowctrl on RX side. */
+	RTE_ETH_FC_FULL      /**< Enable flow control on both side. */
 };
 
+#define RTE_FC_NONE	RTE_ETH_FC_NONE
+#define RTE_FC_RX_PAUSE	RTE_ETH_FC_RX_PAUSE
+#define RTE_FC_TX_PAUSE	RTE_ETH_FC_TX_PAUSE
+#define RTE_FC_FULL	RTE_ETH_FC_FULL
+
 /**
  * A structure used to configure Ethernet flow control parameter.
  * These parameters will be configured into the register of the NIC.
@@ -1222,18 +1415,29 @@ struct rte_eth_pfc_conf {
  * @see rte_eth_udp_tunnel
  */
 enum rte_eth_tunnel_type {
-	RTE_TUNNEL_TYPE_NONE = 0,
-	RTE_TUNNEL_TYPE_VXLAN,
-	RTE_TUNNEL_TYPE_GENEVE,
-	RTE_TUNNEL_TYPE_TEREDO,
-	RTE_TUNNEL_TYPE_NVGRE,
-	RTE_TUNNEL_TYPE_IP_IN_GRE,
-	RTE_L2_TUNNEL_TYPE_E_TAG,
-	RTE_TUNNEL_TYPE_VXLAN_GPE,
-	RTE_TUNNEL_TYPE_ECPRI,
-	RTE_TUNNEL_TYPE_MAX,
+	RTE_ETH_TUNNEL_TYPE_NONE = 0,
+	RTE_ETH_TUNNEL_TYPE_VXLAN,
+	RTE_ETH_TUNNEL_TYPE_GENEVE,
+	RTE_ETH_TUNNEL_TYPE_TEREDO,
+	RTE_ETH_TUNNEL_TYPE_NVGRE,
+	RTE_ETH_TUNNEL_TYPE_IP_IN_GRE,
+	RTE_ETH_L2_TUNNEL_TYPE_E_TAG,
+	RTE_ETH_TUNNEL_TYPE_VXLAN_GPE,
+	RTE_ETH_TUNNEL_TYPE_ECPRI,
+	RTE_ETH_TUNNEL_TYPE_MAX,
 };
 
+#define RTE_TUNNEL_TYPE_NONE		RTE_ETH_TUNNEL_TYPE_NONE
+#define RTE_TUNNEL_TYPE_VXLAN		RTE_ETH_TUNNEL_TYPE_VXLAN
+#define RTE_TUNNEL_TYPE_GENEVE		RTE_ETH_TUNNEL_TYPE_GENEVE
+#define RTE_TUNNEL_TYPE_TEREDO		RTE_ETH_TUNNEL_TYPE_TEREDO
+#define RTE_TUNNEL_TYPE_NVGRE		RTE_ETH_TUNNEL_TYPE_NVGRE
+#define RTE_TUNNEL_TYPE_IP_IN_GRE	RTE_ETH_TUNNEL_TYPE_IP_IN_GRE
+#define RTE_L2_TUNNEL_TYPE_E_TAG	RTE_ETH_L2_TUNNEL_TYPE_E_TAG
+#define RTE_TUNNEL_TYPE_VXLAN_GPE	RTE_ETH_TUNNEL_TYPE_VXLAN_GPE
+#define RTE_TUNNEL_TYPE_ECPRI		RTE_ETH_TUNNEL_TYPE_ECPRI
+#define RTE_TUNNEL_TYPE_MAX		RTE_ETH_TUNNEL_TYPE_MAX
+
 /* Deprecated API file for rte_eth_dev_filter_* functions */
 #include "rte_eth_ctrl.h"
 
@@ -1241,11 +1445,16 @@ enum rte_eth_tunnel_type {
  *  Memory space that can be configured to store Flow Director filters
  *  in the board memory.
  */
-enum rte_fdir_pballoc_type {
-	RTE_FDIR_PBALLOC_64K = 0,  /**< 64k. */
-	RTE_FDIR_PBALLOC_128K,     /**< 128k. */
-	RTE_FDIR_PBALLOC_256K,     /**< 256k. */
+enum rte_eth_fdir_pballoc_type {
+	RTE_ETH_FDIR_PBALLOC_64K = 0,  /**< 64k. */
+	RTE_ETH_FDIR_PBALLOC_128K,     /**< 128k. */
+	RTE_ETH_FDIR_PBALLOC_256K,     /**< 256k. */
 };
+#define rte_fdir_pballoc_type	rte_eth_fdir_pballoc_type
+
+#define RTE_FDIR_PBALLOC_64K	RTE_ETH_FDIR_PBALLOC_64K
+#define RTE_FDIR_PBALLOC_128K	RTE_ETH_FDIR_PBALLOC_128K
+#define RTE_FDIR_PBALLOC_256K	RTE_ETH_FDIR_PBALLOC_256K
 
 /**
  *  Select report mode of FDIR hash information in RX descriptors.
@@ -1262,9 +1471,9 @@ enum rte_fdir_status_mode {
  *
  * If mode is RTE_FDIR_MODE_NONE, the pballoc value is ignored.
  */
-struct rte_fdir_conf {
+struct rte_eth_fdir_conf {
 	enum rte_fdir_mode mode; /**< Flow Director mode. */
-	enum rte_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
+	enum rte_eth_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
 	enum rte_fdir_status_mode status;  /**< How to report FDIR hash. */
 	/** RX queue of packets matching a "drop" filter in perfect mode. */
 	uint8_t drop_queue;
@@ -1273,6 +1482,8 @@ struct rte_fdir_conf {
 	/**< Flex payload configuration. */
 };
 
+#define rte_fdir_conf rte_eth_fdir_conf
+
 /**
  * UDP tunneling configuration.
  *
@@ -1290,7 +1501,7 @@ struct rte_eth_udp_tunnel {
 /**
  * A structure used to enable/disable specific device interrupts.
  */
-struct rte_intr_conf {
+struct rte_eth_intr_conf {
 	/** enable/disable lsc interrupt. 0 (default) - disable, 1 enable */
 	uint32_t lsc:1;
 	/** enable/disable rxq interrupt. 0 (default) - disable, 1 enable */
@@ -1299,18 +1510,20 @@ struct rte_intr_conf {
 	uint32_t rmv:1;
 };
 
+#define rte_intr_conf rte_eth_intr_conf
+
 /**
  * A structure used to configure an Ethernet port.
  * Depending upon the RX multi-queue mode, extra advanced
  * configuration settings may be needed.
  */
 struct rte_eth_conf {
-	uint32_t link_speeds; /**< bitmap of ETH_LINK_SPEED_XXX of speeds to be
-				used. ETH_LINK_SPEED_FIXED disables link
+	uint32_t link_speeds; /**< bitmap of RTE_ETH_LINK_SPEED_XXX of speeds to be
+				used. RTE_ETH_LINK_SPEED_FIXED disables link
 				autonegotiation, and a unique speed shall be
 				set. Otherwise, the bitmap defines the set of
 				speeds to be advertised. If the special value
-				ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
+				RTE_ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
 				supported are advertised. */
 	struct rte_eth_rxmode rxmode; /**< Port RX configuration. */
 	struct rte_eth_txmode txmode; /**< Port TX configuration. */
@@ -1336,48 +1549,70 @@ struct rte_eth_conf {
 		struct rte_eth_vmdq_tx_conf vmdq_tx_conf;
 		/**< Port vmdq TX configuration. */
 	} tx_adv_conf; /**< Port TX DCB configuration (union). */
-	/** Currently,Priority Flow Control(PFC) are supported,if DCB with PFC
-	    is needed,and the variable must be set ETH_DCB_PFC_SUPPORT. */
+	/**
+	 * Currently,Priority Flow Control(PFC) are supported,if DCB with PFC
+	 * is needed,and the variable must be set RTE_ETH_DCB_PFC_SUPPORT.
+	 */
 	uint32_t dcb_capability_en;
-	struct rte_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
-	struct rte_intr_conf intr_conf; /**< Interrupt mode configuration. */
+	struct rte_eth_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
+	struct rte_eth_intr_conf intr_conf; /**< Interrupt mode configuration. */
 };
 
 /**
  * RX offload capabilities of a device.
  */
-#define DEV_RX_OFFLOAD_VLAN_STRIP  0x00000001
-#define DEV_RX_OFFLOAD_IPV4_CKSUM  0x00000002
-#define DEV_RX_OFFLOAD_UDP_CKSUM   0x00000004
-#define DEV_RX_OFFLOAD_TCP_CKSUM   0x00000008
-#define DEV_RX_OFFLOAD_TCP_LRO     0x00000010
-#define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000020
-#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
-#define DEV_RX_OFFLOAD_MACSEC_STRIP     0x00000080
-#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
-#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
-#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
-#define DEV_RX_OFFLOAD_SCATTER		0x00002000
+#define RTE_ETH_RX_OFFLOAD_VLAN_STRIP  0x00000001
+#define DEV_RX_OFFLOAD_VLAN_STRIP	RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+#define RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  0x00000002
+#define DEV_RX_OFFLOAD_IPV4_CKSUM	RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_UDP_CKSUM   0x00000004
+#define DEV_RX_OFFLOAD_UDP_CKSUM	RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_CKSUM   0x00000008
+#define DEV_RX_OFFLOAD_TCP_CKSUM	RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_LRO     0x00000010
+#define DEV_RX_OFFLOAD_TCP_LRO		RTE_ETH_RX_OFFLOAD_TCP_LRO
+#define RTE_ETH_RX_OFFLOAD_QINQ_STRIP  0x00000020
+#define DEV_RX_OFFLOAD_QINQ_STRIP	RTE_ETH_RX_OFFLOAD_QINQ_STRIP
+#define RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
+#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM	RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_MACSEC_STRIP     0x00000080
+#define DEV_RX_OFFLOAD_MACSEC_STRIP	RTE_ETH_RX_OFFLOAD_MACSEC_STRIP
+#define RTE_ETH_RX_OFFLOAD_HEADER_SPLIT	0x00000100
+#define DEV_RX_OFFLOAD_HEADER_SPLIT	RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
+#define RTE_ETH_RX_OFFLOAD_VLAN_FILTER	0x00000200
+#define DEV_RX_OFFLOAD_VLAN_FILTER	RTE_ETH_RX_OFFLOAD_VLAN_FILTER
+#define RTE_ETH_RX_OFFLOAD_VLAN_EXTEND	0x00000400
+#define DEV_RX_OFFLOAD_VLAN_EXTEND	RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
+#define RTE_ETH_RX_OFFLOAD_SCATTER	0x00002000
+#define DEV_RX_OFFLOAD_SCATTER		RTE_ETH_RX_OFFLOAD_SCATTER
 /**
  * Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
  * and RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME is set in ol_flags.
  * The mbuf field and flag are registered when the offload is configured.
  */
-#define DEV_RX_OFFLOAD_TIMESTAMP	0x00004000
-#define DEV_RX_OFFLOAD_SECURITY         0x00008000
-#define DEV_RX_OFFLOAD_KEEP_CRC		0x00010000
-#define DEV_RX_OFFLOAD_SCTP_CKSUM	0x00020000
-#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
-#define DEV_RX_OFFLOAD_RSS_HASH		0x00080000
+#define RTE_ETH_RX_OFFLOAD_TIMESTAMP	0x00004000
+#define DEV_RX_OFFLOAD_TIMESTAMP	RTE_ETH_RX_OFFLOAD_TIMESTAMP
+#define RTE_ETH_RX_OFFLOAD_SECURITY     0x00008000
+#define DEV_RX_OFFLOAD_SECURITY		RTE_ETH_RX_OFFLOAD_SECURITY
+#define RTE_ETH_RX_OFFLOAD_KEEP_CRC	0x00010000
+#define DEV_RX_OFFLOAD_KEEP_CRC		RTE_ETH_RX_OFFLOAD_KEEP_CRC
+#define RTE_ETH_RX_OFFLOAD_SCTP_CKSUM	0x00020000
+#define DEV_RX_OFFLOAD_SCTP_CKSUM	RTE_ETH_RX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
+#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM	RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_RSS_HASH	0x00080000
+#define DEV_RX_OFFLOAD_RSS_HASH	RTE_ETH_RX_OFFLOAD_RSS_HASH
 #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
 
-#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
-				 DEV_RX_OFFLOAD_UDP_CKSUM | \
-				 DEV_RX_OFFLOAD_TCP_CKSUM)
-#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
-			     DEV_RX_OFFLOAD_VLAN_FILTER | \
-			     DEV_RX_OFFLOAD_VLAN_EXTEND | \
-			     DEV_RX_OFFLOAD_QINQ_STRIP)
+#define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+				 RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+				 RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_CHECKSUM	RTE_ETH_RX_OFFLOAD_CHECKSUM
+#define RTE_ETH_RX_OFFLOAD_VLAN (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+			     RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+			     RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+			     RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+#define DEV_RX_OFFLOAD_VLAN	RTE_ETH_RX_OFFLOAD_VLAN
 
 /*
  * If new Rx offload capabilities are defined, they also must be
@@ -1387,52 +1622,74 @@ struct rte_eth_conf {
 /**
  * TX offload capabilities of a device.
  */
-#define DEV_TX_OFFLOAD_VLAN_INSERT 0x00000001
-#define DEV_TX_OFFLOAD_IPV4_CKSUM  0x00000002
-#define DEV_TX_OFFLOAD_UDP_CKSUM   0x00000004
-#define DEV_TX_OFFLOAD_TCP_CKSUM   0x00000008
-#define DEV_TX_OFFLOAD_SCTP_CKSUM  0x00000010
-#define DEV_TX_OFFLOAD_TCP_TSO     0x00000020
-#define DEV_TX_OFFLOAD_UDP_TSO     0x00000040
-#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_QINQ_INSERT 0x00000100
-#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO    0x00000200    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GRE_TNL_TSO      0x00000400    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_IPIP_TNL_TSO     0x00000800    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO   0x00001000    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_MACSEC_INSERT    0x00002000
-#define DEV_TX_OFFLOAD_MT_LOCKFREE      0x00004000
+#define RTE_ETH_TX_OFFLOAD_VLAN_INSERT 0x00000001
+#define DEV_TX_OFFLOAD_VLAN_INSERT	RTE_ETH_TX_OFFLOAD_VLAN_INSERT
+#define RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  0x00000002
+#define DEV_TX_OFFLOAD_IPV4_CKSUM	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_UDP_CKSUM   0x00000004
+#define DEV_TX_OFFLOAD_UDP_CKSUM	RTE_ETH_TX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_CKSUM   0x00000008
+#define DEV_TX_OFFLOAD_TCP_CKSUM	RTE_ETH_TX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  0x00000010
+#define DEV_TX_OFFLOAD_SCTP_CKSUM	RTE_ETH_TX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_TSO     0x00000020
+#define DEV_TX_OFFLOAD_TCP_TSO		RTE_ETH_TX_OFFLOAD_TCP_TSO
+#define RTE_ETH_TX_OFFLOAD_UDP_TSO     0x00000040
+#define DEV_TX_OFFLOAD_UDP_TSO		RTE_ETH_TX_OFFLOAD_UDP_TSO
+#define RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM	RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_QINQ_INSERT 0x00000100
+#define DEV_TX_OFFLOAD_QINQ_INSERT	RTE_ETH_TX_OFFLOAD_QINQ_INSERT
+#define RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO    0x00000200    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO	RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO      0x00000400    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GRE_TNL_TSO	RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO     0x00000800    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_IPIP_TNL_TSO	RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO   0x00001000    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO	RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_MACSEC_INSERT    0x00002000
+#define DEV_TX_OFFLOAD_MACSEC_INSERT	RTE_ETH_TX_OFFLOAD_MACSEC_INSERT
+#define RTE_ETH_TX_OFFLOAD_MT_LOCKFREE      0x00004000
+#define DEV_TX_OFFLOAD_MT_LOCKFREE	RTE_ETH_TX_OFFLOAD_MT_LOCKFREE
 /**< Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
  * tx queue without SW lock.
  */
-#define DEV_TX_OFFLOAD_MULTI_SEGS	0x00008000
+#define RTE_ETH_TX_OFFLOAD_MULTI_SEGS	0x00008000
+#define DEV_TX_OFFLOAD_MULTI_SEGS	RTE_ETH_TX_OFFLOAD_MULTI_SEGS
 /**< Device supports multi segment send. */
-#define DEV_TX_OFFLOAD_MBUF_FAST_FREE	0x00010000
+#define RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE	0x00010000
+#define DEV_TX_OFFLOAD_MBUF_FAST_FREE	RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
 /**< Device supports optimization for fast release of mbufs.
  *   When set application must guarantee that per-queue all mbufs comes from
  *   the same mempool and has refcnt = 1.
  */
-#define DEV_TX_OFFLOAD_SECURITY         0x00020000
+#define RTE_ETH_TX_OFFLOAD_SECURITY         0x00020000
+#define DEV_TX_OFFLOAD_SECURITY	RTE_ETH_TX_OFFLOAD_SECURITY
 /**
  * Device supports generic UDP tunneled packet TSO.
  * Application must set PKT_TX_TUNNEL_UDP and other mbuf fields required
  * for tunnel TSO.
  */
-#define DEV_TX_OFFLOAD_UDP_TNL_TSO      0x00040000
+#define RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO      0x00040000
+#define DEV_TX_OFFLOAD_UDP_TNL_TSO	RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO
 /**
  * Device supports generic IP tunneled packet TSO.
  * Application must set PKT_TX_TUNNEL_IP and other mbuf fields required
  * for tunnel TSO.
  */
-#define DEV_TX_OFFLOAD_IP_TNL_TSO       0x00080000
+#define RTE_ETH_TX_OFFLOAD_IP_TNL_TSO       0x00080000
+#define DEV_TX_OFFLOAD_IP_TNL_TSO	RTE_ETH_TX_OFFLOAD_IP_TNL_TSO
 /** Device supports outer UDP checksum */
-#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM  0x00100000
+#define RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM  0x00100000
+#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM	RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM
 /**
  * Device sends on time read from RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
  * if RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME is set in ol_flags.
  * The mbuf field and flag are registered when the offload is configured.
  */
-#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP	RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP
 /*
  * If new Tx offload capabilities are defined, they also must be
  * mentioned in rte_tx_offload_names in rte_ethdev.c file.
@@ -1564,7 +1821,7 @@ struct rte_eth_dev_info {
 	uint16_t vmdq_pool_base;  /**< First ID of VMDQ pools. */
 	struct rte_eth_desc_lim rx_desc_lim;  /**< RX descriptors limits */
 	struct rte_eth_desc_lim tx_desc_lim;  /**< TX descriptors limits */
-	uint32_t speed_capa;  /**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+	uint32_t speed_capa;  /**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
 	/** Configured number of rx/tx queues */
 	uint16_t nb_rx_queues; /**< Number of RX queues. */
 	uint16_t nb_tx_queues; /**< Number of TX queues. */
@@ -1668,8 +1925,10 @@ struct rte_eth_xstat_name {
 	char name[RTE_ETH_XSTATS_NAME_SIZE]; /**< The statistic name. */
 };
 
-#define ETH_DCB_NUM_TCS    8
-#define ETH_MAX_VMDQ_POOL  64
+#define RTE_ETH_DCB_NUM_TCS    8
+#define ETH_DCB_NUM_TCS	RTE_ETH_DCB_NUM_TCS
+#define RTE_ETH_MAX_VMDQ_POOL  64
+#define ETH_MAX_VMDQ_POOL	RTE_ETH_MAX_VMDQ_POOL
 
 /**
  * A structure used to get the information of queue and
@@ -1680,12 +1939,12 @@ struct rte_eth_dcb_tc_queue_mapping {
 	struct {
 		uint16_t base;
 		uint16_t nb_queue;
-	} tc_rxq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+	} tc_rxq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
 	/** rx queues assigned to tc per Pool */
 	struct {
 		uint16_t base;
 		uint16_t nb_queue;
-	} tc_txq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+	} tc_txq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
 };
 
 /**
@@ -1694,8 +1953,8 @@ struct rte_eth_dcb_tc_queue_mapping {
  */
 struct rte_eth_dcb_info {
 	uint8_t nb_tcs;        /**< number of TCs */
-	uint8_t prio_tc[ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
-	uint8_t tc_bws[ETH_DCB_NUM_TCS]; /**< TX BW percentage for each TC */
+	uint8_t prio_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
+	uint8_t tc_bws[RTE_ETH_DCB_NUM_TCS]; /**< TX BW percentage for each TC */
 	/** rx queues assigned to tc */
 	struct rte_eth_dcb_tc_queue_mapping tc_queue;
 };
@@ -1719,7 +1978,7 @@ enum rte_eth_fec_mode {
 
 /* A structure used to get capabilities per link speed */
 struct rte_eth_fec_capa {
-	uint32_t speed; /**< Link speed (see ETH_SPEED_NUM_*) */
+	uint32_t speed; /**< Link speed (see RTE_ETH_SPEED_NUM_*) */
 	uint32_t capa;  /**< FEC capabilities bitmask */
 };
 
@@ -1742,13 +2001,17 @@ struct rte_eth_fec_capa {
 
 /**@{@name L2 tunnel configuration */
 /**< l2 tunnel enable mask */
-#define ETH_L2_TUNNEL_ENABLE_MASK       0x00000001
+#define RTE_ETH_L2_TUNNEL_ENABLE_MASK       0x00000001
+#define ETH_L2_TUNNEL_ENABLE_MASK	RTE_ETH_L2_TUNNEL_ENABLE_MASK
 /**< l2 tunnel insertion mask */
-#define ETH_L2_TUNNEL_INSERTION_MASK    0x00000002
+#define RTE_ETH_L2_TUNNEL_INSERTION_MASK    0x00000002
+#define ETH_L2_TUNNEL_INSERTION_MASK	RTE_ETH_L2_TUNNEL_INSERTION_MASK
 /**< l2 tunnel stripping mask */
-#define ETH_L2_TUNNEL_STRIPPING_MASK    0x00000004
+#define RTE_ETH_L2_TUNNEL_STRIPPING_MASK    0x00000004
+#define ETH_L2_TUNNEL_STRIPPING_MASK	RTE_ETH_L2_TUNNEL_STRIPPING_MASK
 /**< l2 tunnel forwarding mask */
-#define ETH_L2_TUNNEL_FORWARDING_MASK   0x00000008
+#define RTE_ETH_L2_TUNNEL_FORWARDING_MASK   0x00000008
+#define ETH_L2_TUNNEL_FORWARDING_MASK	RTE_ETH_L2_TUNNEL_FORWARDING_MASK
 /**@}*/
 
 /**
@@ -2059,14 +2322,14 @@ uint16_t rte_eth_dev_count_total(void);
  * @param speed
  *   Numerical speed value in Mbps
  * @param duplex
- *   ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
+ *   RTE_ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
  * @return
  *   0 if the speed cannot be mapped
  */
 uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
 
 /**
- * Get DEV_RX_OFFLOAD_* flag name.
+ * Get RTE_ETH_RX_OFFLOAD_* flag name.
  *
  * @param offload
  *   Offload flag.
@@ -2076,7 +2339,7 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
 const char *rte_eth_dev_rx_offload_name(uint64_t offload);
 
 /**
- * Get DEV_TX_OFFLOAD_* flag name.
+ * Get RTE_ETH_TX_OFFLOAD_* flag name.
  *
  * @param offload
  *   Offload flag.
@@ -2170,7 +2433,7 @@ rte_eth_dev_is_removed(uint16_t port_id);
  *   of the Prefetch, Host, and Write-Back threshold registers of the receive
  *   ring.
  *   In addition it contains the hardware offloads features to activate using
- *   the DEV_RX_OFFLOAD_* flags.
+ *   the RTE_ETH_RX_OFFLOAD_* flags.
  *   If an offloading set in rx_conf->offloads
  *   hasn't been set in the input argument eth_conf->rxmode.offloads
  *   to rte_eth_dev_configure(), it is a new added offloading, it must be
@@ -2747,7 +3010,7 @@ const char *rte_eth_link_speed_to_str(uint32_t link_speed);
  *
  * @param str
  *   A pointer to a string to be filled with textual representation of
- *   device status. At least ETH_LINK_MAX_STR_LEN bytes should be allocated to
+ *   device status. At least RTE_ETH_LINK_MAX_STR_LEN bytes should be allocated to
  *   store default link status text.
  * @param len
  *   Length of available memory at 'str' string.
@@ -3293,10 +3556,10 @@ int rte_eth_dev_set_vlan_ether_type(uint16_t port_id,
  *   The port identifier of the Ethernet device.
  * @param offload_mask
  *   The VLAN Offload bit mask can be mixed use with "OR"
- *       ETH_VLAN_STRIP_OFFLOAD
- *       ETH_VLAN_FILTER_OFFLOAD
- *       ETH_VLAN_EXTEND_OFFLOAD
- *       ETH_QINQ_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_FILTER_OFFLOAD
+ *       RTE_ETH_VLAN_EXTEND_OFFLOAD
+ *       RTE_ETH_QINQ_STRIP_OFFLOAD
  * @return
  *   - (0) if successful.
  *   - (-ENOTSUP) if hardware-assisted VLAN filtering not configured.
@@ -3312,10 +3575,10 @@ int rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask);
  *   The port identifier of the Ethernet device.
  * @return
  *   - (>0) if successful. Bit mask to indicate
- *       ETH_VLAN_STRIP_OFFLOAD
- *       ETH_VLAN_FILTER_OFFLOAD
- *       ETH_VLAN_EXTEND_OFFLOAD
- *       ETH_QINQ_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_FILTER_OFFLOAD
+ *       RTE_ETH_VLAN_EXTEND_OFFLOAD
+ *       RTE_ETH_QINQ_STRIP_OFFLOAD
  *   - (-ENODEV) if *port_id* invalid.
  */
 int rte_eth_dev_get_vlan_offload(uint16_t port_id);
@@ -5340,7 +5603,7 @@ uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
  * rte_eth_tx_burst() function must [attempt to] free the *rte_mbuf*  buffers
  * of those packets whose transmission was effectively completed.
  *
- * If the PMD is DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+ * If the PMD is RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
  * invoke this function concurrently on the same tx queue without SW lock.
  * @see rte_eth_dev_info_get, struct rte_eth_txconf::offloads
  *
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 2b6efeef8cf5..555580ab4e71 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2890,7 +2890,7 @@ struct rte_flow_action_rss {
 	 * through.
 	 */
 	uint32_t level;
-	uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+	uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
 	uint32_t key_len; /**< Hash key length in bytes. */
 	uint32_t queue_num; /**< Number of entries in @p queue. */
 	const uint8_t *key; /**< Hash key. */
diff --git a/lib/gso/rte_gso.c b/lib/gso/rte_gso.c
index 0d02ec3cee05..119fdcac0b7f 100644
--- a/lib/gso/rte_gso.c
+++ b/lib/gso/rte_gso.c
@@ -15,13 +15,13 @@
 #include "gso_udp4.h"
 
 #define ILLEGAL_UDP_GSO_CTX(ctx) \
-	((((ctx)->gso_types & DEV_TX_OFFLOAD_UDP_TSO) == 0) || \
+	((((ctx)->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO) == 0) || \
 	 (ctx)->gso_size < RTE_GSO_UDP_SEG_SIZE_MIN)
 
 #define ILLEGAL_TCP_GSO_CTX(ctx) \
-	((((ctx)->gso_types & (DEV_TX_OFFLOAD_TCP_TSO | \
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-		DEV_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
+	((((ctx)->gso_types & (RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
 		(ctx)->gso_size < RTE_GSO_SEG_SIZE_MIN)
 
 int
@@ -54,28 +54,28 @@ rte_gso_segment(struct rte_mbuf *pkt,
 	ol_flags = pkt->ol_flags;
 
 	if ((IS_IPV4_VXLAN_TCP4(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
 			((IS_IPV4_GRE_TCP4(pkt->ol_flags) &&
-			 (gso_ctx->gso_types & DEV_TX_OFFLOAD_GRE_TNL_TSO)))) {
+			 (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))) {
 		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
 		ret = gso_tunnel_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_VXLAN_UDP4(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) &&
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
 		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
 		ret = gso_tunnel_udp4_segment(pkt, gso_size,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_TCP(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_TCP_TSO)) {
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
 		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
 		ret = gso_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_UDP(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
 		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
 		ret = gso_udp4_segment(pkt, gso_size, direct_pool,
 				indirect_pool, pkts_out, nb_pkts_out);
diff --git a/lib/gso/rte_gso.h b/lib/gso/rte_gso.h
index d93ee8e5b171..0a65afc11e64 100644
--- a/lib/gso/rte_gso.h
+++ b/lib/gso/rte_gso.h
@@ -52,11 +52,11 @@ struct rte_gso_ctx {
 	uint32_t gso_types;
 	/**< the bit mask of required GSO types. The GSO library
 	 * uses the same macros as that of describing device TX
-	 * offloading capabilities (i.e. DEV_TX_OFFLOAD_*_TSO) for
+	 * offloading capabilities (i.e. RTE_ETH_TX_OFFLOAD_*_TSO) for
 	 * gso_types.
 	 *
 	 * For example, if applications want to segment TCP/IPv4
-	 * packets, set DEV_TX_OFFLOAD_TCP_TSO in gso_types.
+	 * packets, set RTE_ETH_TX_OFFLOAD_TCP_TSO in gso_types.
 	 */
 	uint16_t gso_size;
 	/**< maximum size of an output GSO segment, including packet
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index d6f167994411..5a5b6b1e33c1 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -185,7 +185,7 @@ extern "C" {
  * The detection of PKT_RX_OUTER_L4_CKSUM_GOOD shall be based on the given
  * HW capability, At minimum, the PMD should support
  * PKT_RX_OUTER_L4_CKSUM_UNKNOWN and PKT_RX_OUTER_L4_CKSUM_BAD states
- * if the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
+ * if the RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
  */
 #define PKT_RX_OUTER_L4_CKSUM_MASK	((1ULL << 21) | (1ULL << 22))
 
@@ -208,7 +208,7 @@ extern "C" {
  * a) Fill outer_l2_len and outer_l3_len in mbuf.
  * b) Set the PKT_TX_OUTER_UDP_CKSUM flag.
  * c) Set the PKT_TX_OUTER_IPV4 or PKT_TX_OUTER_IPV6 flag.
- * 2) Configure DEV_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
+ * 2) Configure RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
  */
 #define PKT_TX_OUTER_UDP_CKSUM     (1ULL << 41)
 
@@ -253,7 +253,7 @@ extern "C" {
  * It can be used for tunnels which are not standards or listed above.
  * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_GRE
  * or PKT_TX_TUNNEL_IPIP if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_IP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_IP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
  * PKT_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
@@ -266,7 +266,7 @@ extern "C" {
  * It can be used for tunnels which are not standards or listed above.
  * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_VXLAN
  * if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_UDP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
  * PKT_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
diff --git a/lib/mbuf/rte_mbuf_dyn.h b/lib/mbuf/rte_mbuf_dyn.h
index fb03cf1dcf90..29abe8da53cf 100644
--- a/lib/mbuf/rte_mbuf_dyn.h
+++ b/lib/mbuf/rte_mbuf_dyn.h
@@ -37,7 +37,7 @@
  *   of the dynamic field to be registered:
  *   const struct rte_mbuf_dynfield rte_dynfield_my_feature = { ... };
  * - The application initializes the PMD, and asks for this feature
- *   at port initialization by passing DEV_RX_OFFLOAD_MY_FEATURE in
+ *   at port initialization by passing RTE_ETH_RX_OFFLOAD_MY_FEATURE in
  *   rxconf. This will make the PMD to register the field by calling
  *   rte_mbuf_dynfield_register(&rte_dynfield_my_feature). The PMD
  *   stores the returned offset.
-- 
2.31.1


^ permalink raw reply	[relevance 1%]

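As context for the hunks above: the old DEV_*/ETH_* names remain available as
aliases, so existing applications keep building unchanged, and migration is a
mechanical rename. A minimal sketch of port setup using only the new names
(the function name, port id and queue counts here are illustrative, not part
of the patch):

#include <rte_ethdev.h>

static int
configure_port(uint16_t port_id)
{
	struct rte_eth_conf conf = {
		.rxmode = {
			/* Was DEV_RX_OFFLOAD_CHECKSUM before this patch. */
			.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
		},
		.txmode = {
			.mq_mode = RTE_ETH_MQ_TX_NONE,
			/* Was DEV_TX_OFFLOAD_MBUF_FAST_FREE. */
			.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
		},
	};

	/* One Rx and one Tx queue, for illustration only. */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}

Since the aliases are plain #defines to the new names, mixed old/new usage
also compiles during a transition.
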
* Re: [dpdk-dev] [PATCH v3 0/8] crypto/security session framework rework
  2021-10-20 16:48  0%       ` Akhil Goyal
@ 2021-10-20 18:04  0%         ` Akhil Goyal
  2021-10-21  8:43  0%           ` Zhang, Roy Fan
  0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-20 18:04 UTC (permalink / raw)
  To: Power, Ciara, dev, Ananyev, Konstantin, thomas, roy.fan.zhang,
	pablo.de.lara.guarch
  Cc: david.marchand, hemant.agrawal, Anoob Joseph, Trahe, Fiona,
	Doherty, Declan, matan, g.singh, jianjay.zhou, asomalap,
	ruifeng.wang, Nicolau, Radu, ajit.khaparde, Nagadheeraj Rottela,
	Ankur Dwivedi, Wang, Haiyue, jiawenwu, jianwang,
	Jerin Jacob Kollanukkaran, Nithin Kumar Dabilpuram

> > > I am seeing test failures for cryptodev_scheduler_autotest:
> > > + Tests Total :       638
> > >  + Tests Skipped :     280
> > >  + Tests Executed :    638
> > >  + Tests Unsupported:   0
> > >  + Tests Passed :      18
> > >  + Tests Failed :      340
> > >
> > > The error showing for each testcase:
> > > scheduler_pmd_sym_session_configure() line 487: unable to config sym
> > > session
> > > CRYPTODEV: rte_cryptodev_sym_session_init() line 1743: dev_id 2 failed
> to
> > > configure session details
> > >
> > > I believe the problem happens in
> scheduler_pmd_sym_session_configure.
> > > The full sess object is no longer accessible in here, but it is required to be
> > > passed to rte_cryptodev_sym_session_init.
> > > The init function expects access to sess rather than the private data, and
> > now
> > > fails as a result.
> > >
> > > static int
> > > scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
> > >         struct rte_crypto_sym_xform *xform, void *sess,
> > >         rte_iova_t sess_iova __rte_unused)
> > > {
> > >         struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> > >         uint32_t i;
> > >         int ret;
> > >         for (i = 0; i < sched_ctx->nb_workers; i++) {
> > >                 struct scheduler_worker *worker = &sched_ctx->workers[i];
> > >                 ret = rte_cryptodev_sym_session_init(worker->dev_id, sess,
> > >                                         xform);
> > >                 if (ret < 0) {
> > >                         CR_SCHED_LOG(ERR, "unable to config sym session");
> > >                         return ret;
> > >                 }
> > >         }
> > >         return 0;
> > > }
> > >
> > It looks like scheduler PMD is managing the stuff on its own for other
> PMDs.
> > The APIs are designed such that the app can call session_init multiple times
> > With different dev_id on same sess.
> > But here scheduler PMD internally want to configure other PMDs sess_priv
> > By calling session_init.
> >
> > I wonder, why we have this 2 step session_create and session_init?
> > Why can't we have it similar to security session create and let the scheduler
> > PMD have its big session private data which can hold priv_data of as many
> > PMDs
> > as it want to schedule.
> >
> > Konstantin/Fan/Pablo what are your thoughts on this issue?
> > Can we resolve this issue at priority in RC1(or probably RC2) for this release
> > or
> > else we defer it for next ABI break release?
> >
> > Thomas,
> > Can we defer this for RC2? It does not seem to be fixed in 1 day.
> 
> On another thought, this can be fixed with current patch also by having a big
> session
> Private data for scheduler PMD which is big enough to hold all other PMDs
> data which
> it want to schedule and then call the sess_configure function pointer of dev
> directly.
> What say? And this PMD change can be done in RC2. And this patchset go as
> is in RC1.
Here is the diff in the scheduler PMD which should fix this issue in the current patchset.

diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
index b92ffd6026..0611ea2c6a 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -450,9 +450,8 @@ scheduler_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 }

 static uint32_t
-scheduler_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
+get_max_session_priv_size(struct scheduler_ctx *sched_ctx)
 {
-       struct scheduler_ctx *sched_ctx = dev->data->dev_private;
        uint8_t i = 0;
        uint32_t max_priv_sess_size = 0;

@@ -469,20 +468,35 @@ scheduler_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
        return max_priv_sess_size;
 }

+static uint32_t
+scheduler_pmd_sym_session_get_size(struct rte_cryptodev *dev)
+{
+       struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+
+       return get_max_session_priv_size(sched_ctx) * sched_ctx->nb_workers;
+}
+
 static int
 scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
        struct rte_crypto_sym_xform *xform, void *sess,
        rte_iova_t sess_iova __rte_unused)
 {
        struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+       uint32_t worker_sess_priv_sz = get_max_session_priv_size(sched_ctx);
        uint32_t i;
        int ret;

        for (i = 0; i < sched_ctx->nb_workers; i++) {
                struct scheduler_worker *worker = &sched_ctx->workers[i];
+               struct rte_cryptodev *worker_dev =
+                               rte_cryptodev_pmd_get_dev(worker->dev_id);
+               uint8_t index = worker_dev->driver_id;

-               ret = rte_cryptodev_sym_session_init(worker->dev_id, sess,
-                                       xform);
+               ret = worker_dev->dev_ops->sym_session_configure(
+                               worker_dev,
+                               xform,
+                               (uint8_t *)sess + (index * worker_sess_priv_sz),
+                               sess_iova + (index * worker_sess_priv_sz));
                if (ret < 0) {
                        CR_SCHED_LOG(ERR, "unable to config sym session");
                        return ret;

^ permalink raw reply	[relevance 0%]
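
In this diff the scheduler reports a session private data size covering all
workers, carves that area into equal slices, and hands each worker its own
slice (with the matching IOVA offset) straight through the worker's
sym_session_configure op, instead of going through
rte_cryptodev_sym_session_init. A sketch of the matching lookup (hypothetical
helper; note the slice index is the worker's driver_id, which assumes driver
ids stay below the number of slices allocated):

static inline void *
scheduler_worker_sess_priv(void *sess, uint8_t worker_driver_id,
			   uint32_t slice_sz)
{
	/* Each worker's private data sits at a fixed offset inside the
	 * scheduler's single session private data blob. */
	return (uint8_t *)sess + (size_t)worker_driver_id * slice_sz;
}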

* Re: [dpdk-dev] [PATCH v3 0/8] crypto/security session framework rework
  2021-10-20 16:41  3%     ` Akhil Goyal
@ 2021-10-20 16:48  0%       ` Akhil Goyal
  2021-10-20 18:04  0%         ` Akhil Goyal
  0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-20 16:48 UTC (permalink / raw)
  To: Power, Ciara, dev, Ananyev, Konstantin, thomas, roy.fan.zhang,
	pablo.de.lara.guarch
  Cc: david.marchand, hemant.agrawal, Anoob Joseph, Trahe, Fiona,
	Doherty, Declan, matan, g.singh, jianjay.zhou, asomalap,
	ruifeng.wang, Nicolau, Radu, ajit.khaparde, Nagadheeraj Rottela,
	Ankur Dwivedi, Wang, Haiyue, jiawenwu, jianwang,
	Jerin Jacob Kollanukkaran, Nithin Kumar Dabilpuram

> > Hi Akhil,
> >
> > >Subject: [PATCH v3 0/8] crypto/security session framework rework
> > >
> > >As discussed in last release deprecation notice, crypto and security session
> > >framework are reworked to reduce the need of two mempool objects and
> > >remove the requirement to expose the rte_security_session and
> > >rte_cryptodev_sym_session structures.
> > >Design methodology is explained in the patch description.
> > >
> > >Similar work will need to be done for asymmetric sessions as well.
> > Asymmetric
> > >session need another rework and is postponed to next release. Since it is
> > still
> > >in experimental stage, we can modify the APIs in next release as well.
> > >
> > >The patches are compilable with all affected PMDs and tested with dpdk-
> > test
> > >and test-crypto-perf app on CN9k platform.
> > <snip>
> >
> > I am seeing test failures for cryptodev_scheduler_autotest:
> > + Tests Total :       638
> >  + Tests Skipped :     280
> >  + Tests Executed :    638
> >  + Tests Unsupported:   0
> >  + Tests Passed :      18
> >  + Tests Failed :      340
> >
> > The error showing for each testcase:
> > scheduler_pmd_sym_session_configure() line 487: unable to config sym
> > session
> > CRYPTODEV: rte_cryptodev_sym_session_init() line 1743: dev_id 2 failed to
> > configure session details
> >
> > I believe the problem happens in scheduler_pmd_sym_session_configure.
> > The full sess object is no longer accessible in here, but it is required to be
> > passed to rte_cryptodev_sym_session_init.
> > The init function expects access to sess rather than the private data, and
> now
> > fails as a result.
> >
> > static int
> > scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
> >         struct rte_crypto_sym_xform *xform, void *sess,
> >         rte_iova_t sess_iova __rte_unused)
> > {
> >         struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> >         uint32_t i;
> >         int ret;
> >         for (i = 0; i < sched_ctx->nb_workers; i++) {
> >                 struct scheduler_worker *worker = &sched_ctx->workers[i];
> >                 ret = rte_cryptodev_sym_session_init(worker->dev_id, sess,
> >                                         xform);
> >                 if (ret < 0) {
> >                         CR_SCHED_LOG(ERR, "unable to config sym session");
> >                         return ret;
> >                 }
> >         }
> >         return 0;
> > }
> >
> It looks like scheduler PMD is managing the stuff on its own for other PMDs.
> The APIs are designed such that the app can call session_init multiple times
> With different dev_id on same sess.
> But here scheduler PMD internally want to configure other PMDs sess_priv
> By calling session_init.
> 
> I wonder, why we have this 2 step session_create and session_init?
> Why can't we have it similar to security session create and let the scheduler
> PMD have its big session private data which can hold priv_data of as many
> PMDs
> as it want to schedule.
> 
> Konstantin/Fan/Pablo what are your thoughts on this issue?
> Can we resolve this issue at priority in RC1(or probably RC2) for this release
> or
> else we defer it for next ABI break release?
> 
> Thomas,
> Can we defer this for RC2? It does not seem to be fixed in 1 day.

On second thought, this can also be fixed with the current patch by giving the
scheduler PMD a session private data area big enough to hold the data of all
the PMDs it wants to schedule, and then calling each dev's sess_configure
function pointer directly.
What say? This PMD change can be done in RC2, and this patchset can go as-is in RC1.


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3 0/8] crypto/security session framework rework
  @ 2021-10-20 16:41  3%     ` Akhil Goyal
  2021-10-20 16:48  0%       ` Akhil Goyal
  0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-20 16:41 UTC (permalink / raw)
  To: Power, Ciara, dev, Ananyev, Konstantin, thomas, roy.fan.zhang,
	pablo.de.lara.guarch
  Cc: david.marchand, hemant.agrawal, Anoob Joseph, Trahe, Fiona,
	Doherty, Declan, matan, g.singh, jianjay.zhou, asomalap,
	ruifeng.wang, Nicolau, Radu, ajit.khaparde, Nagadheeraj Rottela,
	Ankur Dwivedi, Wang, Haiyue, jiawenwu, jianwang,
	Jerin Jacob Kollanukkaran, Nithin Kumar Dabilpuram

> Hi Akhil,
> 
> >Subject: [PATCH v3 0/8] crypto/security session framework rework
> >
> >As discussed in last release deprecation notice, crypto and security session
> >framework are reworked to reduce the need of two mempool objects and
> >remove the requirement to expose the rte_security_session and
> >rte_cryptodev_sym_session structures.
> >Design methodology is explained in the patch description.
> >
> >Similar work will need to be done for asymmetric sessions as well.
> Asymmetric
> >session need another rework and is postponed to next release. Since it is
> still
> >in experimental stage, we can modify the APIs in next release as well.
> >
> >The patches are compilable with all affected PMDs and tested with dpdk-
> test
> >and test-crypto-perf app on CN9k platform.
> <snip>
> 
> I am seeing test failures for cryptodev_scheduler_autotest:
> + Tests Total :       638
>  + Tests Skipped :     280
>  + Tests Executed :    638
>  + Tests Unsupported:   0
>  + Tests Passed :      18
>  + Tests Failed :      340
> 
> The error showing for each testcase:
> scheduler_pmd_sym_session_configure() line 487: unable to config sym
> session
> CRYPTODEV: rte_cryptodev_sym_session_init() line 1743: dev_id 2 failed to
> configure session details
> 
> I believe the problem happens in scheduler_pmd_sym_session_configure.
> The full sess object is no longer accessible in here, but it is required to be
> passed to rte_cryptodev_sym_session_init.
> The init function expects access to sess rather than the private data, and now
> fails as a result.
> 
> static int
> scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
>         struct rte_crypto_sym_xform *xform, void *sess,
>         rte_iova_t sess_iova __rte_unused)
> {
>         struct scheduler_ctx *sched_ctx = dev->data->dev_private;
>         uint32_t i;
>         int ret;
>         for (i = 0; i < sched_ctx->nb_workers; i++) {
>                 struct scheduler_worker *worker = &sched_ctx->workers[i];
>                 ret = rte_cryptodev_sym_session_init(worker->dev_id, sess,
>                                         xform);
>                 if (ret < 0) {
>                         CR_SCHED_LOG(ERR, "unable to config sym session");
>                         return ret;
>                 }
>         }
>         return 0;
> }
> 
It looks like the scheduler PMD is managing this on its own for the other PMDs.
The APIs are designed such that the app can call session_init multiple times
with different dev_ids on the same sess.
But here the scheduler PMD internally wants to configure the other PMDs'
sess_priv by calling session_init.

I wonder why we have this two-step session_create and session_init.
Why can't we have it similar to security session create and let the scheduler
PMD have one big session private data area which can hold the priv_data of as
many PMDs as it wants to schedule?

Konstantin/Fan/Pablo, what are your thoughts on this issue?
Can we resolve this issue at priority in RC1 (or probably RC2) for this
release, or else defer it to the next ABI-break release?

Thomas,
Can we defer this to RC2? It does not seem it can be fixed in 1 day.

^ permalink raw reply	[relevance 3%]
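
For reference, the two-step flow being questioned here, roughly as it stands
before the rework (signatures from the pre-rework, 21.08-era API; mempools,
xform and device ids are assumed set up elsewhere, error handling omitted):

struct rte_cryptodev_sym_session *sess;

/* Step 1: allocate the generic session header from a mempool. */
sess = rte_cryptodev_sym_session_create(sess_mp);

/* Step 2: allocate and fill per-device private data; may be repeated
 * with different dev_ids on the same session. */
rte_cryptodev_sym_session_init(dev_id_a, sess, &xform, priv_mp);
rte_cryptodev_sym_session_init(dev_id_b, sess, &xform, priv_mp);

The scheduler trips over this split because its own session_configure op now
receives only its private data area, not the full session object that
rte_cryptodev_sym_session_init expects when invoked for the workers.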

* Re: [dpdk-dev] [EXT] Re: [PATCH v3 2/7] eal/interrupts: implement get set APIs
  @ 2021-10-20 15:30  3%         ` Dmitry Kozlyuk
  2021-10-21  9:16  0%           ` Harman Kalra
  0 siblings, 1 reply; 200+ results
From: Dmitry Kozlyuk @ 2021-10-20 15:30 UTC (permalink / raw)
  To: Harman Kalra
  Cc: Stephen Hemminger, Thomas Monjalon, david.marchand, dev, Ray Kinsella

2021-10-19 08:32 (UTC+0000), Harman Kalra:
> > -----Original Message-----
> > From: Stephen Hemminger <stephen@networkplumber.org>
> > Sent: Tuesday, October 19, 2021 4:27 AM
> > To: Harman Kalra <hkalra@marvell.com>
> > Cc: dev@dpdk.org; Thomas Monjalon <thomas@monjalon.net>; Ray Kinsella
> > <mdr@ashroe.eu>; david.marchand@redhat.com;
> > dmitry.kozliuk@gmail.com
> > Subject: [EXT] Re: [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement get
> > set APIs
> > 
> > External Email
> > 
> > ----------------------------------------------------------------------
> > On Tue, 19 Oct 2021 01:07:02 +0530
> > Harman Kalra <hkalra@marvell.com> wrote:
> >   
> > > +	/* Detect if DPDK malloc APIs are ready to be used. */
> > > +	mem_allocator = rte_malloc_is_ready();
> > > +	if (mem_allocator)
> > > +		intr_handle = rte_zmalloc(NULL, sizeof(struct  
> > rte_intr_handle),  
> > > +					  0);
> > > +	else
> > > +		intr_handle = calloc(1, sizeof(struct rte_intr_handle));  
> > 
> > This is problematic way to do this.
> > The reason to use rte_malloc vs malloc should be determined by usage.
> > 
> > If the pointer will be shared between primary/secondary process then it has
> > to be in hugepages (ie rte_malloc). If it is not shared then then use regular
> > malloc.
> > 
> > But what you have done is created a method which will be a latent bug for
> > anyone using primary/secondary process.
> > 
> > Either:
> >     intr_handle is not allowed to be used in secondary.
> >       Then always use malloc().
> > Or.
> >     intr_handle can be used by both primary and secondary.
> >     Then always use rte_malloc().
> >     Any code path that allocates intr_handle before pool is
> >     ready is broken.  
> 
> Hi Stephan,
> 
> Till V2, I implemented this API in a way where user of the API can choose
> If he wants intr handle to be allocated using malloc or rte_malloc by passing
> a flag arg to the rte_intr_instanc_alloc API. User of the API will best know if
> the intr handle is to be shared with secondary or not.
> 
> But after some discussions and suggestions from the community we decided
> to drop that flag argument and auto detect on whether rte_malloc APIs are
> ready to be used and thereafter make all further allocations via rte_malloc.
> Currently alarm subsystem (or any driver doing allocation in constructor) gets
> interrupt instance allocated using glibc malloc that too because rte_malloc*
> is not ready by rte_eal_alarm_init(), while all further consumers gets instance
> allocated via rte_malloc.

Just as a comment, bus scanning is the real issue, not the alarms.
Alarms could be initialized after memory management
(but it's irrelevant because their handle is not accessed from the outside).
However, MM needs to know the bus IOVA requirements to initialize,
and those are usually determined at least by the bus devices' requirements.

>  I think this should not cause any issue in primary/secondary model as all interrupt
> instance pointer will be shared.

What do you mean? Aren't we discussing the issue
that those allocated early are not shared?

> Infact to avoid any surprises of primary/secondary
> not working we thought of making all allocations via rte_malloc. 

I don't see why anyone would not make them shared.
In order to only use rte_malloc(), we need:
1. In bus drivers, move handle allocation from the scan to the probe stage.
2. In EAL, move alarm initialization to after MM.
This can all be done later with the v3 design, but there are out-of-tree
drivers. We need to force them to make step 1 at some point.
I see two options:
a) Right now, have an external API that only works with rte_malloc()
   and an internal API with autodetection. Fix DPDK and drop the internal API.
b) Have an external API with autodetection. Fix DPDK.
   At the next ABI breakage, drop autodetection and the libc malloc path.

> David, Thomas, Dmitry, please add if I missed anything.
> 
> Can we please conclude on this series APIs as API freeze deadline (rc1) is very near.

I support the v3 design with no options and autodetection,
because that's the interface we want in the end.
The implementation can be improved later.

^ permalink raw reply	[relevance 3%]
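
To make the hazard concrete, a small sketch of the two allocation paths being
debated (rte_malloc_is_ready() is the autodetection helper proposed in this
series, not an existing upstream API):

struct rte_intr_handle *handle;

if (rte_malloc_is_ready())
	/* DPDK heap is up: the handle lands in memory that a secondary
	 * process can map and safely dereference. */
	handle = rte_zmalloc(NULL, sizeof(*handle), 0);
else
	/* Early path (constructors, bus scan): process-local memory;
	 * a secondary process must never dereference this pointer. */
	handle = calloc(1, sizeof(*handle));

Option a) keeps this branch internal and gives external users only the
rte_malloc() variant; option b) exposes the branch and retires the calloc()
leg at the next ABI break.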

* [dpdk-dev] [PATCH v4 8/8] cryptodev: move device specific structures
    2021-10-20 11:27  2%       ` [dpdk-dev] [PATCH v4 3/8] cryptodev: move inline APIs into separate structure Akhil Goyal
  2021-10-20 11:27  3%       ` [dpdk-dev] [PATCH v4 7/8] cryptodev: update fast path APIs to use new flat array Akhil Goyal
@ 2021-10-20 11:27  7%       ` Akhil Goyal
  2 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-10-20 11:27 UTC (permalink / raw)
  To: dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
	konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
	adwivedi, ciara.power, Akhil Goyal, Rebecca Troy

The device-specific structures - rte_cryptodev
and rte_cryptodev_data - are moved to cryptodev_pmd.h
to hide them from applications.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Tested-by: Rebecca Troy <rebecca.troy@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst        |  6 ++
 drivers/crypto/ccp/ccp_dev.h                  |  2 +-
 drivers/crypto/cnxk/cn10k_ipsec.c             |  2 +-
 drivers/crypto/cnxk/cn9k_ipsec.c              |  2 +-
 .../crypto/cnxk/cnxk_cryptodev_capabilities.c |  2 +-
 drivers/crypto/cnxk/cnxk_cryptodev_sec.c      |  2 +-
 drivers/crypto/nitrox/nitrox_sym_reqmgr.c     |  2 +-
 drivers/crypto/octeontx/otx_cryptodev.c       |  1 -
 .../crypto/octeontx/otx_cryptodev_hw_access.c |  2 +-
 .../crypto/octeontx/otx_cryptodev_hw_access.h |  2 +-
 drivers/crypto/octeontx/otx_cryptodev_ops.h   |  2 +-
 .../crypto/octeontx2/otx2_cryptodev_mbox.c    |  2 +-
 drivers/crypto/scheduler/scheduler_failover.c |  2 +-
 .../crypto/scheduler/scheduler_multicore.c    |  2 +-
 .../scheduler/scheduler_pkt_size_distr.c      |  2 +-
 .../crypto/scheduler/scheduler_roundrobin.c   |  2 +-
 drivers/event/cnxk/cnxk_eventdev.h            |  2 +-
 drivers/event/dpaa/dpaa_eventdev.c            |  2 +-
 drivers/event/dpaa2/dpaa2_eventdev.c          |  2 +-
 drivers/event/octeontx/ssovf_evdev.c          |  2 +-
 .../event/octeontx2/otx2_evdev_crypto_adptr.c |  2 +-
 lib/cryptodev/cryptodev_pmd.h                 | 65 ++++++++++++++++++
 lib/cryptodev/rte_cryptodev_core.h            | 67 -------------------
 lib/cryptodev/version.map                     |  2 +-
 24 files changed, 91 insertions(+), 88 deletions(-)

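Applications are unaffected by this move, since they only ever reached these
structures through the public API; it is PMDs (including out-of-tree ones)
that must now pull in the driver-facing header, as the include churn below
shows. A minimal sketch of the PMD side (the my_pmd_* names are hypothetical):

#include <cryptodev_pmd.h>	/* was <rte_cryptodev.h> */

static uint16_t
my_pmd_dequeue_burst(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops);

static int
my_pmd_setup(struct rte_cryptodev *dev)
{
	/* struct rte_cryptodev members are only visible here because
	 * cryptodev_pmd.h is included. */
	dev->dequeue_burst = my_pmd_dequeue_burst;
	return 0;
}
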
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index faa9164546..23bc854d16 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -328,6 +328,12 @@ ABI Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* cryptodev: Made ``rte_cryptodev``, ``rte_cryptodev_data`` private
+  structures internal to DPDK. ``rte_cryptodevs`` can't be accessed directly
+  by user any more. While it is an ABI breakage, this change is intended
+  to be transparent for both users (no changes in user app is required) and
+  PMD developers (no changes in PMD is required).
+
 * security: ``rte_security_set_pkt_metadata`` and ``rte_security_get_userdata``
   routines used by inline outbound and inline inbound security processing were
   made inline and enhanced to do simple 64-bit set/get for PMDs that do not
diff --git a/drivers/crypto/ccp/ccp_dev.h b/drivers/crypto/ccp/ccp_dev.h
index ca5145c278..85c8fc47a2 100644
--- a/drivers/crypto/ccp/ccp_dev.h
+++ b/drivers/crypto/ccp/ccp_dev.h
@@ -17,7 +17,7 @@
 #include <rte_pci.h>
 #include <rte_spinlock.h>
 #include <rte_crypto_sym.h>
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
 
 /**< CCP sspecific */
 #define MAX_HW_QUEUES                   5
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index defc792aa8..27df1dcd64 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -3,7 +3,7 @@
  */
 
 #include <rte_malloc.h>
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
 #include <rte_esp.h>
 #include <rte_ip.h>
 #include <rte_security.h>
diff --git a/drivers/crypto/cnxk/cn9k_ipsec.c b/drivers/crypto/cnxk/cn9k_ipsec.c
index 9ca4d20c62..53fb793654 100644
--- a/drivers/crypto/cnxk/cn9k_ipsec.c
+++ b/drivers/crypto/cnxk/cn9k_ipsec.c
@@ -2,7 +2,7 @@
  * Copyright(C) 2021 Marvell.
  */
 
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
 #include <rte_ip.h>
 #include <rte_security.h>
 #include <rte_security_driver.h>
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
index a227e6981c..a53b489a04 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
@@ -2,7 +2,7 @@
  * Copyright(C) 2021 Marvell.
  */
 
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
 #include <rte_security.h>
 
 #include "roc_api.h"
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_sec.c b/drivers/crypto/cnxk/cnxk_cryptodev_sec.c
index 8d04d4b575..2021d5c77e 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_sec.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_sec.c
@@ -2,7 +2,7 @@
  * Copyright(C) 2021 Marvell.
  */
 
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
 #include <rte_malloc.h>
 #include <rte_security.h>
 #include <rte_security_driver.h>
diff --git a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
index fe3ca25a0c..9edb0cc00f 100644
--- a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
+++ b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
@@ -3,7 +3,7 @@
  */
 
 #include <rte_crypto.h>
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
 #include <rte_cycles.h>
 #include <rte_errno.h>
 
diff --git a/drivers/crypto/octeontx/otx_cryptodev.c b/drivers/crypto/octeontx/otx_cryptodev.c
index 05b78329d6..337d06aab8 100644
--- a/drivers/crypto/octeontx/otx_cryptodev.c
+++ b/drivers/crypto/octeontx/otx_cryptodev.c
@@ -4,7 +4,6 @@
 
 #include <rte_bus_pci.h>
 #include <rte_common.h>
-#include <rte_cryptodev.h>
 #include <cryptodev_pmd.h>
 #include <rte_log.h>
 #include <rte_pci.h>
diff --git a/drivers/crypto/octeontx/otx_cryptodev_hw_access.c b/drivers/crypto/octeontx/otx_cryptodev_hw_access.c
index 7b89a62d81..20b288334a 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_hw_access.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_hw_access.c
@@ -7,7 +7,7 @@
 
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
 #include <rte_errno.h>
 #include <rte_mempool.h>
 #include <rte_memzone.h>
diff --git a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
index 7c6b1e45b4..e48805fb09 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
+++ b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
@@ -7,7 +7,7 @@
 #include <stdbool.h>
 
 #include <rte_branch_prediction.h>
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
 #include <rte_cycles.h>
 #include <rte_io.h>
 #include <rte_memory.h>
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.h b/drivers/crypto/octeontx/otx_cryptodev_ops.h
index f234f16970..83b82ea059 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.h
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.h
@@ -5,7 +5,7 @@
 #ifndef _OTX_CRYPTODEV_OPS_H_
 #define _OTX_CRYPTODEV_OPS_H_
 
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
 
 #define OTX_CPT_MIN_HEADROOM_REQ	(24)
 #define OTX_CPT_MIN_TAILROOM_REQ	(8)
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c b/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
index 1a8edae7eb..f9e7b0b474 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (C) 2019 Marvell International Ltd.
  */
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
 #include <rte_ethdev.h>
 
 #include "otx2_cryptodev.h"
diff --git a/drivers/crypto/scheduler/scheduler_failover.c b/drivers/crypto/scheduler/scheduler_failover.c
index 844312dd1b..5023577ef8 100644
--- a/drivers/crypto/scheduler/scheduler_failover.c
+++ b/drivers/crypto/scheduler/scheduler_failover.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2017 Intel Corporation
  */
 
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
 #include <rte_malloc.h>
 
 #include "rte_cryptodev_scheduler_operations.h"
diff --git a/drivers/crypto/scheduler/scheduler_multicore.c b/drivers/crypto/scheduler/scheduler_multicore.c
index 1e2e8dbf9f..900ab4049d 100644
--- a/drivers/crypto/scheduler/scheduler_multicore.c
+++ b/drivers/crypto/scheduler/scheduler_multicore.c
@@ -3,7 +3,7 @@
  */
 #include <unistd.h>
 
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
 #include <rte_malloc.h>
 
 #include "rte_cryptodev_scheduler_operations.h"
diff --git a/drivers/crypto/scheduler/scheduler_pkt_size_distr.c b/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
index 57e330a744..933a5c6978 100644
--- a/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
+++ b/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2017 Intel Corporation
  */
 
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
 #include <rte_malloc.h>
 
 #include "rte_cryptodev_scheduler_operations.h"
diff --git a/drivers/crypto/scheduler/scheduler_roundrobin.c b/drivers/crypto/scheduler/scheduler_roundrobin.c
index bc4a632106..ace2dec2ec 100644
--- a/drivers/crypto/scheduler/scheduler_roundrobin.c
+++ b/drivers/crypto/scheduler/scheduler_roundrobin.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2017 Intel Corporation
  */
 
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
 #include <rte_malloc.h>
 
 #include "rte_cryptodev_scheduler_operations.h"
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 8a5c737e4b..b57004c0dc 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -7,7 +7,7 @@
 
 #include <string.h>
 
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
 #include <rte_devargs.h>
 #include <rte_ethdev.h>
 #include <rte_event_eth_rx_adapter.h>
diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
index ec74160325..1d7ddfe1d1 100644
--- a/drivers/event/dpaa/dpaa_eventdev.c
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -28,7 +28,7 @@
 #include <rte_ethdev.h>
 #include <rte_event_eth_rx_adapter.h>
 #include <rte_event_eth_tx_adapter.h>
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
 #include <rte_dpaa_bus.h>
 #include <rte_dpaa_logs.h>
 #include <rte_cycles.h>
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 5ccf22f77f..e03afb2958 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -25,7 +25,7 @@
 #include <rte_pci.h>
 #include <rte_bus_vdev.h>
 #include <ethdev_driver.h>
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
 #include <rte_event_eth_rx_adapter.h>
 #include <rte_event_eth_tx_adapter.h>
 
diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index b93f6ec8c6..9846fce34b 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -5,7 +5,7 @@
 #include <inttypes.h>
 
 #include <rte_common.h>
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
 #include <rte_debug.h>
 #include <rte_dev.h>
 #include <rte_eal.h>
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
index d9a002625c..d59d6c53f6 100644
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
+++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
@@ -2,7 +2,7 @@
  * Copyright (C) 2020-2021 Marvell.
  */
 
-#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
 #include <rte_eventdev.h>
 
 #include "otx2_cryptodev.h"
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index 9bb1e47ae4..89bf2af399 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -52,6 +52,71 @@ struct rte_cryptodev_pmd_init_params {
 	unsigned int max_nb_queue_pairs;
 };
 
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each device.
+ *
+ * This structure is safe to place in shared memory to be common among
+ * different processes in a multi-process configuration.
+ */
+struct rte_cryptodev_data {
+	/** Device ID for this instance */
+	uint8_t dev_id;
+	/** Socket ID where memory is allocated */
+	uint8_t socket_id;
+	/** Unique identifier name */
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	__extension__
+	/** Device state: STARTED(1)/STOPPED(0) */
+	uint8_t dev_started : 1;
+
+	/** Session memory pool */
+	struct rte_mempool *session_pool;
+	/** Array of pointers to queue pairs. */
+	void **queue_pairs;
+	/** Number of device queue pairs. */
+	uint16_t nb_queue_pairs;
+
+	/** PMD-specific private data */
+	void *dev_private;
+} __rte_cache_aligned;
+
+/** @internal The data structure associated with each crypto device. */
+struct rte_cryptodev {
+	/** Pointer to PMD dequeue function. */
+	dequeue_pkt_burst_t dequeue_burst;
+	/** Pointer to PMD enqueue function. */
+	enqueue_pkt_burst_t enqueue_burst;
+
+	/** Pointer to device data */
+	struct rte_cryptodev_data *data;
+	/** Functions exported by PMD */
+	struct rte_cryptodev_ops *dev_ops;
+	/** Feature flags exposes HW/SW features for the given device */
+	uint64_t feature_flags;
+	/** Backing device */
+	struct rte_device *device;
+
+	/** Crypto driver identifier*/
+	uint8_t driver_id;
+
+	/** User application callback for interrupts if present */
+	struct rte_cryptodev_cb_list link_intr_cbs;
+
+	/** Context for security ops */
+	void *security_ctx;
+
+	__extension__
+	/** Flag indicating the device is attached */
+	uint8_t attached : 1;
+
+	/** User application callback for pre enqueue processing */
+	struct rte_cryptodev_cb_rcu *enq_cbs;
+	/** User application callback for post dequeue processing */
+	struct rte_cryptodev_cb_rcu *deq_cbs;
+} __rte_cache_aligned;
+
 /** Global structure used for maintaining state of allocated crypto devices */
 struct rte_cryptodev_global {
 	struct rte_cryptodev *devs;	/**< Device information array */
diff --git a/lib/cryptodev/rte_cryptodev_core.h b/lib/cryptodev/rte_cryptodev_core.h
index 2bb9a228c1..16832f645d 100644
--- a/lib/cryptodev/rte_cryptodev_core.h
+++ b/lib/cryptodev/rte_cryptodev_core.h
@@ -54,73 +54,6 @@ struct rte_crypto_fp_ops {
 
 extern struct rte_crypto_fp_ops rte_crypto_fp_ops[RTE_CRYPTO_MAX_DEVS];
 
-/**
- * @internal
- * The data part, with no function pointers, associated with each device.
- *
- * This structure is safe to place in shared memory to be common among
- * different processes in a multi-process configuration.
- */
-struct rte_cryptodev_data {
-	uint8_t dev_id;
-	/**< Device ID for this instance */
-	uint8_t socket_id;
-	/**< Socket ID where memory is allocated */
-	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
-	/**< Unique identifier name */
-
-	__extension__
-	uint8_t dev_started : 1;
-	/**< Device state: STARTED(1)/STOPPED(0) */
-
-	struct rte_mempool *session_pool;
-	/**< Session memory pool */
-	void **queue_pairs;
-	/**< Array of pointers to queue pairs. */
-	uint16_t nb_queue_pairs;
-	/**< Number of device queue pairs. */
-
-	void *dev_private;
-	/**< PMD-specific private data */
-} __rte_cache_aligned;
-
-
-/** @internal The data structure associated with each crypto device. */
-struct rte_cryptodev {
-	dequeue_pkt_burst_t dequeue_burst;
-	/**< Pointer to PMD receive function. */
-	enqueue_pkt_burst_t enqueue_burst;
-	/**< Pointer to PMD transmit function. */
-
-	struct rte_cryptodev_data *data;
-	/**< Pointer to device data */
-	struct rte_cryptodev_ops *dev_ops;
-	/**< Functions exported by PMD */
-	uint64_t feature_flags;
-	/**< Feature flags exposes HW/SW features for the given device */
-	struct rte_device *device;
-	/**< Backing device */
-
-	uint8_t driver_id;
-	/**< Crypto driver identifier*/
-
-	struct rte_cryptodev_cb_list link_intr_cbs;
-	/**< User application callback for interrupts if present */
-
-	void *security_ctx;
-	/**< Context for security ops */
-
-	__extension__
-	uint8_t attached : 1;
-	/**< Flag indicating the device is attached */
-
-	struct rte_cryptodev_cb_rcu *enq_cbs;
-	/**< User application callback for pre enqueue processing */
-
-	struct rte_cryptodev_cb_rcu *deq_cbs;
-	/**< User application callback for post dequeue processing */
-} __rte_cache_aligned;
-
 /**
  * The pool of rte_cryptodev structures.
  */
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index 157dac521d..b55b4b8e7e 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -43,7 +43,6 @@ DPDK_22 {
 	rte_cryptodev_sym_session_create;
 	rte_cryptodev_sym_session_free;
 	rte_cryptodev_sym_session_init;
-	rte_cryptodevs;
 
 	#added in 21.11
 	rte_crypto_fp_ops;
@@ -125,4 +124,5 @@ INTERNAL {
 	rte_cryptodev_pmd_parse_input_args;
 	rte_cryptodev_pmd_probing_finish;
 	rte_cryptodev_pmd_release_device;
+	rte_cryptodevs;
 };
-- 
2.25.1


^ permalink raw reply	[relevance 7%]

* [dpdk-dev] [PATCH v4 7/8] cryptodev: update fast path APIs to use new flat array
    2021-10-20 11:27  2%       ` [dpdk-dev] [PATCH v4 3/8] cryptodev: move inline APIs into separate structure Akhil Goyal
@ 2021-10-20 11:27  3%       ` Akhil Goyal
  2021-10-20 11:27  7%       ` [dpdk-dev] [PATCH v4 8/8] cryptodev: move device specific structures Akhil Goyal
  2 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-10-20 11:27 UTC (permalink / raw)
  To: dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
	konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
	adwivedi, ciara.power, Akhil Goyal

Rework fast-path cryptodev functions to use rte_crypto_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in the user application are
required) and PMD developers (no changes in the PMD are required).
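
To illustrate why this is transparent: applications keep calling the same
inline helpers, and only the helper internals change to go through the
flat array. A minimal sketch (device ID, queue pair ID and burst size are
illustrative):

	struct rte_crypto_op *ops[32];
	uint16_t nb;

	/* unchanged application call; internally it now resolves
	 * rte_crypto_fp_ops[dev_id] instead of rte_cryptodevs[dev_id] */
	nb = rte_cryptodev_enqueue_burst(dev_id, qp_id, ops, 32);
	nb = rte_cryptodev_dequeue_burst(dev_id, qp_id, ops, 32);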

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/cryptodev/rte_cryptodev.h | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index ce0dca72be..56e3868ada 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -1832,13 +1832,18 @@ static inline uint16_t
 rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
 		struct rte_crypto_op **ops, uint16_t nb_ops)
 {
-	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+	const struct rte_crypto_fp_ops *fp_ops;
+	void *qp;
 
 	rte_cryptodev_trace_dequeue_burst(dev_id, qp_id, (void **)ops, nb_ops);
-	nb_ops = (*dev->dequeue_burst)
-			(dev->data->queue_pairs[qp_id], ops, nb_ops);
+
+	fp_ops = &rte_crypto_fp_ops[dev_id];
+	qp = fp_ops->qp.data[qp_id];
+
+	nb_ops = fp_ops->dequeue_burst(qp, ops, nb_ops);
+
 #ifdef RTE_CRYPTO_CALLBACKS
-	if (unlikely(dev->deq_cbs != NULL)) {
+	if (unlikely(fp_ops->qp.deq_cb != NULL)) {
 		struct rte_cryptodev_cb_rcu *list;
 		struct rte_cryptodev_cb *cb;
 
@@ -1848,7 +1853,7 @@ rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
 		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 		 * not required.
 		 */
-		list = &dev->deq_cbs[qp_id];
+		list = &fp_ops->qp.deq_cb[qp_id];
 		rte_rcu_qsbr_thread_online(list->qsbr, 0);
 		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
 
@@ -1899,10 +1904,13 @@ static inline uint16_t
 rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
 		struct rte_crypto_op **ops, uint16_t nb_ops)
 {
-	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+	const struct rte_crypto_fp_ops *fp_ops;
+	void *qp;
 
+	fp_ops = &rte_crypto_fp_ops[dev_id];
+	qp = fp_ops->qp.data[qp_id];
 #ifdef RTE_CRYPTO_CALLBACKS
-	if (unlikely(dev->enq_cbs != NULL)) {
+	if (unlikely(fp_ops->qp.enq_cb != NULL)) {
 		struct rte_cryptodev_cb_rcu *list;
 		struct rte_cryptodev_cb *cb;
 
@@ -1912,7 +1920,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
 		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 		 * not required.
 		 */
-		list = &dev->enq_cbs[qp_id];
+		list = &fp_ops->qp.enq_cb[qp_id];
 		rte_rcu_qsbr_thread_online(list->qsbr, 0);
 		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
 
@@ -1927,8 +1935,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
 #endif
 
 	rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops, nb_ops);
-	return (*dev->enqueue_burst)(
-			dev->data->queue_pairs[qp_id], ops, nb_ops);
+	return fp_ops->enqueue_burst(qp, ops, nb_ops);
 }
 
 
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v4 3/8] cryptodev: move inline APIs into separate structure
  @ 2021-10-20 11:27  2%       ` Akhil Goyal
  2021-10-20 11:27  3%       ` [dpdk-dev] [PATCH v4 7/8] cryptodev: update fast path APIs to use new flat array Akhil Goyal
  2021-10-20 11:27  7%       ` [dpdk-dev] [PATCH v4 8/8] cryptodev: move device specific structures Akhil Goyal
  2 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-10-20 11:27 UTC (permalink / raw)
  To: dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
	konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
	adwivedi, ciara.power, Akhil Goyal, Rebecca Troy

Move fastpath inline function pointers from rte_cryptodev into a
separate structure accessed via a flat array.
The intention is to make rte_cryptodev and related structures private
to avoid future API/ABI breakages.
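
The lifecycle of the new array, as implemented below, is roughly: every
slot starts out pointing at dummy callbacks that fail with ENOTSUP, the
real PMD callbacks are published on device start, and the slot is reset
again on stop/release. A condensed sketch of the calls the patch adds:

	/* at init and on stop/release: dummies that set rte_errno = ENOTSUP */
	cryptodev_fp_ops_reset(rte_crypto_fp_ops + dev_id);

	/* on rte_cryptodev_start(): publish the PMD's burst functions */
	cryptodev_fp_ops_set(rte_crypto_fp_ops + dev_id, dev);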

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Tested-by: Rebecca Troy <rebecca.troy@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/cryptodev/cryptodev_pmd.c      | 53 +++++++++++++++++++++++++++++-
 lib/cryptodev/cryptodev_pmd.h      | 11 +++++++
 lib/cryptodev/rte_cryptodev.c      | 19 +++++++++++
 lib/cryptodev/rte_cryptodev_core.h | 29 ++++++++++++++++
 lib/cryptodev/version.map          |  5 +++
 5 files changed, 116 insertions(+), 1 deletion(-)

diff --git a/lib/cryptodev/cryptodev_pmd.c b/lib/cryptodev/cryptodev_pmd.c
index 44a70ecb35..fd74543682 100644
--- a/lib/cryptodev/cryptodev_pmd.c
+++ b/lib/cryptodev/cryptodev_pmd.c
@@ -3,7 +3,7 @@
  */
 
 #include <sys/queue.h>
-
+#include <rte_errno.h>
 #include <rte_string_fns.h>
 #include <rte_malloc.h>
 
@@ -160,3 +160,54 @@ rte_cryptodev_pmd_destroy(struct rte_cryptodev *cryptodev)
 
 	return 0;
 }
+
+static uint16_t
+dummy_crypto_enqueue_burst(__rte_unused void *qp,
+			   __rte_unused struct rte_crypto_op **ops,
+			   __rte_unused uint16_t nb_ops)
+{
+	CDEV_LOG_ERR(
+		"crypto enqueue burst requested for unconfigured device");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+dummy_crypto_dequeue_burst(__rte_unused void *qp,
+			   __rte_unused struct rte_crypto_op **ops,
+			   __rte_unused uint16_t nb_ops)
+{
+	CDEV_LOG_ERR(
+		"crypto dequeue burst requested for unconfigured device");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+void
+cryptodev_fp_ops_reset(struct rte_crypto_fp_ops *fp_ops)
+{
+	static struct rte_cryptodev_cb_rcu dummy_cb[RTE_MAX_QUEUES_PER_PORT];
+	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+	static const struct rte_crypto_fp_ops dummy = {
+		.enqueue_burst = dummy_crypto_enqueue_burst,
+		.dequeue_burst = dummy_crypto_dequeue_burst,
+		.qp = {
+			.data = dummy_data,
+			.enq_cb = dummy_cb,
+			.deq_cb = dummy_cb,
+		},
+	};
+
+	*fp_ops = dummy;
+}
+
+void
+cryptodev_fp_ops_set(struct rte_crypto_fp_ops *fp_ops,
+		     const struct rte_cryptodev *dev)
+{
+	fp_ops->enqueue_burst = dev->enqueue_burst;
+	fp_ops->dequeue_burst = dev->dequeue_burst;
+	fp_ops->qp.data = dev->data->queue_pairs;
+	fp_ops->qp.enq_cb = dev->enq_cbs;
+	fp_ops->qp.deq_cb = dev->deq_cbs;
+}
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index 36606dd10b..a71edbb991 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -516,6 +516,17 @@ RTE_INIT(init_ ##driver_id)\
 	driver_id = rte_cryptodev_allocate_driver(&crypto_drv, &(drv));\
 }
 
+/* Reset crypto device fastpath APIs to dummy values. */
+__rte_internal
+void
+cryptodev_fp_ops_reset(struct rte_crypto_fp_ops *fp_ops);
+
+/* Setup crypto device fastpath APIs. */
+__rte_internal
+void
+cryptodev_fp_ops_set(struct rte_crypto_fp_ops *fp_ops,
+		     const struct rte_cryptodev *dev);
+
 static inline void *
 get_sym_session_private_data(const struct rte_cryptodev_sym_session *sess,
 		uint8_t driver_id) {
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index eb86e629aa..305e013ebb 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -53,6 +53,9 @@ static struct rte_cryptodev_global cryptodev_globals = {
 		.nb_devs		= 0
 };
 
+/* Public fastpath APIs. */
+struct rte_crypto_fp_ops rte_crypto_fp_ops[RTE_CRYPTO_MAX_DEVS];
+
 /* spinlock for crypto device callbacks */
 static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
 
@@ -917,6 +920,8 @@ rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
 
 	dev_id = cryptodev->data->dev_id;
 
+	cryptodev_fp_ops_reset(rte_crypto_fp_ops + dev_id);
+
 	/* Close device only if device operations have been set */
 	if (cryptodev->dev_ops) {
 		ret = rte_cryptodev_close(dev_id);
@@ -1080,6 +1085,9 @@ rte_cryptodev_start(uint8_t dev_id)
 	}
 
 	diag = (*dev->dev_ops->dev_start)(dev);
+	/* expose selection of PMD fast-path functions */
+	cryptodev_fp_ops_set(rte_crypto_fp_ops + dev_id, dev);
+
 	rte_cryptodev_trace_start(dev_id, diag);
 	if (diag == 0)
 		dev->data->dev_started = 1;
@@ -1109,6 +1117,9 @@ rte_cryptodev_stop(uint8_t dev_id)
 		return;
 	}
 
+	/* point fast-path functions to dummy ones */
+	cryptodev_fp_ops_reset(rte_crypto_fp_ops + dev_id);
+
 	(*dev->dev_ops->dev_stop)(dev);
 	rte_cryptodev_trace_stop(dev_id);
 	dev->data->dev_started = 0;
@@ -2411,3 +2422,11 @@ rte_cryptodev_allocate_driver(struct cryptodev_driver *crypto_drv,
 
 	return nb_drivers++;
 }
+
+RTE_INIT(cryptodev_init_fp_ops)
+{
+	uint32_t i;
+
+	for (i = 0; i != RTE_DIM(rte_crypto_fp_ops); i++)
+		cryptodev_fp_ops_reset(rte_crypto_fp_ops + i);
+}
diff --git a/lib/cryptodev/rte_cryptodev_core.h b/lib/cryptodev/rte_cryptodev_core.h
index 1633e55889..2bb9a228c1 100644
--- a/lib/cryptodev/rte_cryptodev_core.h
+++ b/lib/cryptodev/rte_cryptodev_core.h
@@ -25,6 +25,35 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
 		struct rte_crypto_op **ops,	uint16_t nb_ops);
 /**< Enqueue packets for processing on queue pair of a device. */
 
+/**
+ * @internal
+ * Structure used to hold opaque pointers to internal cryptodev
+ * queue pair data.
+ * The main purpose of exposing these pointers at all is to allow
+ * the compiler to fetch this data for fast-path inline functions in advance.
+ */
+struct rte_cryptodev_qpdata {
+	/** points to array of internal queue pair data pointers. */
+	void **data;
+	/** points to array of enqueue callback data pointers */
+	struct rte_cryptodev_cb_rcu *enq_cb;
+	/** points to array of dequeue callback data pointers */
+	struct rte_cryptodev_cb_rcu *deq_cb;
+};
+
+struct rte_crypto_fp_ops {
+	/** PMD enqueue burst function. */
+	enqueue_pkt_burst_t enqueue_burst;
+	/** PMD dequeue burst function. */
+	dequeue_pkt_burst_t dequeue_burst;
+	/** Internal queue pair data pointers. */
+	struct rte_cryptodev_qpdata qp;
+	/** Reserved for future ops. */
+	uintptr_t reserved[3];
+} __rte_cache_aligned;
+
+extern struct rte_crypto_fp_ops rte_crypto_fp_ops[RTE_CRYPTO_MAX_DEVS];
+
 /**
  * @internal
  * The data part, with no function pointers, associated with each device.
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index 43cf937e40..ed62ced221 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -45,6 +45,9 @@ DPDK_22 {
 	rte_cryptodev_sym_session_init;
 	rte_cryptodevs;
 
+	#added in 21.11
+	rte_crypto_fp_ops;
+
 	local: *;
 };
 
@@ -109,6 +112,8 @@ EXPERIMENTAL {
 INTERNAL {
 	global:
 
+	cryptodev_fp_ops_reset;
+	cryptodev_fp_ops_set;
 	rte_cryptodev_allocate_driver;
 	rte_cryptodev_pmd_allocate;
 	rte_cryptodev_pmd_callback_process;
-- 
2.25.1


^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH] port: eventdev port api promoted
  @ 2021-10-20  9:55  3%       ` Kinsella, Ray
  0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-10-20  9:55 UTC (permalink / raw)
  To: Thomas Monjalon, David Marchand, Rahul Shah; +Cc: dev, Cristian Dumitrescu



On 13/10/2021 13:12, Thomas Monjalon wrote:
> +Cc Cristian, the maintainer
> 
> 10/09/2021 15:40, Kinsella, Ray:
>> On 10/09/2021 08:36, David Marchand wrote:
>>> On Fri, Sep 10, 2021 at 9:31 AM Kinsella, Ray <mdr@ashroe.eu> wrote:
>>>> On 09/09/2021 17:40, Rahul Shah wrote:
>>>>> rte_port_eventdev_reader_ops, rte_port_eventdev_writer_nodrops_ops,
>>>>> rte_port_eventdev_writer_ops symbols promoted
>>>>>
>>>>> Signed-off-by: Rahul Shah <rahul.r.shah@intel.com>
>>>>> ---
>>>>>   lib/port/version.map | 8 +++-----
>>>>>   1 file changed, 3 insertions(+), 5 deletions(-)
>>>>
>>>> Hi Rahul,
>>>>
>>>> You need to strip the __rte_experimental attribute in the header file also.
>>>
>>> That's what I first thought... but those are variables, and they were
>>> not marked in the header.
>>
>> My mistake - should have checked.
>>
>>> At least, those symbols must be alphabetically sorted in version.map.
>>>
>>> About checking for experimental mark on variables... I had a patch,
>>> but never got it in.
>>> I think we should instead (forbid such exports and|insist on) rework
>>> API / libraries that rely on public variables.
>>
>> I'll pull together a script to identify all the variables in DPDK.
>> Are you expecting the rework on the port api to be done prior to 21.11?
> 
> Does it mean we should not promote these variables?
> 
> 

So the net-net is that variables are almost impossible to version.
Think about maintaining two parallel versions of the same variable, and having to track and reconcile state between them.

So variables make ABI versioning (and maintenance) harder, and are best avoided.
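
A rough illustration of why (using plain GNU symbol versioning rather
than DPDK's helper macros, and assuming a matching linker version
script): a function can carry two implementations behind one exported
name, but there is no equivalent for data, because callers bind
directly to the single variable's address.

	int rte_foo_v21(void) { return 21; }	/* old behaviour */
	int rte_foo_v22(void) { return 22; }	/* new behaviour */
	__asm__(".symver rte_foo_v21, rte_foo@DPDK_21");  /* old ABI  */
	__asm__(".symver rte_foo_v22, rte_foo@@DPDK_22"); /* default  */

	/* No such trick exists for rte_port_eventdev_reader_ops and
	 * friends: every linked binary reads the one copy of the
	 * variable, so an old and a new layout cannot coexist. */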

In this particular case, I would suggest leaving these as experimental and improving the API post-21.11.

Ray K





^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [EXT] Re: [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle
  2021-10-19 21:27  4%     ` Dmitry Kozlyuk
@ 2021-10-20  9:25  3%       ` Harman Kalra
  0 siblings, 0 replies; 200+ results
From: Harman Kalra @ 2021-10-20  9:25 UTC (permalink / raw)
  To: Dmitry Kozlyuk; +Cc: dev, Bruce Richardson, david.marchand, mdr, thomas



> -----Original Message-----
> From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> Sent: Wednesday, October 20, 2021 2:58 AM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: dev@dpdk.org; Bruce Richardson <bruce.richardson@intel.com>;
> david.marchand@redhat.com; mdr@ashroe.eu; thomas@monjalon.net
> Subject: [EXT] Re: [PATCH v4 3/7] eal/interrupts: avoid direct access to
> interrupt handle
> 
> External Email
> 
> ----------------------------------------------------------------------
> 2021-10-20 00:05 (UTC+0530), Harman Kalra:
> > Making changes to the interrupt framework to use the interrupt handle
> > APIs to get/set any field. Direct access to any of the fields
> > should be avoided to prevent any ABI breakage in the future.
> 
> I get and accept the point that EAL itself should also use the API.
> However, mentioning the ABI is still the wrong wording:
> by definition, there is no ABI between EAL structures and EAL functions.

Sure, I will reword the commit message to drop the mention of ABI.

> 
> >
> > Signed-off-by: Harman Kalra <hkalra@marvell.com>
> > ---
> >  lib/eal/freebsd/eal_interrupts.c |  92 ++++++----
> >  lib/eal/linux/eal_interrupts.c   | 287 +++++++++++++++++++------------
> >  2 files changed, 234 insertions(+), 145 deletions(-)
> >
> > diff --git a/lib/eal/freebsd/eal_interrupts.c
> > b/lib/eal/freebsd/eal_interrupts.c
> [...]
> > @@ -135,9 +137,18 @@ rte_intr_callback_register(const struct
> rte_intr_handle *intr_handle,
> >  				ret = -ENOMEM;
> >  				goto fail;
> >  			} else {
> > -				src->intr_handle = *intr_handle;
> > -				TAILQ_INIT(&src->callbacks);
> > -				TAILQ_INSERT_TAIL(&intr_sources, src, next);
> > +				src->intr_handle = rte_intr_instance_alloc();
> > +				if (src->intr_handle == NULL) {
> > +					RTE_LOG(ERR, EAL, "Can not create
> intr instance\n");
> > +					free(callback);
> > +					ret = -ENOMEM;
> 
> goto fail?

I think the goto is not required, as we are not setting wake_thread = 1 here;
the API will just return the error after unlocking the spinlock and emitting the trace.

> 
> > +				} else {
> > +					rte_intr_instance_copy(src-
> >intr_handle,
> > +							       intr_handle);
> > +					TAILQ_INIT(&src->callbacks);
> > +					TAILQ_INSERT_TAIL(&intr_sources,
> src,
> > +							  next);
> > +				}
> >  			}
> >  		}
> >
> [...]
> > @@ -213,7 +226,7 @@ rte_intr_callback_unregister_pending(const struct
> rte_intr_handle *intr_handle,
> >  	struct rte_intr_callback *cb, *next;
> >
> >  	/* do parameter checking first */
> > -	if (intr_handle == NULL || intr_handle->fd < 0) {
> > +	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
> 
> The handle is checked for NULL inside the accessor, here and in other places:
> grep -R 'intr_handle == NULL ||' lib/eal

Ack, I will remove these NULL checks.

> 
> >  		RTE_LOG(ERR, EAL,
> >  		"Unregistering with invalid input parameter\n");
> >  		return -EINVAL;
> 
> > diff --git a/lib/eal/linux/eal_interrupts.c
> > b/lib/eal/linux/eal_interrupts.c
> [...]

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v5] lib/cmdline: release cl when cmdline exit
  2021-10-18 13:58  4% ` [dpdk-dev] [PATCH v5] lib/cmdline: release cl when cmdline exit zhihongx.peng
@ 2021-10-20  9:22  0%   ` Peng, ZhihongX
  0 siblings, 0 replies; 200+ results
From: Peng, ZhihongX @ 2021-10-20  9:22 UTC (permalink / raw)
  To: olivier.matz, dmitry.kozliuk; +Cc: dev

> -----Original Message-----
> From: Peng, ZhihongX <zhihongx.peng@intel.com>
> Sent: Monday, October 18, 2021 9:59 PM
> To: olivier.matz@6wind.com; dmitry.kozliuk@gmail.com
> Cc: dev@dpdk.org; Peng, ZhihongX <zhihongx.peng@intel.com>
> Subject: [PATCH v5] lib/cmdline: release cl when cmdline exit
> 
> From: Zhihong Peng <zhihongx.peng@intel.com>
> 
> Malloc cl in the cmdline_stdin_new function, so release in the
> cmdline_stdin_exit function is logical, so that cl will not be released alone.
> 
> Fixes: af75078fece3 ("first public release")
> Cc: intel.com
> 
> Signed-off-by: Zhihong Peng <zhihongx.peng@intel.com>
> ---
>  app/test/test.c                        | 1 -
>  app/test/test_cmdline_lib.c            | 1 -
>  doc/guides/rel_notes/release_21_11.rst | 3 +++
>  lib/cmdline/cmdline_socket.c           | 1 +
>  4 files changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/app/test/test.c b/app/test/test.c index 173d202e47..5194131026
> 100644
> --- a/app/test/test.c
> +++ b/app/test/test.c
> @@ -233,7 +233,6 @@ main(int argc, char **argv)
> 
>  		cmdline_interact(cl);
>  		cmdline_stdin_exit(cl);
> -		cmdline_free(cl);
>  	}
>  #endif
>  	ret = 0;
> diff --git a/app/test/test_cmdline_lib.c b/app/test/test_cmdline_lib.c index
> d5a09b4541..6bcfa6511e 100644
> --- a/app/test/test_cmdline_lib.c
> +++ b/app/test/test_cmdline_lib.c
> @@ -174,7 +174,6 @@ test_cmdline_socket_fns(void)
>  	/* void functions */
>  	cmdline_stdin_exit(NULL);
> 
> -	cmdline_free(cl);
>  	return 0;
>  error:
>  	printf("Error: function accepted null parameter!\n"); diff --git
> a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> index d5435a64aa..6aa98d1e34 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -237,6 +237,9 @@ API Changes
>    the crypto/security operation. This field will be used to communicate
>    events such as soft expiry with IPsec in lookaside mode.
> 
> +* cmdline: ``cmdline_stdin_exit()`` now frees the ``cmdline`` structure.
> +  Calls to ``cmdline_free()`` after it need to be deleted from applications.
> +
> 
>  ABI Changes
>  -----------
> diff --git a/lib/cmdline/cmdline_socket.c b/lib/cmdline/cmdline_socket.c
> index 998e8ade25..ebd5343754 100644
> --- a/lib/cmdline/cmdline_socket.c
> +++ b/lib/cmdline/cmdline_socket.c
> @@ -53,4 +53,5 @@ cmdline_stdin_exit(struct cmdline *cl)
>  		return;
> 
>  	terminal_restore(cl);
> +	cmdline_free(cl);
>  }
> --
> 2.25.1

Tested-by: Zhihong Peng <zhihongx.peng@intel.com>
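
For anyone updating an application, the resulting pattern is simply
(a minimal sketch; the parse context and prompt string are illustrative):

	struct cmdline *cl = cmdline_stdin_new(ctx, "prompt> ");
	if (cl == NULL)
		return -1;
	cmdline_interact(cl);
	cmdline_stdin_exit(cl);	/* now also frees cl */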

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v17 0/5] Add PIE support for HQoS library
  2021-10-19 12:45  3%     ` [dpdk-dev] [PATCH v16 " Liguzinski, WojciechX
@ 2021-10-20  7:49  3%       ` Liguzinski, WojciechX
  2021-10-25 11:32  3%         ` [dpdk-dev] [PATCH v18 " Liguzinski, WojciechX
  0 siblings, 1 reply; 200+ results
From: Liguzinski, WojciechX @ 2021-10-20  7:49 UTC (permalink / raw)
  To: dev, jasvinder.singh, cristian.dumitrescu; +Cc: megha.ajmera

The DPDK sched library is equipped with a mechanism that protects it from the
bufferbloat problem, a situation in which excess buffering in the network causes
high latency and latency variation. Currently, it supports RED for active queue
management. However, more advanced queue management is required to address this
problem and provide a desirable quality of service to users.

This series proposes the use of a new algorithm called "PIE" (Proportional
Integral controller Enhanced) that can effectively and directly control queuing
latency to address the bufferbloat problem.

The implementation of the mentioned functionality includes modifying existing
data structures, adding a new set of data structures to the library, and adding
PIE-related APIs. This affects structures in the public API/ABI, which is why a
deprecation notice is going to be prepared and sent.
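
For background, the core of PIE (RFC 8033) is a periodic update of a drop
probability from the current and previous queue delay estimates. A minimal
sketch of that control law only (names and tuning constants here are
illustrative and are not the rte_pie.h API):

	static double pie_prob;       /* current drop probability */
	static double qdelay_old;     /* queue delay at previous update */

	static void
	pie_update(double qdelay)     /* estimated queue delay, in seconds */
	{
		const double target = 0.015;             /* latency target */
		const double alpha = 0.125, beta = 1.25; /* controller gains */

		pie_prob += alpha * (qdelay - target) +
			    beta * (qdelay - qdelay_old);
		if (pie_prob < 0.0)
			pie_prob = 0.0;
		else if (pie_prob > 1.0)
			pie_prob = 1.0;
		qdelay_old = qdelay;
	}

	/* on enqueue, drop with probability pie_prob instead of waiting
	 * for the queue to overflow (tail drop) */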

Liguzinski, WojciechX (5):
  sched: add PIE based congestion management
  example/qos_sched: add PIE support
  example/ip_pipeline: add PIE support
  doc/guides/prog_guide: added PIE
  app/test: add tests for PIE

 app/test/meson.build                         |    4 +
 app/test/test_pie.c                          | 1065 ++++++++++++++++++
 config/rte_config.h                          |    1 -
 doc/guides/prog_guide/glossary.rst           |    3 +
 doc/guides/prog_guide/qos_framework.rst      |   64 +-
 doc/guides/prog_guide/traffic_management.rst |   13 +-
 drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
 examples/ip_pipeline/tmgr.c                  |  142 +--
 examples/qos_sched/app_thread.c              |    1 -
 examples/qos_sched/cfg_file.c                |  127 ++-
 examples/qos_sched/cfg_file.h                |    5 +
 examples/qos_sched/init.c                    |   27 +-
 examples/qos_sched/main.h                    |    3 +
 examples/qos_sched/profile.cfg               |  196 ++--
 lib/sched/meson.build                        |   10 +-
 lib/sched/rte_pie.c                          |   86 ++
 lib/sched/rte_pie.h                          |  398 +++++++
 lib/sched/rte_sched.c                        |  241 ++--
 lib/sched/rte_sched.h                        |   63 +-
 lib/sched/version.map                        |    4 +
 20 files changed, 2173 insertions(+), 286 deletions(-)
 create mode 100644 app/test/test_pie.c
 create mode 100644 lib/sched/rte_pie.c
 create mode 100644 lib/sched/rte_pie.h

-- 
2.25.1

Series-acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle
  2021-10-19 18:35  1%   ` [dpdk-dev] [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
@ 2021-10-19 21:27  4%     ` Dmitry Kozlyuk
  2021-10-20  9:25  3%       ` [dpdk-dev] [EXT] " Harman Kalra
  0 siblings, 1 reply; 200+ results
From: Dmitry Kozlyuk @ 2021-10-19 21:27 UTC (permalink / raw)
  To: Harman Kalra; +Cc: dev, Bruce Richardson, david.marchand, mdr, thomas

2021-10-20 00:05 (UTC+0530), Harman Kalra:
> Making changes to the interrupt framework to use the interrupt handle
> APIs to get/set any field. Direct access to any of the fields
> should be avoided to prevent any ABI breakage in the future.

I get and accept the point that EAL itself should also use the API.
However, mentioning the ABI is still the wrong wording:
by definition, there is no ABI between EAL structures and EAL functions.

> 
> Signed-off-by: Harman Kalra <hkalra@marvell.com>
> ---
>  lib/eal/freebsd/eal_interrupts.c |  92 ++++++----
>  lib/eal/linux/eal_interrupts.c   | 287 +++++++++++++++++++------------
>  2 files changed, 234 insertions(+), 145 deletions(-)
> 
> diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
[...]
> @@ -135,9 +137,18 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
>  				ret = -ENOMEM;
>  				goto fail;
>  			} else {
> -				src->intr_handle = *intr_handle;
> -				TAILQ_INIT(&src->callbacks);
> -				TAILQ_INSERT_TAIL(&intr_sources, src, next);
> +				src->intr_handle = rte_intr_instance_alloc();
> +				if (src->intr_handle == NULL) {
> +					RTE_LOG(ERR, EAL, "Can not create intr instance\n");
> +					free(callback);
> +					ret = -ENOMEM;

goto fail?

> +				} else {
> +					rte_intr_instance_copy(src->intr_handle,
> +							       intr_handle);
> +					TAILQ_INIT(&src->callbacks);
> +					TAILQ_INSERT_TAIL(&intr_sources, src,
> +							  next);
> +				}
>  			}
>  		}
>  
[...]
> @@ -213,7 +226,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
>  	struct rte_intr_callback *cb, *next;
>  
>  	/* do parameter checking first */
> -	if (intr_handle == NULL || intr_handle->fd < 0) {
> +	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {

The handle is checked for NULL inside the accessor, here and in other places:
grep -R 'intr_handle == NULL ||' lib/eal

>  		RTE_LOG(ERR, EAL,
>  		"Unregistering with invalid input parameter\n");
>  		return -EINVAL;

> diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
[...]

^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle
  2021-10-19 18:35  4% ` [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal Harman Kalra
@ 2021-10-19 18:35  1%   ` Harman Kalra
  2021-10-19 21:27  4%     ` Dmitry Kozlyuk
  0 siblings, 1 reply; 200+ results
From: Harman Kalra @ 2021-10-19 18:35 UTC (permalink / raw)
  To: dev, Harman Kalra, Bruce Richardson
  Cc: david.marchand, dmitry.kozliuk, mdr, thomas

Making changes to the interrupt framework to use the interrupt handle
APIs to get/set any field. Direct access to any of the fields
should be avoided to prevent any ABI breakage in the future.
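
The mechanical pattern applied throughout both files is the one below
(a representative before/after drawn from the hunks, not a new change):

	/* before: reaching into the handle directly */
	if (intr_handle->fd < 0)
		return -EINVAL;

	/* after: going through the accessor, keeping the struct opaque */
	if (rte_intr_fd_get(intr_handle) < 0)
		return -EINVAL;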

Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
 lib/eal/freebsd/eal_interrupts.c |  92 ++++++----
 lib/eal/linux/eal_interrupts.c   | 287 +++++++++++++++++++------------
 2 files changed, 234 insertions(+), 145 deletions(-)

diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 86810845fe..846ca4aa89 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -40,7 +40,7 @@ struct rte_intr_callback {
 
 struct rte_intr_source {
 	TAILQ_ENTRY(rte_intr_source) next;
-	struct rte_intr_handle intr_handle; /**< interrupt handle */
+	struct rte_intr_handle *intr_handle; /**< interrupt handle */
 	struct rte_intr_cb_list callbacks;  /**< user callbacks */
 	uint32_t active;
 };
@@ -60,7 +60,7 @@ static int
 intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
 {
 	/* alarm callbacks are special case */
-	if (ih->type == RTE_INTR_HANDLE_ALARM) {
+	if (rte_intr_type_get(ih) == RTE_INTR_HANDLE_ALARM) {
 		uint64_t timeout_ns;
 
 		/* get soonest alarm timeout */
@@ -75,7 +75,7 @@ intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
 	} else {
 		ke->filter = EVFILT_READ;
 	}
-	ke->ident = ih->fd;
+	ke->ident = rte_intr_fd_get(ih);
 
 	return 0;
 }
@@ -89,7 +89,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	int ret = 0, add_event = 0;
 
 	/* first do parameter checking */
-	if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0 ||
+	    cb == NULL) {
 		RTE_LOG(ERR, EAL,
 			"Registering with invalid input parameter\n");
 		return -EINVAL;
@@ -103,7 +104,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 
 	/* find the source for this intr_handle */
 	TAILQ_FOREACH(src, &intr_sources, next) {
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_fd_get(src->intr_handle) ==
+		    rte_intr_fd_get(intr_handle))
 			break;
 	}
 
@@ -112,8 +114,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	 * thing on the list should be eal_alarm_callback() and we may
 	 * be called just to reset the timer.
 	 */
-	if (src != NULL && src->intr_handle.type == RTE_INTR_HANDLE_ALARM &&
-		 !TAILQ_EMPTY(&src->callbacks)) {
+	if (src != NULL && rte_intr_type_get(src->intr_handle) ==
+		RTE_INTR_HANDLE_ALARM && !TAILQ_EMPTY(&src->callbacks)) {
 		callback = NULL;
 	} else {
 		/* allocate a new interrupt callback entity */
@@ -135,9 +137,18 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 				ret = -ENOMEM;
 				goto fail;
 			} else {
-				src->intr_handle = *intr_handle;
-				TAILQ_INIT(&src->callbacks);
-				TAILQ_INSERT_TAIL(&intr_sources, src, next);
+				src->intr_handle = rte_intr_instance_alloc();
+				if (src->intr_handle == NULL) {
+					RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+					free(callback);
+					ret = -ENOMEM;
+				} else {
+					rte_intr_instance_copy(src->intr_handle,
+							       intr_handle);
+					TAILQ_INIT(&src->callbacks);
+					TAILQ_INSERT_TAIL(&intr_sources, src,
+							  next);
+				}
 			}
 		}
 
@@ -151,7 +162,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	/* add events to the queue. timer events are special as we need to
 	 * re-set the timer.
 	 */
-	if (add_event || src->intr_handle.type == RTE_INTR_HANDLE_ALARM) {
+	if (add_event || rte_intr_type_get(src->intr_handle) ==
+							RTE_INTR_HANDLE_ALARM) {
 		struct kevent ke;
 
 		memset(&ke, 0, sizeof(ke));
@@ -173,12 +185,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 			 */
 			if (errno == ENODEV)
 				RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n",
-					src->intr_handle.fd);
+				rte_intr_fd_get(src->intr_handle));
 			else
 				RTE_LOG(ERR, EAL, "Error adding fd %d "
-						"kevent, %s\n",
-						src->intr_handle.fd,
-						strerror(errno));
+					"kevent, %s\n",
+					rte_intr_fd_get(
+							src->intr_handle),
+					strerror(errno));
 			ret = -errno;
 			goto fail;
 		}
@@ -213,7 +226,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 	struct rte_intr_callback *cb, *next;
 
 	/* do parameter checking first */
-	if (intr_handle == NULL || intr_handle->fd < 0) {
+	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
 		RTE_LOG(ERR, EAL,
 		"Unregistering with invalid input parameter\n");
 		return -EINVAL;
@@ -228,7 +241,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 
 	/* check if the insterrupt source for the fd is existent */
 	TAILQ_FOREACH(src, &intr_sources, next)
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_fd_get(src->intr_handle) ==
+					rte_intr_fd_get(intr_handle))
 			break;
 
 	/* No interrupt source registered for the fd */
@@ -268,7 +282,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 	struct rte_intr_callback *cb, *next;
 
 	/* do parameter checking first */
-	if (intr_handle == NULL || intr_handle->fd < 0) {
+	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
 		RTE_LOG(ERR, EAL,
 		"Unregistering with invalid input parameter\n");
 		return -EINVAL;
@@ -282,7 +296,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 
 	/* check if the insterrupt source for the fd is existent */
 	TAILQ_FOREACH(src, &intr_sources, next)
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_fd_get(src->intr_handle) ==
+					rte_intr_fd_get(intr_handle))
 			break;
 
 	/* No interrupt source registered for the fd */
@@ -314,7 +329,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 		 */
 		if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
 			RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
-				src->intr_handle.fd, strerror(errno));
+				rte_intr_fd_get(src->intr_handle),
+				strerror(errno));
 			/* removing non-existent even is an expected condition
 			 * in some circumstances (e.g. oneshot events).
 			 */
@@ -365,17 +381,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
 	if (intr_handle == NULL)
 		return -1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
 		rc = 0;
 		goto out;
 	}
 
-	if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+	if (rte_intr_fd_get(intr_handle) < 0 ||
+				rte_intr_dev_fd_get(intr_handle) < 0) {
 		rc = -1;
 		goto out;
 	}
 
-	switch (intr_handle->type) {
+	switch (rte_intr_type_get(intr_handle)) {
 	/* not used at this moment */
 	case RTE_INTR_HANDLE_ALARM:
 		rc = -1;
@@ -388,7 +405,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
 	default:
 		RTE_LOG(ERR, EAL,
 			"Unknown handle type of fd %d\n",
-					intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		rc = -1;
 		break;
 	}
@@ -406,17 +423,18 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
 	if (intr_handle == NULL)
 		return -1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
 		rc = 0;
 		goto out;
 	}
 
-	if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+	if (rte_intr_fd_get(intr_handle) < 0 ||
+				rte_intr_dev_fd_get(intr_handle) < 0) {
 		rc = -1;
 		goto out;
 	}
 
-	switch (intr_handle->type) {
+	switch (rte_intr_type_get(intr_handle)) {
 	/* not used at this moment */
 	case RTE_INTR_HANDLE_ALARM:
 		rc = -1;
@@ -429,7 +447,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
 	default:
 		RTE_LOG(ERR, EAL,
 			"Unknown handle type of fd %d\n",
-					intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		rc = -1;
 		break;
 	}
@@ -441,7 +459,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
 int
 rte_intr_ack(const struct rte_intr_handle *intr_handle)
 {
-	if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+	if (intr_handle &&
+	    rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
 		return 0;
 
 	return -1;
@@ -463,7 +482,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 
 		rte_spinlock_lock(&intr_lock);
 		TAILQ_FOREACH(src, &intr_sources, next)
-			if (src->intr_handle.fd == event_fd)
+			if (rte_intr_fd_get(src->intr_handle) ==
+								event_fd)
 				break;
 		if (src == NULL) {
 			rte_spinlock_unlock(&intr_lock);
@@ -475,7 +495,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 		rte_spinlock_unlock(&intr_lock);
 
 		/* set the length to be read dor different handle type */
-		switch (src->intr_handle.type) {
+		switch (rte_intr_type_get(src->intr_handle)) {
 		case RTE_INTR_HANDLE_ALARM:
 			bytes_read = 0;
 			call = true;
@@ -546,7 +566,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 				/* mark for deletion from the queue */
 				ke.flags = EV_DELETE;
 
-				if (intr_source_to_kevent(&src->intr_handle, &ke) < 0) {
+				if (intr_source_to_kevent(src->intr_handle,
+							  &ke) < 0) {
 					RTE_LOG(ERR, EAL, "Cannot convert to kevent\n");
 					rte_spinlock_unlock(&intr_lock);
 					return;
@@ -557,7 +578,9 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 				 */
 				if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
 					RTE_LOG(ERR, EAL, "Error removing fd %d kevent, "
-						"%s\n", src->intr_handle.fd,
+						"%s\n",
+						rte_intr_fd_get(
+							src->intr_handle),
 						strerror(errno));
 					/* removing non-existent even is an expected
 					 * condition in some circumstances
@@ -567,7 +590,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 
 				TAILQ_REMOVE(&src->callbacks, cb, next);
 				if (cb->ucb_fn)
-					cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+					cb->ucb_fn(src->intr_handle,
+						   cb->cb_arg);
 				free(cb);
 			}
 		}
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index 22b3b7bcd9..a250a9df66 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -20,6 +20,7 @@
 #include <stdbool.h>
 
 #include <rte_common.h>
+#include <rte_epoll.h>
 #include <rte_interrupts.h>
 #include <rte_memory.h>
 #include <rte_launch.h>
@@ -82,7 +83,7 @@ struct rte_intr_callback {
 
 struct rte_intr_source {
 	TAILQ_ENTRY(rte_intr_source) next;
-	struct rte_intr_handle intr_handle; /**< interrupt handle */
+	struct rte_intr_handle *intr_handle; /**< interrupt handle */
 	struct rte_intr_cb_list callbacks;  /**< user callbacks */
 	uint32_t active;
 };
@@ -112,7 +113,7 @@ static int
 vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 	int *fd_ptr;
 
 	len = sizeof(irq_set_buf);
@@ -125,13 +126,14 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set->start = 0;
 	fd_ptr = (int *) &irq_set->data;
-	*fd_ptr = intr_handle->fd;
+	*fd_ptr = rte_intr_fd_get(intr_handle);
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -144,11 +146,11 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 	return 0;
@@ -159,7 +161,7 @@ static int
 vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 
 	len = sizeof(struct vfio_irq_set);
 
@@ -171,11 +173,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -187,11 +190,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL,
-			"Error disabling INTx interrupts for fd %d\n", intr_handle->fd);
+			"Error disabling INTx interrupts for fd %d\n",
+			rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 	return 0;
@@ -202,6 +206,7 @@ static int
 vfio_ack_intx(const struct rte_intr_handle *intr_handle)
 {
 	struct vfio_irq_set irq_set;
+	int vfio_dev_fd;
 
 	/* unmask INTx */
 	memset(&irq_set, 0, sizeof(irq_set));
@@ -211,9 +216,10 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle)
 	irq_set.index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set.start = 0;
 
-	if (ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
 		RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
-			intr_handle->fd);
+			rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 	return 0;
@@ -225,7 +231,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
 	int len, ret;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
 	struct vfio_irq_set *irq_set;
-	int *fd_ptr;
+	int *fd_ptr, vfio_dev_fd;
 
 	len = sizeof(irq_set_buf);
 
@@ -236,13 +242,14 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
 	irq_set->start = 0;
 	fd_ptr = (int *) &irq_set->data;
-	*fd_ptr = intr_handle->fd;
+	*fd_ptr = rte_intr_fd_get(intr_handle);
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 	return 0;
@@ -253,7 +260,7 @@ static int
 vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 
 	len = sizeof(struct vfio_irq_set);
 
@@ -264,11 +271,13 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret)
 		RTE_LOG(ERR, EAL,
-			"Error disabling MSI interrupts for fd %d\n", intr_handle->fd);
+			"Error disabling MSI interrupts for fd %d\n",
+			rte_intr_fd_get(intr_handle));
 
 	return ret;
 }
@@ -279,30 +288,34 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) {
 	int len, ret;
 	char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
 	struct vfio_irq_set *irq_set;
-	int *fd_ptr;
+	int *fd_ptr, vfio_dev_fd, i;
 
 	len = sizeof(irq_set_buf);
 
 	irq_set = (struct vfio_irq_set *) irq_set_buf;
 	irq_set->argsz = len;
 	/* 0 < irq_set->count < RTE_MAX_RXTX_INTR_VEC_ID + 1 */
-	irq_set->count = intr_handle->max_intr ?
-		(intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID + 1 ?
-		RTE_MAX_RXTX_INTR_VEC_ID + 1 : intr_handle->max_intr) : 1;
+	irq_set->count = rte_intr_max_intr_get(intr_handle) ?
+		(rte_intr_max_intr_get(intr_handle) >
+		 RTE_MAX_RXTX_INTR_VEC_ID + 1 ?	RTE_MAX_RXTX_INTR_VEC_ID + 1 :
+		 rte_intr_max_intr_get(intr_handle)) : 1;
+
 	irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
 	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
 	irq_set->start = 0;
 	fd_ptr = (int *) &irq_set->data;
 	/* INTR vector offset 0 reserve for non-efds mapping */
-	fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = intr_handle->fd;
-	memcpy(&fd_ptr[RTE_INTR_VEC_RXTX_OFFSET], intr_handle->efds,
-		sizeof(*intr_handle->efds) * intr_handle->nb_efd);
+	fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = rte_intr_fd_get(intr_handle);
+	for (i = 0; i < rte_intr_nb_efd_get(intr_handle); i++)
+		fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] =
+			rte_intr_efds_index_get(intr_handle, i);
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -314,7 +327,7 @@ static int
 vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 
 	len = sizeof(struct vfio_irq_set);
 
@@ -325,11 +338,13 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret)
 		RTE_LOG(ERR, EAL,
-			"Error disabling MSI-X interrupts for fd %d\n", intr_handle->fd);
+			"Error disabling MSI-X interrupts for fd %d\n",
+			rte_intr_fd_get(intr_handle));
 
 	return ret;
 }
@@ -342,7 +357,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
 	int len, ret;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
 	struct vfio_irq_set *irq_set;
-	int *fd_ptr;
+	int *fd_ptr, vfio_dev_fd;
 
 	len = sizeof(irq_set_buf);
 
@@ -354,13 +369,14 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
 	irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
 	irq_set->start = 0;
 	fd_ptr = (int *) &irq_set->data;
-	*fd_ptr = intr_handle->fd;
+	*fd_ptr = rte_intr_fd_get(intr_handle);
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -373,7 +389,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
 {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 
 	len = sizeof(struct vfio_irq_set);
 
@@ -384,11 +400,12 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
 	irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret)
 		RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n",
-			intr_handle->fd);
+			rte_intr_fd_get(intr_handle));
 
 	return ret;
 }
@@ -399,20 +416,22 @@ static int
 uio_intx_intr_disable(const struct rte_intr_handle *intr_handle)
 {
 	unsigned char command_high;
+	int uio_cfg_fd;
 
 	/* use UIO config file descriptor for uio_pci_generic */
-	if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+	uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+	if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
 		RTE_LOG(ERR, EAL,
 			"Error reading interrupts status for fd %d\n",
-			intr_handle->uio_cfg_fd);
+			uio_cfg_fd);
 		return -1;
 	}
 	/* disable interrupts */
 	command_high |= 0x4;
-	if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+	if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
 		RTE_LOG(ERR, EAL,
 			"Error disabling interrupts for fd %d\n",
-			intr_handle->uio_cfg_fd);
+			uio_cfg_fd);
 		return -1;
 	}
 
@@ -423,20 +442,22 @@ static int
 uio_intx_intr_enable(const struct rte_intr_handle *intr_handle)
 {
 	unsigned char command_high;
+	int uio_cfg_fd;
 
 	/* use UIO config file descriptor for uio_pci_generic */
-	if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+	uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+	if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
 		RTE_LOG(ERR, EAL,
 			"Error reading interrupts status for fd %d\n",
-			intr_handle->uio_cfg_fd);
+			uio_cfg_fd);
 		return -1;
 	}
 	/* enable interrupts */
 	command_high &= ~0x4;
-	if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+	if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
 		RTE_LOG(ERR, EAL,
 			"Error enabling interrupts for fd %d\n",
-			intr_handle->uio_cfg_fd);
+			uio_cfg_fd);
 		return -1;
 	}
 
@@ -448,10 +469,11 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle)
 {
 	const int value = 0;
 
-	if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+	if (write(rte_intr_fd_get(intr_handle), &value,
+		  sizeof(value)) < 0) {
 		RTE_LOG(ERR, EAL,
 			"Error disabling interrupts for fd %d (%s)\n",
-			intr_handle->fd, strerror(errno));
+			rte_intr_fd_get(intr_handle), strerror(errno));
 		return -1;
 	}
 	return 0;
@@ -462,10 +484,11 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
 {
 	const int value = 1;
 
-	if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+	if (write(rte_intr_fd_get(intr_handle), &value,
+		  sizeof(value)) < 0) {
 		RTE_LOG(ERR, EAL,
 			"Error enabling interrupts for fd %d (%s)\n",
-			intr_handle->fd, strerror(errno));
+			rte_intr_fd_get(intr_handle), strerror(errno));
 		return -1;
 	}
 	return 0;
@@ -482,7 +505,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	wake_thread = 0;
 
 	/* first do parameter checking */
-	if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0 ||
+	    cb == NULL) {
 		RTE_LOG(ERR, EAL,
 			"Registering with invalid input parameter\n");
 		return -EINVAL;
@@ -503,7 +527,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 
 	/* check if there is at least one callback registered for the fd */
 	TAILQ_FOREACH(src, &intr_sources, next) {
-		if (src->intr_handle.fd == intr_handle->fd) {
+		if (rte_intr_fd_get(src->intr_handle) ==
+					rte_intr_fd_get(intr_handle)) {
 			/* we had no interrupts for this */
 			if (TAILQ_EMPTY(&src->callbacks))
 				wake_thread = 1;
@@ -522,12 +547,21 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 			free(callback);
 			ret = -ENOMEM;
 		} else {
-			src->intr_handle = *intr_handle;
-			TAILQ_INIT(&src->callbacks);
-			TAILQ_INSERT_TAIL(&(src->callbacks), callback, next);
-			TAILQ_INSERT_TAIL(&intr_sources, src, next);
-			wake_thread = 1;
-			ret = 0;
+			src->intr_handle = rte_intr_instance_alloc();
+			if (src->intr_handle == NULL) {
+				RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+				free(callback);
+				ret = -ENOMEM;
+			} else {
+				rte_intr_instance_copy(src->intr_handle,
+						       intr_handle);
+				TAILQ_INIT(&src->callbacks);
+				TAILQ_INSERT_TAIL(&(src->callbacks), callback,
+						  next);
+				TAILQ_INSERT_TAIL(&intr_sources, src, next);
+				wake_thread = 1;
+				ret = 0;
+			}
 		}
 	}
 
@@ -555,7 +589,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 	struct rte_intr_callback *cb, *next;
 
 	/* do parameter checking first */
-	if (intr_handle == NULL || intr_handle->fd < 0) {
+	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
 		RTE_LOG(ERR, EAL,
 		"Unregistering with invalid input parameter\n");
 		return -EINVAL;
@@ -565,7 +599,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 
 	/* check if the insterrupt source for the fd is existent */
 	TAILQ_FOREACH(src, &intr_sources, next)
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_fd_get(src->intr_handle) ==
+					rte_intr_fd_get(intr_handle))
 			break;
 
 	/* No interrupt source registered for the fd */
@@ -605,7 +640,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 	struct rte_intr_callback *cb, *next;
 
 	/* do parameter checking first */
-	if (intr_handle == NULL || intr_handle->fd < 0) {
+	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
 		RTE_LOG(ERR, EAL,
 		"Unregistering with invalid input parameter\n");
 		return -EINVAL;
@@ -615,7 +650,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 
 	/* check if the interrupt source for the fd is existent */
 	TAILQ_FOREACH(src, &intr_sources, next)
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_fd_get(src->intr_handle) ==
+					rte_intr_fd_get(intr_handle))
 			break;
 
 	/* No interrupt source registered for the fd */
@@ -646,6 +682,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 		/* all callbacks for that source are removed. */
 		if (TAILQ_EMPTY(&src->callbacks)) {
 			TAILQ_REMOVE(&intr_sources, src, next);
+			rte_intr_instance_free(src->intr_handle);
 			free(src);
 		}
 	}
@@ -677,22 +714,23 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
 int
 rte_intr_enable(const struct rte_intr_handle *intr_handle)
 {
-	int rc = 0;
+	int rc = 0, uio_cfg_fd;
 
 	if (intr_handle == NULL)
 		return -1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
 		rc = 0;
 		goto out;
 	}
 
-	if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+	uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+	if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
 		rc = -1;
 		goto out;
 	}
 
-	switch (intr_handle->type){
+	switch (rte_intr_type_get(intr_handle)) {
 	/* write to the uio fd to enable the interrupt */
 	case RTE_INTR_HANDLE_UIO:
 		if (uio_intr_enable(intr_handle))
@@ -734,7 +772,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
 	default:
 		RTE_LOG(ERR, EAL,
 			"Unknown handle type of fd %d\n",
-					intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		rc = -1;
 		break;
 	}
@@ -757,13 +795,17 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
 int
 rte_intr_ack(const struct rte_intr_handle *intr_handle)
 {
-	if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+	int uio_cfg_fd;
+
+	if (intr_handle && rte_intr_type_get(intr_handle) ==
+							RTE_INTR_HANDLE_VDEV)
 		return 0;
 
-	if (!intr_handle || intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0)
+	uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+	if (!intr_handle || rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0)
 		return -1;
 
-	switch (intr_handle->type) {
+	switch (rte_intr_type_get(intr_handle)) {
 	/* Both acking and enabling are same for UIO */
 	case RTE_INTR_HANDLE_UIO:
 		if (uio_intr_enable(intr_handle))
@@ -796,7 +838,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
 	/* unknown handle type */
 	default:
 		RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
-			intr_handle->fd);
+			rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -806,22 +848,23 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
 int
 rte_intr_disable(const struct rte_intr_handle *intr_handle)
 {
-	int rc = 0;
+	int rc = 0, uio_cfg_fd;
 
 	if (intr_handle == NULL)
 		return -1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
 		rc = 0;
 		goto out;
 	}
 
-	if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+	uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+	if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
 		rc = -1;
 		goto out;
 	}
 
-	switch (intr_handle->type){
+	switch (rte_intr_type_get(intr_handle)) {
 	/* write to the uio fd to disable the interrupt */
 	case RTE_INTR_HANDLE_UIO:
 		if (uio_intr_disable(intr_handle))
@@ -863,7 +906,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
 	default:
 		RTE_LOG(ERR, EAL,
 			"Unknown handle type of fd %d\n",
-					intr_handle->fd);
+			rte_intr_fd_get(intr_handle));
 		rc = -1;
 		break;
 	}
@@ -896,7 +939,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 		}
 		rte_spinlock_lock(&intr_lock);
 		TAILQ_FOREACH(src, &intr_sources, next)
-			if (src->intr_handle.fd ==
+			if (rte_intr_fd_get(src->intr_handle) ==
 					events[n].data.fd)
 				break;
 		if (src == NULL){
@@ -909,7 +952,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 		rte_spinlock_unlock(&intr_lock);
 
 		/* set the length to be read for different handle type */
-		switch (src->intr_handle.type) {
+		switch (rte_intr_type_get(src->intr_handle)) {
 		case RTE_INTR_HANDLE_UIO:
 		case RTE_INTR_HANDLE_UIO_INTX:
 			bytes_read = sizeof(buf.uio_intr_count);
@@ -973,6 +1016,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 					TAILQ_REMOVE(&src->callbacks, cb, next);
 					free(cb);
 				}
+				rte_intr_instance_free(src->intr_handle);
 				free(src);
 				return -1;
 			} else if (bytes_read == 0)
@@ -1012,7 +1056,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 			if (cb->pending_delete) {
 				TAILQ_REMOVE(&src->callbacks, cb, next);
 				if (cb->ucb_fn)
-					cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+					cb->ucb_fn(src->intr_handle,
+						   cb->cb_arg);
 				free(cb);
 				rv++;
 			}
@@ -1021,6 +1066,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 		/* all callbacks for that source are removed. */
 		if (TAILQ_EMPTY(&src->callbacks)) {
 			TAILQ_REMOVE(&intr_sources, src, next);
+			rte_intr_instance_free(src->intr_handle);
 			free(src);
 		}
 
@@ -1123,16 +1169,18 @@ eal_intr_thread_main(__rte_unused void *arg)
 				continue; /* skip those with no callbacks */
 			memset(&ev, 0, sizeof(ev));
 			ev.events = EPOLLIN | EPOLLPRI | EPOLLRDHUP | EPOLLHUP;
-			ev.data.fd = src->intr_handle.fd;
+			ev.data.fd = rte_intr_fd_get(src->intr_handle);
 
 			/**
 			 * add all the uio device file descriptor
 			 * into wait list.
 			 */
 			if (epoll_ctl(pfd, EPOLL_CTL_ADD,
-					src->intr_handle.fd, &ev) < 0){
+				rte_intr_fd_get(src->intr_handle),
+								&ev) < 0) {
 				rte_panic("Error adding fd %d epoll_ctl, %s\n",
-					src->intr_handle.fd, strerror(errno));
+				rte_intr_fd_get(src->intr_handle),
+				strerror(errno));
 			}
 			else
 				numfds++;
@@ -1185,7 +1233,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
 	int bytes_read = 0;
 	int nbytes;
 
-	switch (intr_handle->type) {
+	switch (rte_intr_type_get(intr_handle)) {
 	case RTE_INTR_HANDLE_UIO:
 	case RTE_INTR_HANDLE_UIO_INTX:
 		bytes_read = sizeof(buf.uio_intr_count);
@@ -1198,7 +1246,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
 		break;
 #endif
 	case RTE_INTR_HANDLE_VDEV:
-		bytes_read = intr_handle->efd_counter_size;
+		bytes_read = rte_intr_efd_counter_size_get(intr_handle);
 		/* For vdev, number of bytes to read is set by driver */
 		break;
 	case RTE_INTR_HANDLE_EXT:
@@ -1419,8 +1467,8 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 	efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
 		(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
 
-	if (!intr_handle || intr_handle->nb_efd == 0 ||
-	    efd_idx >= intr_handle->nb_efd) {
+	if (!intr_handle || rte_intr_nb_efd_get(intr_handle) == 0 ||
+	    efd_idx >= (unsigned int)rte_intr_nb_efd_get(intr_handle)) {
 		RTE_LOG(ERR, EAL, "Wrong intr vector number.\n");
 		return -EPERM;
 	}
@@ -1428,7 +1476,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 	switch (op) {
 	case RTE_INTR_EVENT_ADD:
 		epfd_op = EPOLL_CTL_ADD;
-		rev = &intr_handle->elist[efd_idx];
+		rev = rte_intr_elist_index_get(intr_handle, efd_idx);
 		if (__atomic_load_n(&rev->status,
 				__ATOMIC_RELAXED) != RTE_EPOLL_INVALID) {
 			RTE_LOG(INFO, EAL, "Event already been added.\n");
@@ -1442,7 +1490,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 		epdata->cb_fun = (rte_intr_event_cb_t)eal_intr_proc_rxtx_intr;
 		epdata->cb_arg = (void *)intr_handle;
 		rc = rte_epoll_ctl(epfd, epfd_op,
-				   intr_handle->efds[efd_idx], rev);
+				   rte_intr_efds_index_get(intr_handle,
+								  efd_idx),
+				   rev);
 		if (!rc)
 			RTE_LOG(DEBUG, EAL,
 				"efd %d associated with vec %d added on epfd %d"
@@ -1452,7 +1502,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 		break;
 	case RTE_INTR_EVENT_DEL:
 		epfd_op = EPOLL_CTL_DEL;
-		rev = &intr_handle->elist[efd_idx];
+		rev = rte_intr_elist_index_get(intr_handle, efd_idx);
 		if (__atomic_load_n(&rev->status,
 				__ATOMIC_RELAXED) == RTE_EPOLL_INVALID) {
 			RTE_LOG(INFO, EAL, "Event does not exist.\n");
@@ -1477,8 +1527,9 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
 	uint32_t i;
 	struct rte_epoll_event *rev;
 
-	for (i = 0; i < intr_handle->nb_efd; i++) {
-		rev = &intr_handle->elist[i];
+	for (i = 0; i < (uint32_t)rte_intr_nb_efd_get(intr_handle);
+									i++) {
+		rev = rte_intr_elist_index_get(intr_handle, i);
 		if (__atomic_load_n(&rev->status,
 				__ATOMIC_RELAXED) == RTE_EPOLL_INVALID)
 			continue;
@@ -1498,7 +1549,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 
 	assert(nb_efd != 0);
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX) {
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX) {
 		for (i = 0; i < n; i++) {
 			fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
 			if (fd < 0) {
@@ -1507,21 +1558,32 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 					errno, strerror(errno));
 				return -errno;
 			}
-			intr_handle->efds[i] = fd;
+
+			if (rte_intr_efds_index_set(intr_handle, i, fd))
+				return -rte_errno;
 		}
-		intr_handle->nb_efd   = n;
-		intr_handle->max_intr = NB_OTHER_INTR + n;
-	} else if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+
+		if (rte_intr_nb_efd_set(intr_handle, n))
+			return -rte_errno;
+
+		if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR + n))
+			return -rte_errno;
+	} else if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
 		/* only check, initialization would be done in vdev driver.*/
-		if (intr_handle->efd_counter_size >
+		if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) >
 		    sizeof(union rte_intr_read_buffer)) {
 			RTE_LOG(ERR, EAL, "the efd_counter_size is oversized");
 			return -EINVAL;
 		}
 	} else {
-		intr_handle->efds[0]  = intr_handle->fd;
-		intr_handle->nb_efd   = RTE_MIN(nb_efd, 1U);
-		intr_handle->max_intr = NB_OTHER_INTR;
+		if (rte_intr_efds_index_set(intr_handle, 0,
+					    rte_intr_fd_get(intr_handle)))
+			return -rte_errno;
+		if (rte_intr_nb_efd_set(intr_handle,
+					RTE_MIN(nb_efd, 1U)))
+			return -rte_errno;
+		if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR))
+			return -rte_errno;
 	}
 
 	return 0;
@@ -1533,18 +1595,20 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
 	uint32_t i;
 
 	rte_intr_free_epoll_fd(intr_handle);
-	if (intr_handle->max_intr > intr_handle->nb_efd) {
-		for (i = 0; i < intr_handle->nb_efd; i++)
-			close(intr_handle->efds[i]);
+	if (rte_intr_max_intr_get(intr_handle) >
+				rte_intr_nb_efd_get(intr_handle)) {
+		for (i = 0; i <
+			(uint32_t)rte_intr_nb_efd_get(intr_handle); i++)
+			close(rte_intr_efds_index_get(intr_handle, i));
 	}
-	intr_handle->nb_efd = 0;
-	intr_handle->max_intr = 0;
+	rte_intr_nb_efd_set(intr_handle, 0);
+	rte_intr_max_intr_set(intr_handle, 0);
 }
 
 int
 rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
 {
-	return !(!intr_handle->nb_efd);
+	return !(!rte_intr_nb_efd_get(intr_handle));
 }
 
 int
@@ -1553,16 +1617,17 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
 	if (!rte_intr_dp_is_en(intr_handle))
 		return 1;
 	else
-		return !!(intr_handle->max_intr - intr_handle->nb_efd);
+		return !!(rte_intr_max_intr_get(intr_handle) -
+				rte_intr_nb_efd_get(intr_handle));
 }
 
 int
 rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
 {
-	if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX)
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX)
 		return 1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV)
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
 		return 1;
 
 	return 0;
-- 
2.18.0


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal
    2021-10-18 19:37  4% ` [dpdk-dev] [PATCH v3 " Harman Kalra
@ 2021-10-19 18:35  4% ` Harman Kalra
  2021-10-19 18:35  1%   ` [dpdk-dev] [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
  2021-10-22 20:49  4% ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
  2 siblings, 1 reply; 200+ results
From: Harman Kalra @ 2021-10-19 18:35 UTC (permalink / raw)
  To: dev; +Cc: david.marchand, dmitry.kozliuk, mdr, thomas, Harman Kalra

Move struct rte_intr_handle to an internal structure to
avoid any ABI breakages in the future: the structure defines
some static arrays, so changing the respective macros breaks the ABI.
E.g.:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of at most 512
MSI-X interrupts that can be defined for a PCI device, while the PCI
specification allows a maximum of 2048 MSI-X interrupts to be used.
If some PCI device requires more than 512 vectors, one must either raise
the RTE_MAX_RXTX_INTR_VEC_ID limit or allocate dynamically based on the
PCI device MSI-X size at probe time. Either way it is an ABI breakage.

Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0

This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get/set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get/set APIs.
A new eal_common_interrupts.c is introduced where all these APIs are
defined; it also hides the struct rte_intr_handle definition.

Details on each patch of the series:
Patch 1: malloc: introduce malloc is ready API
This patch introduces a new API which tells whether the DPDK memory
subsystem is initialized and the rte_malloc* APIs are ready to be
used. If rte_malloc* is set up, memory for the interrupt instance
is allocated using rte_malloc, else using traditional heap APIs.

Patch 2: eal/interrupts: implement get set APIs
This patch provides prototypes and implementations of all the new
get/set APIs. Alloc APIs are implemented to allocate memory for an
interrupt handle instance. Currently most of the drivers define the
interrupt handle instance as static, but now it cannot be static as
the size of rte_intr_handle is unknown to the drivers. Drivers are
expected to allocate interrupt instances during initialization
and free these instances during the cleanup phase.
This patch also rearranges the headers related to the interrupt
framework. Epoll related definitions and prototypes are moved into a
new header, i.e. rte_epoll.h, and the driver-specific APIs defined in
rte_eal_interrupts.h are moved to rte_interrupts.h (as they were anyway
accessible and used outside the DPDK library). Later in the series
rte_eal_interrupts.h is removed.
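
As a rough illustration of the intended driver-side flow (a sketch
assembled from APIs used elsewhere in this series; error handling is
trimmed and "uio_fd" is a placeholder):

	/* init: allocate an instance instead of embedding a static
	 * struct rte_intr_handle in the driver private data */
	struct rte_intr_handle *intr_handle = rte_intr_instance_alloc();
	if (intr_handle == NULL)
		return -ENOMEM;

	rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_UIO);
	rte_intr_fd_set(intr_handle, uio_fd);

	/* all later accesses go through the getters */
	if (rte_intr_fd_get(intr_handle) < 0)
		/* handle error */;

	/* cleanup */
	rte_intr_instance_free(intr_handle);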

Patch 3: eal/interrupts: avoid direct access to interrupt handle
This patch modifies the interrupt framework for Linux and FreeBSD to use
these get/set/alloc APIs as required and to avoid accessing the fields
directly.

Patch 4: test/interrupt: apply get set interrupt handle APIs
Updating interrupt test suite to use interrupt handle APIs.

Patch 5: drivers: remove direct access to interrupt handle fields
This patch modifies all the drivers and libraries which are currently
directly accessing the interrupt handle fields. Drivers are expected to
allocate the interrupt instance, use the get/set APIs with the allocated
interrupt handle and free it on cleanup.

Patch 6: eal/interrupts: make interrupt handle structure opaque
In this patch rte_eal_interrupts.h is removed and the struct rte_intr_handle
definition is moved to a .c file to make it completely opaque. As part of
interrupt handle allocation, arrays like efds and elist (which are currently
static) are dynamically allocated with a default size
(RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be reallocated as per
device requirements using the new API rte_intr_handle_event_list_update().
E.g., on PCI device probing the MSI-X size can be queried and these arrays
can be reallocated accordingly.
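
For instance, the reallocated arrays are then only reached through the
indexed accessors introduced earlier in the series (sketch):

	int i;

	for (i = 0; i < rte_intr_nb_efd_get(intr_handle); i++) {
		int efd = rte_intr_efds_index_get(intr_handle, i);
		/* arm/poll efd without touching efds[] directly */
	}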

Patch 7: eal/alarm: introduce alarm fini routine
This patch introduces an alarm fini routine, so that the memory allocated
for the alarm interrupt instance can be freed in alarm fini.

Testing performed:
1. Validated the series by running the interrupts and alarm test suites.
2. Validated l3fwd power functionality with octeontx2 and i40e Intel cards,
   where interrupts are expected on packet arrival.

v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif

v2:
* Merged the prototype and implementation patch to 1.
* Restricting allocation of single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library specific APIs as internal.

v3:
* Removed the flag from the instance alloc API; instead, auto-detect
whether memory should be allocated using glibc malloc APIs or
rte_malloc*
* Added APIs for get/set windows handle.
* Defined macros for repeated checks.

v4:
* Rectified some typos in the API documentation.
* Better names for some internal variables.

Harman Kalra (7):
  malloc: introduce malloc is ready API
  eal/interrupts: implement get set APIs
  eal/interrupts: avoid direct access to interrupt handle
  test/interrupt: apply get set interrupt handle APIs
  drivers: remove direct access to interrupt handle
  eal/interrupts: make interrupt handle structure opaque
  eal/alarm: introduce alarm fini routine

 MAINTAINERS                                   |   1 +
 app/test/test_interrupts.c                    | 162 +++--
 drivers/baseband/acc100/rte_acc100_pmd.c      |  18 +-
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |  21 +-
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |  21 +-
 drivers/bus/auxiliary/auxiliary_common.c      |   2 +
 drivers/bus/auxiliary/linux/auxiliary.c       |   9 +
 drivers/bus/auxiliary/rte_bus_auxiliary.h     |   2 +-
 drivers/bus/dpaa/dpaa_bus.c                   |  26 +-
 drivers/bus/dpaa/rte_dpaa_bus.h               |   2 +-
 drivers/bus/fslmc/fslmc_bus.c                 |  15 +-
 drivers/bus/fslmc/fslmc_vfio.c                |  32 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |  19 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |   2 +-
 drivers/bus/fslmc/rte_fslmc.h                 |   2 +-
 drivers/bus/ifpga/ifpga_bus.c                 |  14 +-
 drivers/bus/ifpga/rte_bus_ifpga.h             |   2 +-
 drivers/bus/pci/bsd/pci.c                     |  21 +-
 drivers/bus/pci/linux/pci.c                   |   4 +-
 drivers/bus/pci/linux/pci_uio.c               |  73 +-
 drivers/bus/pci/linux/pci_vfio.c              | 115 +++-
 drivers/bus/pci/pci_common.c                  |  27 +-
 drivers/bus/pci/pci_common_uio.c              |  21 +-
 drivers/bus/pci/rte_bus_pci.h                 |   4 +-
 drivers/bus/vmbus/linux/vmbus_bus.c           |   5 +
 drivers/bus/vmbus/linux/vmbus_uio.c           |  37 +-
 drivers/bus/vmbus/rte_bus_vmbus.h             |   2 +-
 drivers/bus/vmbus/vmbus_common_uio.c          |  24 +-
 drivers/common/cnxk/roc_cpt.c                 |   8 +-
 drivers/common/cnxk/roc_dev.c                 |  14 +-
 drivers/common/cnxk/roc_irq.c                 | 108 +--
 drivers/common/cnxk/roc_nix_inl_dev_irq.c     |   8 +-
 drivers/common/cnxk/roc_nix_irq.c             |  36 +-
 drivers/common/cnxk/roc_npa.c                 |   2 +-
 drivers/common/cnxk/roc_platform.h            |  49 +-
 drivers/common/cnxk/roc_sso.c                 |   4 +-
 drivers/common/cnxk/roc_tim.c                 |   4 +-
 drivers/common/octeontx2/otx2_dev.c           |  14 +-
 drivers/common/octeontx2/otx2_irq.c           | 117 ++--
 .../octeontx2/otx2_cryptodev_hw_access.c      |   4 +-
 drivers/event/octeontx2/otx2_evdev_irq.c      |  12 +-
 drivers/mempool/octeontx2/otx2_mempool.c      |   2 +-
 drivers/net/atlantic/atl_ethdev.c             |  20 +-
 drivers/net/avp/avp_ethdev.c                  |   8 +-
 drivers/net/axgbe/axgbe_ethdev.c              |  12 +-
 drivers/net/axgbe/axgbe_mdio.c                |   6 +-
 drivers/net/bnx2x/bnx2x_ethdev.c              |  10 +-
 drivers/net/bnxt/bnxt_ethdev.c                |  33 +-
 drivers/net/bnxt/bnxt_irq.c                   |   4 +-
 drivers/net/dpaa/dpaa_ethdev.c                |  47 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  10 +-
 drivers/net/e1000/em_ethdev.c                 |  23 +-
 drivers/net/e1000/igb_ethdev.c                |  79 +--
 drivers/net/ena/ena_ethdev.c                  |  35 +-
 drivers/net/enic/enic_main.c                  |  26 +-
 drivers/net/failsafe/failsafe.c               |  22 +-
 drivers/net/failsafe/failsafe_intr.c          |  43 +-
 drivers/net/failsafe/failsafe_ops.c           |  21 +-
 drivers/net/failsafe/failsafe_private.h       |   2 +-
 drivers/net/fm10k/fm10k_ethdev.c              |  32 +-
 drivers/net/hinic/hinic_pmd_ethdev.c          |  10 +-
 drivers/net/hns3/hns3_ethdev.c                |  57 +-
 drivers/net/hns3/hns3_ethdev_vf.c             |  64 +-
 drivers/net/hns3/hns3_rxtx.c                  |   2 +-
 drivers/net/i40e/i40e_ethdev.c                |  53 +-
 drivers/net/iavf/iavf_ethdev.c                |  42 +-
 drivers/net/iavf/iavf_vchnl.c                 |   4 +-
 drivers/net/ice/ice_dcf.c                     |  10 +-
 drivers/net/ice/ice_dcf_ethdev.c              |  21 +-
 drivers/net/ice/ice_ethdev.c                  |  49 +-
 drivers/net/igc/igc_ethdev.c                  |  45 +-
 drivers/net/ionic/ionic_ethdev.c              |  17 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |  66 +-
 drivers/net/memif/memif_socket.c              | 108 ++-
 drivers/net/memif/memif_socket.h              |   4 +-
 drivers/net/memif/rte_eth_memif.c             |  59 +-
 drivers/net/memif/rte_eth_memif.h             |   2 +-
 drivers/net/mlx4/mlx4.c                       |  18 +-
 drivers/net/mlx4/mlx4.h                       |   2 +-
 drivers/net/mlx4/mlx4_intr.c                  |  47 +-
 drivers/net/mlx5/linux/mlx5_os.c              |  51 +-
 drivers/net/mlx5/linux/mlx5_socket.c          |  24 +-
 drivers/net/mlx5/mlx5.h                       |   6 +-
 drivers/net/mlx5/mlx5_rxq.c                   |  42 +-
 drivers/net/mlx5/mlx5_trigger.c               |   4 +-
 drivers/net/mlx5/mlx5_txpp.c                  |  25 +-
 drivers/net/netvsc/hn_ethdev.c                |   4 +-
 drivers/net/nfp/nfp_common.c                  |  34 +-
 drivers/net/nfp/nfp_ethdev.c                  |  13 +-
 drivers/net/nfp/nfp_ethdev_vf.c               |  13 +-
 drivers/net/ngbe/ngbe_ethdev.c                |  29 +-
 drivers/net/octeontx2/otx2_ethdev_irq.c       |  35 +-
 drivers/net/qede/qede_ethdev.c                |  16 +-
 drivers/net/sfc/sfc_intr.c                    |  30 +-
 drivers/net/tap/rte_eth_tap.c                 |  35 +-
 drivers/net/tap/rte_eth_tap.h                 |   2 +-
 drivers/net/tap/tap_intr.c                    |  32 +-
 drivers/net/thunderx/nicvf_ethdev.c           |  11 +
 drivers/net/thunderx/nicvf_struct.h           |   2 +-
 drivers/net/txgbe/txgbe_ethdev.c              |  34 +-
 drivers/net/txgbe/txgbe_ethdev_vf.c           |  33 +-
 drivers/net/vhost/rte_eth_vhost.c             |  75 +-
 drivers/net/virtio/virtio_ethdev.c            |  21 +-
 .../net/virtio/virtio_user/virtio_user_dev.c  |  47 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.c          |  43 +-
 drivers/raw/ifpga/ifpga_rawdev.c              |  61 +-
 drivers/raw/ntb/ntb.c                         |   9 +-
 .../regex/octeontx2/otx2_regexdev_hw_access.c |   4 +-
 drivers/vdpa/ifc/ifcvf_vdpa.c                 |   5 +-
 drivers/vdpa/mlx5/mlx5_vdpa.c                 |   9 +
 drivers/vdpa/mlx5/mlx5_vdpa.h                 |   4 +-
 drivers/vdpa/mlx5/mlx5_vdpa_event.c           |  22 +-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c           |  44 +-
 lib/bbdev/rte_bbdev.c                         |   4 +-
 lib/eal/common/eal_common_interrupts.c        | 585 ++++++++++++++++
 lib/eal/common/eal_private.h                  |  11 +
 lib/eal/common/malloc_heap.c                  |  19 +-
 lib/eal/common/malloc_heap.h                  |   3 +
 lib/eal/common/meson.build                    |   1 +
 lib/eal/freebsd/eal.c                         |   1 +
 lib/eal/freebsd/eal_alarm.c                   |  52 +-
 lib/eal/freebsd/eal_interrupts.c              |  92 ++-
 lib/eal/include/meson.build                   |   2 +-
 lib/eal/include/rte_eal_interrupts.h          | 269 --------
 lib/eal/include/rte_eal_trace.h               |  24 +-
 lib/eal/include/rte_epoll.h                   | 118 ++++
 lib/eal/include/rte_interrupts.h              | 648 +++++++++++++++++-
 lib/eal/linux/eal.c                           |   1 +
 lib/eal/linux/eal_alarm.c                     |  37 +-
 lib/eal/linux/eal_dev.c                       |  63 +-
 lib/eal/linux/eal_interrupts.c                | 287 +++++---
 lib/eal/version.map                           |  47 +-
 lib/ethdev/ethdev_pci.h                       |   2 +-
 lib/ethdev/rte_ethdev.c                       |  14 +-
 134 files changed, 3568 insertions(+), 1709 deletions(-)
 create mode 100644 lib/eal/common/eal_common_interrupts.c
 delete mode 100644 lib/eal/include/rte_eal_interrupts.h
 create mode 100644 lib/eal/include/rte_epoll.h

-- 
2.18.0


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v3 3/7] cryptodev: move inline APIs into separate structure
  2021-10-18 14:41  2%     ` [dpdk-dev] [PATCH v3 3/7] cryptodev: move inline APIs into separate structure Akhil Goyal
@ 2021-10-19 16:00  0%       ` Zhang, Roy Fan
  0 siblings, 0 replies; 200+ results
From: Zhang, Roy Fan @ 2021-10-19 16:00 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj, De Lara Guarch,
	Pablo, Trahe, Fiona, Doherty, Declan, matan, g.singh,
	jianjay.zhou, asomalap, ruifeng.wang, Ananyev, Konstantin,
	Nicolau, Radu, ajit.khaparde, rnagadheeraj, adwivedi, Power,
	Ciara, Troy, Rebecca

Apart from the scheduler PMD changes required mentioned by Ciara,
re-acking this patch as all doubts are cleared on our end.
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>

> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Monday, October 18, 2021 3:42 PM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; david.marchand@redhat.com;
> hemant.agrawal@nxp.com; anoobj@marvell.com; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Trahe, Fiona <fiona.trahe@intel.com>;
> Doherty, Declan <declan.doherty@intel.com>; matan@nvidia.com;
> g.singh@nxp.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> jianjay.zhou@huawei.com; asomalap@amd.com; ruifeng.wang@arm.com;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Nicolau, Radu
> <radu.nicolau@intel.com>; ajit.khaparde@broadcom.com;
> rnagadheeraj@marvell.com; adwivedi@marvell.com; Power, Ciara
> <ciara.power@intel.com>; Akhil Goyal <gakhil@marvell.com>; Troy, Rebecca
> <rebecca.troy@intel.com>
> Subject: [PATCH v3 3/7] cryptodev: move inline APIs into separate structure
> 
> Move fastpath inline function pointers from rte_cryptodev into a
> separate structure accessed via a flat array.
> The intention is to make rte_cryptodev and related structures private
> to avoid future API/ABI breakages.
> 
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> Tested-by: Rebecca Troy <rebecca.troy@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
>  lib/cryptodev/cryptodev_pmd.c      | 53
> +++++++++++++++++++++++++++++-
>  lib/cryptodev/cryptodev_pmd.h      | 11 +++++++
>  lib/cryptodev/rte_cryptodev.c      | 19 +++++++++++
>  lib/cryptodev/rte_cryptodev_core.h | 29 ++++++++++++++++
>  lib/cryptodev/version.map          |  5 +++
>  5 files changed, 116 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/cryptodev/cryptodev_pmd.c
> b/lib/cryptodev/cryptodev_pmd.c
> index 44a70ecb35..fd74543682 100644
> --- a/lib/cryptodev/cryptodev_pmd.c
> +++ b/lib/cryptodev/cryptodev_pmd.c
> @@ -3,7 +3,7 @@
>   */
> 
>  #include <sys/queue.h>
> -
> +#include <rte_errno.h>
>  #include <rte_string_fns.h>
>  #include <rte_malloc.h>
> 
> @@ -160,3 +160,54 @@ rte_cryptodev_pmd_destroy(struct rte_cryptodev
> *cryptodev)
> 
>  	return 0;
>  }
> +
> +static uint16_t
> +dummy_crypto_enqueue_burst(__rte_unused void *qp,
> +			   __rte_unused struct rte_crypto_op **ops,
> +			   __rte_unused uint16_t nb_ops)
> +{
> +	CDEV_LOG_ERR(
> +		"crypto enqueue burst requested for unconfigured device");
> +	rte_errno = ENOTSUP;
> +	return 0;
> +}
> +
> +static uint16_t
> +dummy_crypto_dequeue_burst(__rte_unused void *qp,
> +			   __rte_unused struct rte_crypto_op **ops,
> +			   __rte_unused uint16_t nb_ops)
> +{
> +	CDEV_LOG_ERR(
> +		"crypto dequeue burst requested for unconfigured device");
> +	rte_errno = ENOTSUP;
> +	return 0;
> +}
> +
> +void
> +cryptodev_fp_ops_reset(struct rte_crypto_fp_ops *fp_ops)
> +{
> +	static struct rte_cryptodev_cb_rcu
> dummy_cb[RTE_MAX_QUEUES_PER_PORT];
> +	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
> +	static const struct rte_crypto_fp_ops dummy = {
> +		.enqueue_burst = dummy_crypto_enqueue_burst,
> +		.dequeue_burst = dummy_crypto_dequeue_burst,
> +		.qp = {
> +			.data = dummy_data,
> +			.enq_cb = dummy_cb,
> +			.deq_cb = dummy_cb,
> +		},
> +	};
> +
> +	*fp_ops = dummy;
> +}
> +
> +void
> +cryptodev_fp_ops_set(struct rte_crypto_fp_ops *fp_ops,
> +		     const struct rte_cryptodev *dev)
> +{
> +	fp_ops->enqueue_burst = dev->enqueue_burst;
> +	fp_ops->dequeue_burst = dev->dequeue_burst;
> +	fp_ops->qp.data = dev->data->queue_pairs;
> +	fp_ops->qp.enq_cb = dev->enq_cbs;
> +	fp_ops->qp.deq_cb = dev->deq_cbs;
> +}
> diff --git a/lib/cryptodev/cryptodev_pmd.h
> b/lib/cryptodev/cryptodev_pmd.h
> index 36606dd10b..a71edbb991 100644
> --- a/lib/cryptodev/cryptodev_pmd.h
> +++ b/lib/cryptodev/cryptodev_pmd.h
> @@ -516,6 +516,17 @@ RTE_INIT(init_ ##driver_id)\
>  	driver_id = rte_cryptodev_allocate_driver(&crypto_drv, &(drv));\
>  }
> 
> +/* Reset crypto device fastpath APIs to dummy values. */
> +__rte_internal
> +void
> +cryptodev_fp_ops_reset(struct rte_crypto_fp_ops *fp_ops);
> +
> +/* Setup crypto device fastpath APIs. */
> +__rte_internal
> +void
> +cryptodev_fp_ops_set(struct rte_crypto_fp_ops *fp_ops,
> +		     const struct rte_cryptodev *dev);
> +
>  static inline void *
>  get_sym_session_private_data(const struct rte_cryptodev_sym_session
> *sess,
>  		uint8_t driver_id) {
> diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
> index eb86e629aa..305e013ebb 100644
> --- a/lib/cryptodev/rte_cryptodev.c
> +++ b/lib/cryptodev/rte_cryptodev.c
> @@ -53,6 +53,9 @@ static struct rte_cryptodev_global cryptodev_globals = {
>  		.nb_devs		= 0
>  };
> 
> +/* Public fastpath APIs. */
> +struct rte_crypto_fp_ops rte_crypto_fp_ops[RTE_CRYPTO_MAX_DEVS];
> +
>  /* spinlock for crypto device callbacks */
>  static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
> 
> @@ -917,6 +920,8 @@ rte_cryptodev_pmd_release_device(struct
> rte_cryptodev *cryptodev)
> 
>  	dev_id = cryptodev->data->dev_id;
> 
> +	cryptodev_fp_ops_reset(rte_crypto_fp_ops + dev_id);
> +
>  	/* Close device only if device operations have been set */
>  	if (cryptodev->dev_ops) {
>  		ret = rte_cryptodev_close(dev_id);
> @@ -1080,6 +1085,9 @@ rte_cryptodev_start(uint8_t dev_id)
>  	}
> 
>  	diag = (*dev->dev_ops->dev_start)(dev);
> +	/* expose selection of PMD fast-path functions */
> +	cryptodev_fp_ops_set(rte_crypto_fp_ops + dev_id, dev);
> +
>  	rte_cryptodev_trace_start(dev_id, diag);
>  	if (diag == 0)
>  		dev->data->dev_started = 1;
> @@ -1109,6 +1117,9 @@ rte_cryptodev_stop(uint8_t dev_id)
>  		return;
>  	}
> 
> +	/* point fast-path functions to dummy ones */
> +	cryptodev_fp_ops_reset(rte_crypto_fp_ops + dev_id);
> +
>  	(*dev->dev_ops->dev_stop)(dev);
>  	rte_cryptodev_trace_stop(dev_id);
>  	dev->data->dev_started = 0;
> @@ -2411,3 +2422,11 @@ rte_cryptodev_allocate_driver(struct
> cryptodev_driver *crypto_drv,
> 
>  	return nb_drivers++;
>  }
> +
> +RTE_INIT(cryptodev_init_fp_ops)
> +{
> +	uint32_t i;
> +
> +	for (i = 0; i != RTE_DIM(rte_crypto_fp_ops); i++)
> +		cryptodev_fp_ops_reset(rte_crypto_fp_ops + i);
> +}
> diff --git a/lib/cryptodev/rte_cryptodev_core.h
> b/lib/cryptodev/rte_cryptodev_core.h
> index 1633e55889..e9e9a44b3c 100644
> --- a/lib/cryptodev/rte_cryptodev_core.h
> +++ b/lib/cryptodev/rte_cryptodev_core.h
> @@ -25,6 +25,35 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
>  		struct rte_crypto_op **ops,	uint16_t nb_ops);
>  /**< Enqueue packets for processing on queue pair of a device. */
> 
> +/**
> + * @internal
> + * Structure used to hold opaque pointers to internal ethdev Rx/Tx
> + * queues data.
> + * The main purpose to expose these pointers at all - allow compiler
> + * to fetch this data for fast-path cryptodev inline functions in advance.
> + */
> +struct rte_cryptodev_qpdata {
> +	/** points to array of internal queue pair data pointers. */
> +	void **data;
> +	/** points to array of enqueue callback data pointers */
> +	struct rte_cryptodev_cb_rcu *enq_cb;
> +	/** points to array of dequeue callback data pointers */
> +	struct rte_cryptodev_cb_rcu *deq_cb;
> +};
> +
> +struct rte_crypto_fp_ops {
> +	/** PMD enqueue burst function. */
> +	enqueue_pkt_burst_t enqueue_burst;
> +	/** PMD dequeue burst function. */
> +	dequeue_pkt_burst_t dequeue_burst;
> +	/** Internal queue pair data pointers. */
> +	struct rte_cryptodev_qpdata qp;
> +	/** Reserved for future ops. */
> +	uintptr_t reserved[4];
> +} __rte_cache_aligned;
> +
> +extern struct rte_crypto_fp_ops
> rte_crypto_fp_ops[RTE_CRYPTO_MAX_DEVS];
> +
>  /**
>   * @internal
>   * The data part, with no function pointers, associated with each device.
> diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
> index 43cf937e40..ed62ced221 100644
> --- a/lib/cryptodev/version.map
> +++ b/lib/cryptodev/version.map
> @@ -45,6 +45,9 @@ DPDK_22 {
>  	rte_cryptodev_sym_session_init;
>  	rte_cryptodevs;
> 
> +	#added in 21.11
> +	rte_crypto_fp_ops;
> +
>  	local: *;
>  };
> 
> @@ -109,6 +112,8 @@ EXPERIMENTAL {
>  INTERNAL {
>  	global:
> 
> +	cryptodev_fp_ops_reset;
> +	cryptodev_fp_ops_set;
>  	rte_cryptodev_allocate_driver;
>  	rte_cryptodev_pmd_allocate;
>  	rte_cryptodev_pmd_callback_process;
> --
> 2.25.1


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3] test/hash: fix buffer overflow
  2021-10-19  7:02  3%       ` David Marchand
@ 2021-10-19 15:57  0%         ` Medvedkin, Vladimir
  0 siblings, 0 replies; 200+ results
From: Medvedkin, Vladimir @ 2021-10-19 15:57 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, Wang, Yipeng1, Gobriel, Sameh, Bruce Richardson, dpdk stable

Hi David,

On 19/10/2021 09:02, David Marchand wrote:
> On Fri, Oct 15, 2021 at 3:02 PM Medvedkin, Vladimir
> <vladimir.medvedkin@intel.com> wrote:
>>> I am confused.
>>> Does it mean that rte_jhash_32b is not compliant with rte_hash_create API?
>>>
>>
>> I think so too, because despite the fact that the ABI is the same, the
>> API remains different with respect to the length argument.
> 
> Sorry I don't follow you with "ABI is the same".
> Can you explain please?
> 

I meant that rte_hash accepts:

/** Type of function that can be used for calculating the hash value. */
typedef uint32_t (*rte_hash_function)(const void *key, uint32_t key_len, 
  uint32_t init_val);

as a hash function. The signatures of rte_jhash() and rte_jhash_32b()
are the same, but they differ in the semantics of the "key_len" argument.
Internally rte_hash passes the length of the key counted in bytes to this
function, so problems appear if the configured hash function treats
key_len as something other than the size in bytes.
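
A minimal sketch of the mismatch (relying on the documented jhash
semantics: rte_jhash() takes the length in bytes, rte_jhash_32b() in
32-bit words):

	#include <rte_jhash.h>

	/* a 16-byte key, i.e. 4 32-bit words */
	static const uint32_t key[4] = { 1, 2, 3, 4 };

	uint32_t h1 = rte_jhash(key, sizeof(key), 0);         /* len = 16 bytes */
	uint32_t h2 = rte_jhash_32b(key, sizeof(key) / 4, 0); /* len = 4 words */

	/* rte_hash calls the configured function with key_len in bytes,
	 * so rte_jhash_32b() would read 16 words (64 bytes) from the
	 * 16-byte key, overrunning the buffer by 48 bytes. */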

> 
> I am not against the fix, but it seems to test something different
> than what an application using the hash library would do.
> Or if an application directly calls this hash function, maybe the unit
> test should not test it via rte_hash_create (which seems to defeat the
> abstraction).
> 

I'd say that user should not use this hash function with rte_hash.
Yipeng, Sameh, Bruce,
what do you think?

> 

-- 
Regards,
Vladimir

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2 1/5] hash: add new toeplitz hash implementation
  2021-10-19  1:15  0%       ` Stephen Hemminger
@ 2021-10-19 15:42  0%         ` Medvedkin, Vladimir
  0 siblings, 0 replies; 200+ results
From: Medvedkin, Vladimir @ 2021-10-19 15:42 UTC (permalink / raw)
  To: Stephen Hemminger, Ananyev, Konstantin
  Cc: dev, Wang, Yipeng1, Gobriel, Sameh, Richardson, Bruce

Hi Stephen,

On 19/10/2021 03:15, Stephen Hemminger wrote:
> On Mon, 18 Oct 2021 10:40:00 +0000
> "Ananyev, Konstantin" <konstantin.ananyev@intel.com> wrote:
> 
>>> On Fri, 15 Oct 2021 10:30:02 +0100
>>> Vladimir Medvedkin <vladimir.medvedkin@intel.com> wrote:
>>>    
>>>> +			m[i * 8 + j] = (rss_key[i] << j)|
>>>> +				(uint8_t)((uint16_t)(rss_key[i + 1]) >>
>>>> +				(8 - j));
>>>> +		}
>>>
>>> This ends up being harder than necessary to read. Maybe split into
>>> multiple statements and/or use temporary variable.
>>>    
>>>> +RTE_INIT(rte_thash_gfni_init)
>>>> +{
>>>> +	rte_thash_gfni_supported = 0;
>>>
>>> Not necessary in C globals are initialized to zero by default.
>>>
>>> By removing that the constructor can be totally behind #ifdef
>>>    
>>>> +__rte_internal
>>>> +static inline __m512i
>>>> +__rte_thash_gfni(const uint64_t *mtrx, const uint8_t *tuple,
>>>> +	const uint8_t *secondary_tuple, int len)
>>>> +{
>>>> +	__m512i permute_idx = _mm512_set_epi8(7, 6, 5, 4, 7, 6, 5, 4,
>>>> +						6, 5, 4, 3, 6, 5, 4, 3,
>>>> +						5, 4, 3, 2, 5, 4, 3, 2,
>>>> +						4, 3, 2, 1, 4, 3, 2, 1,
>>>> +						3, 2, 1, 0, 3, 2, 1, 0,
>>>> +						2, 1, 0, -1, 2, 1, 0, -1,
>>>> +						1, 0, -1, -2, 1, 0, -1, -2,
>>>> +						0, -1, -2, -3, 0, -1, -2, -3);
>>>
>>> NAK
>>>
>>> Please don't put the implementation in an inline. This makes it harder
>>> to support (API/ABI) and blocks other architectures from implementing
>>> same thing with different instructions.
>>
>> I don't really understand your reasoning here.
>> rte_thash_gfni.h is an arch-specific header, which provides
>> arch-specific optimizations for RSS hash calculation
>> (Vladimir pls correct me if I am wrong here).
> 
> Ok, but rte_thash_gfni.h is included on all architectures.
> 

Ok, I'll rework the patch to move the x86 + avx512 related things into an
x86 arch specific header. Would that suit?

>> We do have dozens of inline functions that do use arch-specific instructions (both x86 and arm)
>> for different purposes:
>> sync primitives, memory-ordering, cache manipulations, LPM lookup, TSX, power-saving, etc.
>> That's a usual trade-off taken for performance reasons, when extra function call
>> costs too much comparing to the operation itself.
>> Why it suddenly became a problem for that particular case and how exactly it blocks other architectures?
>> Also I don't understand how it makes things harder in terms of API/ABI stability.
>> As I can see this patch doesn't introduce any public structs/unions.
>> All functions take as arguments just raw data buffers and length.
>> To summarize - in general, I don't see any good reason why this patch shouldn't be allowed.
>> Konstantin
> 
> The comments about rte_thash_gfni_supported initialization still apply.
> Why not:
> 
> #ifdef __GFNI__
> RTE_INIT(rte_thash_gfni_init)
> {
> 	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_GFNI))
> 		rte_thash_gfni_supported = 1;
> }
> #endif
> 

Agree, I'll reflect this changes in v3.


-- 
Regards,
Vladimir

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v16 0/5] Add PIE support for HQoS library
  2021-10-19  8:18  3%   ` [dpdk-dev] [PATCH v15 " Liguzinski, WojciechX
  2021-10-19 12:18  0%     ` Dumitrescu, Cristian
@ 2021-10-19 12:45  3%     ` Liguzinski, WojciechX
  2021-10-20  7:49  3%       ` [dpdk-dev] [PATCH v17 " Liguzinski, WojciechX
  1 sibling, 1 reply; 200+ results
From: Liguzinski, WojciechX @ 2021-10-19 12:45 UTC (permalink / raw)
  To: dev, jasvinder.singh, cristian.dumitrescu; +Cc: megha.ajmera

The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat
problem, which is a situation where excess buffers in the network cause high latency and
latency variation. Currently, it supports RED for active queue management. However, more
advanced queue management is required to address this problem and provide desirable
quality of service to users.

This solution (RFC) proposes the usage of a new algorithm called "PIE" (Proportional
Integral controller Enhanced) that can effectively and directly control queuing latency
to address the bufferbloat problem.

The implementation of the mentioned functionality includes modifying existing data
structures and adding a new set of structures to the library, as well as adding PIE
related APIs. This affects structures in the public API/ABI. That is why a deprecation
notice is going to be prepared and sent.
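
For illustration, per-queue usage is expected to look roughly as below
(a sketch only: the names rte_pie_config_init()/rte_pie_rt_data_init()
come from this series, and the parameter values and units are made up):

	#include <rte_pie.h>

	struct rte_pie_config pie_cfg;
	struct rte_pie pie;

	/* illustrative values: target queue delay, update interval,
	 * max burst, tail-drop threshold */
	rte_pie_config_init(&pie_cfg, 15, 15, 150, 64);
	rte_pie_rt_data_init(&pie);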

Liguzinski, WojciechX (5):
  sched: add PIE based congestion management
  example/qos_sched: add PIE support
  example/ip_pipeline: add PIE support
  doc/guides/prog_guide: added PIE
  app/test: add tests for PIE

 app/test/meson.build                         |    4 +
 app/test/test_pie.c                          | 1065 ++++++++++++++++++
 config/rte_config.h                          |    1 -
 doc/guides/prog_guide/glossary.rst           |    3 +
 doc/guides/prog_guide/qos_framework.rst      |   62 +-
 doc/guides/prog_guide/traffic_management.rst |   13 +-
 drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
 examples/ip_pipeline/tmgr.c                  |  142 +--
 examples/qos_sched/app_thread.c              |    1 -
 examples/qos_sched/cfg_file.c                |  127 ++-
 examples/qos_sched/cfg_file.h                |    5 +
 examples/qos_sched/init.c                    |   27 +-
 examples/qos_sched/main.h                    |    3 +
 examples/qos_sched/profile.cfg               |  196 ++--
 lib/sched/meson.build                        |   10 +-
 lib/sched/rte_pie.c                          |   86 ++
 lib/sched/rte_pie.h                          |  398 +++++++
 lib/sched/rte_sched.c                        |  241 ++--
 lib/sched/rte_sched.h                        |   63 +-
 lib/sched/version.map                        |    4 +
 20 files changed, 2172 insertions(+), 285 deletions(-)
 create mode 100644 app/test/test_pie.c
 create mode 100644 lib/sched/rte_pie.c
 create mode 100644 lib/sched/rte_pie.h

-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v3 6/7] cryptodev: update fast path APIs to use new flat array
  2021-10-18 14:42  3%     ` [dpdk-dev] [PATCH v3 6/7] cryptodev: update fast path APIs to use new flat array Akhil Goyal
@ 2021-10-19 12:28  0%       ` Ananyev, Konstantin
  0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-10-19 12:28 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj, De Lara Guarch,
	Pablo, Trahe, Fiona, Doherty, Declan, matan, g.singh, Zhang,
	Roy Fan, jianjay.zhou, asomalap, ruifeng.wang, Nicolau, Radu,
	ajit.khaparde, rnagadheeraj, adwivedi, Power, Ciara



> 
> Rework fast-path cryptodev functions to use rte_crypto_fp_ops[].
> While it is an API/ABI breakage, this change is intended to be
> transparent for both users (no changes in user app is required) and
> PMD developers (no changes in PMD is required).
> 
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
>  lib/cryptodev/rte_cryptodev.h | 27 +++++++++++++++++----------
>  1 file changed, 17 insertions(+), 10 deletions(-)
> 
> diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
> index ce0dca72be..56e3868ada 100644
> --- a/lib/cryptodev/rte_cryptodev.h
> +++ b/lib/cryptodev/rte_cryptodev.h
> @@ -1832,13 +1832,18 @@ static inline uint16_t
>  rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
>  		struct rte_crypto_op **ops, uint16_t nb_ops)
>  {
> -	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
> +	const struct rte_crypto_fp_ops *fp_ops;
> +	void *qp;
> 
>  	rte_cryptodev_trace_dequeue_burst(dev_id, qp_id, (void **)ops, nb_ops);
> -	nb_ops = (*dev->dequeue_burst)
> -			(dev->data->queue_pairs[qp_id], ops, nb_ops);
> +
> +	fp_ops = &rte_crypto_fp_ops[dev_id];
> +	qp = fp_ops->qp.data[qp_id];
> +
> +	nb_ops = fp_ops->dequeue_burst(qp, ops, nb_ops);
> +
>  #ifdef RTE_CRYPTO_CALLBACKS
> -	if (unlikely(dev->deq_cbs != NULL)) {
> +	if (unlikely(fp_ops->qp.deq_cb != NULL)) {
>  		struct rte_cryptodev_cb_rcu *list;
>  		struct rte_cryptodev_cb *cb;

As I can see, you decided to keep the callback related data-structs as public API.
I wonder, is that to avoid extra changes in the CB related code?
Or performance reasons?
Or probably something else?

> 
> @@ -1848,7 +1853,7 @@ rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
>  		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
>  		 * not required.
>  		 */
> -		list = &dev->deq_cbs[qp_id];
> +		list = &fp_ops->qp.deq_cb[qp_id];
>  		rte_rcu_qsbr_thread_online(list->qsbr, 0);
>  		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
> 
> @@ -1899,10 +1904,13 @@ static inline uint16_t
>  rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
>  		struct rte_crypto_op **ops, uint16_t nb_ops)
>  {
> -	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
> +	const struct rte_crypto_fp_ops *fp_ops;
> +	void *qp;
> 
> +	fp_ops = &rte_crypto_fp_ops[dev_id];
> +	qp = fp_ops->qp.data[qp_id];
>  #ifdef RTE_CRYPTO_CALLBACKS
> -	if (unlikely(dev->enq_cbs != NULL)) {
> +	if (unlikely(fp_ops->qp.enq_cb != NULL)) {
>  		struct rte_cryptodev_cb_rcu *list;
>  		struct rte_cryptodev_cb *cb;
> 
> @@ -1912,7 +1920,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
>  		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
>  		 * not required.
>  		 */
> -		list = &dev->enq_cbs[qp_id];
> +		list = &fp_ops->qp.enq_cb[qp_id];
>  		rte_rcu_qsbr_thread_online(list->qsbr, 0);
>  		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
> 
> @@ -1927,8 +1935,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
>  #endif
> 
>  	rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops, nb_ops);
> -	return (*dev->enqueue_burst)(
> -			dev->data->queue_pairs[qp_id], ops, nb_ops);
> +	return fp_ops->enqueue_burst(qp, ops, nb_ops);
>  }
> 
> 
> --
> 2.25.1


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v15 0/5] Add PIE support for HQoS library
  2021-10-19  8:18  3%   ` [dpdk-dev] [PATCH v15 " Liguzinski, WojciechX
@ 2021-10-19 12:18  0%     ` Dumitrescu, Cristian
  2021-10-19 12:45  3%     ` [dpdk-dev] [PATCH v16 " Liguzinski, WojciechX
  1 sibling, 0 replies; 200+ results
From: Dumitrescu, Cristian @ 2021-10-19 12:18 UTC (permalink / raw)
  To: Liguzinski, WojciechX, dev, Singh, Jasvinder; +Cc: Ajmera, Megha



> -----Original Message-----
> From: Liguzinski, WojciechX <wojciechx.liguzinski@intel.com>
> Sent: Tuesday, October 19, 2021 9:19 AM
> To: dev@dpdk.org; Singh, Jasvinder <jasvinder.singh@intel.com>;
> Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
> Cc: Ajmera, Megha <megha.ajmera@intel.com>
> Subject: [PATCH v15 0/5] Add PIE support for HQoS library
> 
> DPDK sched library is equipped with mechanism that secures it from the
> bufferbloat problem
> which is a situation when excess buffers in the network cause high latency
> and latency
> variation. Currently, it supports RED for active queue management.
> However, more
> advanced queue management is required to address this problem and
> provide desirable
> quality of service to users.
> 
> This solution (RFC) proposes usage of new algorithm called "PIE"
> (Proportional Integral
> controller Enhanced) that can effectively and directly control queuing latency
> to address
> the bufferbloat problem.
> 
> The implementation of mentioned functionality includes modification of
> existing and
> adding a new set of data structures to the library, adding PIE related APIs.
> This affects structures in public API/ABI. That is why deprecation notice is
> going
> to be prepared and sent.
> 
> Liguzinski, WojciechX (5):
>   sched: add PIE based congestion management
>   example/qos_sched: add PIE support
>   example/ip_pipeline: add PIE support
>   doc/guides/prog_guide: added PIE
>   app/test: add tests for PIE
> 
>  app/test/meson.build                         |    4 +
>  app/test/test_pie.c                          | 1065 ++++++++++++++++++
>  config/rte_config.h                          |    1 -
>  doc/guides/prog_guide/glossary.rst           |    3 +
>  doc/guides/prog_guide/qos_framework.rst      |   60 +-
>  doc/guides/prog_guide/traffic_management.rst |   13 +-
>  drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
>  examples/ip_pipeline/tmgr.c                  |  142 +--
>  examples/qos_sched/app_thread.c              |    1 -
>  examples/qos_sched/cfg_file.c                |  127 ++-
>  examples/qos_sched/cfg_file.h                |    5 +
>  examples/qos_sched/init.c                    |   27 +-
>  examples/qos_sched/main.h                    |    3 +
>  examples/qos_sched/profile.cfg               |  196 ++--
>  lib/sched/meson.build                        |   10 +-
>  lib/sched/rte_pie.c                          |   86 ++
>  lib/sched/rte_pie.h                          |  398 +++++++
>  lib/sched/rte_sched.c                        |  241 ++--
>  lib/sched/rte_sched.h                        |   63 +-
>  lib/sched/version.map                        |    4 +
>  20 files changed, 2171 insertions(+), 284 deletions(-)
>  create mode 100644 app/test/test_pie.c
>  create mode 100644 lib/sched/rte_pie.c
>  create mode 100644 lib/sched/rte_pie.h
> 
> --
> 2.25.1

Series-acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro
  2021-10-19  9:27  0%       ` David Marchand
  2021-10-19  9:38  0%         ` Andrew Rybchenko
@ 2021-10-19  9:42  0%         ` Thomas Monjalon
  1 sibling, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-10-19  9:42 UTC (permalink / raw)
  To: Andrew Rybchenko, David Marchand
  Cc: dev, Olivier Matz, Ray Kinsella, Artem V. Andreev,
	Ashwin Sekhar T K, Pavan Nikhilesh, Hemant Agrawal,
	Sachin Saxena, Harman Kalra, Jerin Jacob, Nithin Dabilpuram, dev

19/10/2021 11:27, David Marchand:
> On Tue, Oct 19, 2021 at 11:05 AM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
> >
> > On 10/19/21 11:49 AM, David Marchand wrote:
> > > On Mon, Oct 18, 2021 at 4:50 PM Andrew Rybchenko
> > > <andrew.rybchenko@oktetlabs.ru> wrote:
> > >>
> > >> Add RTE_ prefix to macro used to register mempool driver.
> > >> The old one is still available but deprecated.
> > >
> > > ODP seems to use its own mempools.
> > >
> > > $ git grep-all -w MEMPOOL_REGISTER_OPS
> > > OpenDataplane/platform/linux-generic/pktio/dpdk.c:MEMPOOL_REGISTER_OPS(odp_pool_ops);
> > >
> > > I'd say it counts as a driver macro.
> > > If so, we could hide it in a driver-only header, along with
> > > rte_mempool_register_ops getting marked as internal.
> > >
> > > $ git grep-all -w rte_mempool_register_ops
> > > FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);
> > > FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);
> >
> > Do I understand correctly that it is required to remove it from
> > stable ABI/API, but still allow external SW to use it?
> >
> > Should I add one more patch to the series?
> 
> If we want to do the full job, we need to inspect driver-only symbols
> in rte_mempool.h.
> But this goes way further than a simple prefixing as this series intended.
> 
> I just read your reply, I think we agree.
> Let's go with simple prefix and take a note to cleanup in the future.

Yes, and we should probably discuss in techboard what should be kept
compatible for external mempool drivers.



^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro
  2021-10-19  9:27  0%       ` David Marchand
@ 2021-10-19  9:38  0%         ` Andrew Rybchenko
  2021-10-19  9:42  0%         ` Thomas Monjalon
  1 sibling, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-19  9:38 UTC (permalink / raw)
  To: David Marchand, Thomas Monjalon
  Cc: Olivier Matz, Ray Kinsella, Artem V. Andreev, Ashwin Sekhar T K,
	Pavan Nikhilesh, Hemant Agrawal, Sachin Saxena, Harman Kalra,
	Jerin Jacob, Nithin Dabilpuram, dev

On 10/19/21 12:27 PM, David Marchand wrote:
> On Tue, Oct 19, 2021 at 11:05 AM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
>>
>> On 10/19/21 11:49 AM, David Marchand wrote:
>>> On Mon, Oct 18, 2021 at 4:50 PM Andrew Rybchenko
>>> <andrew.rybchenko@oktetlabs.ru> wrote:
>>>>
>>>> Add RTE_ prefix to macro used to register mempool driver.
>>>> The old one is still available but deprecated.
>>>
>>> ODP seems to use its own mempools.
>>>
>>> $ git grep-all -w MEMPOOL_REGISTER_OPS
>>> OpenDataplane/platform/linux-generic/pktio/dpdk.c:MEMPOOL_REGISTER_OPS(odp_pool_ops);
>>>
>>> I'd say it counts as a driver macro.
>>> If so, we could hide it in a driver-only header, along with
>>> rte_mempool_register_ops getting marked as internal.
>>>
>>> $ git grep-all -w rte_mempool_register_ops
>>> FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);
>>> FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);
>>
>> Do I understand correctly that it is required to remove it from
>> stable ABI/API, but still allow external SW to use it?
>>
>> Should I add one more patch to the series?
> 
> If we want to do the full job, we need to inspect driver-only symbols
> in rte_mempool.h.
> But this goes way further than a simple prefixing as this series intended.
> 
> I just read your reply, I think we agree.
> Let's go with simple prefix and take a note to cleanup in the future.

Agreed.



^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro
  2021-10-19  9:04  3%     ` Andrew Rybchenko
  2021-10-19  9:23  0%       ` Andrew Rybchenko
@ 2021-10-19  9:27  0%       ` David Marchand
  2021-10-19  9:38  0%         ` Andrew Rybchenko
  2021-10-19  9:42  0%         ` Thomas Monjalon
  1 sibling, 2 replies; 200+ results
From: David Marchand @ 2021-10-19  9:27 UTC (permalink / raw)
  To: Andrew Rybchenko, Thomas Monjalon
  Cc: Olivier Matz, Ray Kinsella, Artem V. Andreev, Ashwin Sekhar T K,
	Pavan Nikhilesh, Hemant Agrawal, Sachin Saxena, Harman Kalra,
	Jerin Jacob, Nithin Dabilpuram, dev

On Tue, Oct 19, 2021 at 11:05 AM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
>
> On 10/19/21 11:49 AM, David Marchand wrote:
> > On Mon, Oct 18, 2021 at 4:50 PM Andrew Rybchenko
> > <andrew.rybchenko@oktetlabs.ru> wrote:
> >>
> >> Add RTE_ prefix to macro used to register mempool driver.
> >> The old one is still available but deprecated.
> >
> > ODP seems to use its own mempools.
> >
> > $ git grep-all -w MEMPOOL_REGISTER_OPS
> > OpenDataplane/platform/linux-generic/pktio/dpdk.c:MEMPOOL_REGISTER_OPS(odp_pool_ops);
> >
> > I'd say it counts as a driver macro.
> > If so, we could hide it in a driver-only header, along with
> > rte_mempool_register_ops getting marked as internal.
> >
> > $ git grep-all -w rte_mempool_register_ops
> > FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);
> > FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);
>
> Do I understand correctly that it is required to remove it from
> stable ABI/API, but still allow external SW to use it?
>
> Should I add one more patch to the series?

If we want to do the full job, we need to inspect driver-only symbols
in rte_mempool.h.
But this goes way further than a simple prefixing as this series intended.

I just read your reply, I think we agree.
Let's go with simple prefix and take a note to cleanup in the future.
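
For reference, the rename boils down to drivers switching to the
prefixed macro ("ops_stack" below is a made-up ops name):

	/* before -- still accepted, but deprecated: */
	MEMPOOL_REGISTER_OPS(ops_stack);

	/* after: */
	RTE_MEMPOOL_REGISTER_OPS(ops_stack);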


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro
  2021-10-19  9:04  3%     ` Andrew Rybchenko
@ 2021-10-19  9:23  0%       ` Andrew Rybchenko
  2021-10-19  9:27  0%       ` David Marchand
  1 sibling, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-19  9:23 UTC (permalink / raw)
  To: David Marchand
  Cc: Olivier Matz, Ray Kinsella, Artem V. Andreev, Ashwin Sekhar T K,
	Pavan Nikhilesh, Hemant Agrawal, Sachin Saxena, Harman Kalra,
	Jerin Jacob, Nithin Dabilpuram, dev, Thomas Monjalon

On 10/19/21 12:04 PM, Andrew Rybchenko wrote:
> On 10/19/21 11:49 AM, David Marchand wrote:
>> On Mon, Oct 18, 2021 at 4:50 PM Andrew Rybchenko
>> <andrew.rybchenko@oktetlabs.ru> wrote:
>>>
>>> Add RTE_ prefix to macro used to register mempool driver.
>>> The old one is still available but deprecated.
>>
>> ODP seems to use its own mempools.
>>
>> $ git grep-all -w MEMPOOL_REGISTER_OPS
>> OpenDataplane/platform/linux-generic/pktio/dpdk.c:MEMPOOL_REGISTER_OPS(odp_pool_ops);
>>
>> I'd say it counts as a driver macro.
>> If so, we could hide it in a driver-only header, along with
>> rte_mempool_register_ops getting marked as internal.
>>
>> $ git grep-all -w rte_mempool_register_ops
>> FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);
>> FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);
> 
> Do I understand correctly that it is required to remove it from
> stable ABI/API, but still allow external SW to use it?
> 
> Should I add one more patch to the series?
> 

I'm afraid not now. It would be either too invasive or illogical.
Basically it would have to move rte_mempool_ops to the header
as well, but that structure is heavily used by inline functions in
rte_mempool.h.

Of course, it is possible to move just register API
to the mempool_driver.h header, but value of such
changes is not really big.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro
  @ 2021-10-19  9:04  3%     ` Andrew Rybchenko
  2021-10-19  9:23  0%       ` Andrew Rybchenko
  2021-10-19  9:27  0%       ` David Marchand
  0 siblings, 2 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-19  9:04 UTC (permalink / raw)
  To: David Marchand
  Cc: Olivier Matz, Ray Kinsella, Artem V. Andreev, Ashwin Sekhar T K,
	Pavan Nikhilesh, Hemant Agrawal, Sachin Saxena, Harman Kalra,
	Jerin Jacob, Nithin Dabilpuram, dev, Thomas Monjalon

On 10/19/21 11:49 AM, David Marchand wrote:
> On Mon, Oct 18, 2021 at 4:50 PM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
>>
>> Add RTE_ prefix to macro used to register mempool driver.
>> The old one is still available but deprecated.
> 
> ODP seems to use its own mempools.
> 
> $ git grep-all -w MEMPOOL_REGISTER_OPS
> OpenDataplane/platform/linux-generic/pktio/dpdk.c:MEMPOOL_REGISTER_OPS(odp_pool_ops);
> 
> I'd say it counts as a driver macro.
> If so, we could hide it in a driver-only header, along with
> rte_mempool_register_ops getting marked as internal.
> 
> $ git grep-all -w rte_mempool_register_ops
> FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);
> FD.io-VPP/src/plugins/dpdk/buffer.c:  rte_mempool_register_ops (&ops);

Do I understand correctly that it is required to remove it from
stable ABI/API, but still allow external SW to use it?

Should I add one more patch to the series?
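
If it does get hidden later, the usual DPDK pattern would be roughly the
following (a sketch only, not a patch from this series; the header name
is hypothetical):

/* in a driver-only header, e.g. mempool_driver.h: */
__rte_internal
int rte_mempool_register_ops(const struct rte_mempool_ops *ops);

# and in version.map, moved to the INTERNAL section:
INTERNAL {
	global:

	rte_mempool_register_ops;
};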



^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v14 0/5] Add PIE support for HQoS library
  2021-10-15 13:56  0%   ` Dumitrescu, Cristian
@ 2021-10-19  8:26  0%     ` Liguzinski, WojciechX
  0 siblings, 0 replies; 200+ results
From: Liguzinski, WojciechX @ 2021-10-19  8:26 UTC (permalink / raw)
  To: Dumitrescu, Cristian, dev, Singh, Jasvinder; +Cc: Ajmera, Megha

Hi Cristian,

Done.

BR,
Wojtek

-----Original Message-----
From: Dumitrescu, Cristian <cristian.dumitrescu@intel.com> 
Sent: Friday, October 15, 2021 3:57 PM
To: Liguzinski, WojciechX <wojciechx.liguzinski@intel.com>; dev@dpdk.org; Singh, Jasvinder <jasvinder.singh@intel.com>
Cc: Ajmera, Megha <megha.ajmera@intel.com>
Subject: RE: [PATCH v14 0/5] Add PIE support for HQoS library



> -----Original Message-----
> From: Liguzinski, WojciechX <wojciechx.liguzinski@intel.com>
> Sent: Friday, October 15, 2021 9:16 AM
> To: dev@dpdk.org; Singh, Jasvinder <jasvinder.singh@intel.com>; 
> Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
> Cc: Ajmera, Megha <megha.ajmera@intel.com>
> Subject: [PATCH v14 0/5] Add PIE support for HQoS library
> 
> DPDK sched library is equipped with mechanism that secures it from the 
> bufferbloat problem which is a situation when excess buffers in the 
> network cause high latency and latency variation. Currently, it 
> supports RED for active queue management (which is designed to control 
> the queue length but it does not control latency directly and is now 
> being obsoleted).

Please remove the statement that RED is obsolete, as it is not true. Please refer only to the benefits of the new algorithm, without generic negative statements about other algorithms that are not supported by data, thank you!

> However, more advanced queue management is required to
> address this problem
> and provide desirable quality of service to users.
> 
> This solution (RFC) proposes usage of new algorithm called "PIE"
> (Proportional Integral
> controller Enhanced) that can effectively and directly control queuing 
> latency to address the bufferbloat problem.
> 
> The implementation of mentioned functionality includes modification of 
> existing and adding a new set of data structures to the library, 
> adding PIE related APIs.
> This affects structures in public API/ABI. That is why deprecation 
> notice is going to be prepared and sent.
> 
> Liguzinski, WojciechX (5):
>   sched: add PIE based congestion management
>   example/qos_sched: add PIE support
>   example/ip_pipeline: add PIE support
>   doc/guides/prog_guide: added PIE
>   app/test: add tests for PIE
> 
>  app/test/meson.build                         |    4 +
>  app/test/test_pie.c                          | 1065 ++++++++++++++++++
>  config/rte_config.h                          |    1 -
>  doc/guides/prog_guide/glossary.rst           |    3 +
>  doc/guides/prog_guide/qos_framework.rst      |   60 +-
>  doc/guides/prog_guide/traffic_management.rst |   13 +-
>  drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
>  examples/ip_pipeline/tmgr.c                  |  142 +--
>  examples/qos_sched/app_thread.c              |    1 -
>  examples/qos_sched/cfg_file.c                |  111 +-
>  examples/qos_sched/cfg_file.h                |    5 +
>  examples/qos_sched/init.c                    |   27 +-
>  examples/qos_sched/main.h                    |    3 +
>  examples/qos_sched/profile.cfg               |  196 ++--
>  lib/sched/meson.build                        |   10 +-
>  lib/sched/rte_pie.c                          |   86 ++
>  lib/sched/rte_pie.h                          |  398 +++++++
>  lib/sched/rte_sched.c                        |  240 ++--
>  lib/sched/rte_sched.h                        |   63 +-
>  lib/sched/version.map                        |    3 +
>  20 files changed, 2161 insertions(+), 276 deletions(-)
>  create mode 100644 app/test/test_pie.c
>  create mode 100644 lib/sched/rte_pie.c
>  create mode 100644 lib/sched/rte_pie.h
> 
> --
> 2.25.1


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v15 0/5] Add PIE support for HQoS library
  2021-10-15  8:16  3% ` [dpdk-dev] [PATCH v14 " Liguzinski, WojciechX
  2021-10-15 13:56  0%   ` Dumitrescu, Cristian
@ 2021-10-19  8:18  3%   ` Liguzinski, WojciechX
  2021-10-19 12:18  0%     ` Dumitrescu, Cristian
  2021-10-19 12:45  3%     ` [dpdk-dev] [PATCH v16 " Liguzinski, WojciechX
  1 sibling, 2 replies; 200+ results
From: Liguzinski, WojciechX @ 2021-10-19  8:18 UTC (permalink / raw)
  To: dev, jasvinder.singh, cristian.dumitrescu; +Cc: megha.ajmera

The DPDK sched library is equipped with a mechanism that protects it from the
bufferbloat problem, a situation in which excess buffers in the network cause
high latency and latency variation. Currently, it supports RED for active
queue management. However, more advanced queue management is required to
address this problem and provide desirable quality of service to users.

This solution (RFC) proposes usage of a new algorithm called "PIE"
(Proportional Integral controller Enhanced) that can effectively and directly
control queuing latency to address the bufferbloat problem.

The implementation of the mentioned functionality includes modification of
existing data structures, addition of a new set of data structures to the
library, and new PIE-related APIs.
This affects structures in the public API/ABI. That is why a deprecation
notice is going to be prepared and sent.
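
For reference, the core of the PIE controller that the series implements
is the drop-probability update from RFC 8033, section 4.2. A minimal
sketch (illustrative parameter names, not the code from
lib/sched/rte_pie.c):

static void
pie_drop_prob_update(double qdelay, double qdelay_old, double qdelay_ref,
		     double alpha, double beta, double *drop_prob)
{
	/* proportional term on the current latency error, plus a term
	 * on the latency trend since the last update */
	*drop_prob += alpha * (qdelay - qdelay_ref) +
		      beta * (qdelay - qdelay_old);

	/* clamp to a valid probability */
	if (*drop_prob < 0.0)
		*drop_prob = 0.0;
	else if (*drop_prob > 1.0)
		*drop_prob = 1.0;
}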

Liguzinski, WojciechX (5):
  sched: add PIE based congestion management
  example/qos_sched: add PIE support
  example/ip_pipeline: add PIE support
  doc/guides/prog_guide: added PIE
  app/test: add tests for PIE

 app/test/meson.build                         |    4 +
 app/test/test_pie.c                          | 1065 ++++++++++++++++++
 config/rte_config.h                          |    1 -
 doc/guides/prog_guide/glossary.rst           |    3 +
 doc/guides/prog_guide/qos_framework.rst      |   60 +-
 doc/guides/prog_guide/traffic_management.rst |   13 +-
 drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
 examples/ip_pipeline/tmgr.c                  |  142 +--
 examples/qos_sched/app_thread.c              |    1 -
 examples/qos_sched/cfg_file.c                |  127 ++-
 examples/qos_sched/cfg_file.h                |    5 +
 examples/qos_sched/init.c                    |   27 +-
 examples/qos_sched/main.h                    |    3 +
 examples/qos_sched/profile.cfg               |  196 ++--
 lib/sched/meson.build                        |   10 +-
 lib/sched/rte_pie.c                          |   86 ++
 lib/sched/rte_pie.h                          |  398 +++++++
 lib/sched/rte_sched.c                        |  241 ++--
 lib/sched/rte_sched.h                        |   63 +-
 lib/sched/version.map                        |    4 +
 20 files changed, 2171 insertions(+), 284 deletions(-)
 create mode 100644 app/test/test_pie.c
 create mode 100644 lib/sched/rte_pie.c
 create mode 100644 lib/sched/rte_pie.h

-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v3] test/hash: fix buffer overflow
  2021-10-15 13:02  3%     ` Medvedkin, Vladimir
@ 2021-10-19  7:02  3%       ` David Marchand
  2021-10-19 15:57  0%         ` Medvedkin, Vladimir
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-10-19  7:02 UTC (permalink / raw)
  To: Medvedkin, Vladimir
  Cc: dev, Wang, Yipeng1, Gobriel, Sameh, Bruce Richardson, dpdk stable

On Fri, Oct 15, 2021 at 3:02 PM Medvedkin, Vladimir
<vladimir.medvedkin@intel.com> wrote:
> > I am confused.
> > Does it mean that rte_jhash_32b is not compliant with rte_hash_create API?
> >
>
> I think so too, because despite the fact that the ABI is the same, the
> API remains different with respect to the length argument.

Sorry, I don't follow what you mean by "ABI is the same".
Can you explain, please?


I am not against the fix, but it seems to test something different
than what an application using the hash library would do.
Or if an application directly calls this hash function, maybe the unit
test should not test it via rte_hash_create (which seems to defeat the
abstraction).
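
To illustrate the length-argument mismatch (hypothetical key buffer;
length semantics as documented in rte_jhash.h):

#include <rte_common.h>
#include <rte_jhash.h>

static void
jhash_length_units(void)
{
	uint32_t key[4] = {1, 2, 3, 4};

	/* same bytes hashed, but the length is in different units */
	uint32_t h1 = rte_jhash(key, sizeof(key), 0);      /* 16 bytes */
	uint32_t h2 = rte_jhash_32b(key, RTE_DIM(key), 0); /* 4 words  */

	RTE_SET_USED(h1);
	RTE_SET_USED(h2);
}

rte_hash_create() calls the configured hash function with a length in
bytes, so plugging in rte_jhash_32b makes it read 4x beyond the key.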


-- 
David Marchand


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [EXT] Re: [PATCH v4 14/14] eventdev: mark trace variables as internal
  2021-10-18 15:06  0%       ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
@ 2021-10-19  7:01  3%         ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-10-19  7:01 UTC (permalink / raw)
  To: Pavan Nikhilesh Bhagavatula, Ray Kinsella
  Cc: Jerin Jacob, Jerin Jacob Kollanukkaran, dpdk-dev

Hello Pavan,

On Mon, Oct 18, 2021 at 5:07 PM Pavan Nikhilesh Bhagavatula
<pbhagavatula@marvell.com> wrote:
> >[for-main]dell[dpdk-next-eventdev] $ ./devtools/checkpatches.sh -n
> >14
> >
> >### eventdev: move inline APIs into separate structure
> >
> >INFO: symbol event_dev_fp_ops_reset has been added to the
> >INTERNAL
> >section of the version map
> >INFO: symbol event_dev_fp_ops_set has been added to the INTERNAL
> >section of the version map
> >INFO: symbol event_dev_probing_finish has been added to the
> >INTERNAL
> >section of the version map
>
> These can be ignored as they are internal

Those first warnings are informational.

>
> >ERROR: symbol rte_event_fp_ops is added in the DPDK_22 section, but
> >is
> >expected to be added in the EXPERIMENTAL section of the version map
>
> This is a replacement for rte_eventdevs, ethdev rework also doesn’t mark
> it as experimental. @David Marchand @Ray Kinsella any opinions?

This check is there to ensure that added symbols first go through a
period in experimental status.

Same as for ethdev, the use of inlines in stable API directly exposes
a new symbol to applications.
With this implementation, this check can be waived and the symbol can
go directly to stable status.

This symbol being exposed as stable, it will be frozen in ABI until
next breakage.
I see you reserved 6 spots for new ops, so it looks ok.
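
In version.map terms, the normal lifecycle looks roughly like this
(rte_event_fp_ops is from this series; the other symbol name is made up):

DPDK_22 {
	global:

	rte_event_fp_ops;    # stable immediately, due to use from inlines

	local: *;
};

EXPERIMENTAL {
	global:

	rte_event_new_api;   # hypothetical: new symbols start here
};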



-- 
David Marchand


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] ring: fix size of name array in ring structure
  2021-10-18 14:54  0% ` Honnappa Nagarahalli
@ 2021-10-19  7:01  0%   ` Tu, Lijuan
  0 siblings, 0 replies; 200+ results
From: Tu, Lijuan @ 2021-10-19  7:01 UTC (permalink / raw)
  To: Honnappa Nagarahalli, dev, andrew.rybchenko, Ananyev, Konstantin,
	ci, Lincoln Lavoie, dpdklab
  Cc: nd, zoltan.kiss, nd

I saw a lot of patches failing on test_scatter_mbuf_2048, while the case passes in the Intel lab.

This is maintained by the UNL team.

+ unl.

> -----Original Message-----
> From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Sent: 2021年10月18日 22:54
> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; dev@dpdk.org;
> andrew.rybchenko@oktetlabs.ru; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Cc: nd <nd@arm.com>; zoltan.kiss@schaman.hu; Tu, Lijuan
> <lijuan.tu@intel.com>; nd <nd@arm.com>
> Subject: RE: [PATCH] ring: fix size of name array in ring structure
> 
> This patch has a CI failure in DTS in test_scatter_mbuf_2048 for Fortville_Spirit
> NIC. I am not sure how this change is related to the failure. The log is as follows:
> 
> TestScatter: Test Case test_scatter_mbuf_2048 Result FAILED: 'packet receive
> error'
> 
> Has anyone seen this error? Is this a known issue?
> 
> Thanks,
> Honnappa
> 
> > -----Original Message-----
> > From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Sent: Thursday, October 14, 2021 3:56 PM
> > To: dev@dpdk.org; Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>;
> > andrew.rybchenko@oktetlabs.ru; konstantin.ananyev@intel.com
> > Cc: nd <nd@arm.com>; zoltan.kiss@schaman.hu
> > Subject: [PATCH] ring: fix size of name array in ring structure
> >
> > Use correct define for the name array size. The change breaks ABI and
> > hence cannot be backported to stable branches.
> >
> > Fixes: 38c9817ee1d8 ("mempool: adjust name size in related data
> > types")
> > Cc: zoltan.kiss@schaman.hu
> >
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > ---
> >  lib/ring/rte_ring_core.h | 7 +------
> >  1 file changed, 1 insertion(+), 6 deletions(-)
> >
> > diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h index
> > 31f7200fa9..46ad584f9c 100644
> > --- a/lib/ring/rte_ring_core.h
> > +++ b/lib/ring/rte_ring_core.h
> > @@ -118,12 +118,7 @@ struct rte_ring_hts_headtail {
> >   * a problem.
> >   */
> >  struct rte_ring {
> > -	/*
> > -	 * Note: this field kept the RTE_MEMZONE_NAMESIZE size due to ABI
> > -	 * compatibility requirements, it could be changed to
> > RTE_RING_NAMESIZE
> > -	 * next time the ABI changes
> > -	 */
> > -	char name[RTE_MEMZONE_NAMESIZE] __rte_cache_aligned;
> > +	char name[RTE_RING_NAMESIZE] __rte_cache_aligned;
> >  	/**< Name of the ring. */
> >  	int flags;               /**< Flags supplied at creation. */
> >  	const struct rte_memzone *memzone;
> > --
> > 2.25.1


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2 1/5] hash: add new toeplitz hash implementation
  2021-10-18 10:40  3%     ` Ananyev, Konstantin
@ 2021-10-19  1:15  0%       ` Stephen Hemminger
  2021-10-19 15:42  0%         ` Medvedkin, Vladimir
  0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-10-19  1:15 UTC (permalink / raw)
  To: Ananyev, Konstantin
  Cc: Medvedkin, Vladimir, dev, Wang, Yipeng1, Gobriel, Sameh,
	Richardson, Bruce

On Mon, 18 Oct 2021 10:40:00 +0000
"Ananyev, Konstantin" <konstantin.ananyev@intel.com> wrote:

> > On Fri, 15 Oct 2021 10:30:02 +0100
> > Vladimir Medvedkin <vladimir.medvedkin@intel.com> wrote:
> >   
> > > +			m[i * 8 + j] = (rss_key[i] << j)|
> > > +				(uint8_t)((uint16_t)(rss_key[i + 1]) >>
> > > +				(8 - j));
> > > +		}  
> > 
> > This ends up being harder than necessary to read. Maybe split into
> > multiple statements and/or use temporary variable.
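
(A possible split along those lines, purely illustrative, assuming m
holds bytes as in the patch:)

uint8_t hi = (uint8_t)(rss_key[i] << j);
uint8_t lo = (uint8_t)((uint16_t)rss_key[i + 1] >> (8 - j));

m[i * 8 + j] = hi | lo;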
> >   
> > > +RTE_INIT(rte_thash_gfni_init)
> > > +{
> > > +	rte_thash_gfni_supported = 0;  
> > 
> > Not necessary in C globals are initialized to zero by default.
> > 
> > By removing that the constructor can be totally behind #ifdef
> >   
> > > +__rte_internal
> > > +static inline __m512i
> > > +__rte_thash_gfni(const uint64_t *mtrx, const uint8_t *tuple,
> > > +	const uint8_t *secondary_tuple, int len)
> > > +{
> > > +	__m512i permute_idx = _mm512_set_epi8(7, 6, 5, 4, 7, 6, 5, 4,
> > > +						6, 5, 4, 3, 6, 5, 4, 3,
> > > +						5, 4, 3, 2, 5, 4, 3, 2,
> > > +						4, 3, 2, 1, 4, 3, 2, 1,
> > > +						3, 2, 1, 0, 3, 2, 1, 0,
> > > +						2, 1, 0, -1, 2, 1, 0, -1,
> > > +						1, 0, -1, -2, 1, 0, -1, -2,
> > > +						0, -1, -2, -3, 0, -1, -2, -3);  
> > 
> > NAK
> > 
> > Please don't put the implementation in an inline. This makes it harder
> > to support (API/ABI) and blocks other architectures from implementing
> > same thing with different instructions.  
> 
> I don't really understand your reasoning here.
> rte_thash_gfni.h is an arch-specific header, which provides
> arch-specific optimizations for RSS hash calculation
> (Vladimir pls correct me if I am wrong here).

Ok, but rte_thash_gfni.h is included on all architectures.

> We do have dozens of inline functions that do use arch-specific instructions (both x86 and arm)
> for different purposes:
> sync primitives, memory-ordering, cache manipulations, LPM lookup, TSX, power-saving, etc.
> That's a usual trade-off taken for performance reasons, when extra function call
> costs too much comparing to the operation itself.
> Why it suddenly became a problem for that particular case and how exactly it blocks other architectures?
> Also I don't understand how it makes things harder in terms of API/ABI stability.
> As I can see this patch doesn't introduce any public structs/unions.
> All functions take as arguments just raw data buffers and length.
> To summarize - in general, I don't see any good reason why this patch shouldn't be allowed.
> Konstantin

The comments about rte_thash_gfni_supported initialization still apply.
Why not:

#ifdef __GFNI__
RTE_INIT(rte_thash_gfni_init)
{
	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_GFNI))
		rte_thash_gfni_supported = 1;
}
#endif

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v6 0/6] hide eth dev related structures
  2021-10-18 16:47  0%       ` Ferruh Yigit
@ 2021-10-18 23:47  0%         ` Ajit Khaparde
  0 siblings, 0 replies; 200+ results
From: Ajit Khaparde @ 2021-10-18 23:47 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Ali Alnubani, Konstantin Ananyev, dev, jerinj, Raslan Darawsheh,
	Andrew Rybchenko, Qi Zhang, Honnappa Nagarahalli, xiaoyun.li,
	anoobj, ndabilpuram, adwivedi, shepard.siegel, ed.czeck,
	john.miller, irusskikh, somnath.kotur, rahul.lakkireddy,
	hemant.agrawal, sachin.saxena, haiyue.wang, johndale, hyonkim,
	xiao.w.wang, humin29, yisen.zhuang, oulijun, beilei.xing,
	jingjing.wu, qiming.yang, Matan Azrad, Slava Ovsiienko, sthemmin,
	NBU-Contact-longli, heinrich.kuhn, kirankumark, mczekaj,
	jiawenwu, jianwang, maxime.coquelin, chenbo.xia,
	NBU-Contact-Thomas Monjalon, mdr, jay.jayatheerthan

On Mon, Oct 18, 2021 at 9:47 AM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> On 10/18/2021 5:04 PM, Ali Alnubani wrote:
> >> -----Original Message-----
> >> From: dev <dev-bounces@dpdk.org> On Behalf Of Ferruh Yigit
> >> Sent: Wednesday, October 13, 2021 11:16 PM
> >> To: Konstantin Ananyev <konstantin.ananyev@intel.com>; dev@dpdk.org;
> >> jerinj@marvell.com; Ajit Khaparde <ajit.khaparde@broadcom.com>; Raslan
> >> Darawsheh <rasland@nvidia.com>; Andrew Rybchenko
> >> <andrew.rybchenko@oktetlabs.ru>; Qi Zhang <qi.z.zhang@intel.com>;
> >> Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> >> Cc: xiaoyun.li@intel.com; anoobj@marvell.com; jerinj@marvell.com;
> >> ndabilpuram@marvell.com; adwivedi@marvell.com;
> >> shepard.siegel@atomicrules.com; ed.czeck@atomicrules.com;
> >> john.miller@atomicrules.com; irusskikh@marvell.com;
> >> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
> >> rahul.lakkireddy@chelsio.com; hemant.agrawal@nxp.com;
> >> sachin.saxena@oss.nxp.com; haiyue.wang@intel.com; johndale@cisco.com;
> >> hyonkim@cisco.com; qi.z.zhang@intel.com; xiao.w.wang@intel.com;
> >> humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com;
> >> beilei.xing@intel.com; jingjing.wu@intel.com; qiming.yang@intel.com;
> >> Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> >> <viacheslavo@nvidia.com>; sthemmin@microsoft.com; NBU-Contact-longli
> >> <longli@microsoft.com>; heinrich.kuhn@corigine.com;
> >> kirankumark@marvell.com; andrew.rybchenko@oktetlabs.ru;
> >> mczekaj@marvell.com; jiawenwu@trustnetic.com;
> >> jianwang@trustnetic.com; maxime.coquelin@redhat.com;
> >> chenbo.xia@intel.com; NBU-Contact-Thomas Monjalon
> >> <thomas@monjalon.net>; mdr@ashroe.eu; jay.jayatheerthan@intel.com
> >> Subject: Re: [dpdk-dev] [PATCH v6 0/6] hide eth dev related structures
> >>
> >> On 10/13/2021 2:36 PM, Konstantin Ananyev wrote:
> >>> v6 changes:
> >>> - Update comments (Andrew)
> >>> - Move callback related variables under corresponding ifdefs (Andrew)
> >>> - Few nits in rte_eth_macaddrs_get (Andrew)
> >>> - Rebased on top of next-net tree
> >>>
> >>> v5 changes:
> >>> - Fix spelling (Thomas/David)
> >>> - Rename internal helper functions (David)
> >>> - Reorder patches and update commit messages (Thomas)
> >>> - Update comments (Thomas)
> >>> - Changed layout in rte_eth_fp_ops, to group functions and
> >>>      related data based on their functionality:
> >>>      first 64B line for Rx, second one for Tx.
> >>>      Didn't observe any real performance difference comparing to
> >>>      original layout. Though decided to keep a new one, as it seems
> >>>      a bit more plausible.
> >>>
> >>> v4 changes:
> >>>    - Fix secondary process attach (Pavan)
> >>>    - Fix build failure (Ferruh)
> >>>    - Update lib/ethdev/verion.map (Ferruh)
> >>>      Note that moving newly added symbols from EXPERIMENTAL to DPDK_22
> >>>      section makes checkpatch.sh to complain.
> >>>
> >>> v3 changes:
> >>>    - Changes in public struct naming (Jerin/Haiyue)
> >>>    - Split patches
> >>>    - Update docs
> >>>    - Shamelessly included Andrew's patch:
> >>>      https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-
> >> 1-andrew.rybchenko@oktetlabs.ru/
> >>>      into these series.
> >>>      I have to do similar thing here, so decided to avoid duplicated effort.
> >>>
> >>> The aim of these patch series is to make rte_ethdev core data structures
> >>> (rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
> >>> DPDK and not visible to the user.
> >>> That should allow future possible changes to core ethdev related structures
> >>> to be transparent to the user and help to improve ABI/API stability.
> >>> Note that current ethdev API is preserved, but it is a formal ABI break.
> >>>
> >>> The work is based on previous discussions at:
> >>> https://www.mail-archive.com/dev@dpdk.org/msg211405.html
> >>> https://www.mail-archive.com/dev@dpdk.org/msg216685.html
> >>> and consists of the following main points:
> >>> 1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
> >>>      related data pointer from rte_eth_dev into a separate flat array.
> >>>      We keep it public to still be able to use inline functions for these
> >>>      'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> >>>      Note that apart from function pointers itself, each element of this
> >>>      flat array also contains two opaque pointers for each ethdev:
> >>>      1) a pointer to an array of internal queue data pointers
> >>>      2)  points to array of queue callback data pointers.
> >>>      Note that exposing this extra information allows us to avoid extra
> >>>      changes inside PMD level, plus should help to avoid possible
> >>>      performance degradation.
> >>> 2. Change implementation of 'fast' inline ethdev functions
> >>>      (rte_eth_rx_burst(), etc.) to use new public flat array.
> >>>      While it is an ABI breakage, this change is intended to be transparent
> >>>      for both users (no changes in user app is required) and PMD developers
> >>>      (no changes in PMD is required).
> >>>      One extra note - with new implementation RX/TX callback invocation
> >>>      will cost one extra function call with this changes. That might cause
> >>>      some slowdown for code-path with RX/TX callbacks heavily involved.
> >>>      Hope such trade-off is acceptable for the community.
> >>> 3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and
> >> related
> >>>      things into internal header: <ethdev_driver.h>.
> >>>
> >>> That approach was selected to:
> >>>     - Avoid(/minimize) possible performance losses.
> >>>     - Minimize required changes inside PMDs.
> >>>
> >>> Performance testing results (ICX 2.0GHz, E810 (ice)):
> >>>    - testpmd macswap fwd mode, plus
> >>>      a) no RX/TX callbacks:
> >>>         no actual slowdown observed
> >>>      b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
> >>>         ~2% slowdown
> >>>    - l3fwd: no actual slowdown observed
> >>>
> >>> Would like to thank everyone who already reviewed and tested previous
> >>> versions of these series. All other interested parties please don't be shy
> >>> and provide your feedback.
> >>>
> >>> Konstantin Ananyev (6):
> >>>     ethdev: allocate max space for internal queue array
> >>>     ethdev: change input parameters for rx_queue_count
> >>>     ethdev: copy fast-path API into separate structure
> >>>     ethdev: make fast-path functions to use new flat array
> >>>     ethdev: add API to retrieve multiple ethernet addresses
> >>>     ethdev: hide eth dev related structures
> >>>
> >>
> >> For series,
> >> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
> >>
> >> No performance regression detected on my testing.
> >>
> >> I am merging the series to next-net now which helps testing,
> >> but before merging to main repo it will be good to get more
> >> ack and test results (I can squash new tags later).
> >>
> >> @Jerin, @Ajit, @Raslan, @Andrew, @Qi, @Honnappa,
> >> Can you please test this set for any possible regression?
> >>
> >> Series applied to dpdk-next-net/main, thanks.
> >>
> >
> > Tested (on dpdk-next-net/main) single and multi-core packet forwarding performance with testpmd on both ConnectX-5 and ConnectX-6 Dx. I didn't see any noticeable regressions.
> >
>
> Thanks!
>
> At this stage I am putting set to pull request for main repo.
> Last day for anyone who wants to test the set.
I tested dpdk-next-net/main. My testing was limited and did not reveal
any issues.
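
For anyone skimming the thread, points 1 and 2 of the quoted cover
letter boil down to the following shape (a simplified sketch with
approximate field names, not the exact lib/ethdev code):

static inline uint16_t
sketch_rx_burst(uint16_t port_id, uint16_t queue_id,
		struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
{
	/* public flat array replaces direct access to rte_eth_devices[] */
	const struct rte_eth_fp_ops *p = &rte_eth_fp_ops[port_id];
	void *qd = p->rxq.data[queue_id];	/* opaque queue data */

	return p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
}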

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v5 12/14] eventdev: promote event vector API to stable
                         ` (2 preceding siblings ...)
  2021-10-18 23:36  4%     ` [dpdk-dev] [PATCH v5 11/14] eventdev: move timer adapters memory to hugepage pbhagavatula
@ 2021-10-18 23:36  4%     ` pbhagavatula
  3 siblings, 0 replies; 200+ results
From: pbhagavatula @ 2021-10-18 23:36 UTC (permalink / raw)
  To: jerinj, Jay Jayatheerthan, Ray Kinsella; +Cc: dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Promote event vector configuration APIs to stable.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
 doc/guides/rel_notes/release_21_11.rst  | 2 ++
 lib/eventdev/rte_event_eth_rx_adapter.h | 1 -
 lib/eventdev/rte_eventdev.h             | 1 -
 lib/eventdev/version.map                | 4 ++--
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 9694b32002..57389dc594 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -229,6 +229,8 @@ API Changes
 * eventdev: Move memory used by timer adapters to hugepage. This will prevent
   TLB misses if any and aligns to memory structure of other subsystems.
 
+* eventdev: Event vector configuration APIs have been made stable.
+
 ABI Changes
 -----------
 
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h b/lib/eventdev/rte_event_eth_rx_adapter.h
index c4257e750d..ab625f7273 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.h
+++ b/lib/eventdev/rte_event_eth_rx_adapter.h
@@ -588,7 +588,6 @@ int rte_event_eth_rx_adapter_cb_register(uint8_t id, uint16_t eth_dev_id,
  *  - 0: Success.
  *  - <0: Error code on failure.
  */
-__rte_experimental
 int rte_event_eth_rx_adapter_vector_limits_get(
 	uint8_t dev_id, uint16_t eth_port_id,
 	struct rte_event_eth_rx_adapter_vector_limits *limits);
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index f1fcd6ce3d..14d4d9ec81 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1734,7 +1734,6 @@ int rte_event_dev_selftest(uint8_t dev_id);
  *    - ENOMEM - no appropriate memory area found in which to create memzone
  *    - ENAMETOOLONG - mempool name requested is too long.
  */
-__rte_experimental
 struct rte_mempool *
 rte_event_vector_pool_create(const char *name, unsigned int n,
 			     unsigned int cache_size, uint16_t nb_elem,
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 9f6eb4ba3c..8f2fb0cf14 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -42,6 +42,7 @@ DPDK_22 {
 	rte_event_eth_rx_adapter_start;
 	rte_event_eth_rx_adapter_stats_get;
 	rte_event_eth_rx_adapter_stats_reset;
+	rte_event_eth_rx_adapter_vector_limits_get;
 	rte_event_eth_rx_adapter_stop;
 	rte_event_eth_tx_adapter_caps_get;
 	rte_event_eth_tx_adapter_create;
@@ -83,6 +84,7 @@ DPDK_22 {
 	rte_event_timer_arm_burst;
 	rte_event_timer_arm_tmo_tick_burst;
 	rte_event_timer_cancel_burst;
+	rte_event_vector_pool_create;
 
 	#added in 21.11
 	rte_event_fp_ops;
@@ -136,8 +138,6 @@ EXPERIMENTAL {
 	rte_event_eth_rx_adapter_create_with_params;
 
 	#added in 21.05
-	rte_event_vector_pool_create;
-	rte_event_eth_rx_adapter_vector_limits_get;
 	__rte_eventdev_trace_crypto_adapter_enqueue;
 	rte_event_eth_rx_adapter_queue_conf_get;
 };
-- 
2.17.1


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v5 11/14] eventdev: move timer adapters memory to hugepage
    2021-10-18 23:35  6%     ` [dpdk-dev] [PATCH v5 04/14] eventdev: move inline APIs into separate structure pbhagavatula
  2021-10-18 23:36  5%     ` [dpdk-dev] [PATCH v5 10/14] eventdev: rearrange fields in timer object pbhagavatula
@ 2021-10-18 23:36  4%     ` pbhagavatula
  2021-10-20 20:24  0%       ` Carrillo, Erik G
  2021-10-18 23:36  4%     ` [dpdk-dev] [PATCH v5 12/14] eventdev: promote event vector API to stable pbhagavatula
  3 siblings, 1 reply; 200+ results
From: pbhagavatula @ 2021-10-18 23:36 UTC (permalink / raw)
  To: jerinj, Erik Gabriel Carrillo; +Cc: dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Move memory used by timer adapters to hugepage.
Allocate memory on the first adapter create or lookup to address
both primary and secondary process usecases.
This will prevent TLB misses if any and aligns to memory structure
of other subsystems.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 doc/guides/rel_notes/release_21_11.rst |  2 ++
 lib/eventdev/rte_event_timer_adapter.c | 36 ++++++++++++++++++++++++--
 2 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 6442c79977..9694b32002 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -226,6 +226,8 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* eventdev: Move memory used by timer adapters to hugepage. This will prevent
+  TLB misses if any and aligns to memory structure of other subsystems.
 
 ABI Changes
 -----------
diff --git a/lib/eventdev/rte_event_timer_adapter.c b/lib/eventdev/rte_event_timer_adapter.c
index ae55407042..894f532ef0 100644
--- a/lib/eventdev/rte_event_timer_adapter.c
+++ b/lib/eventdev/rte_event_timer_adapter.c
@@ -33,7 +33,7 @@ RTE_LOG_REGISTER_SUFFIX(evtim_logtype, adapter.timer, NOTICE);
 RTE_LOG_REGISTER_SUFFIX(evtim_buffer_logtype, adapter.timer, NOTICE);
 RTE_LOG_REGISTER_SUFFIX(evtim_svc_logtype, adapter.timer.svc, NOTICE);
 
-static struct rte_event_timer_adapter adapters[RTE_EVENT_TIMER_ADAPTER_NUM_MAX];
+static struct rte_event_timer_adapter *adapters;
 
 static const struct event_timer_adapter_ops swtim_ops;
 
@@ -138,6 +138,17 @@ rte_event_timer_adapter_create_ext(
 	int n, ret;
 	struct rte_eventdev *dev;
 
+	if (adapters == NULL) {
+		adapters = rte_zmalloc("Eventdev",
+				       sizeof(struct rte_event_timer_adapter) *
+					       RTE_EVENT_TIMER_ADAPTER_NUM_MAX,
+				       RTE_CACHE_LINE_SIZE);
+		if (adapters == NULL) {
+			rte_errno = ENOMEM;
+			return NULL;
+		}
+	}
+
 	if (conf == NULL) {
 		rte_errno = EINVAL;
 		return NULL;
@@ -312,6 +323,17 @@ rte_event_timer_adapter_lookup(uint16_t adapter_id)
 	int ret;
 	struct rte_eventdev *dev;
 
+	if (adapters == NULL) {
+		adapters = rte_zmalloc("Eventdev",
+				       sizeof(struct rte_event_timer_adapter) *
+					       RTE_EVENT_TIMER_ADAPTER_NUM_MAX,
+				       RTE_CACHE_LINE_SIZE);
+		if (adapters == NULL) {
+			rte_errno = ENOMEM;
+			return NULL;
+		}
+	}
+
 	if (adapters[adapter_id].allocated)
 		return &adapters[adapter_id]; /* Adapter is already loaded */
 
@@ -358,7 +380,7 @@ rte_event_timer_adapter_lookup(uint16_t adapter_id)
 int
 rte_event_timer_adapter_free(struct rte_event_timer_adapter *adapter)
 {
-	int ret;
+	int i, ret;
 
 	ADAPTER_VALID_OR_ERR_RET(adapter, -EINVAL);
 	FUNC_PTR_OR_ERR_RET(adapter->ops->uninit, -EINVAL);
@@ -382,6 +404,16 @@ rte_event_timer_adapter_free(struct rte_event_timer_adapter *adapter)
 	adapter->data = NULL;
 	adapter->allocated = 0;
 
+	ret = 0;
+	for (i = 0; i < RTE_EVENT_TIMER_ADAPTER_NUM_MAX; i++)
+			ret = adapters[i].allocated;
+			ret = adapter[i].allocated;
+
+	if (!ret) {
+		rte_free(adapters);
+		adapters = NULL;
+	}
+
 	rte_eventdev_trace_timer_adapter_free(adapter);
 	return 0;
 }
-- 
2.17.1


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v5 10/14] eventdev: rearrange fields in timer object
    2021-10-18 23:35  6%     ` [dpdk-dev] [PATCH v5 04/14] eventdev: move inline APIs into separate structure pbhagavatula
@ 2021-10-18 23:36  5%     ` pbhagavatula
  2021-10-18 23:36  4%     ` [dpdk-dev] [PATCH v5 11/14] eventdev: move timer adapters memory to hugepage pbhagavatula
  2021-10-18 23:36  4%     ` [dpdk-dev] [PATCH v5 12/14] eventdev: promote event vector API to stable pbhagavatula
  3 siblings, 0 replies; 200+ results
From: pbhagavatula @ 2021-10-18 23:36 UTC (permalink / raw)
  To: jerinj, Erik Gabriel Carrillo; +Cc: dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Rearrange fields in rte_event_timer data structure to remove holes.
Also, remove use of volatile from rte_event_timer.
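
To illustrate the holes (stand-in structs on a typical LP64 ABI, not
the real rte_event_timer; pahole(1) reports the same information):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct before {
	uint64_t a;
	int state;	/* 4 bytes + 4 bytes of padding before 'b' */
	uint64_t b;
	uint8_t meta[];	/* starts at offset 24 */
};

struct after {
	uint64_t a;
	uint64_t b;
	int state;	/* no interior hole any more... */
	uint8_t meta[];	/* ...so this starts at offset 20 */
};

int main(void)
{
	printf("meta offset: before=%zu, after=%zu\n",
	       offsetof(struct before, meta),
	       offsetof(struct after, meta));
	return 0;
}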

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 doc/guides/rel_notes/release_21_11.rst | 3 +++
 lib/eventdev/rte_event_timer_adapter.h | 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index b4e1770d4d..6442c79977 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -283,6 +283,9 @@ ABI Changes
   accessed directly by user any more. This change is transparent to both
   applications and PMDs.
 
+* eventdev: Re-arrange fields in ``rte_event_timer`` to remove holes.
+  ``rte_event_timer_adapter_pmd.h`` has been made internal.
+
 
 Known Issues
 ------------
diff --git a/lib/eventdev/rte_event_timer_adapter.h b/lib/eventdev/rte_event_timer_adapter.h
index cad6d3b4c5..1551741820 100644
--- a/lib/eventdev/rte_event_timer_adapter.h
+++ b/lib/eventdev/rte_event_timer_adapter.h
@@ -475,8 +475,6 @@ struct rte_event_timer {
 	 *  - op: RTE_EVENT_OP_NEW
 	 *  - event_type: RTE_EVENT_TYPE_TIMER
 	 */
-	volatile enum rte_event_timer_state state;
-	/**< State of the event timer. */
 	uint64_t timeout_ticks;
 	/**< Expiry timer ticks expressed in number of *timer_ticks_ns* from
 	 * now.
@@ -488,6 +486,8 @@ struct rte_event_timer {
 	 * implementation specific values to share between the arm and cancel
 	 * operations.  The application should not modify this field.
 	 */
+	enum rte_event_timer_state state;
+	/**< State of the event timer. */
 	uint8_t user_meta[0];
 	/**< Memory to store user specific metadata.
 	 * The event timer adapter implementation should not modify this area.
-- 
2.17.1


^ permalink raw reply	[relevance 5%]

* [dpdk-dev] [PATCH v5 04/14] eventdev: move inline APIs into separate structure
  @ 2021-10-18 23:35  6%     ` pbhagavatula
  2021-10-18 23:36  5%     ` [dpdk-dev] [PATCH v5 10/14] eventdev: rearrange fields in timer object pbhagavatula
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 200+ results
From: pbhagavatula @ 2021-10-18 23:35 UTC (permalink / raw)
  To: jerinj, Ray Kinsella; +Cc: dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Move fastpath inline function pointers from rte_eventdev into a
separate structure accessed via a flat array.
The intention is to make rte_eventdev and related structures private
to avoid future API/ABI breakages.
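
After this change, a fastpath inline call consumes the flat array along
these lines (a simplified sketch, not the verbatim rte_eventdev.h code):

static inline uint16_t
sketch_enqueue_burst(uint8_t dev_id, uint8_t port_id,
		     const struct rte_event ev[], uint16_t nb_events)
{
	const struct rte_event_fp_ops *fp = &rte_event_fp_ops[dev_id];

	/* fp->data[] holds the opaque per-port pointers */
	return fp->enqueue_burst(fp->data[port_id], ev, nb_events);
}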

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
 doc/guides/rel_notes/release_21_11.rst |   6 ++
 lib/eventdev/eventdev_pmd.h            |  38 +++++++++
 lib/eventdev/eventdev_pmd_pci.h        |   4 +-
 lib/eventdev/eventdev_private.c        | 112 +++++++++++++++++++++++++
 lib/eventdev/meson.build               |  21 ++---
 lib/eventdev/rte_eventdev.c            |  22 ++++-
 lib/eventdev/rte_eventdev_core.h       |  26 ++++++
 lib/eventdev/version.map               |   6 ++
 8 files changed, 223 insertions(+), 12 deletions(-)
 create mode 100644 lib/eventdev/eventdev_private.c

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 38e601c236..b4e1770d4d 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -277,6 +277,12 @@ ABI Changes
   were added in structure ``rte_event_eth_rx_adapter_stats`` to get additional
   status.
 
+* eventdev: A new structure ``rte_event_fp_ops`` has been added which is now used
+  by the fastpath inline functions. The structures ``rte_eventdev``,
+  ``rte_eventdev_data`` have been made internal. ``rte_eventdevs[]`` can't be
+  accessed directly by user any more. This change is transparent to both
+  applications and PMDs.
+
 
 Known Issues
 ------------
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 9b2aec8371..0532b542d4 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -1188,4 +1188,42 @@ __rte_internal
 int
 rte_event_pmd_release(struct rte_eventdev *eventdev);
 
+/**
+ *
+ * @internal
+ * This is the last step of device probing.
+ * It must be called after a port is allocated and initialized successfully.
+ *
+ * @param eventdev
+ *  New event device.
+ */
+__rte_internal
+void
+event_dev_probing_finish(struct rte_eventdev *eventdev);
+
+/**
+ * Reset eventdevice fastpath APIs to dummy values.
+ *
+ * @param fp_ops
+ * The *fp_ops* pointer to reset.
+ */
+__rte_internal
+void
+event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op);
+
+/**
+ * Set eventdevice fastpath APIs to event device values.
+ *
+ * @param fp_ops
+ * The *fp_ops* pointer to set.
+ */
+__rte_internal
+void
+event_dev_fp_ops_set(struct rte_event_fp_ops *fp_ops,
+		     const struct rte_eventdev *dev);
+
+#ifdef __cplusplus
+}
+#endif
+
 #endif /* _RTE_EVENTDEV_PMD_H_ */
diff --git a/lib/eventdev/eventdev_pmd_pci.h b/lib/eventdev/eventdev_pmd_pci.h
index 2f12a5eb24..499852db16 100644
--- a/lib/eventdev/eventdev_pmd_pci.h
+++ b/lib/eventdev/eventdev_pmd_pci.h
@@ -67,8 +67,10 @@ rte_event_pmd_pci_probe_named(struct rte_pci_driver *pci_drv,
 
 	/* Invoke PMD device initialization function */
 	retval = devinit(eventdev);
-	if (retval == 0)
+	if (retval == 0) {
+		event_dev_probing_finish(eventdev);
 		return 0;
+	}
 
 	RTE_EDEV_LOG_ERR("driver %s: (vendor_id=0x%x device_id=0x%x)"
 			" failed", pci_drv->driver.name,
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
new file mode 100644
index 0000000000..9084833847
--- /dev/null
+++ b/lib/eventdev/eventdev_private.c
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "eventdev_pmd.h"
+#include "rte_eventdev.h"
+
+static uint16_t
+dummy_event_enqueue(__rte_unused void *port,
+		    __rte_unused const struct rte_event *ev)
+{
+	RTE_EDEV_LOG_ERR(
+		"event enqueue requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_enqueue_burst(__rte_unused void *port,
+			  __rte_unused const struct rte_event ev[],
+			  __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event enqueue burst requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_dequeue(__rte_unused void *port, __rte_unused struct rte_event *ev,
+		    __rte_unused uint64_t timeout_ticks)
+{
+	RTE_EDEV_LOG_ERR(
+		"event dequeue requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_dequeue_burst(__rte_unused void *port,
+			  __rte_unused struct rte_event ev[],
+			  __rte_unused uint16_t nb_events,
+			  __rte_unused uint64_t timeout_ticks)
+{
+	RTE_EDEV_LOG_ERR(
+		"event dequeue burst requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_tx_adapter_enqueue(__rte_unused void *port,
+			       __rte_unused struct rte_event ev[],
+			       __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event Tx adapter enqueue requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_tx_adapter_enqueue_same_dest(__rte_unused void *port,
+					 __rte_unused struct rte_event ev[],
+					 __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event Tx adapter enqueue same destination requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_crypto_adapter_enqueue(__rte_unused void *port,
+				   __rte_unused struct rte_event ev[],
+				   __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event crypto adapter enqueue requested for unconfigured event device");
+	return 0;
+}
+
+void
+event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
+{
+	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+	static const struct rte_event_fp_ops dummy = {
+		.enqueue = dummy_event_enqueue,
+		.enqueue_burst = dummy_event_enqueue_burst,
+		.enqueue_new_burst = dummy_event_enqueue_burst,
+		.enqueue_forward_burst = dummy_event_enqueue_burst,
+		.dequeue = dummy_event_dequeue,
+		.dequeue_burst = dummy_event_dequeue_burst,
+		.txa_enqueue = dummy_event_tx_adapter_enqueue,
+		.txa_enqueue_same_dest =
+			dummy_event_tx_adapter_enqueue_same_dest,
+		.ca_enqueue = dummy_event_crypto_adapter_enqueue,
+		.data = dummy_data,
+	};
+
+	*fp_op = dummy;
+}
+
+void
+event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
+		     const struct rte_eventdev *dev)
+{
+	fp_op->enqueue = dev->enqueue;
+	fp_op->enqueue_burst = dev->enqueue_burst;
+	fp_op->enqueue_new_burst = dev->enqueue_new_burst;
+	fp_op->enqueue_forward_burst = dev->enqueue_forward_burst;
+	fp_op->dequeue = dev->dequeue;
+	fp_op->dequeue_burst = dev->dequeue_burst;
+	fp_op->txa_enqueue = dev->txa_enqueue;
+	fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest;
+	fp_op->ca_enqueue = dev->ca_enqueue;
+	fp_op->data = dev->data->ports;
+}
diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build
index 8b51fde361..cb9abe92f6 100644
--- a/lib/eventdev/meson.build
+++ b/lib/eventdev/meson.build
@@ -8,24 +8,25 @@ else
 endif
 
 sources = files(
-        'rte_eventdev.c',
-        'rte_event_ring.c',
+        'eventdev_private.c',
         'eventdev_trace_points.c',
-        'rte_event_eth_rx_adapter.c',
-        'rte_event_timer_adapter.c',
         'rte_event_crypto_adapter.c',
+        'rte_event_eth_rx_adapter.c',
         'rte_event_eth_tx_adapter.c',
+        'rte_event_ring.c',
+        'rte_event_timer_adapter.c',
+        'rte_eventdev.c',
 )
 headers = files(
-        'rte_eventdev.h',
-        'rte_eventdev_trace.h',
-        'rte_eventdev_trace_fp.h',
-        'rte_event_ring.h',
+        'rte_event_crypto_adapter.h',
         'rte_event_eth_rx_adapter.h',
+        'rte_event_eth_tx_adapter.h',
+        'rte_event_ring.h',
         'rte_event_timer_adapter.h',
         'rte_event_timer_adapter_pmd.h',
-        'rte_event_crypto_adapter.h',
-        'rte_event_eth_tx_adapter.h',
+        'rte_eventdev.h',
+        'rte_eventdev_trace.h',
+        'rte_eventdev_trace_fp.h',
 )
 indirect_headers += files(
         'rte_eventdev_core.h',
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index bfcfa31cd1..4c30a37831 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -46,6 +46,9 @@ static struct rte_eventdev_global eventdev_globals = {
 	.nb_devs		= 0
 };
 
+/* Public fastpath APIs. */
+struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
+
 /* Event dev north bound API implementation */
 
 uint8_t
@@ -300,8 +303,8 @@ int
 rte_event_dev_configure(uint8_t dev_id,
 			const struct rte_event_dev_config *dev_conf)
 {
-	struct rte_eventdev *dev;
 	struct rte_event_dev_info info;
+	struct rte_eventdev *dev;
 	int diag;
 
 	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
@@ -470,10 +473,13 @@ rte_event_dev_configure(uint8_t dev_id,
 		return diag;
 	}
 
+	event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
+
 	/* Configure the device */
 	diag = (*dev->dev_ops->dev_configure)(dev);
 	if (diag != 0) {
 		RTE_EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id, diag);
+		event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
 		event_dev_queue_config(dev, 0);
 		event_dev_port_config(dev, 0);
 	}
@@ -1244,6 +1250,8 @@ rte_event_dev_start(uint8_t dev_id)
 	else
 		return diag;
 
+	event_dev_fp_ops_set(rte_event_fp_ops + dev_id, dev);
+
 	return 0;
 }
 
@@ -1284,6 +1292,7 @@ rte_event_dev_stop(uint8_t dev_id)
 	dev->data->dev_started = 0;
 	(*dev->dev_ops->dev_stop)(dev);
 	rte_eventdev_trace_stop(dev_id);
+	event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
 }
 
 int
@@ -1302,6 +1311,7 @@ rte_event_dev_close(uint8_t dev_id)
 		return -EBUSY;
 	}
 
+	event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
 	rte_eventdev_trace_close(dev_id);
 	return (*dev->dev_ops->dev_close)(dev);
 }
@@ -1435,6 +1445,7 @@ rte_event_pmd_release(struct rte_eventdev *eventdev)
 	if (eventdev == NULL)
 		return -EINVAL;
 
+	event_dev_fp_ops_reset(rte_event_fp_ops + eventdev->data->dev_id);
 	eventdev->attached = RTE_EVENTDEV_DETACHED;
 	eventdev_globals.nb_devs--;
 
@@ -1460,6 +1471,15 @@ rte_event_pmd_release(struct rte_eventdev *eventdev)
 	return 0;
 }
 
+void
+event_dev_probing_finish(struct rte_eventdev *eventdev)
+{
+	if (eventdev == NULL)
+		return;
+
+	event_dev_fp_ops_set(rte_event_fp_ops + eventdev->data->dev_id,
+			     eventdev);
+}
 
 static int
 handle_dev_list(const char *cmd __rte_unused,
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index 115b97e431..916023f71f 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -39,6 +39,32 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
 						   uint16_t nb_events);
 /**< @internal Enqueue burst of events on crypto adapter */
 
+struct rte_event_fp_ops {
+	void **data;
+	/**< points to array of internal port data pointers */
+	event_enqueue_t enqueue;
+	/**< PMD enqueue function. */
+	event_enqueue_burst_t enqueue_burst;
+	/**< PMD enqueue burst function. */
+	event_enqueue_burst_t enqueue_new_burst;
+	/**< PMD enqueue burst new function. */
+	event_enqueue_burst_t enqueue_forward_burst;
+	/**< PMD enqueue burst fwd function. */
+	event_dequeue_t dequeue;
+	/**< PMD dequeue function. */
+	event_dequeue_burst_t dequeue_burst;
+	/**< PMD dequeue burst function. */
+	event_tx_adapter_enqueue_t txa_enqueue;
+	/**< PMD Tx adapter enqueue function. */
+	event_tx_adapter_enqueue_t txa_enqueue_same_dest;
+	/**< PMD Tx adapter enqueue same destination function. */
+	event_crypto_adapter_enqueue_t ca_enqueue;
+	/**< PMD Crypto adapter enqueue function. */
+	uintptr_t reserved[6];
+} __rte_cache_aligned;
+
+extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
+
 #define RTE_EVENTDEV_NAME_MAX_LEN (64)
 /**< @internal Max length of name of event PMD */
 
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index cd72f45d29..e684154bf9 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -85,6 +85,9 @@ DPDK_22 {
 	rte_event_timer_cancel_burst;
 	rte_eventdevs;
 
+	#added in 21.11
+	rte_event_fp_ops;
+
 	local: *;
 };
 
@@ -143,6 +146,9 @@ EXPERIMENTAL {
 INTERNAL {
 	global:
 
+	event_dev_fp_ops_reset;
+	event_dev_fp_ops_set;
+	event_dev_probing_finish;
 	rte_event_pmd_selftest_seqn_dynfield_offset;
 	rte_event_pmd_allocate;
 	rte_event_pmd_get_named_dev;
-- 
2.17.1


^ permalink raw reply	[relevance 6%]

* [dpdk-dev] [PATCH v9 2/4] mempool: add non-IO flag
  @ 2021-10-18 22:43  3%             ` Dmitry Kozlyuk
  0 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-18 22:43 UTC (permalink / raw)
  To: dev
  Cc: David Marchand, Matan Azrad, Andrew Rybchenko, Maryam Tahhan,
	Reshma Pattan, Olivier Matz

Mempool is a generic allocator: it is not necessarily used for
device IO operations, nor is its memory necessarily used for DMA.
Add MEMPOOL_F_NON_IO flag to mark such mempools automatically
a) if their objects are not contiguous;
b) if IOVA is not available for any object.
Other components can inspect this flag
in order to optimize their memory management.
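
A sketch of how a consumer might act on the flag (the helper below is
hypothetical; only the flag itself comes from this patch):

static int
maybe_map_pool_for_dma(struct rte_mempool *mp)
{
	if (mp->flags & MEMPOOL_F_NON_IO)
		return 0; /* no object will be used for IO, skip mapping */

	return map_pool_for_dma(mp); /* hypothetical helper */
}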

Discussion: https://mails.dpdk.org/archives/dev/2021-August/216654.html

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 app/proc-info/main.c                   |   6 +-
 app/test/test_mempool.c                | 115 +++++++++++++++++++++++++
 doc/guides/rel_notes/release_21_11.rst |   3 +
 lib/mempool/rte_mempool.c              |  10 +++
 lib/mempool/rte_mempool.h              |   2 +
 5 files changed, 134 insertions(+), 2 deletions(-)

diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index a8e928fa9f..8ec9cadd79 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -1295,7 +1295,8 @@ show_mempool(char *name)
 				"\t  -- No cache align (%c)\n"
 				"\t  -- SP put (%c), SC get (%c)\n"
 				"\t  -- Pool created (%c)\n"
-				"\t  -- No IOVA config (%c)\n",
+				"\t  -- No IOVA config (%c)\n"
+				"\t  -- Not used for IO (%c)\n",
 				ptr->name,
 				ptr->socket_id,
 				(flags & MEMPOOL_F_NO_SPREAD) ? 'y' : 'n',
@@ -1303,7 +1304,8 @@ show_mempool(char *name)
 				(flags & MEMPOOL_F_SP_PUT) ? 'y' : 'n',
 				(flags & MEMPOOL_F_SC_GET) ? 'y' : 'n',
 				(flags & MEMPOOL_F_POOL_CREATED) ? 'y' : 'n',
-				(flags & MEMPOOL_F_NO_IOVA_CONTIG) ? 'y' : 'n');
+				(flags & MEMPOOL_F_NO_IOVA_CONTIG) ? 'y' : 'n',
+				(flags & MEMPOOL_F_NON_IO) ? 'y' : 'n');
 			printf("  - Size %u Cache %u element %u\n"
 				"  - header %u trailer %u\n"
 				"  - private data size %u\n",
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 5339a4cbd8..f4947680bc 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -12,6 +12,7 @@
 #include <sys/queue.h>
 
 #include <rte_common.h>
+#include <rte_eal_paging.h>
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_errno.h>
@@ -729,6 +730,112 @@ test_mempool_events_safety(void)
 #pragma pop_macro("RTE_TEST_TRACE_FAILURE")
 }
 
+#pragma push_macro("RTE_TEST_TRACE_FAILURE")
+#undef RTE_TEST_TRACE_FAILURE
+#define RTE_TEST_TRACE_FAILURE(...) do { \
+		ret = TEST_FAILED; \
+		goto exit; \
+	} while (0)
+
+static int
+test_mempool_flag_non_io_set_when_no_iova_contig_set(void)
+{
+	struct rte_mempool *mp = NULL;
+	int ret;
+
+	mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+				      MEMPOOL_ELT_SIZE, 0, 0,
+				      SOCKET_ID_ANY, MEMPOOL_F_NO_IOVA_CONTIG);
+	RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+				 rte_strerror(rte_errno));
+	rte_mempool_set_ops_byname(mp, rte_mbuf_best_mempool_ops(), NULL);
+	ret = rte_mempool_populate_default(mp);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(-ret));
+	RTE_TEST_ASSERT(mp->flags & MEMPOOL_F_NON_IO,
+			"NON_IO flag is not set when NO_IOVA_CONTIG is set");
+	ret = TEST_SUCCESS;
+exit:
+	rte_mempool_free(mp);
+	return ret;
+}
+
+static int
+test_mempool_flag_non_io_unset_when_populated_with_valid_iova(void)
+{
+	void *virt = NULL;
+	rte_iova_t iova;
+	size_t total_size = MEMPOOL_ELT_SIZE * MEMPOOL_SIZE;
+	size_t block_size = total_size / 3;
+	struct rte_mempool *mp = NULL;
+	int ret;
+
+	/*
+	 * we don't care about contiguous IOVA; on the other hand,
+	 * requiring it could cause spurious test failures.
+	 * reiuring it could cause spurious test failures.
+	 */
+	virt = rte_malloc("test_mempool", total_size, rte_mem_page_size());
+	RTE_TEST_ASSERT_NOT_NULL(virt, "Cannot allocate memory");
+	iova = rte_mem_virt2iova(virt);
+	RTE_TEST_ASSERT_NOT_EQUAL(iova,  RTE_BAD_IOVA, "Cannot get IOVA");
+	mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+				      MEMPOOL_ELT_SIZE, 0, 0,
+				      SOCKET_ID_ANY, 0);
+	RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+				 rte_strerror(rte_errno));
+
+	ret = rte_mempool_populate_iova(mp, RTE_PTR_ADD(virt, 1 * block_size),
+					RTE_BAD_IOVA, block_size, NULL, NULL);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(-ret));
+	RTE_TEST_ASSERT(mp->flags & MEMPOOL_F_NON_IO,
+			"NON_IO flag is not set when mempool is populated with only RTE_BAD_IOVA");
+
+	ret = rte_mempool_populate_iova(mp, virt, iova, block_size, NULL, NULL);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(-ret));
+	RTE_TEST_ASSERT(!(mp->flags & MEMPOOL_F_NON_IO),
+			"NON_IO flag is not unset when mempool is populated with valid IOVA");
+
+	ret = rte_mempool_populate_iova(mp, RTE_PTR_ADD(virt, 2 * block_size),
+					RTE_BAD_IOVA, block_size, NULL, NULL);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(-ret));
+	RTE_TEST_ASSERT(!(mp->flags & MEMPOOL_F_NON_IO),
+			"NON_IO flag is set even when some objects have valid IOVA");
+	ret = TEST_SUCCESS;
+
+exit:
+	rte_mempool_free(mp);
+	rte_free(virt);
+	return ret;
+}
+
+static int
+test_mempool_flag_non_io_unset_by_default(void)
+{
+	struct rte_mempool *mp;
+	int ret;
+
+	mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+				      MEMPOOL_ELT_SIZE, 0, 0,
+				      SOCKET_ID_ANY, 0);
+	RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+				 rte_strerror(rte_errno));
+	ret = rte_mempool_populate_default(mp);
+	RTE_TEST_ASSERT_EQUAL(ret, (int)mp->size, "Failed to populate mempool: %s",
+			      rte_strerror(-ret));
+	RTE_TEST_ASSERT(!(mp->flags & MEMPOOL_F_NON_IO),
+			"NON_IO flag is set by default");
+	ret = TEST_SUCCESS;
+exit:
+	rte_mempool_free(mp);
+	return ret;
+}
+
+#pragma pop_macro("RTE_TEST_TRACE_FAILURE")
+
 static int
 test_mempool(void)
 {
@@ -914,6 +1021,14 @@ test_mempool(void)
 	if (test_mempool_events_safety() < 0)
 		GOTO_ERR(ret, err);
 
+	/* test NON_IO flag inference */
+	if (test_mempool_flag_non_io_unset_by_default() < 0)
+		GOTO_ERR(ret, err);
+	if (test_mempool_flag_non_io_set_when_no_iova_contig_set() < 0)
+		GOTO_ERR(ret, err);
+	if (test_mempool_flag_non_io_unset_when_populated_with_valid_iova() < 0)
+		GOTO_ERR(ret, err);
+
 	rte_mempool_list_dump(stdout);
 
 	ret = 0;
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d5435a64aa..f6bb5adeff 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -237,6 +237,9 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* mempool: Added ``MEMPOOL_F_NON_IO`` flag to give a hint to DPDK components
+  that objects from this pool will not be used for device IO (e.g. DMA).
+
 
 ABI Changes
 -----------
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 8810d08ab5..7d7d97d85d 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -372,6 +372,10 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next);
 	mp->nb_mem_chunks++;
 
+	/* At least some objects in the pool can now be used for IO. */
+	if (iova != RTE_BAD_IOVA)
+		mp->flags &= ~MEMPOOL_F_NON_IO;
+
 	/* Report the mempool as ready only when fully populated. */
 	if (mp->populated_size >= mp->size)
 		mempool_event_callback_invoke(RTE_MEMPOOL_EVENT_READY, mp);
@@ -851,6 +855,12 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		return NULL;
 	}
 
+	/*
+	 * No objects in the pool can be used for IO until it's populated
+	 * with at least some objects with valid IOVA.
+	 */
+	flags |= MEMPOOL_F_NON_IO;
+
 	/* "no cache align" imply "no spread" */
 	if (flags & MEMPOOL_F_NO_CACHE_ALIGN)
 		flags |= MEMPOOL_F_NO_SPREAD;
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 5799d4a705..b2e20c8855 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -257,6 +257,8 @@ struct rte_mempool {
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
+/** Internal: no object from the pool can be used for device IO (DMA). */
+#define MEMPOOL_F_NON_IO         0x0040
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v3 6/8] cryptodev: rework session framework
    2021-10-18 21:34  1%   ` [dpdk-dev] [PATCH v3 1/8] security: rework session framework Akhil Goyal
@ 2021-10-18 21:34  1%   ` Akhil Goyal
  2021-10-20 19:27  0%     ` Ananyev, Konstantin
    2 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-18 21:34 UTC (permalink / raw)
  To: dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
	konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
	adwivedi, ciara.power, haiyue.wang, jiawenwu, jianwang,
	Akhil Goyal

As per the current design, rte_cryptodev_sym_session_create() and
rte_cryptodev_sym_session_init() use separate mempool objects
for a single session.
Also, while the structure rte_cryptodev_sym_session is not directly
used by the application, modifying the structure in the future may
cause ABI breakage.

To address these two issues, rte_cryptodev_sym_session_create
will take one mempool object for both the session and the session
private data. The API rte_cryptodev_sym_session_init will no longer
take a mempool object.
rte_cryptodev_sym_session_create will now return an opaque session
pointer which will be used by the app in rte_cryptodev_sym_session_init
and other APIs.

With this change, rte_cryptodev_sym_session_init will pass the
pointer to the session private data of the corresponding driver to the
PMD, based on the driver_id, for filling in the PMD data.

In the data path, the opaque session pointer is attached to rte_crypto_op
and the PMD can call an internal library API to get the session
private data pointer based on the driver id.
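On the PMD side this looks roughly as follows (a sketch; the helper
name matches the one used by the drivers in this patch):

	/* Fetch this driver's private data from the opaque session. */
	sess_priv = get_sym_session_private_data(op->sym->session,
						 driver_id);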

Note: currently nb_drivers is updated in RTE_INIT, which
results in increased memory requirements for a session.
Users can compile out drivers which are not in use to reduce the
memory consumption of a session.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
 app/test-crypto-perf/cperf.h                  |   1 -
 app/test-crypto-perf/cperf_ops.c              |  33 ++---
 app/test-crypto-perf/cperf_ops.h              |   6 +-
 app/test-crypto-perf/cperf_test_latency.c     |   5 +-
 app/test-crypto-perf/cperf_test_latency.h     |   1 -
 .../cperf_test_pmd_cyclecount.c               |   5 +-
 .../cperf_test_pmd_cyclecount.h               |   1 -
 app/test-crypto-perf/cperf_test_throughput.c  |   5 +-
 app/test-crypto-perf/cperf_test_throughput.h  |   1 -
 app/test-crypto-perf/cperf_test_verify.c      |   5 +-
 app/test-crypto-perf/cperf_test_verify.h      |   1 -
 app/test-crypto-perf/main.c                   |  29 +---
 app/test/test_cryptodev.c                     | 130 +++++-------------
 app/test/test_cryptodev.h                     |   1 -
 app/test/test_cryptodev_asym.c                |   3 +-
 app/test/test_cryptodev_blockcipher.c         |   6 +-
 app/test/test_event_crypto_adapter.c          |  28 +---
 app/test/test_ipsec.c                         |  22 +--
 drivers/crypto/armv8/armv8_pmd_private.h      |   2 -
 drivers/crypto/armv8/rte_armv8_pmd.c          |  21 ++-
 drivers/crypto/armv8/rte_armv8_pmd_ops.c      |  34 +----
 drivers/crypto/bcmfs/bcmfs_sym_session.c      |  36 +----
 drivers/crypto/bcmfs/bcmfs_sym_session.h      |   6 +-
 drivers/crypto/caam_jr/caam_jr.c              |  32 ++---
 drivers/crypto/ccp/ccp_pmd_ops.c              |  32 +----
 drivers/crypto/ccp/ccp_pmd_private.h          |   2 -
 drivers/crypto/ccp/rte_ccp_pmd.c              |  24 ++--
 drivers/crypto/cnxk/cn10k_cryptodev_ops.c     |  18 ++-
 drivers/crypto/cnxk/cn9k_cryptodev_ops.c      |  18 +--
 drivers/crypto/cnxk/cnxk_cryptodev_ops.c      |  61 +++-----
 drivers/crypto/cnxk/cnxk_cryptodev_ops.h      |  15 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   |  29 +---
 drivers/crypto/dpaa_sec/dpaa_sec.c            |  31 +----
 drivers/crypto/ipsec_mb/ipsec_mb_ops.c        |  32 +----
 drivers/crypto/ipsec_mb/ipsec_mb_private.h    |  29 ++--
 drivers/crypto/ipsec_mb/pmd_aesni_gcm.c       |  23 ++--
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c        |   1 -
 drivers/crypto/ipsec_mb/pmd_chacha_poly.c     |   1 -
 drivers/crypto/ipsec_mb/pmd_kasumi.c          |   1 -
 drivers/crypto/ipsec_mb/pmd_snow3g.c          |   1 -
 drivers/crypto/ipsec_mb/pmd_zuc.c             |   1 -
 drivers/crypto/mlx5/mlx5_crypto.c             |  24 +---
 drivers/crypto/mvsam/mrvl_pmd_private.h       |   3 -
 drivers/crypto/mvsam/rte_mrvl_pmd_ops.c       |  36 ++---
 drivers/crypto/nitrox/nitrox_sym.c            |  31 +----
 drivers/crypto/null/null_crypto_pmd.c         |  20 ++-
 drivers/crypto/null/null_crypto_pmd_ops.c     |  34 +----
 drivers/crypto/null/null_crypto_pmd_private.h |   2 -
 .../crypto/octeontx/otx_cryptodev_hw_access.h |   1 -
 drivers/crypto/octeontx/otx_cryptodev_ops.c   |  60 +++-----
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |  52 +++----
 .../octeontx2/otx2_cryptodev_ops_helper.h     |  16 +--
 drivers/crypto/octeontx2/otx2_cryptodev_qp.h  |   2 -
 drivers/crypto/openssl/openssl_pmd_private.h  |   2 -
 drivers/crypto/openssl/rte_openssl_pmd.c      |  18 +--
 drivers/crypto/openssl/rte_openssl_pmd_ops.c  |  35 +----
 drivers/crypto/qat/qat_sym_session.c          |  29 +---
 drivers/crypto/qat/qat_sym_session.h          |   6 +-
 drivers/crypto/scheduler/scheduler_pmd_ops.c  |   9 +-
 drivers/crypto/virtio/virtio_cryptodev.c      |  31 ++---
 .../octeontx2/otx2_evdev_crypto_adptr_rx.h    |   3 +-
 examples/fips_validation/fips_dev_self_test.c |  32 ++---
 examples/fips_validation/main.c               |  20 +--
 examples/ipsec-secgw/ipsec-secgw.c            |  40 ------
 examples/ipsec-secgw/ipsec.c                  |   3 +-
 examples/ipsec-secgw/ipsec.h                  |   1 -
 examples/ipsec-secgw/ipsec_worker.c           |   4 -
 examples/l2fwd-crypto/main.c                  |  41 +-----
 examples/vhost_crypto/main.c                  |  16 +--
 lib/cryptodev/cryptodev_pmd.h                 |   7 +-
 lib/cryptodev/rte_crypto.h                    |   2 +-
 lib/cryptodev/rte_crypto_sym.h                |   2 +-
 lib/cryptodev/rte_cryptodev.c                 |  76 ++++++----
 lib/cryptodev/rte_cryptodev.h                 |  23 ++--
 lib/cryptodev/rte_cryptodev_trace.h           |   5 +-
 lib/pipeline/rte_table_action.c               |   8 +-
 lib/pipeline/rte_table_action.h               |   2 +-
 lib/vhost/rte_vhost_crypto.h                  |   3 -
 lib/vhost/vhost_crypto.c                      |   7 +-
 79 files changed, 396 insertions(+), 1043 deletions(-)

diff --git a/app/test-crypto-perf/cperf.h b/app/test-crypto-perf/cperf.h
index 2b0aad095c..db58228dce 100644
--- a/app/test-crypto-perf/cperf.h
+++ b/app/test-crypto-perf/cperf.h
@@ -15,7 +15,6 @@ struct cperf_op_fns;
 
 typedef void  *(*cperf_constructor_t)(
 		struct rte_mempool *sess_mp,
-		struct rte_mempool *sess_priv_mp,
 		uint8_t dev_id,
 		uint16_t qp_id,
 		const struct cperf_options *options,
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 6c3aa77dec..ec867b0174 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -13,7 +13,7 @@ static int
 cperf_set_ops_asym(struct rte_crypto_op **ops,
 		   uint32_t src_buf_offset __rte_unused,
 		   uint32_t dst_buf_offset __rte_unused, uint16_t nb_ops,
-		   struct rte_cryptodev_sym_session *sess,
+		   void *sess,
 		   const struct cperf_options *options __rte_unused,
 		   const struct cperf_test_vector *test_vector __rte_unused,
 		   uint16_t iv_offset __rte_unused,
@@ -56,7 +56,7 @@ static int
 cperf_set_ops_security(struct rte_crypto_op **ops,
 		uint32_t src_buf_offset __rte_unused,
 		uint32_t dst_buf_offset __rte_unused,
-		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
+		uint16_t nb_ops, void *sess,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
 		uint16_t iv_offset __rte_unused, uint32_t *imix_idx,
@@ -141,7 +141,7 @@ cperf_set_ops_security(struct rte_crypto_op **ops,
 static int
 cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
 		uint32_t src_buf_offset, uint32_t dst_buf_offset,
-		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
+		uint16_t nb_ops, void *sess,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector __rte_unused,
 		uint16_t iv_offset __rte_unused, uint32_t *imix_idx,
@@ -181,7 +181,7 @@ cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
 static int
 cperf_set_ops_null_auth(struct rte_crypto_op **ops,
 		uint32_t src_buf_offset, uint32_t dst_buf_offset,
-		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
+		uint16_t nb_ops, void *sess,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector __rte_unused,
 		uint16_t iv_offset __rte_unused, uint32_t *imix_idx,
@@ -221,7 +221,7 @@ cperf_set_ops_null_auth(struct rte_crypto_op **ops,
 static int
 cperf_set_ops_cipher(struct rte_crypto_op **ops,
 		uint32_t src_buf_offset, uint32_t dst_buf_offset,
-		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
+		uint16_t nb_ops, void *sess,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
 		uint16_t iv_offset, uint32_t *imix_idx,
@@ -278,7 +278,7 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
 static int
 cperf_set_ops_auth(struct rte_crypto_op **ops,
 		uint32_t src_buf_offset, uint32_t dst_buf_offset,
-		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
+		uint16_t nb_ops, void *sess,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
 		uint16_t iv_offset, uint32_t *imix_idx,
@@ -379,7 +379,7 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
 static int
 cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
 		uint32_t src_buf_offset, uint32_t dst_buf_offset,
-		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
+		uint16_t nb_ops, void *sess,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
 		uint16_t iv_offset, uint32_t *imix_idx,
@@ -495,7 +495,7 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
 static int
 cperf_set_ops_aead(struct rte_crypto_op **ops,
 		uint32_t src_buf_offset, uint32_t dst_buf_offset,
-		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
+		uint16_t nb_ops, void *sess,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
 		uint16_t iv_offset, uint32_t *imix_idx,
@@ -720,9 +720,8 @@ create_ipsec_session(struct rte_mempool *sess_mp,
 				&sess_conf, sess_mp);
 }
 
-static struct rte_cryptodev_sym_session *
+static void *
 cperf_create_session(struct rte_mempool *sess_mp,
-	struct rte_mempool *priv_mp,
 	uint8_t dev_id,
 	const struct cperf_options *options,
 	const struct cperf_test_vector *test_vector,
@@ -747,7 +746,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
 		if (sess == NULL)
 			return NULL;
 		rc = rte_cryptodev_asym_session_init(dev_id, (void *)sess,
-						     &xform, priv_mp);
+						     &xform, sess_mp);
 		if (rc < 0) {
 			if (sess != NULL) {
 				rte_cryptodev_asym_session_clear(dev_id,
@@ -905,8 +904,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
 			cipher_xform.cipher.iv.length = 0;
 		}
 		/* create crypto session */
-		rte_cryptodev_sym_session_init(dev_id, sess, &cipher_xform,
-				priv_mp);
+		rte_cryptodev_sym_session_init(dev_id, sess, &cipher_xform);
 	/*
 	 *  auth only
 	 */
@@ -933,8 +931,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
 			auth_xform.auth.iv.length = 0;
 		}
 		/* create crypto session */
-		rte_cryptodev_sym_session_init(dev_id, sess, &auth_xform,
-				priv_mp);
+		rte_cryptodev_sym_session_init(dev_id, sess, &auth_xform);
 	/*
 	 * cipher and auth
 	 */
@@ -993,12 +990,12 @@ cperf_create_session(struct rte_mempool *sess_mp,
 			cipher_xform.next = &auth_xform;
 			/* create crypto session */
 			rte_cryptodev_sym_session_init(dev_id,
-					sess, &cipher_xform, priv_mp);
+					sess, &cipher_xform);
 		} else { /* auth then cipher */
 			auth_xform.next = &cipher_xform;
 			/* create crypto session */
 			rte_cryptodev_sym_session_init(dev_id,
-					sess, &auth_xform, priv_mp);
+					sess, &auth_xform);
 		}
 	} else { /* options->op_type == CPERF_AEAD */
 		aead_xform.type = RTE_CRYPTO_SYM_XFORM_AEAD;
@@ -1019,7 +1016,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
 
 		/* Create crypto session */
 		rte_cryptodev_sym_session_init(dev_id,
-					sess, &aead_xform, priv_mp);
+					sess, &aead_xform);
 	}
 
 	return sess;
diff --git a/app/test-crypto-perf/cperf_ops.h b/app/test-crypto-perf/cperf_ops.h
index 30d38f90e3..d3513590f1 100644
--- a/app/test-crypto-perf/cperf_ops.h
+++ b/app/test-crypto-perf/cperf_ops.h
@@ -12,15 +12,15 @@
 #include "cperf_test_vectors.h"
 
 
-typedef struct rte_cryptodev_sym_session *(*cperf_sessions_create_t)(
-		struct rte_mempool *sess_mp, struct rte_mempool *sess_priv_mp,
+typedef void *(*cperf_sessions_create_t)(
+		struct rte_mempool *sess_mp,
 		uint8_t dev_id, const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
 		uint16_t iv_offset);
 
 typedef int (*cperf_populate_ops_t)(struct rte_crypto_op **ops,
 		uint32_t src_buf_offset, uint32_t dst_buf_offset,
-		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
+		uint16_t nb_ops, void *sess,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
 		uint16_t iv_offset, uint32_t *imix_idx,
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 69f55de50a..63d520ee66 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -24,7 +24,7 @@ struct cperf_latency_ctx {
 
 	struct rte_mempool *pool;
 
-	struct rte_cryptodev_sym_session *sess;
+	void *sess;
 
 	cperf_populate_ops_t populate_ops;
 
@@ -59,7 +59,6 @@ cperf_latency_test_free(struct cperf_latency_ctx *ctx)
 
 void *
 cperf_latency_test_constructor(struct rte_mempool *sess_mp,
-		struct rte_mempool *sess_priv_mp,
 		uint8_t dev_id, uint16_t qp_id,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
@@ -84,7 +83,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
 		sizeof(struct rte_crypto_sym_op) +
 		sizeof(struct cperf_op_result *);
 
-	ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id, options,
+	ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
 			test_vector, iv_offset);
 	if (ctx->sess == NULL)
 		goto err;
diff --git a/app/test-crypto-perf/cperf_test_latency.h b/app/test-crypto-perf/cperf_test_latency.h
index ed5b0a07bb..d3fc3218d7 100644
--- a/app/test-crypto-perf/cperf_test_latency.h
+++ b/app/test-crypto-perf/cperf_test_latency.h
@@ -17,7 +17,6 @@
 void *
 cperf_latency_test_constructor(
 		struct rte_mempool *sess_mp,
-		struct rte_mempool *sess_priv_mp,
 		uint8_t dev_id,
 		uint16_t qp_id,
 		const struct cperf_options *options,
diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
index e43e2a3b96..11083ea141 100644
--- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
+++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
@@ -27,7 +27,7 @@ struct cperf_pmd_cyclecount_ctx {
 	struct rte_crypto_op **ops;
 	struct rte_crypto_op **ops_processed;
 
-	struct rte_cryptodev_sym_session *sess;
+	void *sess;
 
 	cperf_populate_ops_t populate_ops;
 
@@ -93,7 +93,6 @@ cperf_pmd_cyclecount_test_free(struct cperf_pmd_cyclecount_ctx *ctx)
 
 void *
 cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
-		struct rte_mempool *sess_priv_mp,
 		uint8_t dev_id, uint16_t qp_id,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
@@ -120,7 +119,7 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
 	uint16_t iv_offset = sizeof(struct rte_crypto_op) +
 			sizeof(struct rte_crypto_sym_op);
 
-	ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id, options,
+	ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
 			test_vector, iv_offset);
 	if (ctx->sess == NULL)
 		goto err;
diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.h b/app/test-crypto-perf/cperf_test_pmd_cyclecount.h
index 3084038a18..beb4419910 100644
--- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.h
+++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.h
@@ -18,7 +18,6 @@
 void *
 cperf_pmd_cyclecount_test_constructor(
 		struct rte_mempool *sess_mp,
-		struct rte_mempool *sess_priv_mp,
 		uint8_t dev_id,
 		uint16_t qp_id,
 		const struct cperf_options *options,
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 739ed9e573..887f42c532 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -18,7 +18,7 @@ struct cperf_throughput_ctx {
 
 	struct rte_mempool *pool;
 
-	struct rte_cryptodev_sym_session *sess;
+	void *sess;
 
 	cperf_populate_ops_t populate_ops;
 
@@ -65,7 +65,6 @@ cperf_throughput_test_free(struct cperf_throughput_ctx *ctx)
 
 void *
 cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
-		struct rte_mempool *sess_priv_mp,
 		uint8_t dev_id, uint16_t qp_id,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
@@ -88,7 +87,7 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
 	uint16_t iv_offset = sizeof(struct rte_crypto_op) +
 		sizeof(struct rte_crypto_sym_op);
 
-	ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id, options,
+	ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
 			test_vector, iv_offset);
 	if (ctx->sess == NULL)
 		goto err;
diff --git a/app/test-crypto-perf/cperf_test_throughput.h b/app/test-crypto-perf/cperf_test_throughput.h
index 91e1a4b708..439ec8e559 100644
--- a/app/test-crypto-perf/cperf_test_throughput.h
+++ b/app/test-crypto-perf/cperf_test_throughput.h
@@ -18,7 +18,6 @@
 void *
 cperf_throughput_test_constructor(
 		struct rte_mempool *sess_mp,
-		struct rte_mempool *sess_priv_mp,
 		uint8_t dev_id,
 		uint16_t qp_id,
 		const struct cperf_options *options,
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index 1962438034..0b3d4f7c59 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -18,7 +18,7 @@ struct cperf_verify_ctx {
 
 	struct rte_mempool *pool;
 
-	struct rte_cryptodev_sym_session *sess;
+	void *sess;
 
 	cperf_populate_ops_t populate_ops;
 
@@ -51,7 +51,6 @@ cperf_verify_test_free(struct cperf_verify_ctx *ctx)
 
 void *
 cperf_verify_test_constructor(struct rte_mempool *sess_mp,
-		struct rte_mempool *sess_priv_mp,
 		uint8_t dev_id, uint16_t qp_id,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
@@ -74,7 +73,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
 	uint16_t iv_offset = sizeof(struct rte_crypto_op) +
 		sizeof(struct rte_crypto_sym_op);
 
-	ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id, options,
+	ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
 			test_vector, iv_offset);
 	if (ctx->sess == NULL)
 		goto err;
diff --git a/app/test-crypto-perf/cperf_test_verify.h b/app/test-crypto-perf/cperf_test_verify.h
index ac2192ba99..9f70ad87ba 100644
--- a/app/test-crypto-perf/cperf_test_verify.h
+++ b/app/test-crypto-perf/cperf_test_verify.h
@@ -18,7 +18,6 @@
 void *
 cperf_verify_test_constructor(
 		struct rte_mempool *sess_mp,
-		struct rte_mempool *sess_priv_mp,
 		uint8_t dev_id,
 		uint16_t qp_id,
 		const struct cperf_options *options,
diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
index 6fdb92fb7c..7a9fb9a64d 100644
--- a/app/test-crypto-perf/main.c
+++ b/app/test-crypto-perf/main.c
@@ -119,35 +119,14 @@ fill_session_pool_socket(int32_t socket_id, uint32_t session_priv_size,
 	char mp_name[RTE_MEMPOOL_NAMESIZE];
 	struct rte_mempool *sess_mp;
 
-	if (session_pool_socket[socket_id].priv_mp == NULL) {
-		snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
-			"priv_sess_mp_%u", socket_id);
-
-		sess_mp = rte_mempool_create(mp_name,
-					nb_sessions,
-					session_priv_size,
-					0, 0, NULL, NULL, NULL,
-					NULL, socket_id,
-					0);
-
-		if (sess_mp == NULL) {
-			printf("Cannot create pool \"%s\" on socket %d\n",
-				mp_name, socket_id);
-			return -ENOMEM;
-		}
-
-		printf("Allocated pool \"%s\" on socket %d\n",
-			mp_name, socket_id);
-		session_pool_socket[socket_id].priv_mp = sess_mp;
-	}
-
 	if (session_pool_socket[socket_id].sess_mp == NULL) {
 
 		snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
 			"sess_mp_%u", socket_id);
 
 		sess_mp = rte_cryptodev_sym_session_pool_create(mp_name,
-					nb_sessions, 0, 0, 0, socket_id);
+					nb_sessions, session_priv_size,
+					0, 0, socket_id);
 
 		if (sess_mp == NULL) {
 			printf("Cannot create pool \"%s\" on socket %d\n",
@@ -345,12 +324,9 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
 			return ret;
 
 		qp_conf.mp_session = session_pool_socket[socket_id].sess_mp;
-		qp_conf.mp_session_private =
-				session_pool_socket[socket_id].priv_mp;
 
 		if (opts->op_type == CPERF_ASYM_MODEX) {
 			qp_conf.mp_session = NULL;
-			qp_conf.mp_session_private = NULL;
 		}
 
 		ret = rte_cryptodev_configure(cdev_id, &conf);
@@ -705,7 +681,6 @@ main(int argc, char **argv)
 
 		ctx[i] = cperf_testmap[opts.test].constructor(
 				session_pool_socket[socket_id].sess_mp,
-				session_pool_socket[socket_id].priv_mp,
 				cdev_id, qp_id,
 				&opts, t_vec, &op_fns);
 		if (ctx[i] == NULL) {
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 996b3b4de6..c2032b619b 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -81,7 +81,7 @@ struct crypto_unittest_params {
 #endif
 
 	union {
-		struct rte_cryptodev_sym_session *sess;
+		void *sess;
 #ifdef RTE_LIB_SECURITY
 		void *sec_session;
 #endif
@@ -121,7 +121,7 @@ test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
 		uint8_t *hmac_key);
 
 static int
-test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
+test_AES_CBC_HMAC_SHA512_decrypt_perform(void *sess,
 		struct crypto_unittest_params *ut_params,
 		struct crypto_testsuite_params *ts_param,
 		const uint8_t *cipher,
@@ -612,23 +612,11 @@ testsuite_setup(void)
 	}
 
 	ts_params->session_mpool = rte_cryptodev_sym_session_pool_create(
-			"test_sess_mp", MAX_NB_SESSIONS, 0, 0, 0,
+			"test_sess_mp", MAX_NB_SESSIONS, session_size, 0, 0,
 			SOCKET_ID_ANY);
 	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
 			"session mempool allocation failed");
 
-	ts_params->session_priv_mpool = rte_mempool_create(
-			"test_sess_mp_priv",
-			MAX_NB_SESSIONS,
-			session_size,
-			0, 0, NULL, NULL, NULL,
-			NULL, SOCKET_ID_ANY,
-			0);
-	TEST_ASSERT_NOT_NULL(ts_params->session_priv_mpool,
-			"session mempool allocation failed");
-
-
-
 	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
 			&ts_params->conf),
 			"Failed to configure cryptodev %u with %u qps",
@@ -636,7 +624,6 @@ testsuite_setup(void)
 
 	ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
 	ts_params->qp_conf.mp_session = ts_params->session_mpool;
-	ts_params->qp_conf.mp_session_private = ts_params->session_priv_mpool;
 
 	for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
 		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
@@ -666,11 +653,6 @@ testsuite_teardown(void)
 	}
 
 	/* Free session mempools */
-	if (ts_params->session_priv_mpool != NULL) {
-		rte_mempool_free(ts_params->session_priv_mpool);
-		ts_params->session_priv_mpool = NULL;
-	}
-
 	if (ts_params->session_mpool != NULL) {
 		rte_mempool_free(ts_params->session_mpool);
 		ts_params->session_mpool = NULL;
@@ -1346,7 +1328,6 @@ dev_configure_and_start(uint64_t ff_disable)
 	ts_params->conf.ff_disable = ff_disable;
 	ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
 	ts_params->qp_conf.mp_session = ts_params->session_mpool;
-	ts_params->qp_conf.mp_session_private = ts_params->session_priv_mpool;
 
 	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
 			&ts_params->conf),
@@ -1568,7 +1549,6 @@ test_queue_pair_descriptor_setup(void)
 	 */
 	qp_conf.nb_descriptors = MIN_NUM_OPS_INFLIGHT; /* min size*/
 	qp_conf.mp_session = ts_params->session_mpool;
-	qp_conf.mp_session_private = ts_params->session_priv_mpool;
 
 	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
 		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
@@ -2162,8 +2142,7 @@ test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
 
 	/* Create crypto session*/
 	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
-			ut_params->sess, &ut_params->cipher_xform,
-			ts_params->session_priv_mpool);
+			ut_params->sess, &ut_params->cipher_xform);
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
 	/* Generate crypto op data structure */
@@ -2263,7 +2242,7 @@ test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
 		uint8_t *hmac_key);
 
 static int
-test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
+test_AES_CBC_HMAC_SHA512_decrypt_perform(void *sess,
 		struct crypto_unittest_params *ut_params,
 		struct crypto_testsuite_params *ts_params,
 		const uint8_t *cipher,
@@ -2304,7 +2283,7 @@ test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
 
 
 static int
-test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
+test_AES_CBC_HMAC_SHA512_decrypt_perform(void *sess,
 		struct crypto_unittest_params *ut_params,
 		struct crypto_testsuite_params *ts_params,
 		const uint8_t *cipher,
@@ -2417,8 +2396,7 @@ create_wireless_algo_hash_session(uint8_t dev_id,
 			ts_params->session_mpool);
 
 	status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-			&ut_params->auth_xform,
-			ts_params->session_priv_mpool);
+			&ut_params->auth_xform);
 	if (status == -ENOTSUP)
 		return TEST_SKIPPED;
 
@@ -2459,8 +2437,7 @@ create_wireless_algo_cipher_session(uint8_t dev_id,
 			ts_params->session_mpool);
 
 	status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-			&ut_params->cipher_xform,
-			ts_params->session_priv_mpool);
+			&ut_params->cipher_xform);
 	if (status == -ENOTSUP)
 		return TEST_SKIPPED;
 
@@ -2582,8 +2559,7 @@ create_wireless_algo_cipher_auth_session(uint8_t dev_id,
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
 	status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-			&ut_params->cipher_xform,
-			ts_params->session_priv_mpool);
+			&ut_params->cipher_xform);
 	if (status == -ENOTSUP)
 		return TEST_SKIPPED;
 
@@ -2645,8 +2621,7 @@ create_wireless_cipher_auth_session(uint8_t dev_id,
 			ts_params->session_mpool);
 
 	status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-			&ut_params->cipher_xform,
-			ts_params->session_priv_mpool);
+			&ut_params->cipher_xform);
 	if (status == -ENOTSUP)
 		return TEST_SKIPPED;
 
@@ -2715,13 +2690,11 @@ create_wireless_algo_auth_cipher_session(uint8_t dev_id,
 		ut_params->auth_xform.next = NULL;
 		ut_params->cipher_xform.next = &ut_params->auth_xform;
 		status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-				&ut_params->cipher_xform,
-				ts_params->session_priv_mpool);
+				&ut_params->cipher_xform);
 
 	} else
 		status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-				&ut_params->auth_xform,
-				ts_params->session_priv_mpool);
+				&ut_params->auth_xform);
 
 	if (status == -ENOTSUP)
 		return TEST_SKIPPED;
@@ -7965,8 +7938,7 @@ create_aead_session(uint8_t dev_id, enum rte_crypto_aead_algorithm algo,
 			ts_params->session_mpool);
 
 	rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-			&ut_params->aead_xform,
-			ts_params->session_priv_mpool);
+			&ut_params->aead_xform);
 
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
@@ -11192,8 +11164,7 @@ static int MD5_HMAC_create_session(struct crypto_testsuite_params *ts_params,
 			ts_params->session_mpool);
 
 	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
-			ut_params->sess, &ut_params->auth_xform,
-			ts_params->session_priv_mpool);
+			ut_params->sess, &ut_params->auth_xform);
 
 	if (ut_params->sess == NULL)
 		return TEST_FAILED;
@@ -11406,7 +11377,7 @@ test_multi_session(void)
 	struct crypto_unittest_params *ut_params = &unittest_params;
 
 	struct rte_cryptodev_info dev_info;
-	struct rte_cryptodev_sym_session **sessions;
+	void **sessions;
 
 	uint16_t i;
 
@@ -11429,9 +11400,7 @@ test_multi_session(void)
 
 	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
 
-	sessions = rte_malloc(NULL,
-			sizeof(struct rte_cryptodev_sym_session *) *
-			(MAX_NB_SESSIONS + 1), 0);
+	sessions = rte_malloc(NULL, sizeof(void *) * (MAX_NB_SESSIONS + 1), 0);
 
 	/* Create multiple crypto sessions*/
 	for (i = 0; i < MAX_NB_SESSIONS; i++) {
@@ -11440,8 +11409,7 @@ test_multi_session(void)
 				ts_params->session_mpool);
 
 		rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
-				sessions[i], &ut_params->auth_xform,
-				ts_params->session_priv_mpool);
+				sessions[i], &ut_params->auth_xform);
 		TEST_ASSERT_NOT_NULL(sessions[i],
 				"Session creation failed at session number %u",
 				i);
@@ -11479,8 +11447,7 @@ test_multi_session(void)
 	sessions[i] = NULL;
 	/* Next session create should fail */
 	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
-			sessions[i], &ut_params->auth_xform,
-			ts_params->session_priv_mpool);
+			sessions[i], &ut_params->auth_xform);
 	TEST_ASSERT_NULL(sessions[i],
 			"Session creation succeeded unexpectedly!");
 
@@ -11511,7 +11478,7 @@ test_multi_session_random_usage(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
 	struct rte_cryptodev_info dev_info;
-	struct rte_cryptodev_sym_session **sessions;
+	void **sessions;
 	uint32_t i, j;
 	struct multi_session_params ut_paramz[] = {
 
@@ -11555,8 +11522,7 @@ test_multi_session_random_usage(void)
 	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
 
 	sessions = rte_malloc(NULL,
-			(sizeof(struct rte_cryptodev_sym_session *)
-					* MAX_NB_SESSIONS) + 1, 0);
+			(sizeof(void *) * MAX_NB_SESSIONS) + 1, 0);
 
 	for (i = 0; i < MB_SESSION_NUMBER; i++) {
 		sessions[i] = rte_cryptodev_sym_session_create(
@@ -11573,8 +11539,7 @@ test_multi_session_random_usage(void)
 		rte_cryptodev_sym_session_init(
 				ts_params->valid_devs[0],
 				sessions[i],
-				&ut_paramz[i].ut_params.auth_xform,
-				ts_params->session_priv_mpool);
+				&ut_paramz[i].ut_params.auth_xform);
 
 		TEST_ASSERT_NOT_NULL(sessions[i],
 				"Session creation failed at session number %u",
@@ -11657,8 +11622,7 @@ test_null_invalid_operation(void)
 
 	/* Create Crypto session*/
 	ret = rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
-			ut_params->sess, &ut_params->cipher_xform,
-			ts_params->session_priv_mpool);
+			ut_params->sess, &ut_params->cipher_xform);
 	TEST_ASSERT(ret < 0,
 			"Session creation succeeded unexpectedly");
 
@@ -11675,8 +11639,7 @@ test_null_invalid_operation(void)
 
 	/* Create Crypto session*/
 	ret = rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
-			ut_params->sess, &ut_params->auth_xform,
-			ts_params->session_priv_mpool);
+			ut_params->sess, &ut_params->auth_xform);
 	TEST_ASSERT(ret < 0,
 			"Session creation succeeded unexpectedly");
 
@@ -11721,8 +11684,7 @@ test_null_burst_operation(void)
 
 	/* Create Crypto session*/
 	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
-			ut_params->sess, &ut_params->cipher_xform,
-			ts_params->session_priv_mpool);
+			ut_params->sess, &ut_params->cipher_xform);
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
 	TEST_ASSERT_EQUAL(rte_crypto_op_bulk_alloc(ts_params->op_mpool,
@@ -11834,7 +11796,6 @@ test_enq_callback_setup(void)
 
 	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
 	qp_conf.mp_session = ts_params->session_mpool;
-	qp_conf.mp_session_private = ts_params->session_priv_mpool;
 
 	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
 			ts_params->valid_devs[0], qp_id, &qp_conf,
@@ -11934,7 +11895,6 @@ test_deq_callback_setup(void)
 
 	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
 	qp_conf.mp_session = ts_params->session_mpool;
-	qp_conf.mp_session_private = ts_params->session_priv_mpool;
 
 	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
 			ts_params->valid_devs[0], qp_id, &qp_conf,
@@ -12143,8 +12103,7 @@ static int create_gmac_session(uint8_t dev_id,
 			ts_params->session_mpool);
 
 	rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-			&ut_params->auth_xform,
-			ts_params->session_priv_mpool);
+			&ut_params->auth_xform);
 
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
@@ -12788,8 +12747,7 @@ create_auth_session(struct crypto_unittest_params *ut_params,
 			ts_params->session_mpool);
 
 	rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-				&ut_params->auth_xform,
-				ts_params->session_priv_mpool);
+				&ut_params->auth_xform);
 
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
@@ -12841,8 +12799,7 @@ create_auth_cipher_session(struct crypto_unittest_params *ut_params,
 			ts_params->session_mpool);
 
 	rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-				&ut_params->auth_xform,
-				ts_params->session_priv_mpool);
+				&ut_params->auth_xform);
 
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
@@ -13352,8 +13309,7 @@ test_authenticated_encrypt_with_esn(
 
 	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
 				ut_params->sess,
-				&ut_params->cipher_xform,
-				ts_params->session_priv_mpool);
+				&ut_params->cipher_xform);
 
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
@@ -13484,8 +13440,7 @@ test_authenticated_decrypt_with_esn(
 
 	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
 				ut_params->sess,
-				&ut_params->auth_xform,
-				ts_params->session_priv_mpool);
+				&ut_params->auth_xform);
 
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
@@ -14214,11 +14169,6 @@ test_scheduler_attach_worker_op(void)
 			rte_mempool_free(ts_params->session_mpool);
 			ts_params->session_mpool = NULL;
 		}
-		if (ts_params->session_priv_mpool) {
-			rte_mempool_free(ts_params->session_priv_mpool);
-			ts_params->session_priv_mpool = NULL;
-		}
-
 		if (info.sym.max_nb_sessions != 0 &&
 				info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
 			RTE_LOG(ERR, USER1,
@@ -14235,32 +14185,14 @@ test_scheduler_attach_worker_op(void)
 			ts_params->session_mpool =
 				rte_cryptodev_sym_session_pool_create(
 						"test_sess_mp",
-						MAX_NB_SESSIONS, 0, 0, 0,
+						MAX_NB_SESSIONS,
+						session_size, 0, 0,
 						SOCKET_ID_ANY);
 			TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
 					"session mempool allocation failed");
 		}
 
-		/*
-		 * Create mempool with maximum number of sessions,
-		 * to include device specific session private data
-		 */
-		if (ts_params->session_priv_mpool == NULL) {
-			ts_params->session_priv_mpool = rte_mempool_create(
-					"test_sess_mp_priv",
-					MAX_NB_SESSIONS,
-					session_size,
-					0, 0, NULL, NULL, NULL,
-					NULL, SOCKET_ID_ANY,
-					0);
-
-			TEST_ASSERT_NOT_NULL(ts_params->session_priv_mpool,
-					"session mempool allocation failed");
-		}
-
 		ts_params->qp_conf.mp_session = ts_params->session_mpool;
-		ts_params->qp_conf.mp_session_private =
-				ts_params->session_priv_mpool;
 
 		ret = rte_cryptodev_scheduler_worker_attach(sched_id,
 				(uint8_t)i);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 90c8287365..f5cdf40e93 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -90,7 +90,6 @@ struct crypto_testsuite_params {
 	struct rte_mempool *large_mbuf_pool;
 	struct rte_mempool *op_mpool;
 	struct rte_mempool *session_mpool;
-	struct rte_mempool *session_priv_mpool;
 	struct rte_cryptodev_config conf;
 	struct rte_cryptodev_qp_conf qp_conf;
 
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 9d19a6d6d9..7b05cb1647 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -923,8 +923,7 @@ testsuite_setup(void)
 
 	/* configure qp */
 	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
-	ts_params->qp_conf.mp_session = ts_params->session_mpool;
-	ts_params->qp_conf.mp_session_private = ts_params->session_mpool;
+	ts_params->qp_conf.mp_session = NULL;
 	for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
 		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
 			dev_id, qp_id, &ts_params->qp_conf,
diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c
index 3cdb2c96e8..9417803f18 100644
--- a/app/test/test_cryptodev_blockcipher.c
+++ b/app/test/test_cryptodev_blockcipher.c
@@ -68,7 +68,6 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
 	struct rte_mempool *mbuf_pool,
 	struct rte_mempool *op_mpool,
 	struct rte_mempool *sess_mpool,
-	struct rte_mempool *sess_priv_mpool,
 	uint8_t dev_id,
 	char *test_msg)
 {
@@ -81,7 +80,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
 	struct rte_crypto_sym_op *sym_op = NULL;
 	struct rte_crypto_op *op = NULL;
 	struct rte_cryptodev_info dev_info;
-	struct rte_cryptodev_sym_session *sess = NULL;
+	void *sess = NULL;
 
 	int status = TEST_SUCCESS;
 	const struct blockcipher_test_data *tdata = t->test_data;
@@ -514,7 +513,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
 		sess = rte_cryptodev_sym_session_create(sess_mpool);
 
 		status = rte_cryptodev_sym_session_init(dev_id, sess,
-				init_xform, sess_priv_mpool);
+				init_xform);
 		if (status == -ENOTSUP) {
 			snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "UNSUPPORTED");
 			status = TEST_SKIPPED;
@@ -831,7 +830,6 @@ blockcipher_test_case_run(const void *data)
 			p_testsuite_params->mbuf_pool,
 			p_testsuite_params->op_mpool,
 			p_testsuite_params->session_mpool,
-			p_testsuite_params->session_priv_mpool,
 			p_testsuite_params->valid_devs[0],
 			test_msg);
 	return status;
diff --git a/app/test/test_event_crypto_adapter.c b/app/test/test_event_crypto_adapter.c
index 0c7ebe6981..76d2aeab6d 100644
--- a/app/test/test_event_crypto_adapter.c
+++ b/app/test/test_event_crypto_adapter.c
@@ -61,7 +61,6 @@ struct event_crypto_adapter_test_params {
 	struct rte_mempool *mbuf_pool;
 	struct rte_mempool *op_mpool;
 	struct rte_mempool *session_mpool;
-	struct rte_mempool *session_priv_mpool;
 	struct rte_cryptodev_config *config;
 	uint8_t crypto_event_port_id;
 	uint8_t internal_port_op_fwd;
@@ -167,7 +166,7 @@ static int
 test_op_forward_mode(uint8_t session_less)
 {
 	struct rte_crypto_sym_xform cipher_xform;
-	struct rte_cryptodev_sym_session *sess;
+	void *sess;
 	union rte_event_crypto_metadata m_data;
 	struct rte_crypto_sym_op *sym_op;
 	struct rte_crypto_op *op;
@@ -203,7 +202,7 @@ test_op_forward_mode(uint8_t session_less)
 
 		/* Create Crypto session*/
 		ret = rte_cryptodev_sym_session_init(TEST_CDEV_ID, sess,
-				&cipher_xform, params.session_priv_mpool);
+				&cipher_xform);
 		TEST_ASSERT_SUCCESS(ret, "Failed to init session\n");
 
 		ret = rte_event_crypto_adapter_caps_get(evdev, TEST_CDEV_ID,
@@ -366,7 +365,7 @@ static int
 test_op_new_mode(uint8_t session_less)
 {
 	struct rte_crypto_sym_xform cipher_xform;
-	struct rte_cryptodev_sym_session *sess;
+	void *sess;
 	union rte_event_crypto_metadata m_data;
 	struct rte_crypto_sym_op *sym_op;
 	struct rte_crypto_op *op;
@@ -409,7 +408,7 @@ test_op_new_mode(uint8_t session_less)
 						&m_data, sizeof(m_data));
 		}
 		ret = rte_cryptodev_sym_session_init(TEST_CDEV_ID, sess,
-				&cipher_xform, params.session_priv_mpool);
+				&cipher_xform);
 		TEST_ASSERT_SUCCESS(ret, "Failed to init session\n");
 
 		rte_crypto_op_attach_sym_session(op, sess);
@@ -550,22 +549,12 @@ configure_cryptodev(void)
 
 	params.session_mpool = rte_cryptodev_sym_session_pool_create(
 			"CRYPTO_ADAPTER_SESSION_MP",
-			MAX_NB_SESSIONS, 0, 0,
+			MAX_NB_SESSIONS, session_size, 0,
 			sizeof(union rte_event_crypto_metadata),
 			SOCKET_ID_ANY);
 	TEST_ASSERT_NOT_NULL(params.session_mpool,
 			"session mempool allocation failed\n");
 
-	params.session_priv_mpool = rte_mempool_create(
-				"CRYPTO_AD_SESS_MP_PRIV",
-				MAX_NB_SESSIONS,
-				session_size,
-				0, 0, NULL, NULL, NULL,
-				NULL, SOCKET_ID_ANY,
-				0);
-	TEST_ASSERT_NOT_NULL(params.session_priv_mpool,
-			"session mempool allocation failed\n");
-
 	rte_cryptodev_info_get(TEST_CDEV_ID, &info);
 	conf.nb_queue_pairs = info.max_nb_queue_pairs;
 	conf.socket_id = SOCKET_ID_ANY;
@@ -577,7 +566,6 @@ configure_cryptodev(void)
 
 	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
 	qp_conf.mp_session = params.session_mpool;
-	qp_conf.mp_session_private = params.session_priv_mpool;
 
 	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
 			TEST_CDEV_ID, TEST_CDEV_QP_ID, &qp_conf,
@@ -931,12 +919,6 @@ crypto_teardown(void)
 		rte_mempool_free(params.session_mpool);
 		params.session_mpool = NULL;
 	}
-	if (params.session_priv_mpool != NULL) {
-		rte_mempool_avail_count(params.session_priv_mpool);
-		rte_mempool_free(params.session_priv_mpool);
-		params.session_priv_mpool = NULL;
-	}
-
 	/* Free ops mempool */
 	if (params.op_mpool != NULL) {
 		RTE_LOG(DEBUG, USER1, "EVENT_CRYPTO_SYM_OP_POOL count %u\n",
diff --git a/app/test/test_ipsec.c b/app/test/test_ipsec.c
index 3b49a0b13a..1e3913489d 100644
--- a/app/test/test_ipsec.c
+++ b/app/test/test_ipsec.c
@@ -356,20 +356,9 @@ testsuite_setup(void)
 		return TEST_FAILED;
 	}
 
-	ts_params->qp_conf.mp_session_private = rte_mempool_create(
-				"test_priv_sess_mp",
-				MAX_NB_SESSIONS,
-				sess_sz,
-				0, 0, NULL, NULL, NULL,
-				NULL, SOCKET_ID_ANY,
-				0);
-
-	TEST_ASSERT_NOT_NULL(ts_params->qp_conf.mp_session_private,
-			"private session mempool allocation failed");
-
 	ts_params->qp_conf.mp_session =
 		rte_cryptodev_sym_session_pool_create("test_sess_mp",
-			MAX_NB_SESSIONS, 0, 0, 0, SOCKET_ID_ANY);
+			MAX_NB_SESSIONS, sess_sz, 0, 0, SOCKET_ID_ANY);
 
 	TEST_ASSERT_NOT_NULL(ts_params->qp_conf.mp_session,
 			"session mempool allocation failed");
@@ -414,11 +403,6 @@ testsuite_teardown(void)
 		rte_mempool_free(ts_params->qp_conf.mp_session);
 		ts_params->qp_conf.mp_session = NULL;
 	}
-
-	if (ts_params->qp_conf.mp_session_private != NULL) {
-		rte_mempool_free(ts_params->qp_conf.mp_session_private);
-		ts_params->qp_conf.mp_session_private = NULL;
-	}
 }
 
 static int
@@ -645,7 +629,7 @@ create_crypto_session(struct ipsec_unitest_params *ut,
 	struct rte_cryptodev_qp_conf *qp, uint8_t dev_id, uint32_t j)
 {
 	int32_t rc;
-	struct rte_cryptodev_sym_session *s;
+	void *s;
 
 	s = rte_cryptodev_sym_session_create(qp->mp_session);
 	if (s == NULL)
@@ -653,7 +637,7 @@ create_crypto_session(struct ipsec_unitest_params *ut,
 
 	/* initialize SA crypto session for device */
 	rc = rte_cryptodev_sym_session_init(dev_id, s,
-			ut->crypto_xforms, qp->mp_session_private);
+			ut->crypto_xforms);
 	if (rc == 0) {
 		ut->ss[j].crypto.ses = s;
 		return 0;
diff --git a/drivers/crypto/armv8/armv8_pmd_private.h b/drivers/crypto/armv8/armv8_pmd_private.h
index 75ddba79c1..41292d8851 100644
--- a/drivers/crypto/armv8/armv8_pmd_private.h
+++ b/drivers/crypto/armv8/armv8_pmd_private.h
@@ -106,8 +106,6 @@ struct armv8_crypto_qp {
 	/**< Ring for placing process packets */
 	struct rte_mempool *sess_mp;
 	/**< Session Mempool */
-	struct rte_mempool *sess_mp_priv;
-       /**< Session Private Data Mempool */
 	struct rte_cryptodev_stats stats;
 	/**< Queue pair statistics */
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
diff --git a/drivers/crypto/armv8/rte_armv8_pmd.c b/drivers/crypto/armv8/rte_armv8_pmd.c
index 32127a874c..51034de9eb 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd.c
@@ -528,27 +528,23 @@ get_session(struct armv8_crypto_qp *qp, struct rte_crypto_op *op)
 		}
 	} else {
 		/* provide internal session */
-		void *_sess = NULL;
-		void *_sess_private_data = NULL;
+		struct rte_cryptodev_sym_session *_sess =
+			rte_cryptodev_sym_session_create(qp->sess_mp);
 
-		if (rte_mempool_get(qp->sess_mp, (void **)&_sess))
+		if (_sess == NULL)
 			return NULL;
 
-		if (rte_mempool_get(qp->sess_mp_priv,
-				(void **)&_sess_private_data))
-			return NULL;
-
-		sess = (struct armv8_crypto_session *)_sess_private_data;
-
+		_sess->sess_data[cryptodev_driver_id].data =
+				(void *)((uint8_t *)_sess +
+				rte_cryptodev_sym_get_header_session_size() +
+				(cryptodev_driver_id * _sess->priv_sz));
+		sess = _sess->sess_data[cryptodev_driver_id].data;
 		if (unlikely(armv8_crypto_set_session_parameters(sess,
 				op->sym->xform) != 0)) {
 			rte_mempool_put(qp->sess_mp, _sess);
-			rte_mempool_put(qp->sess_mp_priv, _sess_private_data);
 			sess = NULL;
 		}
 		op->sym->session = (struct rte_cryptodev_sym_session *)_sess;
-		set_sym_session_private_data(op->sym->session,
-				cryptodev_driver_id, _sess_private_data);
 	}
 
 	if (unlikely(sess == NULL))
@@ -677,7 +673,6 @@ process_op(struct armv8_crypto_qp *qp, struct rte_crypto_op *op,
 		memset(op->sym->session, 0,
 			rte_cryptodev_sym_get_existing_header_session_size(
 				op->sym->session));
-		rte_mempool_put(qp->sess_mp_priv, sess);
 		rte_mempool_put(qp->sess_mp, op->sym->session);
 		op->sym->session = NULL;
 	}
diff --git a/drivers/crypto/armv8/rte_armv8_pmd_ops.c b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
index 1b2749fe62..2d3b54b063 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd_ops.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
@@ -244,7 +244,6 @@ armv8_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		goto qp_setup_cleanup;
 
 	qp->sess_mp = qp_conf->mp_session;
-	qp->sess_mp_priv = qp_conf->mp_session_private;
 
 	memset(&qp->stats, 0, sizeof(qp->stats));
 
@@ -268,10 +267,8 @@ armv8_crypto_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 static int
 armv8_crypto_pmd_sym_session_configure(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mempool)
+		void *sess)
 {
-	void *sess_private_data;
 	int ret;
 
 	if (unlikely(sess == NULL)) {
@@ -279,42 +276,23 @@ armv8_crypto_pmd_sym_session_configure(struct rte_cryptodev *dev,
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		CDEV_LOG_ERR(
-			"Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
-	ret = armv8_crypto_set_session_parameters(sess_private_data, xform);
+	ret = armv8_crypto_set_session_parameters(sess, xform);
 	if (ret != 0) {
 		ARMV8_CRYPTO_LOG_ERR("failed configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-			sess_private_data);
-
 	return 0;
 }
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static void
-armv8_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+armv8_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-
+	RTE_SET_USED(dev);
 	/* Zero out the whole structure */
-	if (sess_priv) {
-		memset(sess_priv, 0, sizeof(struct armv8_crypto_session));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
-	}
+	if (sess)
+		memset(sess, 0, sizeof(struct armv8_crypto_session));
 }
 
 struct rte_cryptodev_ops armv8_crypto_pmd_ops = {
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.c b/drivers/crypto/bcmfs/bcmfs_sym_session.c
index 675ed0ad55..b4b167d0c2 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_session.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.c
@@ -224,10 +224,9 @@ bcmfs_sym_get_session(struct rte_crypto_op *op)
 int
 bcmfs_sym_session_configure(struct rte_cryptodev *dev,
 			    struct rte_crypto_sym_xform *xform,
-			    struct rte_cryptodev_sym_session *sess,
-			    struct rte_mempool *mempool)
+			    void *sess)
 {
-	void *sess_private_data;
+	RTE_SET_USED(dev);
 	int ret;
 
 	if (unlikely(sess == NULL)) {
@@ -235,44 +234,23 @@ bcmfs_sym_session_configure(struct rte_cryptodev *dev,
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		BCMFS_DP_LOG(ERR,
-			"Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
-	ret = crypto_set_session_parameters(sess_private_data, xform);
+	ret = crypto_set_session_parameters(sess, xform);
 
 	if (ret != 0) {
 		BCMFS_DP_LOG(ERR, "Failed configure session parameters");
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-				     sess_private_data);
-
 	return 0;
 }
 
 /* Clear the memory of session so it doesn't leave key material behind */
 void
-bcmfs_sym_session_clear(struct rte_cryptodev *dev,
-			struct rte_cryptodev_sym_session  *sess)
+bcmfs_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-
-	if (sess_priv) {
-		struct rte_mempool *sess_mp;
-
-		memset(sess_priv, 0, sizeof(struct bcmfs_sym_session));
-		sess_mp = rte_mempool_from_obj(sess_priv);
-
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
-	}
+	RTE_SET_USED(dev);
+	if (sess)
+		memset(sess, 0, sizeof(struct bcmfs_sym_session));
 }
 
 unsigned int
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.h b/drivers/crypto/bcmfs/bcmfs_sym_session.h
index d40595b4bd..7faafe2fd5 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_session.h
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.h
@@ -93,12 +93,10 @@ bcmfs_process_crypto_op(struct rte_crypto_op *op,
 int
 bcmfs_sym_session_configure(struct rte_cryptodev *dev,
 			    struct rte_crypto_sym_xform *xform,
-			    struct rte_cryptodev_sym_session *sess,
-			    struct rte_mempool *mempool);
+			    void *sess);
 
 void
-bcmfs_sym_session_clear(struct rte_cryptodev *dev,
-			struct rte_cryptodev_sym_session  *sess);
+bcmfs_sym_session_clear(struct rte_cryptodev *dev, void *sess);
 
 unsigned int
 bcmfs_sym_session_get_private_size(struct rte_cryptodev *dev __rte_unused);
diff --git a/drivers/crypto/caam_jr/caam_jr.c b/drivers/crypto/caam_jr/caam_jr.c
index 44de978d29..4c88ec637a 100644
--- a/drivers/crypto/caam_jr/caam_jr.c
+++ b/drivers/crypto/caam_jr/caam_jr.c
@@ -1692,52 +1692,36 @@ caam_jr_set_session_parameters(struct rte_cryptodev *dev,
 static int
 caam_jr_sym_session_configure(struct rte_cryptodev *dev,
 			      struct rte_crypto_sym_xform *xform,
-			      struct rte_cryptodev_sym_session *sess,
-			      struct rte_mempool *mempool)
+			      void *sess)
 {
-	void *sess_private_data;
 	int ret;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		CAAM_JR_ERR("Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
-	memset(sess_private_data, 0, sizeof(struct caam_jr_session));
-	ret = caam_jr_set_session_parameters(dev, xform, sess_private_data);
+	memset(sess, 0, sizeof(struct caam_jr_session));
+	ret = caam_jr_set_session_parameters(dev, xform, sess);
 	if (ret != 0) {
 		CAAM_JR_ERR("failed to configure session parameters");
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id, sess_private_data);
-
 	return 0;
 }
 
 /* Clear the memory of session so it doesn't leave key material behind */
 static void
-caam_jr_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+caam_jr_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-	struct caam_jr_session *s = (struct caam_jr_session *)sess_priv;
+	RTE_SET_USED(dev);
+
+	struct caam_jr_session *s = (struct caam_jr_session *)sess;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (sess_priv) {
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-
+	if (sess) {
 		rte_free(s->cipher_key.data);
 		rte_free(s->auth_key.data);
 		memset(s, 0, sizeof(struct caam_jr_session));
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
 	}
 }
 
diff --git a/drivers/crypto/ccp/ccp_pmd_ops.c b/drivers/crypto/ccp/ccp_pmd_ops.c
index 0d615d311c..cac1268130 100644
--- a/drivers/crypto/ccp/ccp_pmd_ops.c
+++ b/drivers/crypto/ccp/ccp_pmd_ops.c
@@ -727,7 +727,6 @@ ccp_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	}
 
 	qp->sess_mp = qp_conf->mp_session;
-	qp->sess_mp_priv = qp_conf->mp_session_private;
 
 	/* mempool for batch info */
 	qp->batch_mp = rte_mempool_create(
@@ -758,11 +757,9 @@ ccp_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 static int
 ccp_pmd_sym_session_configure(struct rte_cryptodev *dev,
 			  struct rte_crypto_sym_xform *xform,
-			  struct rte_cryptodev_sym_session *sess,
-			  struct rte_mempool *mempool)
+			  void *sess)
 {
 	int ret;
-	void *sess_private_data;
 	struct ccp_private *internals;
 
 	if (unlikely(sess == NULL || xform == NULL)) {
@@ -770,39 +767,22 @@ ccp_pmd_sym_session_configure(struct rte_cryptodev *dev,
 		return -ENOMEM;
 	}
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		CCP_LOG_ERR("Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
 	internals = (struct ccp_private *)dev->data->dev_private;
-	ret = ccp_set_session_parameters(sess_private_data, xform, internals);
+	ret = ccp_set_session_parameters(sess, xform, internals);
 	if (ret != 0) {
 		CCP_LOG_ERR("failed configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
-	set_sym_session_private_data(sess, dev->driver_id,
-				 sess_private_data);
 
 	return 0;
 }
 
 static void
-ccp_pmd_sym_session_clear(struct rte_cryptodev *dev,
-		      struct rte_cryptodev_sym_session *sess)
+ccp_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-
-	if (sess_priv) {
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-
-		rte_mempool_put(sess_mp, sess_priv);
-		memset(sess_priv, 0, sizeof(struct ccp_session));
-		set_sym_session_private_data(sess, index, NULL);
-	}
+	RTE_SET_USED(dev);
+	if (sess)
+		memset(sess, 0, sizeof(struct ccp_session));
 }
 
 struct rte_cryptodev_ops ccp_ops = {
diff --git a/drivers/crypto/ccp/ccp_pmd_private.h b/drivers/crypto/ccp/ccp_pmd_private.h
index 1c4118ee3c..6704e39ab8 100644
--- a/drivers/crypto/ccp/ccp_pmd_private.h
+++ b/drivers/crypto/ccp/ccp_pmd_private.h
@@ -78,8 +78,6 @@ struct ccp_qp {
 	/**< Ring for placing process packets */
 	struct rte_mempool *sess_mp;
 	/**< Session Mempool */
-	struct rte_mempool *sess_mp_priv;
-	/**< Session Private Data Mempool */
 	struct rte_mempool *batch_mp;
 	/**< Session Mempool for batch info */
 	struct rte_cryptodev_stats qp_stats;
diff --git a/drivers/crypto/ccp/rte_ccp_pmd.c b/drivers/crypto/ccp/rte_ccp_pmd.c
index a35a8cd775..3e8c1f3b51 100644
--- a/drivers/crypto/ccp/rte_ccp_pmd.c
+++ b/drivers/crypto/ccp/rte_ccp_pmd.c
@@ -61,28 +61,28 @@ get_ccp_session(struct ccp_qp *qp, struct rte_crypto_op *op)
 				op->sym->session,
 				ccp_cryptodev_driver_id);
 	} else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
-		void *_sess;
-		void *_sess_private_data = NULL;
 		struct ccp_private *internals;
+		struct rte_cryptodev_sym_session *_sess =
+			rte_cryptodev_sym_session_create(qp->sess_mp);
 
-		if (rte_mempool_get(qp->sess_mp, &_sess))
-			return NULL;
-		if (rte_mempool_get(qp->sess_mp, (void **)&_sess_private_data))
+		if (_sess == NULL)
 			return NULL;
 
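+		/* The driver private area follows the session header,
+		 * offset by driver_id * priv_sz within the same object. */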
-		sess = (struct ccp_session *)_sess_private_data;
+		_sess->sess_data[ccp_cryptodev_driver_id].data =
+				(void *)((uint8_t *)_sess +
+				rte_cryptodev_sym_get_header_session_size() +
+				(ccp_cryptodev_driver_id * _sess->priv_sz));
+		sess = _sess->sess_data[ccp_cryptodev_driver_id].data;
 
 		internals = (struct ccp_private *)qp->dev->data->dev_private;
 		if (unlikely(ccp_set_session_parameters(sess, op->sym->xform,
 							internals) != 0)) {
 			rte_mempool_put(qp->sess_mp, _sess);
-			rte_mempool_put(qp->sess_mp_priv, _sess_private_data);
 			sess = NULL;
 		}
 		op->sym->session = (struct rte_cryptodev_sym_session *)_sess;
-		set_sym_session_private_data(op->sym->session,
-					 ccp_cryptodev_driver_id,
-					 _sess_private_data);
 	}
 
 	return sess;
@@ -166,8 +164,12 @@ ccp_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
 						ops[i]->sym->session,
 						ccp_cryptodev_driver_id);
 
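+			/* Header and driver private data share one mempool
+			 * object; wipe both before returning it. */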
-			rte_mempool_put(qp->sess_mp_priv,
-					sess);
+			memset(sess, 0, sizeof(struct ccp_session));
+			memset(ops[i]->sym->session, 0,
+			rte_cryptodev_sym_get_existing_header_session_size(
+				ops[i]->sym->session));
 			rte_mempool_put(qp->sess_mp,
 					ops[i]->sym->session);
 			ops[i]->sym->session = NULL;
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index de2eebd507..76c992858f 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -32,17 +32,18 @@ cn10k_cpt_sym_temp_sess_create(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op)
 	if (sess == NULL)
 		return NULL;
 
-	ret = sym_session_configure(qp->lf.roc_cpt, driver_id, sym_op->xform,
-				    sess, qp->sess_mp_priv);
+	sess->sess_data[driver_id].data =
+			(void *)((uint8_t *)sess +
+			rte_cryptodev_sym_get_header_session_size() +
+			(driver_id * sess->priv_sz));
+	priv = get_sym_session_private_data(sess, driver_id);
+	ret = sym_session_configure(qp->lf.roc_cpt, sym_op->xform, (void *)priv);
 	if (ret)
 		goto sess_put;
 
-	priv = get_sym_session_private_data(sess, driver_id);
-
 	sym_op->session = sess;
 
 	return priv;
-
 sess_put:
 	rte_mempool_put(qp->sess_mp, sess);
 	return NULL;
@@ -147,9 +148,7 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[],
 			ret = cpt_sym_inst_fill(qp, op, sess, infl_req,
 						&inst[0]);
 			if (unlikely(ret)) {
-				sym_session_clear(cn10k_cryptodev_driver_id,
-						  op->sym->session);
-				rte_mempool_put(qp->sess_mp, op->sym->session);
+				sym_session_clear(op->sym->session);
 				return 0;
 			}
 			w7 = sess->cpt_inst_w7;
@@ -474,8 +473,7 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp,
 temp_sess_free:
 	if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
 		if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
-			sym_session_clear(cn10k_cryptodev_driver_id,
-					  cop->sym->session);
+			sym_session_clear(cop->sym->session);
 			sz = rte_cryptodev_sym_get_existing_header_session_size(
 				cop->sym->session);
 			memset(cop->sym->session, 0, sz);
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index 4c2dc5b080..5f83581131 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -81,17 +81,19 @@ cn9k_cpt_sym_temp_sess_create(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op)
 	if (sess == NULL)
 		return NULL;
 
-	ret = sym_session_configure(qp->lf.roc_cpt, driver_id, sym_op->xform,
-				    sess, qp->sess_mp_priv);
+	sess->sess_data[driver_id].data =
+			(void *)((uint8_t *)sess +
+			rte_cryptodev_sym_get_header_session_size() +
+			(driver_id * sess->priv_sz));
+	priv = get_sym_session_private_data(sess, driver_id);
+	ret = sym_session_configure(qp->lf.roc_cpt, sym_op->xform,
+			(void *)priv);
 	if (ret)
 		goto sess_put;
 
-	priv = get_sym_session_private_data(sess, driver_id);
-
 	sym_op->session = sess;
 
 	return priv;
-
 sess_put:
 	rte_mempool_put(qp->sess_mp, sess);
 	return NULL;
@@ -126,8 +128,7 @@ cn9k_cpt_inst_prep(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
 			ret = cn9k_cpt_sym_inst_fill(qp, op, sess, infl_req,
 						     inst);
 			if (unlikely(ret)) {
-				sym_session_clear(cn9k_cryptodev_driver_id,
-						  op->sym->session);
+				sym_session_clear(op->sym->session);
 				rte_mempool_put(qp->sess_mp, op->sym->session);
 			}
 			inst->w7.u64 = sess->cpt_inst_w7;
@@ -484,8 +485,7 @@ cn9k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
 temp_sess_free:
 	if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
 		if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
-			sym_session_clear(cn9k_cryptodev_driver_id,
-					  cop->sym->session);
+			sym_session_clear(cop->sym->session);
 			sz = rte_cryptodev_sym_get_existing_header_session_size(
 				cop->sym->session);
 			memset(cop->sym->session, 0, sz);
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index e49f826225..776cf02b57 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -382,7 +382,6 @@ cnxk_cpt_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	}
 
 	qp->sess_mp = conf->mp_session;
-	qp->sess_mp_priv = conf->mp_session_private;
 	dev->data->queue_pairs[qp_id] = qp;
 
 	return 0;
@@ -496,27 +495,21 @@ cnxk_cpt_inst_w7_get(struct cnxk_se_sess *sess, struct roc_cpt *roc_cpt)
 }
 
 int
-sym_session_configure(struct roc_cpt *roc_cpt, int driver_id,
+sym_session_configure(struct roc_cpt *roc_cpt,
 		      struct rte_crypto_sym_xform *xform,
-		      struct rte_cryptodev_sym_session *sess,
-		      struct rte_mempool *pool)
+		      void *sess)
 {
 	struct cnxk_se_sess *sess_priv;
-	void *priv;
 	int ret;
 
 	ret = sym_xform_verify(xform);
 	if (unlikely(ret < 0))
 		return ret;
 
-	if (unlikely(rte_mempool_get(pool, &priv))) {
-		plt_dp_err("Could not allocate session private data");
-		return -ENOMEM;
-	}
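+	/* sess already points at this driver's private data area. */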
+	memset(sess, 0, sizeof(struct cnxk_se_sess));
 
-	memset(priv, 0, sizeof(struct cnxk_se_sess));
-
-	sess_priv = priv;
+	sess_priv = sess;
 
 	switch (ret) {
 	case CNXK_CPT_CIPHER:
@@ -550,7 +542,7 @@ sym_session_configure(struct roc_cpt *roc_cpt, int driver_id,
 	}
 
 	if (ret)
-		goto priv_put;
+		return -ENOTSUP;
 
 	if ((sess_priv->roc_se_ctx.fc_type == ROC_SE_HASH_HMAC) &&
 	    cpt_mac_len_verify(&xform->auth)) {
@@ -560,66 +552,45 @@ sym_session_configure(struct roc_cpt *roc_cpt, int driver_id,
 			sess_priv->roc_se_ctx.auth_key = NULL;
 		}
 
-		ret = -ENOTSUP;
-		goto priv_put;
+		return -ENOTSUP;
 	}
 
 	sess_priv->cpt_inst_w7 = cnxk_cpt_inst_w7_get(sess_priv, roc_cpt);
 
-	set_sym_session_private_data(sess, driver_id, sess_priv);
-
 	return 0;
-
-priv_put:
-	rte_mempool_put(pool, priv);
-
-	return -ENOTSUP;
 }
 
 int
 cnxk_cpt_sym_session_configure(struct rte_cryptodev *dev,
 			       struct rte_crypto_sym_xform *xform,
-			       struct rte_cryptodev_sym_session *sess,
-			       struct rte_mempool *pool)
+			       void *sess)
 {
 	struct cnxk_cpt_vf *vf = dev->data->dev_private;
 	struct roc_cpt *roc_cpt = &vf->cpt;
-	uint8_t driver_id;
 
-	driver_id = dev->driver_id;
-
-	return sym_session_configure(roc_cpt, driver_id, xform, sess, pool);
+	return sym_session_configure(roc_cpt, xform, sess);
 }
 
 void
-sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess)
+sym_session_clear(void *sess)
 {
-	void *priv = get_sym_session_private_data(sess, driver_id);
-	struct cnxk_se_sess *sess_priv;
-	struct rte_mempool *pool;
+	struct cnxk_se_sess *sess_priv = sess;
 
-	if (priv == NULL)
+	if (sess == NULL)
 		return;
 
-	sess_priv = priv;
-
 	if (sess_priv->roc_se_ctx.auth_key != NULL)
 		plt_free(sess_priv->roc_se_ctx.auth_key);
 
-	memset(priv, 0, cnxk_cpt_sym_session_get_size(NULL));
-
-	pool = rte_mempool_from_obj(priv);
-
-	set_sym_session_private_data(sess, driver_id, NULL);
-
-	rte_mempool_put(pool, priv);
+	memset(sess_priv, 0, cnxk_cpt_sym_session_get_size(NULL));
 }
 
 void
-cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev,
-			   struct rte_cryptodev_sym_session *sess)
+cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	return sym_session_clear(dev->driver_id, sess);
+	RTE_SET_USED(dev);
+
+	return sym_session_clear(sess);
 }
 
 unsigned int
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
index c5332dec53..97a2fb1050 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
@@ -79,8 +79,6 @@ struct cnxk_cpt_qp {
 	/**< Pending queue */
 	struct rte_mempool *sess_mp;
 	/**< Session mempool */
-	struct rte_mempool *sess_mp_priv;
-	/**< Session private data mempool */
 	struct cpt_qp_meta_info meta_info;
 	/**< Metabuf info required to support operations on the queue pair */
 	struct roc_cpt_lmtline lmtline;
@@ -111,18 +109,15 @@ unsigned int cnxk_cpt_sym_session_get_size(struct rte_cryptodev *dev);
 
 int cnxk_cpt_sym_session_configure(struct rte_cryptodev *dev,
 				   struct rte_crypto_sym_xform *xform,
-				   struct rte_cryptodev_sym_session *sess,
-				   struct rte_mempool *pool);
+				   void *sess);
 
-int sym_session_configure(struct roc_cpt *roc_cpt, int driver_id,
+int sym_session_configure(struct roc_cpt *roc_cpt,
 			  struct rte_crypto_sym_xform *xform,
-			  struct rte_cryptodev_sym_session *sess,
-			  struct rte_mempool *pool);
+			  void *sess);
 
-void cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev,
-				struct rte_cryptodev_sym_session *sess);
+void cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev, void *sess);
 
-void sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess);
+void sym_session_clear(void *sess);
 
 unsigned int cnxk_ae_session_size_get(struct rte_cryptodev *dev __rte_unused);
 
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 9115dd8e70..717e506998 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -3438,49 +3438,32 @@ dpaa2_sec_security_session_get_size(void *device __rte_unused)
 static int
 dpaa2_sec_sym_session_configure(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mempool)
+		void *sess)
 {
-	void *sess_private_data;
 	int ret;
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		DPAA2_SEC_ERR("Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
-	ret = dpaa2_sec_set_session_parameters(dev, xform, sess_private_data);
+	ret = dpaa2_sec_set_session_parameters(dev, xform, sess);
 	if (ret != 0) {
 		DPAA2_SEC_ERR("Failed to configure session parameters");
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-		sess_private_data);
-
 	return 0;
 }
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static void
-dpaa2_sec_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+dpaa2_sec_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
 	PMD_INIT_FUNC_TRACE();
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-	dpaa2_sec_session *s = (dpaa2_sec_session *)sess_priv;
+	RTE_SET_USED(dev);
+	dpaa2_sec_session *s = (dpaa2_sec_session *)sess;
 
-	if (sess_priv) {
+	if (sess) {
 		rte_free(s->ctxt);
 		rte_free(s->cipher_key.data);
 		rte_free(s->auth_key.data);
 		memset(s, 0, sizeof(dpaa2_sec_session));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
 	}
 }
 
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index e38ba21e89..fc267784a8 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -2534,33 +2534,18 @@ dpaa_sec_set_session_parameters(struct rte_cryptodev *dev,
 
 static int
 dpaa_sec_sym_session_configure(struct rte_cryptodev *dev,
-		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mempool)
+		struct rte_crypto_sym_xform *xform, void *sess)
 {
-	void *sess_private_data;
 	int ret;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		DPAA_SEC_ERR("Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
-	ret = dpaa_sec_set_session_parameters(dev, xform, sess_private_data);
+	ret = dpaa_sec_set_session_parameters(dev, xform, sess);
 	if (ret != 0) {
 		DPAA_SEC_ERR("failed to configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-			sess_private_data);
-
-
 	return 0;
 }
 
@@ -2581,18 +2566,14 @@ free_session_memory(struct rte_cryptodev *dev, dpaa_sec_session *s)
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static void
-dpaa_sec_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+dpaa_sec_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
 	PMD_INIT_FUNC_TRACE();
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-	dpaa_sec_session *s = (dpaa_sec_session *)sess_priv;
+	RTE_SET_USED(dev);
+	dpaa_sec_session *s = (dpaa_sec_session *)sess;
 
-	if (sess_priv) {
+	if (sess)
 		free_session_memory(dev, s);
-		set_sym_session_private_data(sess, index, NULL);
-	}
 }
 
 #ifdef RTE_LIB_SECURITY
diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c
index 189262c4ad..e4b4de7612 100644
--- a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c
+++ b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c
@@ -262,7 +262,6 @@ ipsec_mb_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 
 	qp->pmd_type = internals->pmd_type;
 	qp->sess_mp = qp_conf->mp_session;
-	qp->sess_mp_priv = qp_conf->mp_session_private;
 
 	qp->ingress_queue = ipsec_mb_qp_create_processed_ops_ring(qp,
 		qp_conf->nb_descriptors, socket_id);
@@ -311,9 +310,8 @@ ipsec_mb_sym_session_get_size(struct rte_cryptodev *dev)
 int
 ipsec_mb_sym_session_configure(
 	struct rte_cryptodev *dev, struct rte_crypto_sym_xform *xform,
-	struct rte_cryptodev_sym_session *sess, struct rte_mempool *mempool)
+	void *sess)
 {
-	void *sess_private_data;
 	struct ipsec_mb_dev_private *internals = dev->data->dev_private;
 	struct ipsec_mb_internals *pmd_data =
 		&ipsec_mb_pmds[internals->pmd_type];
@@ -329,42 +327,22 @@ ipsec_mb_sym_session_configure(
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		IPSEC_MB_LOG(ERR, "Couldn't get object from session mempool");
-		free_mb_mgr(mb_mgr);
-		return -ENOMEM;
-	}
-
-	ret = (*pmd_data->session_configure)(mb_mgr, sess_private_data, xform);
+	ret = (*pmd_data->session_configure)(mb_mgr, sess, xform);
 	if (ret != 0) {
 		IPSEC_MB_LOG(ERR, "failed configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		free_mb_mgr(mb_mgr);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id, sess_private_data);
-
 	free_mb_mgr(mb_mgr);
 	return 0;
 }
 
 /** Clear the session memory */
 void
-ipsec_mb_sym_session_clear(struct rte_cryptodev *dev,
-			       struct rte_cryptodev_sym_session *sess)
+ipsec_mb_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-
 	/* Zero out the whole structure */
-	if (sess_priv) {
-		memset(sess_priv, 0, ipsec_mb_sym_session_get_size(dev));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
-	}
+	if (sess)
+		memset(sess, 0, ipsec_mb_sym_session_get_size(dev));
 }
diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_private.h b/drivers/crypto/ipsec_mb/ipsec_mb_private.h
index 866722d6f4..38850c6532 100644
--- a/drivers/crypto/ipsec_mb/ipsec_mb_private.h
+++ b/drivers/crypto/ipsec_mb/ipsec_mb_private.h
@@ -132,8 +132,6 @@ struct ipsec_mb_qp {
 	/**< Ring for placing operations ready for processing */
 	struct rte_mempool *sess_mp;
 	/**< Session Mempool */
-	struct rte_mempool *sess_mp_priv;
-	/**< Session Private Data Mempool */
 	struct rte_cryptodev_stats stats;
 	/**< Queue pair statistics */
 	enum ipsec_mb_pmd_types pmd_type;
@@ -394,13 +392,11 @@ ipsec_mb_sym_session_get_size(struct rte_cryptodev *dev);
 int ipsec_mb_sym_session_configure(
 	struct rte_cryptodev *dev,
 	struct rte_crypto_sym_xform *xform,
-	struct rte_cryptodev_sym_session *sess,
-	struct rte_mempool *mempool);
+	void *sess);
 
 /** Clear the memory of session so it does not leave key material behind */
 void
-ipsec_mb_sym_session_clear(struct rte_cryptodev *dev,
-				struct rte_cryptodev_sym_session *sess);
+ipsec_mb_sym_session_clear(struct rte_cryptodev *dev, void *sess);
 
 /** Get session from op. If sessionless create a session */
 static __rte_always_inline void *
@@ -410,8 +406,7 @@ ipsec_mb_get_session_private(struct ipsec_mb_qp *qp, struct rte_crypto_op *op)
 	uint32_t driver_id = ipsec_mb_get_driver_id(qp->pmd_type);
 	struct rte_crypto_sym_op *sym_op = op->sym;
 	uint8_t sess_type = op->sess_type;
-	void *_sess;
-	void *_sess_private_data = NULL;
+	struct rte_cryptodev_sym_session *_sess;
 	struct ipsec_mb_internals *pmd_data = &ipsec_mb_pmds[qp->pmd_type];
 
 	switch (sess_type) {
@@ -421,26 +416,22 @@ ipsec_mb_get_session_private(struct ipsec_mb_qp *qp, struct rte_crypto_op *op)
 							    driver_id);
 	break;
 	case RTE_CRYPTO_OP_SESSIONLESS:
-		if (!qp->sess_mp ||
-		    rte_mempool_get(qp->sess_mp, (void **)&_sess))
+		_sess = rte_cryptodev_sym_session_create(qp->sess_mp);
+		if (_sess == NULL)
 			return NULL;
 
-		if (!qp->sess_mp_priv ||
-		    rte_mempool_get(qp->sess_mp_priv,
-					(void **)&_sess_private_data))
-			return NULL;
-
-		sess = _sess_private_data;
+		_sess->sess_data[driver_id].data =
+				(void *)((uint8_t *)_sess +
+				rte_cryptodev_sym_get_header_session_size() +
+				(driver_id * _sess->priv_sz));
+		sess = _sess->sess_data[driver_id].data;
 		if (unlikely(pmd_data->session_configure(qp->mb_mgr,
 				sess, sym_op->xform) != 0)) {
 			rte_mempool_put(qp->sess_mp, _sess);
-			rte_mempool_put(qp->sess_mp_priv, _sess_private_data);
 			sess = NULL;
 		}
 
 		sym_op->session = (struct rte_cryptodev_sym_session *)_sess;
-		set_sym_session_private_data(sym_op->session, driver_id,
-					     _sess_private_data);
 	break;
 	default:
 		IPSEC_MB_LOG(ERR, "Unrecognized session type %u", sess_type);
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_gcm.c b/drivers/crypto/ipsec_mb/pmd_aesni_gcm.c
index 2c203795ab..0c3a689074 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_gcm.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_gcm.c
@@ -240,7 +240,6 @@ handle_completed_gcm_crypto_op(struct ipsec_mb_qp *qp,
 		memset(op->sym->session, 0,
 			rte_cryptodev_sym_get_existing_header_session_size(
 				op->sym->session));
-		rte_mempool_put(qp->sess_mp_priv, sess);
 		rte_mempool_put(qp->sess_mp, op->sym->session);
 		op->sym->session = NULL;
 	}
@@ -462,27 +461,23 @@ aesni_gcm_get_session(struct ipsec_mb_qp *qp,
 			    get_sym_session_private_data(sym_op->session,
 							 driver_id);
 	} else {
-		void *_sess;
-		void *_sess_private_data = NULL;
+		struct rte_cryptodev_sym_session *_sess =
+			rte_cryptodev_sym_session_create(qp->sess_mp);
 
-		if (rte_mempool_get(qp->sess_mp, (void **)&_sess))
+		if (_sess == NULL)
 			return NULL;
 
-		if (rte_mempool_get(qp->sess_mp_priv,
-				(void **)&_sess_private_data))
-			return NULL;
-
-		sess = (struct aesni_gcm_session *)_sess_private_data;
-
+		_sess->sess_data[driver_id].data =
+				(void *)((uint8_t *)_sess +
+				rte_cryptodev_sym_get_header_session_size() +
+				(driver_id * _sess->priv_sz));
+		sess = _sess->sess_data[driver_id].data;
 		if (unlikely(aesni_gcm_session_configure(qp->mb_mgr,
-				 _sess_private_data, sym_op->xform) != 0)) {
+				 sess, sym_op->xform) != 0)) {
 			rte_mempool_put(qp->sess_mp, _sess);
-			rte_mempool_put(qp->sess_mp_priv, _sess_private_data);
 			sess = NULL;
 		}
 		sym_op->session = (struct rte_cryptodev_sym_session *)_sess;
-		set_sym_session_private_data(sym_op->session, driver_id,
-					     _sess_private_data);
 	}
 
 	if (unlikely(sess == NULL))
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index a7b65e565c..e2654b4a0b 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -1542,7 +1542,6 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 		memset(op->sym->session, 0,
 			rte_cryptodev_sym_get_existing_header_session_size(
 				op->sym->session));
-		rte_mempool_put(qp->sess_mp_priv, sess);
 		rte_mempool_put(qp->sess_mp, op->sym->session);
 		op->sym->session = NULL;
 	}
diff --git a/drivers/crypto/ipsec_mb/pmd_chacha_poly.c b/drivers/crypto/ipsec_mb/pmd_chacha_poly.c
index d953d6e5f5..7a90836602 100644
--- a/drivers/crypto/ipsec_mb/pmd_chacha_poly.c
+++ b/drivers/crypto/ipsec_mb/pmd_chacha_poly.c
@@ -293,7 +293,6 @@ handle_completed_chacha20_poly1305_crypto_op(struct ipsec_mb_qp *qp,
 		memset(op->sym->session, 0,
 			rte_cryptodev_sym_get_existing_header_session_size(
 				op->sym->session));
-		rte_mempool_put(qp->sess_mp_priv, sess);
 		rte_mempool_put(qp->sess_mp, op->sym->session);
 		op->sym->session = NULL;
 	}
diff --git a/drivers/crypto/ipsec_mb/pmd_kasumi.c b/drivers/crypto/ipsec_mb/pmd_kasumi.c
index c9d4f9d0ae..1ecb73a392 100644
--- a/drivers/crypto/ipsec_mb/pmd_kasumi.c
+++ b/drivers/crypto/ipsec_mb/pmd_kasumi.c
@@ -235,7 +235,6 @@ process_ops(struct rte_crypto_op **ops, struct kasumi_session *session,
 			    ops[i]->sym->session, 0,
 			    rte_cryptodev_sym_get_existing_header_session_size(
 				ops[i]->sym->session));
-			rte_mempool_put(qp->sess_mp_priv, session);
 			rte_mempool_put(qp->sess_mp, ops[i]->sym->session);
 			ops[i]->sym->session = NULL;
 		}
diff --git a/drivers/crypto/ipsec_mb/pmd_snow3g.c b/drivers/crypto/ipsec_mb/pmd_snow3g.c
index ebc9a0b562..13b425d1ad 100644
--- a/drivers/crypto/ipsec_mb/pmd_snow3g.c
+++ b/drivers/crypto/ipsec_mb/pmd_snow3g.c
@@ -365,7 +365,6 @@ process_ops(struct rte_crypto_op **ops, struct snow3g_session *session,
 			memset(ops[i]->sym->session, 0,
 			rte_cryptodev_sym_get_existing_header_session_size(
 					ops[i]->sym->session));
-			rte_mempool_put(qp->sess_mp_priv, session);
 			rte_mempool_put(qp->sess_mp, ops[i]->sym->session);
 			ops[i]->sym->session = NULL;
 		}
diff --git a/drivers/crypto/ipsec_mb/pmd_zuc.c b/drivers/crypto/ipsec_mb/pmd_zuc.c
index b542264069..616accb2fa 100644
--- a/drivers/crypto/ipsec_mb/pmd_zuc.c
+++ b/drivers/crypto/ipsec_mb/pmd_zuc.c
@@ -239,7 +239,6 @@ process_ops(struct rte_crypto_op **ops, enum ipsec_mb_operation op_type,
 			memset(ops[i]->sym->session, 0,
 			rte_cryptodev_sym_get_existing_header_session_size(
 					ops[i]->sym->session));
-			rte_mempool_put(qp->sess_mp_priv, sessions[i]);
 			rte_mempool_put(qp->sess_mp, ops[i]->sym->session);
 			ops[i]->sym->session = NULL;
 		}
diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index 14b6783e13..8babdad59f 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -165,14 +165,12 @@ mlx5_crypto_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 static int
 mlx5_crypto_sym_session_configure(struct rte_cryptodev *dev,
 				  struct rte_crypto_sym_xform *xform,
-				  struct rte_cryptodev_sym_session *session,
-				  struct rte_mempool *mp)
+				  void *session)
 {
 	struct mlx5_crypto_priv *priv = dev->data->dev_private;
-	struct mlx5_crypto_session *sess_private_data;
+	struct mlx5_crypto_session *sess_private_data = session;
 	struct rte_crypto_cipher_xform *cipher;
 	uint8_t encryption_order;
-	int ret;
 
 	if (unlikely(xform->next != NULL)) {
 		DRV_LOG(ERR, "Xform next is not supported.");
@@ -183,17 +181,9 @@ mlx5_crypto_sym_session_configure(struct rte_cryptodev *dev,
 		DRV_LOG(ERR, "Only AES-XTS algorithm is supported.");
 		return -ENOTSUP;
 	}
-	ret = rte_mempool_get(mp, (void *)&sess_private_data);
-	if (ret != 0) {
-		DRV_LOG(ERR,
-			"Failed to get session %p private data from mempool.",
-			sess_private_data);
-		return -ENOMEM;
-	}
 	cipher = &xform->cipher;
 	sess_private_data->dek = mlx5_crypto_dek_prepare(priv, cipher);
 	if (sess_private_data->dek == NULL) {
-		rte_mempool_put(mp, sess_private_data);
 		DRV_LOG(ERR, "Failed to prepare dek.");
 		return -ENOMEM;
 	}
@@ -228,27 +218,21 @@ mlx5_crypto_sym_session_configure(struct rte_cryptodev *dev,
 	sess_private_data->dek_id =
 			rte_cpu_to_be_32(sess_private_data->dek->obj->id &
 					 0xffffff);
-	set_sym_session_private_data(session, dev->driver_id,
-				     sess_private_data);
 	DRV_LOG(DEBUG, "Session %p was configured.", sess_private_data);
 	return 0;
 }
 
 static void
-mlx5_crypto_sym_session_clear(struct rte_cryptodev *dev,
-			      struct rte_cryptodev_sym_session *sess)
+mlx5_crypto_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
 	struct mlx5_crypto_priv *priv = dev->data->dev_private;
-	struct mlx5_crypto_session *spriv = get_sym_session_private_data(sess,
-								dev->driver_id);
+	struct mlx5_crypto_session *spriv = sess;
 
 	if (unlikely(spriv == NULL)) {
 		DRV_LOG(ERR, "Failed to get session %p private data.", spriv);
 		return;
 	}
 	mlx5_crypto_dek_destroy(priv, spriv->dek);
-	set_sym_session_private_data(sess, dev->driver_id, NULL);
-	rte_mempool_put(rte_mempool_from_obj(spriv), spriv);
 	DRV_LOG(DEBUG, "Session %p was cleared.", spriv);
 }
 
diff --git a/drivers/crypto/mvsam/mrvl_pmd_private.h b/drivers/crypto/mvsam/mrvl_pmd_private.h
index 719d73d82c..8ebebd88c5 100644
--- a/drivers/crypto/mvsam/mrvl_pmd_private.h
+++ b/drivers/crypto/mvsam/mrvl_pmd_private.h
@@ -51,9 +51,6 @@ struct mrvl_crypto_qp {
 	/** Session Mempool. */
 	struct rte_mempool *sess_mp;
 
-	/** Session Private Data Mempool. */
-	struct rte_mempool *sess_mp_priv;
-
 	/** Queue pair statistics. */
 	struct rte_cryptodev_stats stats;
 
diff --git a/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c b/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
index 6ca1bb8b40..bf058d254c 100644
--- a/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
+++ b/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
@@ -704,7 +704,6 @@ mrvl_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 			break;
 
 		qp->sess_mp = qp_conf->mp_session;
-		qp->sess_mp_priv = qp_conf->mp_session_private;
 
 		memset(&qp->stats, 0, sizeof(qp->stats));
 		dev->data->queue_pairs[qp_id] = qp;
@@ -735,12 +734,9 @@ mrvl_crypto_pmd_sym_session_get_size(__rte_unused struct rte_cryptodev *dev)
  */
 static int
 mrvl_crypto_pmd_sym_session_configure(__rte_unused struct rte_cryptodev *dev,
-		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mp)
+		struct rte_crypto_sym_xform *xform, void *sess)
 {
 	struct mrvl_crypto_session *mrvl_sess;
-	void *sess_private_data;
 	int ret;
 
 	if (sess == NULL) {
@@ -748,25 +744,15 @@ mrvl_crypto_pmd_sym_session_configure(__rte_unused struct rte_cryptodev *dev,
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mp, &sess_private_data)) {
-		CDEV_LOG_ERR("Couldn't get object from session mempool.");
-		return -ENOMEM;
-	}
+	memset(sess, 0, sizeof(struct mrvl_crypto_session));
 
-	memset(sess_private_data, 0, sizeof(struct mrvl_crypto_session));
-
-	ret = mrvl_crypto_set_session_parameters(sess_private_data, xform);
+	ret = mrvl_crypto_set_session_parameters(sess, xform);
 	if (ret != 0) {
 		MRVL_LOG(ERR, "Failed to configure session parameters!");
-
-		/* Return session to mempool */
-		rte_mempool_put(mp, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id, sess_private_data);
-
-	mrvl_sess = (struct mrvl_crypto_session *)sess_private_data;
+	mrvl_sess = (struct mrvl_crypto_session *)sess;
 	if (sam_session_create(&mrvl_sess->sam_sess_params,
 				&mrvl_sess->sam_sess) < 0) {
 		MRVL_LOG(DEBUG, "Failed to create session!");
@@ -789,17 +776,13 @@ mrvl_crypto_pmd_sym_session_configure(__rte_unused struct rte_cryptodev *dev,
  * @returns 0. Always.
  */
 static void
-mrvl_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+mrvl_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-
+	RTE_SET_USED(dev);
 	/* Zero out the whole structure */
-	if (sess_priv) {
+	if (sess) {
 		struct mrvl_crypto_session *mrvl_sess =
-			(struct mrvl_crypto_session *)sess_priv;
+			(struct mrvl_crypto_session *)sess;
 
 		if (mrvl_sess->sam_sess &&
 		    sam_session_destroy(mrvl_sess->sam_sess) < 0) {
@@ -807,9 +790,6 @@ mrvl_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev,
 		}
 
 		memset(mrvl_sess, 0, sizeof(struct mrvl_crypto_session));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
 	}
 }
 
diff --git a/drivers/crypto/nitrox/nitrox_sym.c b/drivers/crypto/nitrox/nitrox_sym.c
index cb5393d2f1..9405cb62f5 100644
--- a/drivers/crypto/nitrox/nitrox_sym.c
+++ b/drivers/crypto/nitrox/nitrox_sym.c
@@ -532,22 +532,16 @@ configure_aead_ctx(struct rte_crypto_aead_xform *xform,
 static int
 nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
 			      struct rte_crypto_sym_xform *xform,
-			      struct rte_cryptodev_sym_session *sess,
-			      struct rte_mempool *mempool)
+			      void *sess)
 {
-	void *mp_obj;
 	struct nitrox_crypto_ctx *ctx;
 	struct rte_crypto_cipher_xform *cipher_xform = NULL;
 	struct rte_crypto_auth_xform *auth_xform = NULL;
 	struct rte_crypto_aead_xform *aead_xform = NULL;
 	int ret = -EINVAL;
 
-	if (rte_mempool_get(mempool, &mp_obj)) {
-		NITROX_LOG(ERR, "Couldn't allocate context\n");
-		return -ENOMEM;
-	}
-
-	ctx = mp_obj;
+	RTE_SET_USED(cdev);
+	ctx = sess;
 	ctx->nitrox_chain = get_crypto_chain_order(xform);
 	switch (ctx->nitrox_chain) {
 	case NITROX_CHAIN_CIPHER_ONLY:
@@ -586,28 +580,17 @@ nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
 	}
 
 	ctx->iova = rte_mempool_virt2iova(ctx);
-	set_sym_session_private_data(sess, cdev->driver_id, ctx);
 	return 0;
 err:
-	rte_mempool_put(mempool, mp_obj);
 	return ret;
 }
 
 static void
-nitrox_sym_dev_sess_clear(struct rte_cryptodev *cdev,
-			  struct rte_cryptodev_sym_session *sess)
+nitrox_sym_dev_sess_clear(struct rte_cryptodev *cdev, void *sess)
 {
-	struct nitrox_crypto_ctx *ctx = get_sym_session_private_data(sess,
-							cdev->driver_id);
-	struct rte_mempool *sess_mp;
-
-	if (!ctx)
-		return;
-
-	memset(ctx, 0, sizeof(*ctx));
-	sess_mp = rte_mempool_from_obj(ctx);
-	set_sym_session_private_data(sess, cdev->driver_id, NULL);
-	rte_mempool_put(sess_mp, ctx);
+	RTE_SET_USED(cdev);
+	if (sess)
+		memset(sess, 0, sizeof(struct nitrox_crypto_ctx));
 }
 
 static struct nitrox_crypto_ctx *
diff --git a/drivers/crypto/null/null_crypto_pmd.c b/drivers/crypto/null/null_crypto_pmd.c
index 9ecb434fd0..1785a08822 100644
--- a/drivers/crypto/null/null_crypto_pmd.c
+++ b/drivers/crypto/null/null_crypto_pmd.c
@@ -81,27 +81,25 @@ get_session(struct null_crypto_qp *qp, struct rte_crypto_op *op)
 					get_sym_session_private_data(
 					sym_op->session, cryptodev_driver_id);
 	} else {
-		void *_sess = NULL;
-		void *_sess_private_data = NULL;
+		struct rte_cryptodev_sym_session *_sess = NULL;
 
-		if (rte_mempool_get(qp->sess_mp, (void **)&_sess))
+		/* Create temporary session */
+		_sess = rte_cryptodev_sym_session_create(qp->sess_mp);
+		if (_sess == NULL)
 			return NULL;
 
-		if (rte_mempool_get(qp->sess_mp_priv,
-				(void **)&_sess_private_data))
-			return NULL;
-
-		sess = (struct null_crypto_session *)_sess_private_data;
+		_sess->sess_data[cryptodev_driver_id].data =
+				(void *)((uint8_t *)_sess +
+				rte_cryptodev_sym_get_header_session_size() +
+				(cryptodev_driver_id * _sess->priv_sz));
+		sess = _sess->sess_data[cryptodev_driver_id].data;
 
 		if (unlikely(null_crypto_set_session_parameters(sess,
 				sym_op->xform) != 0)) {
 			rte_mempool_put(qp->sess_mp, _sess);
-			rte_mempool_put(qp->sess_mp_priv, _sess_private_data);
 			sess = NULL;
 		}
 		sym_op->session = (struct rte_cryptodev_sym_session *)_sess;
-		set_sym_session_private_data(op->sym->session,
-				cryptodev_driver_id, _sess_private_data);
 	}
 
 	return sess;
diff --git a/drivers/crypto/null/null_crypto_pmd_ops.c b/drivers/crypto/null/null_crypto_pmd_ops.c
index a8b5a06e7f..65bfa8dcf7 100644
--- a/drivers/crypto/null/null_crypto_pmd_ops.c
+++ b/drivers/crypto/null/null_crypto_pmd_ops.c
@@ -234,7 +234,6 @@ null_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	}
 
 	qp->sess_mp = qp_conf->mp_session;
-	qp->sess_mp_priv = qp_conf->mp_session_private;
 
 	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
 
@@ -258,10 +257,8 @@ null_crypto_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 static int
 null_crypto_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mp)
+		void *sess)
 {
-	void *sess_private_data;
 	int ret;
 
 	if (unlikely(sess == NULL)) {
@@ -269,42 +266,23 @@ null_crypto_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mp, &sess_private_data)) {
-		NULL_LOG(ERR,
-				"Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
-	ret = null_crypto_set_session_parameters(sess_private_data, xform);
+	ret = null_crypto_set_session_parameters(sess, xform);
 	if (ret != 0) {
 		NULL_LOG(ERR, "failed configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mp, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-		sess_private_data);
-
 	return 0;
 }
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static void
-null_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+null_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-
+	RTE_SET_USED(dev);
 	/* Zero out the whole structure */
-	if (sess_priv) {
-		memset(sess_priv, 0, sizeof(struct null_crypto_session));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
-	}
+	if (sess)
+		memset(sess, 0, sizeof(struct null_crypto_session));
 }
 
 static struct rte_cryptodev_ops pmd_ops = {
diff --git a/drivers/crypto/null/null_crypto_pmd_private.h b/drivers/crypto/null/null_crypto_pmd_private.h
index 89c4345b6f..ae34ce6671 100644
--- a/drivers/crypto/null/null_crypto_pmd_private.h
+++ b/drivers/crypto/null/null_crypto_pmd_private.h
@@ -31,8 +31,6 @@ struct null_crypto_qp {
 	/**< Ring for placing process packets */
 	struct rte_mempool *sess_mp;
 	/**< Session Mempool */
-	struct rte_mempool *sess_mp_priv;
-	/**< Session Mempool */
 	struct rte_cryptodev_stats qp_stats;
 	/**< Queue pair statistics */
 } __rte_cache_aligned;
diff --git a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
index e48805fb09..4647d568de 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
+++ b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
@@ -49,7 +49,6 @@ struct cpt_instance {
 	uint32_t queue_id;
 	uintptr_t rsvd;
 	struct rte_mempool *sess_mp;
-	struct rte_mempool *sess_mp_priv;
 	struct cpt_qp_meta_info meta_info;
 	uint8_t ca_enabled;
 };
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
index 9e8fd495cf..abd0963be0 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
@@ -171,7 +171,6 @@ otx_cpt_que_pair_setup(struct rte_cryptodev *dev,
 
 	instance->queue_id = que_pair_id;
 	instance->sess_mp = qp_conf->mp_session;
-	instance->sess_mp_priv = qp_conf->mp_session_private;
 	dev->data->queue_pairs[que_pair_id] = instance;
 
 	return 0;
@@ -243,29 +242,22 @@ sym_xform_verify(struct rte_crypto_sym_xform *xform)
 }
 
 static int
-sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
-		      struct rte_cryptodev_sym_session *sess,
-		      struct rte_mempool *pool)
+sym_session_configure(struct rte_crypto_sym_xform *xform,
+		      void *sess)
 {
 	struct rte_crypto_sym_xform *temp_xform = xform;
 	struct cpt_sess_misc *misc;
 	vq_cmd_word3_t vq_cmd_w3;
-	void *priv;
 	int ret;
 
 	ret = sym_xform_verify(xform);
 	if (unlikely(ret))
 		return ret;
 
-	if (unlikely(rte_mempool_get(pool, &priv))) {
-		CPT_LOG_ERR("Could not allocate session private data");
-		return -ENOMEM;
-	}
-
-	memset(priv, 0, sizeof(struct cpt_sess_misc) +
+	memset(sess, 0, sizeof(struct cpt_sess_misc) +
 			offsetof(struct cpt_ctx, mc_ctx));
 
-	misc = priv;
+	misc = sess;
 
 	for ( ; xform != NULL; xform = xform->next) {
 		switch (xform->type) {
@@ -301,8 +293,6 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
 		goto priv_put;
 	}
 
-	set_sym_session_private_data(sess, driver_id, priv);
-
 	misc->ctx_dma_addr = rte_mempool_virt2iova(misc) +
 			     sizeof(struct cpt_sess_misc);
 
@@ -316,56 +306,46 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
 	return 0;
 
 priv_put:
-	if (priv)
-		rte_mempool_put(pool, priv);
 	return -ENOTSUP;
 }
 
 static void
-sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess)
+sym_session_clear(void *sess)
 {
-	void *priv = get_sym_session_private_data(sess, driver_id);
 	struct cpt_sess_misc *misc;
-	struct rte_mempool *pool;
 	struct cpt_ctx *ctx;
 
-	if (priv == NULL)
+	if (sess == NULL)
 		return;
 
-	misc = priv;
+	misc = sess;
 	ctx = SESS_PRIV(misc);
 
 	if (ctx->auth_key != NULL)
 		rte_free(ctx->auth_key);
 
-	memset(priv, 0, cpt_get_session_size());
-
-	pool = rte_mempool_from_obj(priv);
-
-	set_sym_session_private_data(sess, driver_id, NULL);
-
-	rte_mempool_put(pool, priv);
+	memset(sess, 0, cpt_get_session_size());
 }
 
 static int
 otx_cpt_session_cfg(struct rte_cryptodev *dev,
 		    struct rte_crypto_sym_xform *xform,
-		    struct rte_cryptodev_sym_session *sess,
-		    struct rte_mempool *pool)
+		    void *sess)
 {
 	CPT_PMD_INIT_FUNC_TRACE();
+	RTE_SET_USED(dev);
 
-	return sym_session_configure(dev->driver_id, xform, sess, pool);
+	return sym_session_configure(xform, sess);
 }
 
 
 static void
-otx_cpt_session_clear(struct rte_cryptodev *dev,
-		  struct rte_cryptodev_sym_session *sess)
+otx_cpt_session_clear(struct rte_cryptodev *dev, void *sess)
 {
 	CPT_PMD_INIT_FUNC_TRACE();
+	RTE_SET_USED(dev);
 
-	return sym_session_clear(dev->driver_id, sess);
+	return sym_session_clear(sess);
 }
 
 static unsigned int
@@ -576,7 +556,6 @@ static __rte_always_inline void * __rte_hot
 otx_cpt_enq_single_sym_sessless(struct cpt_instance *instance,
 				struct rte_crypto_op *op)
 {
-	const int driver_id = otx_cryptodev_driver_id;
 	struct rte_crypto_sym_op *sym_op = op->sym;
 	struct rte_cryptodev_sym_session *sess;
 	void *req;
@@ -589,8 +568,12 @@ otx_cpt_enq_single_sym_sessless(struct cpt_instance *instance,
 		return NULL;
 	}
 
-	ret = sym_session_configure(driver_id, sym_op->xform, sess,
-				    instance->sess_mp_priv);
+	sess->sess_data[otx_cryptodev_driver_id].data =
+			(void *)((uint8_t *)sess +
+			rte_cryptodev_sym_get_header_session_size() +
+			(otx_cryptodev_driver_id * sess->priv_sz));
+	ret = sym_session_configure(sym_op->xform,
+			sess->sess_data[otx_cryptodev_driver_id].data);
 	if (ret)
 		goto sess_put;
 
@@ -604,7 +587,7 @@ otx_cpt_enq_single_sym_sessless(struct cpt_instance *instance,
 	return req;
 
 priv_put:
-	sym_session_clear(driver_id, sess);
+	sym_session_clear(sess);
 sess_put:
 	rte_mempool_put(instance->sess_mp, sess);
 	return NULL;
@@ -913,7 +896,6 @@ free_sym_session_data(const struct cpt_instance *instance,
 	memset(cop->sym->session, 0,
 	       rte_cryptodev_sym_get_existing_header_session_size(
 		       cop->sym->session));
-	rte_mempool_put(instance->sess_mp_priv, sess_private_data_t);
 	rte_mempool_put(instance->sess_mp, cop->sym->session);
 	cop->sym->session = NULL;
 }
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 7b744cd4b4..dcfbc49996 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -371,29 +371,21 @@ sym_xform_verify(struct rte_crypto_sym_xform *xform)
 }
 
 static int
-sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
-		      struct rte_cryptodev_sym_session *sess,
-		      struct rte_mempool *pool)
+sym_session_configure(struct rte_crypto_sym_xform *xform, void *sess)
 {
 	struct rte_crypto_sym_xform *temp_xform = xform;
 	struct cpt_sess_misc *misc;
 	vq_cmd_word3_t vq_cmd_w3;
-	void *priv;
 	int ret;
 
 	ret = sym_xform_verify(xform);
 	if (unlikely(ret))
 		return ret;
 
-	if (unlikely(rte_mempool_get(pool, &priv))) {
-		CPT_LOG_ERR("Could not allocate session private data");
-		return -ENOMEM;
-	}
-
-	memset(priv, 0, sizeof(struct cpt_sess_misc) +
+	memset(sess, 0, sizeof(struct cpt_sess_misc) +
 			offsetof(struct cpt_ctx, mc_ctx));
 
-	misc = priv;
+	misc = sess;
 
 	for ( ; xform != NULL; xform = xform->next) {
 		switch (xform->type) {
@@ -414,7 +406,7 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
 		}
 
 		if (ret)
-			goto priv_put;
+			return -ENOTSUP;
 	}
 
 	if ((GET_SESS_FC_TYPE(misc) == HASH_HMAC) &&
@@ -425,12 +417,9 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
 			rte_free(ctx->auth_key);
 			ctx->auth_key = NULL;
 		}
-		ret = -ENOTSUP;
-		goto priv_put;
+		return -ENOTSUP;
 	}
 
-	set_sym_session_private_data(sess, driver_id, misc);
-
 	misc->ctx_dma_addr = rte_mempool_virt2iova(misc) +
 			     sizeof(struct cpt_sess_misc);
 
@@ -451,11 +440,6 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
 	misc->cpt_inst_w7 = vq_cmd_w3.u64;
 
 	return 0;
-
-priv_put:
-	rte_mempool_put(pool, priv);
-
-	return -ENOTSUP;
 }
 
 static __rte_always_inline int32_t __rte_hot
@@ -765,7 +749,6 @@ otx2_cpt_enqueue_sym_sessless(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
 			      struct pending_queue *pend_q,
 			      unsigned int burst_index)
 {
-	const int driver_id = otx2_cryptodev_driver_id;
 	struct rte_crypto_sym_op *sym_op = op->sym;
 	struct rte_cryptodev_sym_session *sess;
 	int ret;
@@ -775,8 +758,12 @@ otx2_cpt_enqueue_sym_sessless(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
 	if (sess == NULL)
 		return -ENOMEM;
 
-	ret = sym_session_configure(driver_id, sym_op->xform, sess,
-				    qp->sess_mp_priv);
+	sess->sess_data[otx2_cryptodev_driver_id].data =
+			(void *)((uint8_t *)sess +
+			rte_cryptodev_sym_get_header_session_size() +
+			(otx2_cryptodev_driver_id * sess->priv_sz));
+	ret = sym_session_configure(sym_op->xform,
+			sess->sess_data[otx2_cryptodev_driver_id].data);
 	if (ret)
 		goto sess_put;
 
@@ -790,7 +777,7 @@ otx2_cpt_enqueue_sym_sessless(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
 	return 0;
 
 priv_put:
-	sym_session_clear(driver_id, sess);
+	sym_session_clear(sess);
 sess_put:
 	rte_mempool_put(qp->sess_mp, sess);
 	return ret;
@@ -1035,8 +1022,7 @@ otx2_cpt_dequeue_post_process(struct otx2_cpt_qp *qp, struct rte_crypto_op *cop,
 		}
 
 		if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
-			sym_session_clear(otx2_cryptodev_driver_id,
-					  cop->sym->session);
+			sym_session_clear(cop->sym->session);
 			sz = rte_cryptodev_sym_get_existing_header_session_size(
 					cop->sym->session);
 			memset(cop->sym->session, 0, sz);
@@ -1291,7 +1277,6 @@ otx2_cpt_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	}
 
 	qp->sess_mp = conf->mp_session;
-	qp->sess_mp_priv = conf->mp_session_private;
 	dev->data->queue_pairs[qp_id] = qp;
 
 	return 0;
@@ -1330,21 +1315,22 @@ otx2_cpt_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 static int
 otx2_cpt_sym_session_configure(struct rte_cryptodev *dev,
 			       struct rte_crypto_sym_xform *xform,
-			       struct rte_cryptodev_sym_session *sess,
-			       struct rte_mempool *pool)
+			       void *sess)
 {
 	CPT_PMD_INIT_FUNC_TRACE();
+	RTE_SET_USED(dev);
 
-	return sym_session_configure(dev->driver_id, xform, sess, pool);
+	return sym_session_configure(xform, sess);
 }
 
 static void
 otx2_cpt_sym_session_clear(struct rte_cryptodev *dev,
-			   struct rte_cryptodev_sym_session *sess)
+			   void *sess)
 {
 	CPT_PMD_INIT_FUNC_TRACE();
+	RTE_SET_USED(dev);
 
-	return sym_session_clear(dev->driver_id, sess);
+	return sym_session_clear(sess);
 }
 
 static unsigned int
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h b/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
index 01c081a216..5f63eaf7b7 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
@@ -8,29 +8,21 @@
 #include "cpt_pmd_logs.h"
 
 static void
-sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess)
+sym_session_clear(void *sess)
 {
-	void *priv = get_sym_session_private_data(sess, driver_id);
 	struct cpt_sess_misc *misc;
-	struct rte_mempool *pool;
 	struct cpt_ctx *ctx;
 
-	if (priv == NULL)
+	if (sess == NULL)
 		return;
 
-	misc = priv;
+	misc = sess;
 	ctx = SESS_PRIV(misc);
 
 	if (ctx->auth_key != NULL)
 		rte_free(ctx->auth_key);
 
-	memset(priv, 0, cpt_get_session_size());
-
-	pool = rte_mempool_from_obj(priv);
-
-	set_sym_session_private_data(sess, driver_id, NULL);
-
-	rte_mempool_put(pool, priv);
+	memset(sess, 0, cpt_get_session_size());
 }
 
 static __rte_always_inline uint8_t
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_qp.h b/drivers/crypto/octeontx2/otx2_cryptodev_qp.h
index 95bce3621a..82ece44723 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_qp.h
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_qp.h
@@ -27,8 +27,6 @@ struct otx2_cpt_qp {
 	/**< Pending queue */
 	struct rte_mempool *sess_mp;
 	/**< Session mempool */
-	struct rte_mempool *sess_mp_priv;
-	/**< Session private data mempool */
 	struct cpt_qp_meta_info meta_info;
 	/**< Metabuf info required to support operations on the queue pair */
 	rte_iova_t iq_dma_addr;
diff --git a/drivers/crypto/openssl/openssl_pmd_private.h b/drivers/crypto/openssl/openssl_pmd_private.h
index b2054b3754..2a9302bc19 100644
--- a/drivers/crypto/openssl/openssl_pmd_private.h
+++ b/drivers/crypto/openssl/openssl_pmd_private.h
@@ -64,8 +64,6 @@ struct openssl_qp {
 	/**< Ring for placing process packets */
 	struct rte_mempool *sess_mp;
 	/**< Session Mempool */
-	struct rte_mempool *sess_mp_priv;
-	/**< Session Private Data Mempool */
 	struct rte_cryptodev_stats stats;
 	/**< Queue pair statistics */
 	uint8_t temp_digest[DIGEST_LENGTH_MAX];
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 5794ed8159..feb4a2dece 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -762,27 +762,24 @@ get_session(struct openssl_qp *qp, struct rte_crypto_op *op)
 			return NULL;
 
 		/* provide internal session */
-		void *_sess = rte_cryptodev_sym_session_create(qp->sess_mp);
-		void *_sess_private_data = NULL;
+		struct rte_cryptodev_sym_session *_sess =
+			rte_cryptodev_sym_session_create(qp->sess_mp);
 
 		if (_sess == NULL)
 			return NULL;
 
-		if (rte_mempool_get(qp->sess_mp_priv,
-				(void **)&_sess_private_data))
-			return NULL;
-
-		sess = (struct openssl_session *)_sess_private_data;
+		_sess->sess_data[cryptodev_driver_id].data =
+				(void *)((uint8_t *)_sess +
+				rte_cryptodev_sym_get_header_session_size() +
+				(cryptodev_driver_id * _sess->priv_sz));
+		sess = _sess->sess_data[cryptodev_driver_id].data;
 
 		if (unlikely(openssl_set_session_parameters(sess,
 				op->sym->xform) != 0)) {
 			rte_mempool_put(qp->sess_mp, _sess);
-			rte_mempool_put(qp->sess_mp_priv, _sess_private_data);
 			sess = NULL;
 		}
 		op->sym->session = (struct rte_cryptodev_sym_session *)_sess;
-		set_sym_session_private_data(op->sym->session,
-				cryptodev_driver_id, _sess_private_data);
 	}
 
 	if (sess == NULL)
@@ -2106,7 +2103,6 @@ process_op(struct openssl_qp *qp, struct rte_crypto_op *op,
 		memset(op->sym->session, 0,
 			rte_cryptodev_sym_get_existing_header_session_size(
 				op->sym->session));
-		rte_mempool_put(qp->sess_mp_priv, sess);
 		rte_mempool_put(qp->sess_mp, op->sym->session);
 		op->sym->session = NULL;
 	}
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 52715f86f8..1b48a6b400 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -741,7 +741,6 @@ openssl_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		goto qp_setup_cleanup;
 
 	qp->sess_mp = qp_conf->mp_session;
-	qp->sess_mp_priv = qp_conf->mp_session_private;
 
 	memset(&qp->stats, 0, sizeof(qp->stats));
 
@@ -772,10 +771,8 @@ openssl_pmd_asym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 static int
 openssl_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mempool)
+		void *sess)
 {
-	void *sess_private_data;
 	int ret;
 
 	if (unlikely(sess == NULL)) {
@@ -783,24 +780,12 @@ openssl_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		OPENSSL_LOG(ERR,
-			"Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
-	ret = openssl_set_session_parameters(sess_private_data, xform);
+	ret = openssl_set_session_parameters(sess, xform);
 	if (ret != 0) {
 		OPENSSL_LOG(ERR, "failed configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-			sess_private_data);
-
 	return 0;
 }
 
@@ -1154,19 +1139,13 @@ openssl_pmd_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static void
-openssl_pmd_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+openssl_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-
+	RTE_SET_USED(dev);
 	/* Zero out the whole structure */
-	if (sess_priv) {
-		openssl_reset_session(sess_priv);
-		memset(sess_priv, 0, sizeof(struct openssl_session));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
+	if (sess) {
+		openssl_reset_session(sess);
+		memset(sess, 0, sizeof(struct openssl_session));
 	}
 }
 
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index cfa7d59914..5df8f86420 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -172,21 +172,14 @@ qat_is_auth_alg_supported(enum rte_crypto_auth_algorithm algo,
 }
 
 void
-qat_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+qat_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-	struct qat_sym_session *s = (struct qat_sym_session *)sess_priv;
+	struct qat_sym_session *s = (struct qat_sym_session *)sess;
 
-	if (sess_priv) {
+	if (sess) {
 		if (s->bpi_ctx)
 			bpi_cipher_ctx_free(s->bpi_ctx);
 		memset(s, 0, qat_sym_session_get_private_size(dev));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
 	}
 }
 
@@ -458,31 +451,17 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
 int
 qat_sym_session_configure(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mempool)
+		void *sess_private_data)
 {
-	void *sess_private_data;
 	int ret;
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		CDEV_LOG_ERR(
-			"Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
 	ret = qat_sym_session_set_parameters(dev, xform, sess_private_data);
 	if (ret != 0) {
 		QAT_LOG(ERR,
 		    "Crypto QAT PMD: failed to configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-		sess_private_data);
-
 	return 0;
 }
 
diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h
index ea329c1f71..299e758d1c 100644
--- a/drivers/crypto/qat/qat_sym_session.h
+++ b/drivers/crypto/qat/qat_sym_session.h
@@ -112,8 +112,7 @@ struct qat_sym_session {
 int
 qat_sym_session_configure(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mempool);
+		void *sess);
 
 int
 qat_sym_session_set_parameters(struct rte_cryptodev *dev,
@@ -135,8 +134,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
 				struct qat_sym_session *session);
 
 void
-qat_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *session);
+qat_sym_session_clear(struct rte_cryptodev *dev, void *session);
 
 unsigned int
 qat_sym_session_get_private_size(struct rte_cryptodev *dev);
diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
index 465b88ade8..87260b5a22 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -476,9 +476,7 @@ scheduler_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 
 static int
 scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
-	struct rte_crypto_sym_xform *xform,
-	struct rte_cryptodev_sym_session *sess,
-	struct rte_mempool *mempool)
+	struct rte_crypto_sym_xform *xform, void *sess)
 {
 	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
 	uint32_t i;
@@ -488,7 +486,7 @@ scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
 		struct scheduler_worker *worker = &sched_ctx->workers[i];
 
 		ret = rte_cryptodev_sym_session_init(worker->dev_id, sess,
-					xform, mempool);
+					xform);
 		if (ret < 0) {
 			CR_SCHED_LOG(ERR, "unable to config sym session");
 			return ret;
@@ -500,8 +498,7 @@ scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static void
-scheduler_pmd_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+scheduler_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
 	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
 	uint32_t i;
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index ed64866758..70d03869fe 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -37,11 +37,10 @@ static void virtio_crypto_dev_free_mbufs(struct rte_cryptodev *dev);
 static unsigned int virtio_crypto_sym_get_session_private_size(
 		struct rte_cryptodev *dev);
 static void virtio_crypto_sym_clear_session(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess);
+		void *sess);
 static int virtio_crypto_sym_configure_session(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *session,
-		struct rte_mempool *mp);
+		void *session);
 
 /*
  * The set of PCI devices this driver supports
@@ -929,7 +928,7 @@ virtio_crypto_check_sym_clear_session_paras(
 static void
 virtio_crypto_sym_clear_session(
 		struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+		void *sess)
 {
 	struct virtio_crypto_hw *hw;
 	struct virtqueue *vq;
@@ -1292,11 +1291,9 @@ static int
 virtio_crypto_check_sym_configure_session_paras(
 		struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sym_sess,
-		struct rte_mempool *mempool)
+		void *sym_sess)
 {
-	if (unlikely(xform == NULL) || unlikely(sym_sess == NULL) ||
-		unlikely(mempool == NULL)) {
+	if (unlikely(xform == NULL) || unlikely(sym_sess == NULL)) {
 		VIRTIO_CRYPTO_SESSION_LOG_ERR("NULL pointer");
 		return -1;
 	}
@@ -1311,12 +1308,9 @@ static int
 virtio_crypto_sym_configure_session(
 		struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mempool)
+		void *sess)
 {
 	int ret;
-	struct virtio_crypto_session crypto_sess;
-	void *session_private = &crypto_sess;
 	struct virtio_crypto_session *session;
 	struct virtio_crypto_op_ctrl_req *ctrl_req;
 	enum virtio_crypto_cmd_id cmd_id;
@@ -1328,19 +1322,13 @@ virtio_crypto_sym_configure_session(
 	PMD_INIT_FUNC_TRACE();
 
 	ret = virtio_crypto_check_sym_configure_session_paras(dev, xform,
-			sess, mempool);
+			sess);
 	if (ret < 0) {
 		VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid parameters");
 		return ret;
 	}
 
-	if (rte_mempool_get(mempool, &session_private)) {
-		VIRTIO_CRYPTO_SESSION_LOG_ERR(
-			"Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
-	session = (struct virtio_crypto_session *)session_private;
+	session = (struct virtio_crypto_session *)sess;
 	memset(session, 0, sizeof(struct virtio_crypto_session));
 	ctrl_req = &session->ctrl;
 	ctrl_req->header.opcode = VIRTIO_CRYPTO_CIPHER_CREATE_SESSION;
@@ -1403,9 +1391,6 @@ virtio_crypto_sym_configure_session(
 		goto error_out;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-		session_private);
-
 	return 0;
 
 error_out:
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
index b33cb7e139..8522f2dfda 100644
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
+++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
@@ -38,8 +38,7 @@ otx2_ca_deq_post_process(const struct otx2_cpt_qp *qp,
 		}
 
 		if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
-			sym_session_clear(otx2_cryptodev_driver_id,
-					  cop->sym->session);
+			sym_session_clear(cop->sym->session);
 			memset(cop->sym->session, 0,
 			rte_cryptodev_sym_get_existing_header_session_size(
 				cop->sym->session));
diff --git a/examples/fips_validation/fips_dev_self_test.c b/examples/fips_validation/fips_dev_self_test.c
index b4eab05a98..bbc27a1b6f 100644
--- a/examples/fips_validation/fips_dev_self_test.c
+++ b/examples/fips_validation/fips_dev_self_test.c
@@ -969,7 +969,6 @@ struct fips_dev_auto_test_env {
 	struct rte_mempool *mpool;
 	struct rte_mempool *op_pool;
 	struct rte_mempool *sess_pool;
-	struct rte_mempool *sess_priv_pool;
 	struct rte_mbuf *mbuf;
 	struct rte_crypto_op *op;
 };
@@ -981,7 +980,7 @@ typedef int (*fips_dev_self_test_prepare_xform_t)(uint8_t,
 		uint32_t);
 
 typedef int (*fips_dev_self_test_prepare_op_t)(struct rte_crypto_op *,
-		struct rte_mbuf *, struct rte_cryptodev_sym_session *,
+		struct rte_mbuf *, void *,
 		uint32_t, struct fips_dev_self_test_vector *);
 
 typedef int (*fips_dev_self_test_check_result_t)(struct rte_crypto_op *,
@@ -1173,7 +1172,7 @@ prepare_aead_xform(uint8_t dev_id,
 static int
 prepare_cipher_op(struct rte_crypto_op *op,
 		struct rte_mbuf *mbuf,
-		struct rte_cryptodev_sym_session *session,
+		void *session,
 		uint32_t dir,
 		struct fips_dev_self_test_vector *vec)
 {
@@ -1212,7 +1211,7 @@ prepare_cipher_op(struct rte_crypto_op *op,
 static int
 prepare_auth_op(struct rte_crypto_op *op,
 		struct rte_mbuf *mbuf,
-		struct rte_cryptodev_sym_session *session,
+		void *session,
 		uint32_t dir,
 		struct fips_dev_self_test_vector *vec)
 {
@@ -1251,7 +1250,7 @@ prepare_auth_op(struct rte_crypto_op *op,
 static int
 prepare_aead_op(struct rte_crypto_op *op,
 		struct rte_mbuf *mbuf,
-		struct rte_cryptodev_sym_session *session,
+		void *session,
 		uint32_t dir,
 		struct fips_dev_self_test_vector *vec)
 {
@@ -1464,7 +1463,7 @@ run_single_test(uint8_t dev_id,
 		uint32_t negative_test)
 {
 	struct rte_crypto_sym_xform xform;
-	struct rte_cryptodev_sym_session *sess;
+	void *sess;
 	uint16_t n_deqd;
 	uint8_t key[256];
 	int ret;
@@ -1484,8 +1483,7 @@ run_single_test(uint8_t dev_id,
 	if (!sess)
 		return -ENOMEM;
 
-	ret = rte_cryptodev_sym_session_init(dev_id,
-			sess, &xform, env->sess_priv_pool);
+	ret = rte_cryptodev_sym_session_init(dev_id, sess, &xform);
 	if (ret < 0) {
 		RTE_LOG(ERR, PMD, "Error %i: Init session\n", ret);
 		return ret;
@@ -1533,8 +1531,6 @@ fips_dev_auto_test_uninit(uint8_t dev_id,
 		rte_mempool_free(env->op_pool);
 	if (env->sess_pool)
 		rte_mempool_free(env->sess_pool);
-	if (env->sess_priv_pool)
-		rte_mempool_free(env->sess_priv_pool);
 
 	rte_cryptodev_stop(dev_id);
 }
@@ -1542,7 +1538,7 @@ fips_dev_auto_test_uninit(uint8_t dev_id,
 static int
 fips_dev_auto_test_init(uint8_t dev_id, struct fips_dev_auto_test_env *env)
 {
-	struct rte_cryptodev_qp_conf qp_conf = {128, NULL, NULL};
+	struct rte_cryptodev_qp_conf qp_conf = {128, NULL};
 	uint32_t sess_sz = rte_cryptodev_sym_get_private_session_size(dev_id);
 	struct rte_cryptodev_config conf;
 	char name[128];
@@ -1586,25 +1582,13 @@ fips_dev_auto_test_init(uint8_t dev_id, struct fips_dev_auto_test_env *env)
 	snprintf(name, 128, "%s%u", "SELF_TEST_SESS_POOL", dev_id);
 
 	env->sess_pool = rte_cryptodev_sym_session_pool_create(name,
-			128, 0, 0, 0, rte_cryptodev_socket_id(dev_id));
+			128, sess_sz, 0, 0, rte_cryptodev_socket_id(dev_id));
 	if (!env->sess_pool) {
 		ret = -ENOMEM;
 		goto error_exit;
 	}
 
-	memset(name, 0, 128);
-	snprintf(name, 128, "%s%u", "SELF_TEST_SESS_PRIV_POOL", dev_id);
-
-	env->sess_priv_pool = rte_mempool_create(name,
-			128, sess_sz, 0, 0, NULL, NULL, NULL,
-			NULL, rte_cryptodev_socket_id(dev_id), 0);
-	if (!env->sess_priv_pool) {
-		ret = -ENOMEM;
-		goto error_exit;
-	}
-
 	qp_conf.mp_session = env->sess_pool;
-	qp_conf.mp_session_private = env->sess_priv_pool;
 
 	ret = rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
 			rte_cryptodev_socket_id(dev_id));
diff --git a/examples/fips_validation/main.c b/examples/fips_validation/main.c
index b0de3d269a..837afddcda 100644
--- a/examples/fips_validation/main.c
+++ b/examples/fips_validation/main.c
@@ -48,13 +48,12 @@ struct cryptodev_fips_validate_env {
 	uint16_t mbuf_data_room;
 	struct rte_mempool *mpool;
 	struct rte_mempool *sess_mpool;
-	struct rte_mempool *sess_priv_mpool;
 	struct rte_mempool *op_pool;
 	struct rte_mbuf *mbuf;
 	uint8_t *digest;
 	uint16_t digest_len;
 	struct rte_crypto_op *op;
-	struct rte_cryptodev_sym_session *sess;
+	void *sess;
 	uint16_t self_test;
 	struct fips_dev_broken_test_config *broken_test_config;
 } env;
@@ -63,7 +62,7 @@ static int
 cryptodev_fips_validate_app_int(void)
 {
 	struct rte_cryptodev_config conf = {rte_socket_id(), 1, 0};
-	struct rte_cryptodev_qp_conf qp_conf = {128, NULL, NULL};
+	struct rte_cryptodev_qp_conf qp_conf = {128, NULL};
 	struct rte_cryptodev_info dev_info;
 	uint32_t sess_sz = rte_cryptodev_sym_get_private_session_size(
 			env.dev_id);
@@ -103,16 +102,11 @@ cryptodev_fips_validate_app_int(void)
 	ret = -ENOMEM;
 
 	env.sess_mpool = rte_cryptodev_sym_session_pool_create(
-			"FIPS_SESS_MEMPOOL", 16, 0, 0, 0, rte_socket_id());
+			"FIPS_SESS_MEMPOOL", 16, sess_sz, 0, 0,
+			rte_socket_id());
 	if (!env.sess_mpool)
 		goto error_exit;
 
-	env.sess_priv_mpool = rte_mempool_create("FIPS_SESS_PRIV_MEMPOOL",
-			16, sess_sz, 0, 0, NULL, NULL, NULL,
-			NULL, rte_socket_id(), 0);
-	if (!env.sess_priv_mpool)
-		goto error_exit;
-
 	env.op_pool = rte_crypto_op_pool_create(
 			"FIPS_OP_POOL",
 			RTE_CRYPTO_OP_TYPE_SYMMETRIC,
@@ -127,7 +121,6 @@ cryptodev_fips_validate_app_int(void)
 		goto error_exit;
 
 	qp_conf.mp_session = env.sess_mpool;
-	qp_conf.mp_session_private = env.sess_priv_mpool;
 
 	ret = rte_cryptodev_queue_pair_setup(env.dev_id, 0, &qp_conf,
 			rte_socket_id());
@@ -141,8 +134,6 @@ cryptodev_fips_validate_app_int(void)
 	rte_mempool_free(env.mpool);
 	if (env.sess_mpool)
 		rte_mempool_free(env.sess_mpool);
-	if (env.sess_priv_mpool)
-		rte_mempool_free(env.sess_priv_mpool);
 	if (env.op_pool)
 		rte_mempool_free(env.op_pool);
 
@@ -158,7 +149,6 @@ cryptodev_fips_validate_app_uninit(void)
 	rte_cryptodev_sym_session_free(env.sess);
 	rte_mempool_free(env.mpool);
 	rte_mempool_free(env.sess_mpool);
-	rte_mempool_free(env.sess_priv_mpool);
 	rte_mempool_free(env.op_pool);
 }
 
@@ -1179,7 +1169,7 @@ fips_run_test(void)
 		return -ENOMEM;
 
 	ret = rte_cryptodev_sym_session_init(env.dev_id,
-			env.sess, &xform, env.sess_priv_mpool);
+			env.sess, &xform);
 	if (ret < 0) {
 		RTE_LOG(ERR, USER1, "Error %i: Init session\n",
 				ret);
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index f458d15a7a..ccaa0c31f7 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -1206,15 +1206,11 @@ ipsec_poll_mode_worker(void)
 	qconf->inbound.sa_ctx = socket_ctx[socket_id].sa_in;
 	qconf->inbound.cdev_map = cdev_map_in;
 	qconf->inbound.session_pool = socket_ctx[socket_id].session_pool;
-	qconf->inbound.session_priv_pool =
-			socket_ctx[socket_id].session_priv_pool;
 	qconf->outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out;
 	qconf->outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out;
 	qconf->outbound.sa_ctx = socket_ctx[socket_id].sa_out;
 	qconf->outbound.cdev_map = cdev_map_out;
 	qconf->outbound.session_pool = socket_ctx[socket_id].session_pool;
-	qconf->outbound.session_priv_pool =
-			socket_ctx[socket_id].session_priv_pool;
 	qconf->frag.pool_dir = socket_ctx[socket_id].mbuf_pool;
 	qconf->frag.pool_indir = socket_ctx[socket_id].mbuf_pool_indir;
 
@@ -2132,8 +2128,6 @@ cryptodevs_init(uint16_t req_queue_num)
 		qp_conf.nb_descriptors = CDEV_QUEUE_DESC;
 		qp_conf.mp_session =
 			socket_ctx[dev_conf.socket_id].session_pool;
-		qp_conf.mp_session_private =
-			socket_ctx[dev_conf.socket_id].session_priv_pool;
 		for (qp = 0; qp < dev_conf.nb_queue_pairs; qp++)
 			if (rte_cryptodev_queue_pair_setup(cdev_id, qp,
 					&qp_conf, dev_conf.socket_id))
@@ -2395,38 +2389,6 @@ session_pool_init(struct socket_ctx *ctx, int32_t socket_id, size_t sess_sz)
 		printf("Allocated session pool on socket %d\n",	socket_id);
 }
 
-static void
-session_priv_pool_init(struct socket_ctx *ctx, int32_t socket_id,
-	size_t sess_sz)
-{
-	char mp_name[RTE_MEMPOOL_NAMESIZE];
-	struct rte_mempool *sess_mp;
-	uint32_t nb_sess;
-
-	snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
-			"sess_mp_priv_%u", socket_id);
-	nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
-		rte_lcore_count());
-	nb_sess = RTE_MAX(nb_sess, CDEV_MP_CACHE_SZ *
-			CDEV_MP_CACHE_MULTIPLIER);
-	sess_mp = rte_mempool_create(mp_name,
-			nb_sess,
-			sess_sz,
-			CDEV_MP_CACHE_SZ,
-			0, NULL, NULL, NULL,
-			NULL, socket_id,
-			0);
-	ctx->session_priv_pool = sess_mp;
-
-	if (ctx->session_priv_pool == NULL)
-		rte_exit(EXIT_FAILURE,
-			"Cannot init session priv pool on socket %d\n",
-			socket_id);
-	else
-		printf("Allocated session priv pool on socket %d\n",
-			socket_id);
-}
-
 static void
 pool_init(struct socket_ctx *ctx, int32_t socket_id, uint32_t nb_mbuf)
 {
@@ -2928,8 +2890,6 @@ main(int32_t argc, char **argv)
 
 		pool_init(&socket_ctx[socket_id], socket_id, nb_bufs_in_pool);
 		session_pool_init(&socket_ctx[socket_id], socket_id, sess_sz);
-		session_priv_pool_init(&socket_ctx[socket_id], socket_id,
-			sess_sz);
 	}
 	printf("Number of mbufs in packet pool %d\n", nb_bufs_in_pool);
 
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index 03d907cba8..a5921de11c 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -143,8 +143,7 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
 		ips->crypto.ses = rte_cryptodev_sym_session_create(
 				ipsec_ctx->session_pool);
 		rte_cryptodev_sym_session_init(ipsec_ctx->tbl[cdev_id_qp].id,
-				ips->crypto.ses, sa->xforms,
-				ipsec_ctx->session_priv_pool);
+				ips->crypto.ses, sa->xforms);
 
 		rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id,
 				&cdev_info);
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index 8405c48171..673c64e8dc 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -243,7 +243,6 @@ struct socket_ctx {
 	struct rte_mempool *mbuf_pool;
 	struct rte_mempool *mbuf_pool_indir;
 	struct rte_mempool *session_pool;
-	struct rte_mempool *session_priv_pool;
 };
 
 struct cnt_blk {
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index 6f49239c4a..c65855a460 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -540,14 +540,10 @@ ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links,
 	lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in;
 	lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in;
 	lconf.inbound.session_pool = socket_ctx[socket_id].session_pool;
-	lconf.inbound.session_priv_pool =
-			socket_ctx[socket_id].session_priv_pool;
 	lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out;
 	lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out;
 	lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out;
 	lconf.outbound.session_pool = socket_ctx[socket_id].session_pool;
-	lconf.outbound.session_priv_pool =
-			socket_ctx[socket_id].session_priv_pool;
 
 	RTE_LOG(INFO, IPSEC,
 		"Launching event mode worker (non-burst - Tx internal port - "
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 2b029c65e6..154ac7793f 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -188,7 +188,7 @@ struct l2fwd_crypto_params {
 	struct l2fwd_iv auth_iv;
 	struct l2fwd_iv aead_iv;
 	struct l2fwd_key aad;
-	struct rte_cryptodev_sym_session *session;
+	void *session;
 
 	uint8_t do_cipher;
 	uint8_t do_hash;
@@ -229,7 +229,6 @@ struct rte_mempool *l2fwd_pktmbuf_pool;
 struct rte_mempool *l2fwd_crypto_op_pool;
 static struct {
 	struct rte_mempool *sess_mp;
-	struct rte_mempool *priv_mp;
 } session_pool_socket[RTE_MAX_NUMA_NODES];
 
 /* Per-port statistics struct */
@@ -671,11 +670,11 @@ generate_random_key(uint8_t *key, unsigned length)
 }
 
 /* Session is created and is later attached to the crypto operation. 8< */
-static struct rte_cryptodev_sym_session *
+static void *
 initialize_crypto_session(struct l2fwd_crypto_options *options, uint8_t cdev_id)
 {
 	struct rte_crypto_sym_xform *first_xform;
-	struct rte_cryptodev_sym_session *session;
+	void *session;
 	int retval = rte_cryptodev_socket_id(cdev_id);
 
 	if (retval < 0)
@@ -703,8 +702,7 @@ initialize_crypto_session(struct l2fwd_crypto_options *options, uint8_t cdev_id)
 		return NULL;
 
 	if (rte_cryptodev_sym_session_init(cdev_id, session,
-				first_xform,
-				session_pool_socket[socket_id].priv_mp) < 0)
+				first_xform) < 0)
 		return NULL;
 
 	return session;
@@ -730,7 +728,7 @@ l2fwd_main_loop(struct l2fwd_crypto_options *options)
 			US_PER_S * BURST_TX_DRAIN_US;
 	struct l2fwd_crypto_params *cparams;
 	struct l2fwd_crypto_params port_cparams[qconf->nb_crypto_devs];
-	struct rte_cryptodev_sym_session *session;
+	void *session;
 
 	if (qconf->nb_rx_ports == 0) {
 		RTE_LOG(INFO, L2FWD, "lcore %u has nothing to do\n", lcore_id);
@@ -2388,30 +2386,6 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
 		} else
 			sessions_needed = enabled_cdev_count;
 
-		if (session_pool_socket[socket_id].priv_mp == NULL) {
-			char mp_name[RTE_MEMPOOL_NAMESIZE];
-
-			snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
-				"priv_sess_mp_%u", socket_id);
-
-			session_pool_socket[socket_id].priv_mp =
-					rte_mempool_create(mp_name,
-						sessions_needed,
-						max_sess_sz,
-						0, 0, NULL, NULL, NULL,
-						NULL, socket_id,
-						0);
-
-			if (session_pool_socket[socket_id].priv_mp == NULL) {
-				printf("Cannot create pool on socket %d\n",
-					socket_id);
-				return -ENOMEM;
-			}
-
-			printf("Allocated pool \"%s\" on socket %d\n",
-				mp_name, socket_id);
-		}
-
 		if (session_pool_socket[socket_id].sess_mp == NULL) {
 			char mp_name[RTE_MEMPOOL_NAMESIZE];
 			snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
@@ -2421,7 +2395,8 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
 					rte_cryptodev_sym_session_pool_create(
 							mp_name,
 							sessions_needed,
-							0, 0, 0, socket_id);
+							max_sess_sz,
+							0, 0, socket_id);
 
 			if (session_pool_socket[socket_id].sess_mp == NULL) {
 				printf("Cannot create pool on socket %d\n",
@@ -2573,8 +2548,6 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
 
 		qp_conf.nb_descriptors = 2048;
 		qp_conf.mp_session = session_pool_socket[socket_id].sess_mp;
-		qp_conf.mp_session_private =
-				session_pool_socket[socket_id].priv_mp;
 
 		retval = rte_cryptodev_queue_pair_setup(cdev_id, 0, &qp_conf,
 				socket_id);
diff --git a/examples/vhost_crypto/main.c b/examples/vhost_crypto/main.c
index dea7dcbd07..cbb97aaf76 100644
--- a/examples/vhost_crypto/main.c
+++ b/examples/vhost_crypto/main.c
@@ -46,7 +46,6 @@ struct vhost_crypto_info {
 	int vids[MAX_NB_SOCKETS];
 	uint32_t nb_vids;
 	struct rte_mempool *sess_pool;
-	struct rte_mempool *sess_priv_pool;
 	struct rte_mempool *cop_pool;
 	uint8_t cid;
 	uint32_t qid;
@@ -304,7 +303,6 @@ new_device(int vid)
 	}
 
 	ret = rte_vhost_crypto_create(vid, info->cid, info->sess_pool,
-			info->sess_priv_pool,
 			rte_lcore_to_socket_id(options.los[i].lcore_id));
 	if (ret) {
 		RTE_LOG(ERR, USER1, "Cannot create vhost crypto\n");
@@ -458,7 +456,6 @@ free_resource(void)
 
 		rte_mempool_free(info->cop_pool);
 		rte_mempool_free(info->sess_pool);
-		rte_mempool_free(info->sess_priv_pool);
 
 		for (j = 0; j < lo->nb_sockets; j++) {
 			rte_vhost_driver_unregister(lo->socket_files[i]);
@@ -544,16 +541,12 @@ main(int argc, char *argv[])
 
 		snprintf(name, 127, "SESS_POOL_%u", lo->lcore_id);
 		info->sess_pool = rte_cryptodev_sym_session_pool_create(name,
-				SESSION_MAP_ENTRIES, 0, 0, 0,
-				rte_lcore_to_socket_id(lo->lcore_id));
-
-		snprintf(name, 127, "SESS_POOL_PRIV_%u", lo->lcore_id);
-		info->sess_priv_pool = rte_mempool_create(name,
 				SESSION_MAP_ENTRIES,
 				rte_cryptodev_sym_get_private_session_size(
-				info->cid), 64, 0, NULL, NULL, NULL, NULL,
-				rte_lcore_to_socket_id(lo->lcore_id), 0);
-		if (!info->sess_priv_pool || !info->sess_pool) {
+					info->cid), 0, 0,
+				rte_lcore_to_socket_id(lo->lcore_id));
+
+		if (!info->sess_pool) {
 			RTE_LOG(ERR, USER1, "Failed to create mempool");
 			goto error_exit;
 		}
@@ -574,7 +567,6 @@ main(int argc, char *argv[])
 
 		qp_conf.nb_descriptors = NB_CRYPTO_DESCRIPTORS;
 		qp_conf.mp_session = info->sess_pool;
-		qp_conf.mp_session_private = info->sess_priv_pool;
 
 		for (j = 0; j < dev_info.max_nb_queue_pairs; j++) {
 			ret = rte_cryptodev_queue_pair_setup(info->cid, j,
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index 89bf2af399..d35e66d3b5 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -301,7 +301,6 @@ typedef unsigned int (*cryptodev_asym_get_session_private_size_t)(
  * @param	dev		Crypto device pointer
  * @param	xform		Single or chain of crypto xforms
  * @param	session		Pointer to cryptodev's private session structure
- * @param	mp		Mempool where the private session is allocated
  *
  * @return
  *  - Returns 0 if private session structure have been created successfully.
@@ -310,9 +309,7 @@ typedef unsigned int (*cryptodev_asym_get_session_private_size_t)(
  *  - Returns -ENOMEM if the private session could not be allocated.
  */
 typedef int (*cryptodev_sym_configure_session_t)(struct rte_cryptodev *dev,
-		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *session,
-		struct rte_mempool *mp);
+		struct rte_crypto_sym_xform *xform, void *session);
 /**
  * Configure a Crypto asymmetric session on a device.
  *
@@ -338,7 +335,7 @@ typedef int (*cryptodev_asym_configure_session_t)(struct rte_cryptodev *dev,
  * @param	sess		Cryptodev session structure
  */
 typedef void (*cryptodev_sym_free_session_t)(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess);
+		void *sess);
 /**
  * Free asymmetric session private data.
  *
diff --git a/lib/cryptodev/rte_crypto.h b/lib/cryptodev/rte_crypto.h
index a864f5036f..200617f623 100644
--- a/lib/cryptodev/rte_crypto.h
+++ b/lib/cryptodev/rte_crypto.h
@@ -420,7 +420,7 @@ rte_crypto_op_sym_xforms_alloc(struct rte_crypto_op *op, uint8_t nb_xforms)
  */
 static inline int
 rte_crypto_op_attach_sym_session(struct rte_crypto_op *op,
-		struct rte_cryptodev_sym_session *sess)
+		void *sess)
 {
 	if (unlikely(op->type != RTE_CRYPTO_OP_TYPE_SYMMETRIC))
 		return -1;
diff --git a/lib/cryptodev/rte_crypto_sym.h b/lib/cryptodev/rte_crypto_sym.h
index daa090b978..a84964163c 100644
--- a/lib/cryptodev/rte_crypto_sym.h
+++ b/lib/cryptodev/rte_crypto_sym.h
@@ -924,7 +924,7 @@ __rte_crypto_sym_op_sym_xforms_alloc(struct rte_crypto_sym_op *sym_op,
  */
 static inline int
 __rte_crypto_sym_op_attach_sym_session(struct rte_crypto_sym_op *sym_op,
-		struct rte_cryptodev_sym_session *sess)
+		void *sess)
 {
 	sym_op->session = sess;
 
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 305e013ebb..783e33bef6 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -201,6 +201,8 @@ struct rte_cryptodev_sym_session_pool_private_data {
 	/**< number of elements in sess_data array */
 	uint16_t user_data_sz;
 	/**< session user data will be placed after sess_data */
+	uint16_t sess_priv_sz;
+	/**< maximum private session data size per driver */
 };
 
 int
@@ -1218,16 +1220,9 @@ rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 		return -EINVAL;
 	}
 
-	if ((qp_conf->mp_session && !qp_conf->mp_session_private) ||
-			(!qp_conf->mp_session && qp_conf->mp_session_private)) {
-		CDEV_LOG_ERR("Invalid mempools\n");
-		return -EINVAL;
-	}
-
 	if (qp_conf->mp_session) {
 		struct rte_cryptodev_sym_session_pool_private_data *pool_priv;
 		uint32_t obj_size = qp_conf->mp_session->elt_size;
-		uint32_t obj_priv_size = qp_conf->mp_session_private->elt_size;
 		struct rte_cryptodev_sym_session s = {0};
 
 		pool_priv = rte_mempool_get_priv(qp_conf->mp_session);
@@ -1239,11 +1234,11 @@ rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 
 		s.nb_drivers = pool_priv->nb_drivers;
 		s.user_data_sz = pool_priv->user_data_sz;
+		s.priv_sz = pool_priv->sess_priv_sz;
 
-		if ((rte_cryptodev_sym_get_existing_header_session_size(&s) >
-			obj_size) || (s.nb_drivers <= dev->driver_id) ||
-			rte_cryptodev_sym_get_private_session_size(dev_id) >
-				obj_priv_size) {
+		if (((rte_cryptodev_sym_get_existing_header_session_size(&s) +
+				(s.nb_drivers * s.priv_sz)) > obj_size) ||
+				(s.nb_drivers <= dev->driver_id)) {
 			CDEV_LOG_ERR("Invalid mempool\n");
 			return -EINVAL;
 		}
@@ -1705,11 +1700,11 @@ rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
 
 int
 rte_cryptodev_sym_session_init(uint8_t dev_id,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_crypto_sym_xform *xforms,
-		struct rte_mempool *mp)
+		void *sess_opaque,
+		struct rte_crypto_sym_xform *xforms)
 {
 	struct rte_cryptodev *dev;
+	struct rte_cryptodev_sym_session *sess = sess_opaque;
 	uint32_t sess_priv_sz = rte_cryptodev_sym_get_private_session_size(
 			dev_id);
 	uint8_t index;
@@ -1722,10 +1717,10 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
 
 	dev = rte_cryptodev_pmd_get_dev(dev_id);
 
-	if (sess == NULL || xforms == NULL || dev == NULL || mp == NULL)
+	if (sess == NULL || xforms == NULL || dev == NULL)
 		return -EINVAL;
 
-	if (mp->elt_size < sess_priv_sz)
+	if (sess->priv_sz < sess_priv_sz)
 		return -EINVAL;
 
 	index = dev->driver_id;
@@ -1735,8 +1730,11 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->sym_session_configure, -ENOTSUP);
 
 	if (sess->sess_data[index].refcnt == 0) {
+		sess->sess_data[index].data = (void *)((uint8_t *)sess +
+				rte_cryptodev_sym_get_header_session_size() +
+				(index * sess->priv_sz));
 		ret = dev->dev_ops->sym_session_configure(dev, xforms,
-							sess, mp);
+				sess->sess_data[index].data);
 		if (ret < 0) {
 			CDEV_LOG_ERR(
 				"dev_id %d failed to configure session details",
@@ -1745,7 +1743,7 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
 		}
 	}
 
-	rte_cryptodev_trace_sym_session_init(dev_id, sess, xforms, mp);
+	rte_cryptodev_trace_sym_session_init(dev_id, sess, xforms);
 	sess->sess_data[index].refcnt++;
 	return 0;
 }
@@ -1790,6 +1788,21 @@ rte_cryptodev_asym_session_init(uint8_t dev_id,
 	rte_cryptodev_trace_asym_session_init(dev_id, sess, xforms, mp);
 	return 0;
 }
+static size_t
+get_max_sym_sess_priv_sz(void)
+{
+	size_t max_sz, sz;
+	int16_t cdev_id, n;
+
+	max_sz = 0;
+	n =  rte_cryptodev_count();
+	for (cdev_id = 0; cdev_id != n; cdev_id++) {
+		sz = rte_cryptodev_sym_get_private_session_size(cdev_id);
+		if (sz > max_sz)
+			max_sz = sz;
+	}
+	return max_sz;
+}
 
 struct rte_mempool *
 rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
@@ -1799,15 +1812,15 @@ rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
 	struct rte_mempool *mp;
 	struct rte_cryptodev_sym_session_pool_private_data *pool_priv;
 	uint32_t obj_sz;
+	uint32_t sess_priv_sz = get_max_sym_sess_priv_sz();
 
 	obj_sz = rte_cryptodev_sym_get_header_session_size() + user_data_size;
-	if (obj_sz > elt_size)
+	if (elt_size < obj_sz + (sess_priv_sz * nb_drivers)) {
 		CDEV_LOG_INFO("elt_size %u is expanded to %u\n", elt_size,
-				obj_sz);
-	else
-		obj_sz = elt_size;
-
-	mp = rte_mempool_create(name, nb_elts, obj_sz, cache_size,
+				obj_sz + (sess_priv_sz * nb_drivers));
+		elt_size = obj_sz + (sess_priv_sz * nb_drivers);
+	}
+	mp = rte_mempool_create(name, nb_elts, elt_size, cache_size,
 			(uint32_t)(sizeof(*pool_priv)),
 			NULL, NULL, NULL, NULL,
 			socket_id, 0);
@@ -1827,6 +1840,7 @@ rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
 
 	pool_priv->nb_drivers = nb_drivers;
 	pool_priv->user_data_sz = user_data_size;
+	pool_priv->sess_priv_sz = sess_priv_sz;
 
 	rte_cryptodev_trace_sym_session_pool_create(name, nb_elts,
 		elt_size, cache_size, user_data_size, mp);
@@ -1860,7 +1874,7 @@ rte_cryptodev_sym_is_valid_session_pool(struct rte_mempool *mp)
 	return 1;
 }
 
-struct rte_cryptodev_sym_session *
+void *
 rte_cryptodev_sym_session_create(struct rte_mempool *mp)
 {
 	struct rte_cryptodev_sym_session *sess;
@@ -1881,6 +1895,7 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mp)
 
 	sess->nb_drivers = pool_priv->nb_drivers;
 	sess->user_data_sz = pool_priv->user_data_sz;
+	sess->priv_sz = pool_priv->sess_priv_sz;
 	sess->opaque_data = 0;
 
 	/* Clear device session pointer.
@@ -1890,7 +1905,7 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mp)
 			rte_cryptodev_sym_session_data_size(sess));
 
 	rte_cryptodev_trace_sym_session_create(mp, sess);
-	return sess;
+	return (void *)sess;
 }
 
 struct rte_cryptodev_asym_session *
@@ -1928,9 +1943,9 @@ rte_cryptodev_asym_session_create(struct rte_mempool *mp)
 }
 
 int
-rte_cryptodev_sym_session_clear(uint8_t dev_id,
-		struct rte_cryptodev_sym_session *sess)
+rte_cryptodev_sym_session_clear(uint8_t dev_id, void *s)
 {
+	struct rte_cryptodev_sym_session *sess = s;
 	struct rte_cryptodev *dev;
 	uint8_t driver_id;
 
@@ -1952,7 +1967,7 @@ rte_cryptodev_sym_session_clear(uint8_t dev_id,
 
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->sym_session_clear, -ENOTSUP);
 
-	dev->dev_ops->sym_session_clear(dev, sess);
+	dev->dev_ops->sym_session_clear(dev, sess->sess_data[driver_id].data);
 
 	rte_cryptodev_trace_sym_session_clear(dev_id, sess);
 	return 0;
@@ -1983,10 +1998,11 @@ rte_cryptodev_asym_session_clear(uint8_t dev_id,
 }
 
 int
-rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess)
+rte_cryptodev_sym_session_free(void *s)
 {
 	uint8_t i;
 	struct rte_mempool *sess_mp;
+	struct rte_cryptodev_sym_session *sess = s;
 
 	if (sess == NULL)
 		return -EINVAL;
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 56e3868ada..68271fd7e3 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -539,8 +539,6 @@ struct rte_cryptodev_qp_conf {
 	uint32_t nb_descriptors; /**< Number of descriptors per queue pair */
 	struct rte_mempool *mp_session;
 	/**< The mempool for creating session in sessionless mode */
-	struct rte_mempool *mp_session_private;
-	/**< The mempool for creating sess private data in sessionless mode */
 };
 
 /**
@@ -910,6 +908,8 @@ struct rte_cryptodev_sym_session {
 	/**< number of elements in sess_data array */
 	uint16_t user_data_sz;
 	/**< session user data will be placed after sess_data */
+	uint16_t priv_sz;
+	/**< Maximum private session data size which each driver can use */
 	__extension__ struct {
 		void *data;
 		uint16_t refcnt;
@@ -961,10 +961,10 @@ rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
  * @param   mempool    Symmetric session mempool to allocate session
  *                     objects from
  * @return
- *  - On success return pointer to sym-session
+ *  - On success return opaque pointer to sym-session
  *  - On failure returns NULL
  */
-struct rte_cryptodev_sym_session *
+void *
 rte_cryptodev_sym_session_create(struct rte_mempool *mempool);
 
 /**
@@ -993,7 +993,7 @@ rte_cryptodev_asym_session_create(struct rte_mempool *mempool);
  *  - -EBUSY if not all device private data has been freed.
  */
 int
-rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess);
+rte_cryptodev_sym_session_free(void *sess);
 
 /**
  * Frees asymmetric crypto session header, after checking that all
@@ -1013,25 +1013,23 @@ rte_cryptodev_asym_session_free(struct rte_cryptodev_asym_session *sess);
 
 /**
  * Fill out private data for the device id, based on its device type.
+ * Memory for the private data is already allocated in sess; the driver
+ * needs to fill in the content.
  *
  * @param   dev_id   ID of device that we want the session to be used on
  * @param   sess     Session where the private data will be attached to
  * @param   xforms   Symmetric crypto transform operations to apply on flow
  *                   processed with this session
- * @param   mempool  Mempool where the private data is allocated.
  *
  * @return
  *  - On success, zero.
  *  - -EINVAL if input parameters are invalid.
  *  - -ENOTSUP if crypto device does not support the crypto transform or
  *    does not support symmetric operations.
- *  - -ENOMEM if the private session could not be allocated.
  */
 int
-rte_cryptodev_sym_session_init(uint8_t dev_id,
-			struct rte_cryptodev_sym_session *sess,
-			struct rte_crypto_sym_xform *xforms,
-			struct rte_mempool *mempool);
+rte_cryptodev_sym_session_init(uint8_t dev_id, void *sess,
+			struct rte_crypto_sym_xform *xforms);
 
 /**
  * Initialize asymmetric session on a device with specific asymmetric xform
@@ -1070,8 +1068,7 @@ rte_cryptodev_asym_session_init(uint8_t dev_id,
  *  - -ENOTSUP if crypto device does not support symmetric operations.
  */
 int
-rte_cryptodev_sym_session_clear(uint8_t dev_id,
-			struct rte_cryptodev_sym_session *sess);
+rte_cryptodev_sym_session_clear(uint8_t dev_id, void *sess);
 
 /**
  * Frees resources held by asymmetric session during rte_cryptodev_session_init
diff --git a/lib/cryptodev/rte_cryptodev_trace.h b/lib/cryptodev/rte_cryptodev_trace.h
index d1f4f069a3..44da04c425 100644
--- a/lib/cryptodev/rte_cryptodev_trace.h
+++ b/lib/cryptodev/rte_cryptodev_trace.h
@@ -56,7 +56,6 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_u16(queue_pair_id);
 	rte_trace_point_emit_u32(conf->nb_descriptors);
 	rte_trace_point_emit_ptr(conf->mp_session);
-	rte_trace_point_emit_ptr(conf->mp_session_private);
 )
 
 RTE_TRACE_POINT(
@@ -106,15 +105,13 @@ RTE_TRACE_POINT(
 RTE_TRACE_POINT(
 	rte_cryptodev_trace_sym_session_init,
 	RTE_TRACE_POINT_ARGS(uint8_t dev_id,
-		struct rte_cryptodev_sym_session *sess, void *xforms,
-		void *mempool),
+		struct rte_cryptodev_sym_session *sess, void *xforms),
 	rte_trace_point_emit_u8(dev_id);
 	rte_trace_point_emit_ptr(sess);
 	rte_trace_point_emit_u64(sess->opaque_data);
 	rte_trace_point_emit_u16(sess->nb_drivers);
 	rte_trace_point_emit_u16(sess->user_data_sz);
 	rte_trace_point_emit_ptr(xforms);
-	rte_trace_point_emit_ptr(mempool);
 )
 
 RTE_TRACE_POINT(
diff --git a/lib/pipeline/rte_table_action.c b/lib/pipeline/rte_table_action.c
index 4b0316bfed..c3b7fb84c4 100644
--- a/lib/pipeline/rte_table_action.c
+++ b/lib/pipeline/rte_table_action.c
@@ -1719,7 +1719,7 @@ struct sym_crypto_data {
 	uint16_t op_mask;
 
 	/** Session pointer. */
-	struct rte_cryptodev_sym_session *session;
+	void *session;
 
 	/** Direction of crypto, encrypt or decrypt */
 	uint16_t direction;
@@ -1780,7 +1780,7 @@ sym_crypto_apply(struct sym_crypto_data *data,
 	const struct rte_crypto_auth_xform *auth_xform = NULL;
 	const struct rte_crypto_aead_xform *aead_xform = NULL;
 	struct rte_crypto_sym_xform *xform = p->xform;
-	struct rte_cryptodev_sym_session *session;
+	void *session;
 	int ret;
 
 	memset(data, 0, sizeof(*data));
@@ -1905,7 +1905,7 @@ sym_crypto_apply(struct sym_crypto_data *data,
 		return -ENOMEM;
 
 	ret = rte_cryptodev_sym_session_init(cfg->cryptodev_id, session,
-			p->xform, cfg->mp_init);
+			p->xform);
 	if (ret < 0) {
 		rte_cryptodev_sym_session_free(session);
 		return ret;
@@ -2858,7 +2858,7 @@ rte_table_action_time_read(struct rte_table_action *action,
 	return 0;
 }
 
-struct rte_cryptodev_sym_session *
+void *
 rte_table_action_crypto_sym_session_get(struct rte_table_action *action,
 	void *data)
 {
diff --git a/lib/pipeline/rte_table_action.h b/lib/pipeline/rte_table_action.h
index 82bc9d9ac9..68db453a8b 100644
--- a/lib/pipeline/rte_table_action.h
+++ b/lib/pipeline/rte_table_action.h
@@ -1129,7 +1129,7 @@ rte_table_action_time_read(struct rte_table_action *action,
  *   The pointer to the session on success, NULL otherwise.
  */
 __rte_experimental
-struct rte_cryptodev_sym_session *
+void *
 rte_table_action_crypto_sym_session_get(struct rte_table_action *action,
 	void *data);
 
diff --git a/lib/vhost/rte_vhost_crypto.h b/lib/vhost/rte_vhost_crypto.h
index f54d731139..d9b7beed9c 100644
--- a/lib/vhost/rte_vhost_crypto.h
+++ b/lib/vhost/rte_vhost_crypto.h
@@ -50,8 +50,6 @@ rte_vhost_crypto_driver_start(const char *path);
  *  multiple Vhost-crypto devices.
  * @param sess_pool
  *  The pointer to the created cryptodev session pool.
- * @param sess_priv_pool
- *  The pointer to the created cryptodev session private data mempool.
  * @param socket_id
  *  NUMA Socket ID to allocate resources on. *
  * @return
@@ -61,7 +59,6 @@ rte_vhost_crypto_driver_start(const char *path);
 int
 rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
 		struct rte_mempool *sess_pool,
-		struct rte_mempool *sess_priv_pool,
 		int socket_id);
 
 /**
diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c
index 926b5c0bd9..b4464c4253 100644
--- a/lib/vhost/vhost_crypto.c
+++ b/lib/vhost/vhost_crypto.c
@@ -338,7 +338,7 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
 		VhostUserCryptoSessionParam *sess_param)
 {
 	struct rte_crypto_sym_xform xform1 = {0}, xform2 = {0};
-	struct rte_cryptodev_sym_session *session;
+	void *session;
 	int ret;
 
 	switch (sess_param->op_type) {
@@ -383,8 +383,7 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
 		return;
 	}
 
-	if (rte_cryptodev_sym_session_init(vcrypto->cid, session, &xform1,
-			vcrypto->sess_priv_pool) < 0) {
+	if (rte_cryptodev_sym_session_init(vcrypto->cid, session, &xform1) < 0) {
 		VC_LOG_ERR("Failed to initialize session");
 		sess_param->session_id = -VIRTIO_CRYPTO_ERR;
 		return;
@@ -1425,7 +1424,6 @@ rte_vhost_crypto_driver_start(const char *path)
 int
 rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
 		struct rte_mempool *sess_pool,
-		struct rte_mempool *sess_priv_pool,
 		int socket_id)
 {
 	struct virtio_net *dev = get_device(vid);
@@ -1447,7 +1445,6 @@ rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
 	}
 
 	vcrypto->sess_pool = sess_pool;
-	vcrypto->sess_priv_pool = sess_priv_pool;
 	vcrypto->cid = cryptodev_id;
 	vcrypto->cache_session_id = UINT64_MAX;
 	vcrypto->last_session_id = 1;
-- 
2.25.1


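For reference, a minimal usage sketch of the symmetric session API as
reworked above; the device id, pool sizing and transform contents below
are illustrative assumptions, not part of the patch:

	uint8_t dev_id = 0;	/* assumed: an already configured crypto device */
	struct rte_crypto_sym_xform xform = {0};	/* fill in cipher/auth params */
	uint32_t sess_sz = rte_cryptodev_sym_get_private_session_size(dev_id);

	/* A single mempool now carries the session header, user data and
	 * per-driver private data; elt_size is expanded by the library if
	 * it is too small.
	 */
	struct rte_mempool *mp = rte_cryptodev_sym_session_pool_create(
			"sess_mp", 128, sess_sz, 0, 0, rte_socket_id());

	/* Queue pair setup takes only the session mempool. */
	struct rte_cryptodev_qp_conf qp_conf = {2048, mp};
	rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf, rte_socket_id());

	/* Session create returns an opaque pointer; init no longer needs a
	 * separate private data mempool.
	 */
	void *sess = rte_cryptodev_sym_session_create(mp);
	if (sess == NULL ||
			rte_cryptodev_sym_session_init(dev_id, sess, &xform) < 0)
		rte_panic("session setup failed\n");

	/* Teardown: clear driver private data, then free the session. */
	rte_cryptodev_sym_session_clear(dev_id, sess);
	rte_cryptodev_sym_session_free(sess);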
^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v3 1/8] security: rework session framework
  @ 2021-10-18 21:34  1%   ` Akhil Goyal
  2021-10-18 21:34  1%   ` [dpdk-dev] [PATCH v3 6/8] cryptodev: " Akhil Goyal
    2 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-10-18 21:34 UTC (permalink / raw)
  To: dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
	konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
	adwivedi, ciara.power, haiyue.wang, jiawenwu, jianwang,
	Akhil Goyal

As per the current design, rte_security_session_create()
unnecessarily uses two mempool objects for a single session.
Also, the structure rte_security_session is not directly used
by the application, and it may cause ABI breakage if the
structure is modified in the future.

To address these two issues, the API will now take only one
mempool object instead of two and return a void pointer directly
to the session private data. With this change, the library layer
will get the object from the mempool and pass session_private_data
to the PMD for filling in the PMD data.
Since the set and get packet metadata operations for security
sessions are now made inline for inline crypto/protocol mode, a
new member fast_mdata is added to rte_security_session.
The opaque data and fast_mdata are accessed via inline APIs that
do the pointer manipulation inside the library, starting from the
session_private_data pointer coming from the application.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
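A minimal sketch of the resulting session flow on the application side;
the device id, conf contents and the sess_mp session mempool (created
elsewhere with rte_cryptodev_sym_session_pool_create()) are assumptions
for illustration:

	struct rte_security_ctx *ctx = (struct rte_security_ctx *)
			rte_cryptodev_get_sec_ctx(dev_id);
	struct rte_security_session_conf conf = {0};	/* fill in protocol params */

	/* One mempool object per session; an opaque pointer to the session
	 * private data is returned.
	 */
	void *sec_sess = rte_security_session_create(ctx, &conf, sess_mp);
	if (sec_sess == NULL)
		rte_panic("security session creation failed\n");

	rte_security_session_destroy(ctx, sec_sess);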
 app/test-crypto-perf/cperf_ops.c              |  13 +-
 .../cperf_test_pmd_cyclecount.c               |   2 +-
 app/test/test_cryptodev.c                     |  17 +-
 app/test/test_ipsec.c                         |  11 +-
 app/test/test_security.c                      | 193 ++++--------------
 drivers/crypto/caam_jr/caam_jr.c              |  32 +--
 drivers/crypto/cnxk/cn10k_cryptodev_ops.c     |   6 +-
 drivers/crypto/cnxk/cn10k_ipsec.c             |  53 +----
 drivers/crypto/cnxk/cn9k_cryptodev_ops.c      |   2 +-
 drivers/crypto/cnxk/cn9k_ipsec.c              |  50 +----
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   |  39 +---
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c   |   3 +-
 drivers/crypto/dpaa_sec/dpaa_sec.c            |  34 +--
 drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c     |   3 +-
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c        |  32 +--
 drivers/crypto/mvsam/rte_mrvl_pmd.c           |   3 +-
 drivers/crypto/mvsam/rte_mrvl_pmd_ops.c       |  11 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_sec.c |  54 +----
 drivers/crypto/qat/qat_sym.c                  |   3 +-
 drivers/crypto/qat/qat_sym.h                  |   8 +-
 drivers/crypto/qat/qat_sym_session.c          |  21 +-
 drivers/crypto/qat/qat_sym_session.h          |   4 +-
 drivers/net/ixgbe/ixgbe_ipsec.c               |  38 +---
 drivers/net/meson.build                       |   2 +-
 drivers/net/octeontx2/otx2_ethdev_sec.c       |  51 ++---
 drivers/net/octeontx2/otx2_ethdev_sec_tx.h    |   2 +-
 drivers/net/txgbe/txgbe_ipsec.c               |  38 +---
 examples/ipsec-secgw/ipsec.c                  |   9 +-
 lib/security/rte_security.c                   |  28 +--
 lib/security/rte_security.h                   |  41 ++--
 lib/security/rte_security_driver.h            |  16 +-
 32 files changed, 204 insertions(+), 617 deletions(-)

diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 263841c339..6c3aa77dec 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -67,8 +67,6 @@ cperf_set_ops_security(struct rte_crypto_op **ops,
 
 	for (i = 0; i < nb_ops; i++) {
 		struct rte_crypto_sym_op *sym_op = ops[i]->sym;
-		struct rte_security_session *sec_sess =
-			(struct rte_security_session *)sess;
 		uint32_t buf_sz;
 
 		uint32_t *per_pkt_hfn = rte_crypto_op_ctod_offset(ops[i],
@@ -76,7 +74,7 @@ cperf_set_ops_security(struct rte_crypto_op **ops,
 		*per_pkt_hfn = options->pdcp_ses_hfn_en ? 0 : PDCP_DEFAULT_HFN;
 
 		ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
-		rte_security_attach_session(ops[i], sec_sess);
+		rte_security_attach_session(ops[i], (void *)sess);
 		sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
 							src_buf_offset);
 
@@ -608,7 +606,6 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
 
 static struct rte_cryptodev_sym_session *
 create_ipsec_session(struct rte_mempool *sess_mp,
-		struct rte_mempool *priv_mp,
 		uint8_t dev_id,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
@@ -720,7 +717,7 @@ create_ipsec_session(struct rte_mempool *sess_mp,
 
 	/* Create security session */
 	return (void *)rte_security_session_create(ctx,
-				&sess_conf, sess_mp, priv_mp);
+				&sess_conf, sess_mp);
 }
 
 static struct rte_cryptodev_sym_session *
@@ -831,11 +828,11 @@ cperf_create_session(struct rte_mempool *sess_mp,
 
 		/* Create security session */
 		return (void *)rte_security_session_create(ctx,
-					&sess_conf, sess_mp, priv_mp);
+					&sess_conf, sess_mp);
 	}
 
 	if (options->op_type == CPERF_IPSEC) {
-		return create_ipsec_session(sess_mp, priv_mp, dev_id,
+		return create_ipsec_session(sess_mp, dev_id,
 				options, test_vector, iv_offset);
 	}
 
@@ -880,7 +877,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
 
 		/* Create security session */
 		return (void *)rte_security_session_create(ctx,
-					&sess_conf, sess_mp, priv_mp);
+					&sess_conf, sess_mp);
 	}
 #endif
 	sess = rte_cryptodev_sym_session_create(sess_mp);
diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
index fda97e8ab9..e43e2a3b96 100644
--- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
+++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
@@ -70,7 +70,7 @@ cperf_pmd_cyclecount_test_free(struct cperf_pmd_cyclecount_ctx *ctx)
 				(struct rte_security_ctx *)
 				rte_cryptodev_get_sec_ctx(ctx->dev_id);
 			rte_security_session_destroy(sec_ctx,
-				(struct rte_security_session *)ctx->sess);
+				(void *)ctx->sess);
 		} else
 #endif
 		{
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 814a0b401d..996b3b4de6 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -83,7 +83,7 @@ struct crypto_unittest_params {
 	union {
 		struct rte_cryptodev_sym_session *sess;
 #ifdef RTE_LIB_SECURITY
-		struct rte_security_session *sec_session;
+		void *sec_session;
 #endif
 	};
 #ifdef RTE_LIB_SECURITY
@@ -8403,8 +8403,7 @@ static int test_pdcp_proto(int i, int oop, enum rte_crypto_cipher_operation opc,
 
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx,
-				&sess_conf, ts_params->session_mpool,
-				ts_params->session_priv_mpool);
+				&sess_conf, ts_params->session_mpool);
 
 	if (!ut_params->sec_session) {
 		printf("TestCase %s()-%d line %d failed %s: ",
@@ -8675,8 +8674,7 @@ test_pdcp_proto_SGL(int i, int oop,
 
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx,
-				&sess_conf, ts_params->session_mpool,
-				ts_params->session_priv_mpool);
+				&sess_conf, ts_params->session_mpool);
 
 	if (!ut_params->sec_session) {
 		printf("TestCase %s()-%d line %d failed %s: ",
@@ -9175,8 +9173,7 @@ test_ipsec_proto_process(const struct ipsec_test_data td[],
 
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
-					ts_params->session_mpool,
-					ts_params->session_priv_mpool);
+					ts_params->session_mpool);
 
 	if (ut_params->sec_session == NULL)
 		return TEST_SKIPPED;
@@ -9597,8 +9594,7 @@ test_docsis_proto_uplink(int i, struct docsis_test_data *d_td)
 
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
-					ts_params->session_mpool,
-					ts_params->session_priv_mpool);
+					ts_params->session_mpool);
 
 	if (!ut_params->sec_session) {
 		printf("TestCase %s(%d) line %d: %s\n",
@@ -9773,8 +9769,7 @@ test_docsis_proto_downlink(int i, struct docsis_test_data *d_td)
 
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
-					ts_params->session_mpool,
-					ts_params->session_priv_mpool);
+					ts_params->session_mpool);
 
 	if (!ut_params->sec_session) {
 		printf("TestCase %s(%d) line %d: %s\n",
diff --git a/app/test/test_ipsec.c b/app/test/test_ipsec.c
index c6d6b88d6d..2ffa2a8e79 100644
--- a/app/test/test_ipsec.c
+++ b/app/test/test_ipsec.c
@@ -148,18 +148,16 @@ const struct supported_auth_algo auth_algos[] = {
 
 static int
 dummy_sec_create(void *device, struct rte_security_session_conf *conf,
-	struct rte_security_session *sess, struct rte_mempool *mp)
+	void *sess)
 {
 	RTE_SET_USED(device);
 	RTE_SET_USED(conf);
-	RTE_SET_USED(mp);
-
-	sess->sess_private_data = NULL;
+	RTE_SET_USED(sess);
 	return 0;
 }
 
 static int
-dummy_sec_destroy(void *device, struct rte_security_session *sess)
+dummy_sec_destroy(void *device, void *sess)
 {
 	RTE_SET_USED(device);
 	RTE_SET_USED(sess);
@@ -631,8 +629,7 @@ create_dummy_sec_session(struct ipsec_unitest_params *ut,
 	static struct rte_security_session_conf conf;
 
 	ut->ss[j].security.ses = rte_security_session_create(&dummy_sec_ctx,
-					&conf, qp->mp_session,
-					qp->mp_session_private);
+					&conf, qp->mp_session);
 
 	if (ut->ss[j].security.ses == NULL)
 		return -ENOMEM;
diff --git a/app/test/test_security.c b/app/test/test_security.c
index 060cf1ffa8..1cea756880 100644
--- a/app/test/test_security.c
+++ b/app/test/test_security.c
@@ -200,25 +200,6 @@
 			expected_mempool_usage, mempool_usage);		\
 } while (0)
 
-/**
- * Verify usage of mempool by checking if number of allocated objects matches
- * expectations. The mempool is used to manage objects for sessions priv data.
- * A single object is acquired from mempool during session_create
- * and put back in session_destroy.
- *
- * @param   expected_priv_mp_usage	expected number of used priv mp objects
- */
-#define TEST_ASSERT_PRIV_MP_USAGE(expected_priv_mp_usage) do {		\
-	struct security_testsuite_params *ts_params = &testsuite_params;\
-	unsigned int priv_mp_usage;					\
-	priv_mp_usage = rte_mempool_in_use_count(			\
-			ts_params->session_priv_mpool);			\
-	TEST_ASSERT_EQUAL(expected_priv_mp_usage, priv_mp_usage,	\
-			"Expecting %u priv mempool allocations, "	\
-			"but there are %u allocated objects",		\
-			expected_priv_mp_usage, priv_mp_usage);		\
-} while (0)
-
 /**
  * Mockup structures and functions for rte_security_ops;
  *
@@ -253,39 +234,28 @@
 static struct mock_session_create_data {
 	void *device;
 	struct rte_security_session_conf *conf;
-	struct rte_security_session *sess;
+	void *sess;
 	struct rte_mempool *mp;
-	struct rte_mempool *priv_mp;
 
 	int ret;
 
 	int called;
 	int failed;
-} mock_session_create_exp = {NULL, NULL, NULL, NULL, NULL, 0, 0, 0};
+} mock_session_create_exp = {NULL, NULL, NULL, NULL, 0, 0, 0};
 
 static int
 mock_session_create(void *device,
 		struct rte_security_session_conf *conf,
-		struct rte_security_session *sess,
-		struct rte_mempool *priv_mp)
+		void *sess)
 {
-	void *sess_priv;
-	int ret;
 
 	mock_session_create_exp.called++;
 
 	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, device);
 	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, conf);
-	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, priv_mp);
 
-	if (mock_session_create_exp.ret == 0) {
-		ret = rte_mempool_get(priv_mp, &sess_priv);
-		TEST_ASSERT_EQUAL(0, ret,
-			"priv mempool does not have enough objects");
-
-		set_sec_session_private_data(sess, sess_priv);
+	if (mock_session_create_exp.ret == 0)
 		mock_session_create_exp.sess = sess;
-	}
 
 	return mock_session_create_exp.ret;
 }
@@ -297,7 +267,7 @@ mock_session_create(void *device,
  */
 static struct mock_session_update_data {
 	void *device;
-	struct rte_security_session *sess;
+	void *sess;
 	struct rte_security_session_conf *conf;
 
 	int ret;
@@ -308,7 +278,7 @@ static struct mock_session_update_data {
 
 static int
 mock_session_update(void *device,
-		struct rte_security_session *sess,
+		void *sess,
 		struct rte_security_session_conf *conf)
 {
 	mock_session_update_exp.called++;
@@ -351,7 +321,7 @@ mock_session_get_size(void *device)
  */
 static struct mock_session_stats_get_data {
 	void *device;
-	struct rte_security_session *sess;
+	void *sess;
 	struct rte_security_stats *stats;
 
 	int ret;
@@ -362,7 +332,7 @@ static struct mock_session_stats_get_data {
 
 static int
 mock_session_stats_get(void *device,
-		struct rte_security_session *sess,
+		void *sess,
 		struct rte_security_stats *stats)
 {
 	mock_session_stats_get_exp.called++;
@@ -381,7 +351,7 @@ mock_session_stats_get(void *device,
  */
 static struct mock_session_destroy_data {
 	void *device;
-	struct rte_security_session *sess;
+	void *sess;
 
 	int ret;
 
@@ -390,15 +360,9 @@ static struct mock_session_destroy_data {
 } mock_session_destroy_exp = {NULL, NULL, 0, 0, 0};
 
 static int
-mock_session_destroy(void *device, struct rte_security_session *sess)
+mock_session_destroy(void *device, void *sess)
 {
-	void *sess_priv = get_sec_session_private_data(sess);
-
 	mock_session_destroy_exp.called++;
-	if ((mock_session_destroy_exp.ret == 0) && (sess_priv != NULL)) {
-		rte_mempool_put(rte_mempool_from_obj(sess_priv), sess_priv);
-		set_sec_session_private_data(sess, NULL);
-	}
 	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_destroy_exp, device);
 	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_destroy_exp, sess);
 
@@ -412,7 +376,7 @@ mock_session_destroy(void *device, struct rte_security_session *sess)
  */
 static struct mock_set_pkt_metadata_data {
 	void *device;
-	struct rte_security_session *sess;
+	void *sess;
 	struct rte_mbuf *m;
 	void *params;
 
@@ -424,7 +388,7 @@ static struct mock_set_pkt_metadata_data {
 
 static int
 mock_set_pkt_metadata(void *device,
-		struct rte_security_session *sess,
+		void *sess,
 		struct rte_mbuf *m,
 		void *params)
 {
@@ -536,7 +500,6 @@ struct rte_security_ops mock_ops = {
  */
 static struct security_testsuite_params {
 	struct rte_mempool *session_mpool;
-	struct rte_mempool *session_priv_mpool;
 } testsuite_params = { NULL };
 
 /**
@@ -549,7 +512,7 @@ static struct security_testsuite_params {
 static struct security_unittest_params {
 	struct rte_security_ctx ctx;
 	struct rte_security_session_conf conf;
-	struct rte_security_session *sess;
+	void *sess;
 } unittest_params = {
 	.ctx = {
 		.device = NULL,
@@ -563,7 +526,7 @@ static struct security_unittest_params {
 #define SECURITY_TEST_PRIV_MEMPOOL_NAME "SecurityTestPrivMp"
 #define SECURITY_TEST_MEMPOOL_SIZE 15
 #define SECURITY_TEST_SESSION_OBJ_SZ sizeof(struct rte_security_session)
-#define SECURITY_TEST_SESSION_PRIV_OBJ_SZ 64
+#define SECURITY_TEST_SESSION_PRIV_OBJ_SZ 1024
 
 /**
  * testsuite_setup initializes whole test suite parameters.
@@ -577,27 +540,13 @@ testsuite_setup(void)
 	ts_params->session_mpool = rte_mempool_create(
 			SECURITY_TEST_MEMPOOL_NAME,
 			SECURITY_TEST_MEMPOOL_SIZE,
-			SECURITY_TEST_SESSION_OBJ_SZ,
+			SECURITY_TEST_SESSION_OBJ_SZ +
+			SECURITY_TEST_SESSION_PRIV_OBJ_SZ,
 			0, 0, NULL, NULL, NULL, NULL,
 			SOCKET_ID_ANY, 0);
 	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
 			"Cannot create mempool %s\n", rte_strerror(rte_errno));
 
-	ts_params->session_priv_mpool = rte_mempool_create(
-			SECURITY_TEST_PRIV_MEMPOOL_NAME,
-			SECURITY_TEST_MEMPOOL_SIZE,
-			SECURITY_TEST_SESSION_PRIV_OBJ_SZ,
-			0, 0, NULL, NULL, NULL, NULL,
-			SOCKET_ID_ANY, 0);
-	if (ts_params->session_priv_mpool == NULL) {
-		RTE_LOG(ERR, USER1, "TestCase %s() line %d failed (null): "
-				"Cannot create priv mempool %s\n",
-				__func__, __LINE__, rte_strerror(rte_errno));
-		rte_mempool_free(ts_params->session_mpool);
-		ts_params->session_mpool = NULL;
-		return TEST_FAILED;
-	}
-
 	return TEST_SUCCESS;
 }
 
@@ -612,10 +561,6 @@ testsuite_teardown(void)
 		rte_mempool_free(ts_params->session_mpool);
 		ts_params->session_mpool = NULL;
 	}
-	if (ts_params->session_priv_mpool) {
-		rte_mempool_free(ts_params->session_priv_mpool);
-		ts_params->session_priv_mpool = NULL;
-	}
 }
 
 /**
@@ -704,7 +649,7 @@ ut_setup_with_session(void)
 {
 	struct security_unittest_params *ut_params = &unittest_params;
 	struct security_testsuite_params *ts_params = &testsuite_params;
-	struct rte_security_session *sess;
+	void *sess;
 
 	int ret = ut_setup();
 	if (ret != TEST_SUCCESS)
@@ -713,12 +658,11 @@ ut_setup_with_session(void)
 	mock_session_create_exp.device = NULL;
 	mock_session_create_exp.conf = &ut_params->conf;
 	mock_session_create_exp.mp = ts_params->session_mpool;
-	mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
 	mock_session_create_exp.ret = 0;
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool,
-			ts_params->session_priv_mpool);
+			ts_params->session_mpool);
+	mock_session_get_size_exp.called = 0;
 	TEST_ASSERT_MOCK_FUNCTION_CALL_NOT_NULL(rte_security_session_create,
 			sess);
 	TEST_ASSERT_EQUAL(sess, mock_session_create_exp.sess,
@@ -757,16 +701,14 @@ test_session_create_inv_context(void)
 {
 	struct security_testsuite_params *ts_params = &testsuite_params;
 	struct security_unittest_params *ut_params = &unittest_params;
-	struct rte_security_session *sess;
+	void *sess;
 
 	sess = rte_security_session_create(NULL, &ut_params->conf,
-			ts_params->session_mpool,
-			ts_params->session_priv_mpool);
+			ts_params->session_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
-	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	return TEST_SUCCESS;
@@ -781,18 +723,16 @@ test_session_create_inv_context_ops(void)
 {
 	struct security_testsuite_params *ts_params = &testsuite_params;
 	struct security_unittest_params *ut_params = &unittest_params;
-	struct rte_security_session *sess;
+	void *sess;
 
 	ut_params->ctx.ops = NULL;
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool,
-			ts_params->session_priv_mpool);
+			ts_params->session_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
-	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	return TEST_SUCCESS;
@@ -807,18 +747,16 @@ test_session_create_inv_context_ops_fun(void)
 {
 	struct security_testsuite_params *ts_params = &testsuite_params;
 	struct security_unittest_params *ut_params = &unittest_params;
-	struct rte_security_session *sess;
+	void *sess;
 
 	ut_params->ctx.ops = &empty_ops;
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool,
-			ts_params->session_priv_mpool);
+			ts_params->session_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
-	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	return TEST_SUCCESS;
@@ -832,16 +770,14 @@ test_session_create_inv_configuration(void)
 {
 	struct security_testsuite_params *ts_params = &testsuite_params;
 	struct security_unittest_params *ut_params = &unittest_params;
-	struct rte_security_session *sess;
+	void *sess;
 
 	sess = rte_security_session_create(&ut_params->ctx, NULL,
-			ts_params->session_mpool,
-			ts_params->session_priv_mpool);
+			ts_params->session_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
-	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	return TEST_SUCCESS;
@@ -855,39 +791,14 @@ static int
 test_session_create_inv_mempool(void)
 {
 	struct security_unittest_params *ut_params = &unittest_params;
-	struct security_testsuite_params *ts_params = &testsuite_params;
-	struct rte_security_session *sess;
+	void *sess;
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			NULL, ts_params->session_priv_mpool);
+			NULL);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
-	TEST_ASSERT_PRIV_MP_USAGE(0);
-	TEST_ASSERT_SESSION_COUNT(0);
-
-	return TEST_SUCCESS;
-}
-
-/**
- * Test execution of rte_security_session_create with NULL session
- * priv mempool
- */
-static int
-test_session_create_inv_sess_priv_mempool(void)
-{
-	struct security_unittest_params *ut_params = &unittest_params;
-	struct security_testsuite_params *ts_params = &testsuite_params;
-	struct rte_security_session *sess;
-
-	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool, NULL);
-	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
-			sess, NULL, "%p");
-	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
-	TEST_ASSERT_MEMPOOL_USAGE(0);
-	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	return TEST_SUCCESS;
@@ -902,9 +813,8 @@ test_session_create_mempool_empty(void)
 {
 	struct security_testsuite_params *ts_params = &testsuite_params;
 	struct security_unittest_params *ut_params = &unittest_params;
-	struct rte_security_session *tmp[SECURITY_TEST_MEMPOOL_SIZE];
-	void *tmp1[SECURITY_TEST_MEMPOOL_SIZE];
-	struct rte_security_session *sess;
+	void *tmp[SECURITY_TEST_MEMPOOL_SIZE];
+	void *sess;
 
 	/* Get all available objects from mempool. */
 	int i, ret;
@@ -914,34 +824,23 @@ test_session_create_mempool_empty(void)
 		TEST_ASSERT_EQUAL(0, ret,
 				"Expect getting %d object from mempool"
 				" to succeed", i);
-		ret = rte_mempool_get(ts_params->session_priv_mpool,
-				(void **)(&tmp1[i]));
-		TEST_ASSERT_EQUAL(0, ret,
-				"Expect getting %d object from priv mempool"
-				" to succeed", i);
 	}
 	TEST_ASSERT_MEMPOOL_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
-	TEST_ASSERT_PRIV_MP_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool,
-			ts_params->session_priv_mpool);
+			ts_params->session_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
-	TEST_ASSERT_PRIV_MP_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	/* Put objects back to the pool. */
 	for (i = 0; i < SECURITY_TEST_MEMPOOL_SIZE; ++i) {
 		rte_mempool_put(ts_params->session_mpool,
 				(void *)(tmp[i]));
-		rte_mempool_put(ts_params->session_priv_mpool,
-				(tmp1[i]));
 	}
 	TEST_ASSERT_MEMPOOL_USAGE(0);
-	TEST_ASSERT_PRIV_MP_USAGE(0);
 
 	return TEST_SUCCESS;
 }
@@ -955,22 +854,19 @@ test_session_create_ops_failure(void)
 {
 	struct security_testsuite_params *ts_params = &testsuite_params;
 	struct security_unittest_params *ut_params = &unittest_params;
-	struct rte_security_session *sess;
+	void *sess;
 
 	mock_session_create_exp.device = NULL;
 	mock_session_create_exp.conf = &ut_params->conf;
 	mock_session_create_exp.mp = ts_params->session_mpool;
-	mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
 	mock_session_create_exp.ret = -1;	/* Return failure status. */
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool,
-			ts_params->session_priv_mpool);
+			ts_params->session_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 1);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
-	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	return TEST_SUCCESS;
@@ -984,17 +880,15 @@ test_session_create_success(void)
 {
 	struct security_testsuite_params *ts_params = &testsuite_params;
 	struct security_unittest_params *ut_params = &unittest_params;
-	struct rte_security_session *sess;
+	void *sess;
 
 	mock_session_create_exp.device = NULL;
 	mock_session_create_exp.conf = &ut_params->conf;
 	mock_session_create_exp.mp = ts_params->session_mpool;
-	mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
 	mock_session_create_exp.ret = 0;	/* Return success status. */
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool,
-			ts_params->session_priv_mpool);
+			ts_params->session_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_NOT_NULL(rte_security_session_create,
 			sess);
 	TEST_ASSERT_EQUAL(sess, mock_session_create_exp.sess,
@@ -1003,7 +897,6 @@ test_session_create_success(void)
 			sess, mock_session_create_exp.sess);
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 1);
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	/*
@@ -1389,7 +1282,6 @@ test_session_destroy_inv_context(void)
 	struct security_unittest_params *ut_params = &unittest_params;
 
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	int ret = rte_security_session_destroy(NULL, ut_params->sess);
@@ -1397,7 +1289,6 @@ test_session_destroy_inv_context(void)
 			ret, -EINVAL, "%d");
 	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	return TEST_SUCCESS;
@@ -1414,7 +1305,6 @@ test_session_destroy_inv_context_ops(void)
 	ut_params->ctx.ops = NULL;
 
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	int ret = rte_security_session_destroy(&ut_params->ctx,
@@ -1423,7 +1313,6 @@ test_session_destroy_inv_context_ops(void)
 			ret, -EINVAL, "%d");
 	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	return TEST_SUCCESS;
@@ -1440,7 +1329,6 @@ test_session_destroy_inv_context_ops_fun(void)
 	ut_params->ctx.ops = &empty_ops;
 
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	int ret = rte_security_session_destroy(&ut_params->ctx,
@@ -1449,7 +1337,6 @@ test_session_destroy_inv_context_ops_fun(void)
 			ret, -ENOTSUP, "%d");
 	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	return TEST_SUCCESS;
@@ -1464,7 +1351,6 @@ test_session_destroy_inv_session(void)
 	struct security_unittest_params *ut_params = &unittest_params;
 
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	int ret = rte_security_session_destroy(&ut_params->ctx, NULL);
@@ -1472,7 +1358,6 @@ test_session_destroy_inv_session(void)
 			ret, -EINVAL, "%d");
 	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	return TEST_SUCCESS;
@@ -1492,7 +1377,6 @@ test_session_destroy_ops_failure(void)
 	mock_session_destroy_exp.ret = -1;
 
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	int ret = rte_security_session_destroy(&ut_params->ctx,
@@ -1501,7 +1385,6 @@ test_session_destroy_ops_failure(void)
 			ret, -1, "%d");
 	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 1);
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	return TEST_SUCCESS;
@@ -1519,7 +1402,6 @@ test_session_destroy_success(void)
 	mock_session_destroy_exp.sess = ut_params->sess;
 	mock_session_destroy_exp.ret = 0;
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	int ret = rte_security_session_destroy(&ut_params->ctx,
@@ -1528,7 +1410,6 @@ test_session_destroy_success(void)
 			ret, 0, "%d");
 	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 1);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
-	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	/*
@@ -2495,8 +2376,6 @@ static struct unit_test_suite security_testsuite  = {
 				test_session_create_inv_configuration),
 		TEST_CASE_ST(ut_setup, ut_teardown,
 				test_session_create_inv_mempool),
-		TEST_CASE_ST(ut_setup, ut_teardown,
-				test_session_create_inv_sess_priv_mempool),
 		TEST_CASE_ST(ut_setup, ut_teardown,
 				test_session_create_mempool_empty),
 		TEST_CASE_ST(ut_setup, ut_teardown,
diff --git a/drivers/crypto/caam_jr/caam_jr.c b/drivers/crypto/caam_jr/caam_jr.c
index 8c56610ac8..00e680cf03 100644
--- a/drivers/crypto/caam_jr/caam_jr.c
+++ b/drivers/crypto/caam_jr/caam_jr.c
@@ -1361,9 +1361,7 @@ caam_jr_enqueue_op(struct rte_crypto_op *op, struct caam_jr_qp *qp)
 					cryptodev_driver_id);
 		break;
 	case RTE_CRYPTO_OP_SECURITY_SESSION:
-		ses = (struct caam_jr_session *)
-			get_sec_session_private_data(
-					op->sym->sec_session);
+		ses = (struct caam_jr_session *)(op->sym->sec_session);
 		break;
 	default:
 		CAAM_JR_DP_ERR("sessionless crypto op not supported");
@@ -1911,22 +1909,14 @@ caam_jr_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
 static int
 caam_jr_security_session_create(void *dev,
 				struct rte_security_session_conf *conf,
-				struct rte_security_session *sess,
-				struct rte_mempool *mempool)
+				void *sess)
 {
-	void *sess_private_data;
 	struct rte_cryptodev *cdev = (struct rte_cryptodev *)dev;
 	int ret;
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		CAAM_JR_ERR("Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
 	switch (conf->protocol) {
 	case RTE_SECURITY_PROTOCOL_IPSEC:
-		ret = caam_jr_set_ipsec_session(cdev, conf,
-				sess_private_data);
+		ret = caam_jr_set_ipsec_session(cdev, conf, sess);
 		break;
 	case RTE_SECURITY_PROTOCOL_MACSEC:
 		return -ENOTSUP;
@@ -1935,34 +1925,24 @@ caam_jr_security_session_create(void *dev,
 	}
 	if (ret != 0) {
 		CAAM_JR_ERR("failed to configure session parameters");
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sec_session_private_data(sess, sess_private_data);
-
 	return ret;
 }
 
 /* Clear the memory of session so it doesn't leave key material behind */
 static int
-caam_jr_security_session_destroy(void *dev __rte_unused,
-				 struct rte_security_session *sess)
+caam_jr_security_session_destroy(void *dev __rte_unused, void *sess)
 {
 	PMD_INIT_FUNC_TRACE();
-	void *sess_priv = get_sec_session_private_data(sess);
 
-	struct caam_jr_session *s = (struct caam_jr_session *)sess_priv;
-
-	if (sess_priv) {
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
+	struct caam_jr_session *s = (struct caam_jr_session *)sess;
 
+	if (sess) {
 		rte_free(s->cipher_key.data);
 		rte_free(s->auth_key.data);
 		memset(sess, 0, sizeof(struct caam_jr_session));
-		set_sec_session_private_data(sess, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
 	}
 	return 0;
 }
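
The caam_jr conversion above is the template repeated in the PMDs that
follow: the library now owns the mempool element, so the create/destroy
ops receive a pointer straight into the private area and the per-driver
mempool get/put and set_sec_session_private_data() plumbing disappears.
In sketch form (pmd_fill_session is a hypothetical stand-in for the real
per-driver setup):

#include <string.h>
#include <rte_common.h>
#include <rte_security.h>

static int
pmd_fill_session(void *dev, struct rte_security_session_conf *conf,
		void *priv)
{
	RTE_SET_USED(dev);
	RTE_SET_USED(conf);
	memset(priv, 0, 64);	/* placeholder for the per-driver setup */
	return 0;
}

static int
pmd_sec_session_create(void *dev, struct rte_security_session_conf *conf,
		void *sess)
{
	/* 'sess' already points at this PMD's private area. */
	return pmd_fill_session(dev, conf, sess);
}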
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index c25c8e67b2..de2eebd507 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -122,8 +122,8 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[],
 
 	if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
 		if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
-			sec_sess = get_sec_session_private_data(
-				sym_op->sec_session);
+			sec_sess = (struct cn10k_sec_session *)
+				(sym_op->sec_session);
 			ret = cpt_sec_inst_fill(op, sec_sess, infl_req,
 						&inst[0]);
 			if (unlikely(ret))
@@ -360,7 +360,7 @@ cn10k_cpt_sec_ucc_process(struct rte_crypto_op *cop,
 	if (!(infl_req->op_flags & CPT_OP_FLAGS_IPSEC_DIR_INBOUND))
 		return;
 
-	sess = get_sec_session_private_data(cop->sym->sec_session);
+	sess = (struct cn10k_sec_session *)(cop->sym->sec_session);
 	sa = &sess->sa;
 
 	mbuf = cop->sym->m_src;
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index 27df1dcd64..425fe599e0 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -35,17 +35,15 @@ static int
 cn10k_ipsec_outb_sa_create(struct roc_cpt *roc_cpt,
 			   struct rte_security_ipsec_xform *ipsec_xfrm,
 			   struct rte_crypto_sym_xform *crypto_xfrm,
-			   struct rte_security_session *sec_sess)
+			   struct cn10k_sec_session *sess)
 {
 	union roc_ot_ipsec_outb_param1 param1;
 	struct roc_ot_ipsec_outb_sa *out_sa;
 	struct cnxk_ipsec_outb_rlens rlens;
-	struct cn10k_sec_session *sess;
 	struct cn10k_ipsec_sa *sa;
 	union cpt_inst_w4 inst_w4;
 	int ret;
 
-	sess = get_sec_session_private_data(sec_sess);
 	sa = &sess->sa;
 	out_sa = &sa->out_sa;
 
@@ -114,16 +112,14 @@ static int
 cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt,
 			  struct rte_security_ipsec_xform *ipsec_xfrm,
 			  struct rte_crypto_sym_xform *crypto_xfrm,
-			  struct rte_security_session *sec_sess)
+			  struct cn10k_sec_session *sess)
 {
 	union roc_ot_ipsec_inb_param1 param1;
 	struct roc_ot_ipsec_inb_sa *in_sa;
-	struct cn10k_sec_session *sess;
 	struct cn10k_ipsec_sa *sa;
 	union cpt_inst_w4 inst_w4;
 	int ret;
 
-	sess = get_sec_session_private_data(sec_sess);
 	sa = &sess->sa;
 	in_sa = &sa->in_sa;
 
@@ -175,7 +171,7 @@ static int
 cn10k_ipsec_session_create(void *dev,
 			   struct rte_security_ipsec_xform *ipsec_xfrm,
 			   struct rte_crypto_sym_xform *crypto_xfrm,
-			   struct rte_security_session *sess)
+			   struct cn10k_sec_session *sess)
 {
 	struct rte_cryptodev *crypto_dev = dev;
 	struct roc_cpt *roc_cpt;
@@ -204,55 +200,28 @@ cn10k_ipsec_session_create(void *dev,
 
 static int
 cn10k_sec_session_create(void *device, struct rte_security_session_conf *conf,
-			 struct rte_security_session *sess,
-			 struct rte_mempool *mempool)
+			 void *sess)
 {
-	struct cn10k_sec_session *priv;
-	int ret;
+	struct cn10k_sec_session *priv = sess;
 
 	if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
 		return -EINVAL;
 
-	if (rte_mempool_get(mempool, (void **)&priv)) {
-		plt_err("Could not allocate security session private data");
-		return -ENOMEM;
-	}
-
-	set_sec_session_private_data(sess, priv);
-
 	if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC) {
-		ret = -ENOTSUP;
-		goto mempool_put;
+		return -ENOTSUP;
 	}
-	ret = cn10k_ipsec_session_create(device, &conf->ipsec,
-					 conf->crypto_xform, sess);
-	if (ret)
-		goto mempool_put;
-
-	return 0;
-
-mempool_put:
-	rte_mempool_put(mempool, priv);
-	set_sec_session_private_data(sess, NULL);
-	return ret;
+	return cn10k_ipsec_session_create(device, &conf->ipsec,
+					 conf->crypto_xform, priv);
 }
 
 static int
-cn10k_sec_session_destroy(void *device __rte_unused,
-			  struct rte_security_session *sess)
+cn10k_sec_session_destroy(void *device __rte_unused, void *sess)
 {
-	struct cn10k_sec_session *priv;
-	struct rte_mempool *sess_mp;
-
-	priv = get_sec_session_private_data(sess);
+	struct cn10k_sec_session *priv = sess;
 
 	if (priv == NULL)
 		return 0;
-
-	sess_mp = rte_mempool_from_obj(priv);
-
-	set_sec_session_private_data(sess, NULL);
-	rte_mempool_put(sess_mp, priv);
+	memset(priv, 0, sizeof(*priv));
 
 	return 0;
 }
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index 75277936b0..4c2dc5b080 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -56,7 +56,7 @@ cn9k_cpt_sec_inst_fill(struct rte_crypto_op *op,
 		return -ENOTSUP;
 	}
 
-	priv = get_sec_session_private_data(op->sym->sec_session);
+	priv = (struct cn9k_sec_session *)(op->sym->sec_session);
 	sa = &priv->sa;
 
 	if (sa->dir == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
diff --git a/drivers/crypto/cnxk/cn9k_ipsec.c b/drivers/crypto/cnxk/cn9k_ipsec.c
index 53fb793654..a602d38a11 100644
--- a/drivers/crypto/cnxk/cn9k_ipsec.c
+++ b/drivers/crypto/cnxk/cn9k_ipsec.c
@@ -275,14 +275,13 @@ static int
 cn9k_ipsec_outb_sa_create(struct cnxk_cpt_qp *qp,
 			  struct rte_security_ipsec_xform *ipsec,
 			  struct rte_crypto_sym_xform *crypto_xform,
-			  struct rte_security_session *sec_sess)
+			  struct cn9k_sec_session *sess)
 {
 	struct rte_crypto_sym_xform *auth_xform = crypto_xform->next;
 	struct roc_ie_on_ip_template *template = NULL;
 	struct roc_cpt *roc_cpt = qp->lf.roc_cpt;
 	struct cnxk_cpt_inst_tmpl *inst_tmpl;
 	struct roc_ie_on_outb_sa *out_sa;
-	struct cn9k_sec_session *sess;
 	struct roc_ie_on_sa_ctl *ctl;
 	struct cn9k_ipsec_sa *sa;
 	struct rte_ipv6_hdr *ip6;
@@ -294,7 +293,6 @@ cn9k_ipsec_outb_sa_create(struct cnxk_cpt_qp *qp,
 	size_t ctx_len;
 	int ret;
 
-	sess = get_sec_session_private_data(sec_sess);
 	sa = &sess->sa;
 	out_sa = &sa->out_sa;
 	ctl = &out_sa->common_sa.ctl;
@@ -422,13 +420,12 @@ static int
 cn9k_ipsec_inb_sa_create(struct cnxk_cpt_qp *qp,
 			 struct rte_security_ipsec_xform *ipsec,
 			 struct rte_crypto_sym_xform *crypto_xform,
-			 struct rte_security_session *sec_sess)
+			 struct cn9k_sec_session *sess)
 {
 	struct rte_crypto_sym_xform *auth_xform = crypto_xform;
 	struct roc_cpt *roc_cpt = qp->lf.roc_cpt;
 	struct cnxk_cpt_inst_tmpl *inst_tmpl;
 	struct roc_ie_on_inb_sa *in_sa;
-	struct cn9k_sec_session *sess;
 	struct cn9k_ipsec_sa *sa;
 	const uint8_t *auth_key;
 	union cpt_inst_w4 w4;
@@ -437,7 +434,6 @@ cn9k_ipsec_inb_sa_create(struct cnxk_cpt_qp *qp,
 	size_t ctx_len = 0;
 	int ret;
 
-	sess = get_sec_session_private_data(sec_sess);
 	sa = &sess->sa;
 	in_sa = &sa->in_sa;
 
@@ -501,7 +497,7 @@ static int
 cn9k_ipsec_session_create(void *dev,
 			  struct rte_security_ipsec_xform *ipsec_xform,
 			  struct rte_crypto_sym_xform *crypto_xform,
-			  struct rte_security_session *sess)
+			  struct cn9k_sec_session *sess)
 {
 	struct rte_cryptodev *crypto_dev = dev;
 	struct cnxk_cpt_qp *qp;
@@ -532,53 +528,32 @@ cn9k_ipsec_session_create(void *dev,
 
 static int
 cn9k_sec_session_create(void *device, struct rte_security_session_conf *conf,
-			struct rte_security_session *sess,
-			struct rte_mempool *mempool)
+			void *sess)
 {
-	struct cn9k_sec_session *priv;
-	int ret;
+	struct cn9k_sec_session *priv = sess;
 
 	if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
 		return -EINVAL;
 
-	if (rte_mempool_get(mempool, (void **)&priv)) {
-		plt_err("Could not allocate security session private data");
-		return -ENOMEM;
-	}
-
 	memset(priv, 0, sizeof(*priv));
 
-	set_sec_session_private_data(sess, priv);
-
 	if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC) {
-		ret = -ENOTSUP;
-		goto mempool_put;
+		return -ENOTSUP;
 	}
 
-	ret = cn9k_ipsec_session_create(device, &conf->ipsec,
-					conf->crypto_xform, sess);
-	if (ret)
-		goto mempool_put;
-
-	return 0;
-
-mempool_put:
-	rte_mempool_put(mempool, priv);
-	set_sec_session_private_data(sess, NULL);
-	return ret;
+	return cn9k_ipsec_session_create(device, &conf->ipsec,
+					conf->crypto_xform, priv);
 }
 
 static int
-cn9k_sec_session_destroy(void *device __rte_unused,
-			 struct rte_security_session *sess)
+cn9k_sec_session_destroy(void *device __rte_unused, void *sess)
 {
 	struct roc_ie_on_outb_sa *out_sa;
 	struct cn9k_sec_session *priv;
-	struct rte_mempool *sess_mp;
 	struct roc_ie_on_sa_ctl *ctl;
 	struct cn9k_ipsec_sa *sa;
 
-	priv = get_sec_session_private_data(sess);
+	priv = sess;
 	if (priv == NULL)
 		return 0;
 
@@ -590,13 +565,8 @@ cn9k_sec_session_destroy(void *device __rte_unused,
 
 	rte_io_wmb();
 
-	sess_mp = rte_mempool_from_obj(priv);
-
 	memset(priv, 0, sizeof(*priv));
 
-	set_sec_session_private_data(sess, NULL);
-	rte_mempool_put(sess_mp, priv);
-
 	return 0;
 }
 
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index cb2ad435bf..feaf3ccd4f 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1351,8 +1351,7 @@ build_sec_fd(struct rte_crypto_op *op,
 				op->sym->session, cryptodev_driver_id);
 #ifdef RTE_LIB_SECURITY
 	else if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
-		sess = (dpaa2_sec_session *)get_sec_session_private_data(
-				op->sym->sec_session);
+		sess = (dpaa2_sec_session *)(op->sym->sec_session);
 #endif
 	else
 		return -ENOTSUP;
@@ -1525,7 +1524,7 @@ sec_simple_fd_to_mbuf(const struct qbman_fd *fd)
 	struct rte_crypto_op *op;
 	uint16_t len = DPAA2_GET_FD_LEN(fd);
 	int16_t diff = 0;
-	dpaa2_sec_session *sess_priv __rte_unused;
+	dpaa2_sec_session *sess_priv;
 
 	struct rte_mbuf *mbuf = DPAA2_INLINE_MBUF_FROM_BUF(
 		DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd)),
@@ -1538,8 +1537,7 @@ sec_simple_fd_to_mbuf(const struct qbman_fd *fd)
 	mbuf->buf_iova = op->sym->aead.digest.phys_addr;
 	op->sym->aead.digest.phys_addr = 0L;
 
-	sess_priv = (dpaa2_sec_session *)get_sec_session_private_data(
-				op->sym->sec_session);
+	sess_priv = (dpaa2_sec_session *)(op->sym->sec_session);
 	if (sess_priv->dir == DIR_ENC)
 		mbuf->data_off += SEC_FLC_DHR_OUTBOUND;
 	else
@@ -3388,63 +3386,44 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
 static int
 dpaa2_sec_security_session_create(void *dev,
 				  struct rte_security_session_conf *conf,
-				  struct rte_security_session *sess,
-				  struct rte_mempool *mempool)
+				  void *sess)
 {
-	void *sess_private_data;
 	struct rte_cryptodev *cdev = (struct rte_cryptodev *)dev;
 	int ret;
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		DPAA2_SEC_ERR("Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
 	switch (conf->protocol) {
 	case RTE_SECURITY_PROTOCOL_IPSEC:
-		ret = dpaa2_sec_set_ipsec_session(cdev, conf,
-				sess_private_data);
+		ret = dpaa2_sec_set_ipsec_session(cdev, conf, sess);
 		break;
 	case RTE_SECURITY_PROTOCOL_MACSEC:
 		return -ENOTSUP;
 	case RTE_SECURITY_PROTOCOL_PDCP:
-		ret = dpaa2_sec_set_pdcp_session(cdev, conf,
-				sess_private_data);
+		ret = dpaa2_sec_set_pdcp_session(cdev, conf, sess);
 		break;
 	default:
 		return -EINVAL;
 	}
 	if (ret != 0) {
 		DPAA2_SEC_ERR("Failed to configure session parameters");
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sec_session_private_data(sess, sess_private_data);
-
 	return ret;
 }
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static int
-dpaa2_sec_security_session_destroy(void *dev __rte_unused,
-		struct rte_security_session *sess)
+dpaa2_sec_security_session_destroy(void *dev __rte_unused, void *sess)
 {
 	PMD_INIT_FUNC_TRACE();
-	void *sess_priv = get_sec_session_private_data(sess);
 
-	dpaa2_sec_session *s = (dpaa2_sec_session *)sess_priv;
-
-	if (sess_priv) {
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
+	dpaa2_sec_session *s = (dpaa2_sec_session *)sess;
 
+	if (sess) {
 		rte_free(s->ctxt);
 		rte_free(s->cipher_key.data);
 		rte_free(s->auth_key.data);
 		memset(s, 0, sizeof(dpaa2_sec_session));
-		set_sec_session_private_data(sess, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
 	}
 	return 0;
 }
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
index a2ffc6c02f..387bd92ab0 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
@@ -1005,8 +1005,7 @@ dpaa2_sec_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
 	}
 
 	if (sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
-		sess = (dpaa2_sec_session *)get_sec_session_private_data(
-				session_ctx.sec_sess);
+		sess = (dpaa2_sec_session *)session_ctx.sec_sess;
 	else if (sess_type == RTE_CRYPTO_OP_WITH_SESSION)
 		sess = (dpaa2_sec_session *)get_sym_session_private_data(
 			session_ctx.crypto_sess, cryptodev_driver_id);
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 454b9c4785..617c48298f 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -1790,8 +1790,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 #ifdef RTE_LIB_SECURITY
 			case RTE_CRYPTO_OP_SECURITY_SESSION:
 				ses = (dpaa_sec_session *)
-					get_sec_session_private_data(
-							op->sym->sec_session);
+					(op->sym->sec_session);
 				break;
 #endif
 			default:
@@ -2569,7 +2568,6 @@ static inline void
 free_session_memory(struct rte_cryptodev *dev, dpaa_sec_session *s)
 {
 	struct dpaa_sec_dev_private *qi = dev->data->dev_private;
-	struct rte_mempool *sess_mp = rte_mempool_from_obj((void *)s);
 	uint8_t i;
 
 	for (i = 0; i < MAX_DPAA_CORES; i++) {
@@ -2579,7 +2577,6 @@ free_session_memory(struct rte_cryptodev *dev, dpaa_sec_session *s)
 		s->qp[i] = NULL;
 	}
 	free_session_data(s);
-	rte_mempool_put(sess_mp, (void *)s);
 }
 
 /** Clear the memory of session so it doesn't leave key material behind */
@@ -3114,26 +3111,17 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
 static int
 dpaa_sec_security_session_create(void *dev,
 				 struct rte_security_session_conf *conf,
-				 struct rte_security_session *sess,
-				 struct rte_mempool *mempool)
+				 void *sess)
 {
-	void *sess_private_data;
 	struct rte_cryptodev *cdev = (struct rte_cryptodev *)dev;
 	int ret;
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		DPAA_SEC_ERR("Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
 	switch (conf->protocol) {
 	case RTE_SECURITY_PROTOCOL_IPSEC:
-		ret = dpaa_sec_set_ipsec_session(cdev, conf,
-				sess_private_data);
+		ret = dpaa_sec_set_ipsec_session(cdev, conf, sess);
 		break;
 	case RTE_SECURITY_PROTOCOL_PDCP:
-		ret = dpaa_sec_set_pdcp_session(cdev, conf,
-				sess_private_data);
+		ret = dpaa_sec_set_pdcp_session(cdev, conf, sess);
 		break;
 	case RTE_SECURITY_PROTOCOL_MACSEC:
 		return -ENOTSUP;
@@ -3142,29 +3130,21 @@ dpaa_sec_security_session_create(void *dev,
 	}
 	if (ret != 0) {
 		DPAA_SEC_ERR("failed to configure session parameters");
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sec_session_private_data(sess, sess_private_data);
-
 	return ret;
 }
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static int
-dpaa_sec_security_session_destroy(void *dev __rte_unused,
-		struct rte_security_session *sess)
+dpaa_sec_security_session_destroy(void *dev __rte_unused, void *sess)
 {
 	PMD_INIT_FUNC_TRACE();
-	void *sess_priv = get_sec_session_private_data(sess);
-	dpaa_sec_session *s = (dpaa_sec_session *)sess_priv;
+	dpaa_sec_session *s = (dpaa_sec_session *)sess;
 
-	if (sess_priv) {
+	if (sess)
 		free_session_memory((struct rte_cryptodev *)dev, s);
-		set_sec_session_private_data(sess, NULL);
-	}
 	return 0;
 }
 #endif
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
index 522685f8cf..a07901ebd3 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
@@ -1010,8 +1010,7 @@ dpaa_sec_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
 	}
 
 	if (sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
-		sess = (dpaa_sec_session *)get_sec_session_private_data(
-				session_ctx.sec_sess);
+		sess = (dpaa_sec_session *)session_ctx.sec_sess;
 	else if (sess_type == RTE_CRYPTO_OP_WITH_SESSION)
 		sess = (dpaa_sec_session *)get_sym_session_private_data(
 			session_ctx.crypto_sess, dpaa_cryptodev_driver_id);
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index e05bc04c3b..58ca2a6e54 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -1353,8 +1353,7 @@ set_sec_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 		op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
 		return -1;
 	}
-	session = (struct aesni_mb_session *)
-		get_sec_session_private_data(op->sym->sec_session);
+	session = (struct aesni_mb_session *)(op->sym->sec_session);
 
 	if (unlikely(session == NULL)) {
 		op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
@@ -1491,7 +1490,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 		 * this is for DOCSIS
 		 */
 		is_docsis_sec = 1;
-		sess = get_sec_session_private_data(op->sym->sec_session);
+		sess = (struct aesni_mb_session *)(op->sym->sec_session);
 	} else
 #endif
 	{
@@ -1894,10 +1893,8 @@ struct rte_cryptodev_ops aesni_mb_pmd_ops = {
  */
 static int
 aesni_mb_pmd_sec_sess_create(void *dev, struct rte_security_session_conf *conf,
-		struct rte_security_session *sess,
-		struct rte_mempool *mempool)
+		void *sess_private_data)
 {
-	void *sess_private_data;
 	struct rte_cryptodev *cdev = (struct rte_cryptodev *)dev;
 	int ret;
 
@@ -1907,41 +1904,24 @@ aesni_mb_pmd_sec_sess_create(void *dev, struct rte_security_session_conf *conf,
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		IPSEC_MB_LOG(ERR, "Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
 	ret = aesni_mb_set_docsis_sec_session_parameters(cdev, conf,
 			sess_private_data);
 
 	if (ret != 0) {
 		IPSEC_MB_LOG(ERR, "Failed to configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sec_session_private_data(sess, sess_private_data);
-
 	return ret;
 }
 
 /** Clear the memory of session so it does not leave key material behind */
 static int
-aesni_mb_pmd_sec_sess_destroy(void *dev __rte_unused,
-		struct rte_security_session *sess)
+aesni_mb_pmd_sec_sess_destroy(void *dev __rte_unused, void *sess_priv)
 {
-	void *sess_priv = get_sec_session_private_data(sess);
-
-	if (sess_priv) {
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-
+	if (sess_priv)
 		memset(sess_priv, 0, sizeof(struct aesni_mb_session));
-		set_sec_session_private_data(sess, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
-	}
+
 	return 0;
 }
 
diff --git a/drivers/crypto/mvsam/rte_mrvl_pmd.c b/drivers/crypto/mvsam/rte_mrvl_pmd.c
index 04efd9aaa8..94e3ff9e5b 100644
--- a/drivers/crypto/mvsam/rte_mrvl_pmd.c
+++ b/drivers/crypto/mvsam/rte_mrvl_pmd.c
@@ -773,8 +773,7 @@ mrvl_request_prepare_sec(struct sam_cio_ipsec_params *request,
 		return -EINVAL;
 	}
 
-	sess = (struct mrvl_crypto_session *)get_sec_session_private_data(
-			op->sym->sec_session);
+	sess = (struct mrvl_crypto_session *)(op->sym->sec_session);
 	if (unlikely(sess == NULL)) {
 		MRVL_LOG(ERR, "Session was not created for this device! %d",
 			 cryptodev_driver_id);
diff --git a/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c b/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
index 3064b1f136..e04a2c88c7 100644
--- a/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
+++ b/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
@@ -913,16 +913,12 @@ mrvl_crypto_pmd_security_session_create(__rte_unused void *dev,
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static int
-mrvl_crypto_pmd_security_session_destroy(void *dev __rte_unused,
-		struct rte_security_session *sess)
+mrvl_crypto_pmd_security_session_destroy(void *dev __rte_unused, void *sess)
 {
-	void *sess_priv = get_sec_session_private_data(sess);
-
 	/* Zero out the whole structure */
-	if (sess_priv) {
+	if (sess) {
 		struct mrvl_crypto_session *mrvl_sess =
-			(struct mrvl_crypto_session *)sess_priv;
+			(struct mrvl_crypto_session *)sess;
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
 
 		if (mrvl_sess->sam_sess &&
 		    sam_session_destroy(mrvl_sess->sam_sess) < 0) {
@@ -932,9 +928,6 @@ mrvl_crypto_pmd_security_session_destroy(void *dev __rte_unused,
 		rte_free(mrvl_sess->sam_sess_params.cipher_key);
 		rte_free(mrvl_sess->sam_sess_params.auth_key);
 		rte_free(mrvl_sess->sam_sess_params.cipher_iv);
-		memset(sess, 0, sizeof(struct rte_security_session));
-		set_sec_session_private_data(sess, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
 	}
 	return 0;
 }
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 37fad11d91..7b744cd4b4 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -702,7 +702,7 @@ otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
 	uint8_t esn;
 	int ret;
 
-	priv = get_sec_session_private_data(op->sym->sec_session);
+	priv = (struct otx2_sec_session *)(op->sym->sec_session);
 	sess = &priv->ipsec.lp;
 	sa = &sess->in_sa;
 
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_sec.c b/drivers/crypto/octeontx2/otx2_cryptodev_sec.c
index a5db40047d..56900e3187 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_sec.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_sec.c
@@ -203,7 +203,7 @@ static int
 crypto_sec_ipsec_outb_session_create(struct rte_cryptodev *crypto_dev,
 				     struct rte_security_ipsec_xform *ipsec,
 				     struct rte_crypto_sym_xform *crypto_xform,
-				     struct rte_security_session *sec_sess)
+				     struct otx2_sec_session *sess)
 {
 	struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
 	struct otx2_ipsec_po_ip_template *template = NULL;
@@ -212,13 +212,11 @@ crypto_sec_ipsec_outb_session_create(struct rte_cryptodev *crypto_dev,
 	struct otx2_ipsec_po_sa_ctl *ctl;
 	int cipher_key_len, auth_key_len;
 	struct otx2_ipsec_po_out_sa *sa;
-	struct otx2_sec_session *sess;
 	struct otx2_cpt_inst_s inst;
 	struct rte_ipv6_hdr *ip6;
 	struct rte_ipv4_hdr *ip;
 	int ret, ctx_len;
 
-	sess = get_sec_session_private_data(sec_sess);
 	sess->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
 	lp = &sess->ipsec.lp;
 
@@ -398,7 +396,7 @@ static int
 crypto_sec_ipsec_inb_session_create(struct rte_cryptodev *crypto_dev,
 				    struct rte_security_ipsec_xform *ipsec,
 				    struct rte_crypto_sym_xform *crypto_xform,
-				    struct rte_security_session *sec_sess)
+				    struct otx2_sec_session *sess)
 {
 	struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
 	const uint8_t *cipher_key, *auth_key;
@@ -406,11 +404,9 @@ crypto_sec_ipsec_inb_session_create(struct rte_cryptodev *crypto_dev,
 	struct otx2_ipsec_po_sa_ctl *ctl;
 	int cipher_key_len, auth_key_len;
 	struct otx2_ipsec_po_in_sa *sa;
-	struct otx2_sec_session *sess;
 	struct otx2_cpt_inst_s inst;
 	int ret;
 
-	sess = get_sec_session_private_data(sec_sess);
 	sess->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
 	lp = &sess->ipsec.lp;
 
@@ -512,7 +508,7 @@ static int
 crypto_sec_ipsec_session_create(struct rte_cryptodev *crypto_dev,
 				struct rte_security_ipsec_xform *ipsec,
 				struct rte_crypto_sym_xform *crypto_xform,
-				struct rte_security_session *sess)
+				struct otx2_sec_session *sess)
 {
 	int ret;
 
@@ -536,10 +532,9 @@ crypto_sec_ipsec_session_create(struct rte_cryptodev *crypto_dev,
 static int
 otx2_crypto_sec_session_create(void *device,
 			       struct rte_security_session_conf *conf,
-			       struct rte_security_session *sess,
-			       struct rte_mempool *mempool)
+			       void *sess)
 {
-	struct otx2_sec_session *priv;
+	struct otx2_sec_session *priv = sess;
 	int ret;
 
 	if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
@@ -548,51 +543,25 @@ otx2_crypto_sec_session_create(void *device,
 	if (rte_security_dynfield_register() < 0)
 		return -rte_errno;
 
-	if (rte_mempool_get(mempool, (void **)&priv)) {
-		otx2_err("Could not allocate security session private data");
-		return -ENOMEM;
-	}
-
-	set_sec_session_private_data(sess, priv);
-
 	priv->userdata = conf->userdata;
 
 	if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC)
 		ret = crypto_sec_ipsec_session_create(device, &conf->ipsec,
 						      conf->crypto_xform,
-						      sess);
+						      priv);
 	else
 		ret = -ENOTSUP;
 
-	if (ret)
-		goto mempool_put;
-
-	return 0;
-
-mempool_put:
-	rte_mempool_put(mempool, priv);
-	set_sec_session_private_data(sess, NULL);
 	return ret;
 }
 
 static int
-otx2_crypto_sec_session_destroy(void *device __rte_unused,
-				struct rte_security_session *sess)
+otx2_crypto_sec_session_destroy(void *device __rte_unused, void *sess)
 {
-	struct otx2_sec_session *priv;
-	struct rte_mempool *sess_mp;
+	struct otx2_sec_session *priv = sess;
 
-	priv = get_sec_session_private_data(sess);
-
-	if (priv == NULL)
-		return 0;
-
-	sess_mp = rte_mempool_from_obj(priv);
-
-	memset(priv, 0, sizeof(*priv));
-
-	set_sec_session_private_data(sess, NULL);
-	rte_mempool_put(sess_mp, priv);
+	if (priv)
+		memset(priv, 0, sizeof(*priv));
 
 	return 0;
 }
@@ -604,8 +573,7 @@ otx2_crypto_sec_session_get_size(void *device __rte_unused)
 }
 
 static int
-otx2_crypto_sec_set_pkt_mdata(void *device __rte_unused,
-			      struct rte_security_session *session,
+otx2_crypto_sec_set_pkt_mdata(void *device __rte_unused, void *session,
 			      struct rte_mbuf *m, void *params __rte_unused)
 {
 	/* Set security session as the pkt metadata */
diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c
index 93b257522b..fbb17e61ff 100644
--- a/drivers/crypto/qat/qat_sym.c
+++ b/drivers/crypto/qat/qat_sym.c
@@ -250,8 +250,7 @@ qat_sym_build_request(void *in_op, uint8_t *out_msg,
 				op->sym->session, qat_sym_driver_id);
 #ifdef RTE_LIB_SECURITY
 	} else {
-		ctx = (struct qat_sym_session *)get_sec_session_private_data(
-				op->sym->sec_session);
+		ctx = (struct qat_sym_session *)(op->sym->sec_session);
 		if (likely(ctx)) {
 			if (unlikely(ctx->bpi_ctx == NULL)) {
 				QAT_DP_LOG(ERR, "QAT PMD only supports security"
diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h
index e3ec7f0de4..8904aabd3d 100644
--- a/drivers/crypto/qat/qat_sym.h
+++ b/drivers/crypto/qat/qat_sym.h
@@ -202,9 +202,7 @@ qat_sym_preprocess_requests(void **ops, uint16_t nb_ops)
 		op = (struct rte_crypto_op *)ops[i];
 
 		if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
-			ctx = (struct qat_sym_session *)
-				get_sec_session_private_data(
-					op->sym->sec_session);
+			ctx = (struct qat_sym_session *)(op->sym->sec_session);
 
 			if (ctx == NULL || ctx->bpi_ctx == NULL)
 				continue;
@@ -243,9 +241,7 @@ qat_sym_process_response(void **op, uint8_t *resp, void *op_cookie)
 		 * Assuming at this point that if it's a security
 		 * op, that this is for DOCSIS
 		 */
-		sess = (struct qat_sym_session *)
-				get_sec_session_private_data(
-				rx_op->sym->sec_session);
+		sess = (struct qat_sym_session *)(rx_op->sym->sec_session);
 		is_docsis_sec = 1;
 	} else
 #endif
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 3f2f6736fc..2a22347c7f 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -2283,10 +2283,8 @@ qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev,
 int
 qat_security_session_create(void *dev,
 				struct rte_security_session_conf *conf,
-				struct rte_security_session *sess,
-				struct rte_mempool *mempool)
+				void *sess_private_data)
 {
-	void *sess_private_data;
 	struct rte_cryptodev *cdev = (struct rte_cryptodev *)dev;
 	int ret;
 
@@ -2296,40 +2294,25 @@ qat_security_session_create(void *dev,
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		QAT_LOG(ERR, "Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
 	ret = qat_sec_session_set_docsis_parameters(cdev, conf,
 			sess_private_data);
 	if (ret != 0) {
 		QAT_LOG(ERR, "Failed to configure session parameters");
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sec_session_private_data(sess, sess_private_data);
-
 	return ret;
 }
 
 int
-qat_security_session_destroy(void *dev __rte_unused,
-				 struct rte_security_session *sess)
+qat_security_session_destroy(void *dev __rte_unused, void *sess_priv)
 {
-	void *sess_priv = get_sec_session_private_data(sess);
 	struct qat_sym_session *s = (struct qat_sym_session *)sess_priv;
 
 	if (sess_priv) {
 		if (s->bpi_ctx)
 			bpi_cipher_ctx_free(s->bpi_ctx);
 		memset(s, 0, qat_sym_session_get_private_size(dev));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-
-		set_sec_session_private_data(sess, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
 	}
 	return 0;
 }
diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h
index 6ebc176729..7fcc1d6f7b 100644
--- a/drivers/crypto/qat/qat_sym_session.h
+++ b/drivers/crypto/qat/qat_sym_session.h
@@ -166,9 +166,9 @@ qat_sym_validate_zuc_key(int key_len, enum icp_qat_hw_cipher_algo *alg);
 #ifdef RTE_LIB_SECURITY
 int
 qat_security_session_create(void *dev, struct rte_security_session_conf *conf,
-		struct rte_security_session *sess, struct rte_mempool *mempool);
+		void *sess);
 int
-qat_security_session_destroy(void *dev, struct rte_security_session *sess);
+qat_security_session_destroy(void *dev, void *sess);
 #endif
 
 #endif /* _QAT_SYM_SESSION_H_ */
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
index e45c5501e6..cd54a3beee 100644
--- a/drivers/net/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -369,24 +369,17 @@ ixgbe_crypto_remove_sa(struct rte_eth_dev *dev,
 static int
 ixgbe_crypto_create_session(void *device,
 		struct rte_security_session_conf *conf,
-		struct rte_security_session *session,
-		struct rte_mempool *mempool)
+		void *session)
 {
 	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
-	struct ixgbe_crypto_session *ic_session = NULL;
+	struct ixgbe_crypto_session *ic_session = session;
 	struct rte_crypto_aead_xform *aead_xform;
 	struct rte_eth_conf *dev_conf = &eth_dev->data->dev_conf;
 
-	if (rte_mempool_get(mempool, (void **)&ic_session)) {
-		PMD_DRV_LOG(ERR, "Cannot get object from ic_session mempool");
-		return -ENOMEM;
-	}
-
 	if (conf->crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AEAD ||
 			conf->crypto_xform->aead.algo !=
 					RTE_CRYPTO_AEAD_AES_GCM) {
 		PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode\n");
-		rte_mempool_put(mempool, (void *)ic_session);
 		return -ENOTSUP;
 	}
 	aead_xform = &conf->crypto_xform->aead;
@@ -396,7 +389,6 @@ ixgbe_crypto_create_session(void *device,
 			ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
-			rte_mempool_put(mempool, (void *)ic_session);
 			return -ENOTSUP;
 		}
 	} else {
@@ -404,7 +396,6 @@ ixgbe_crypto_create_session(void *device,
 			ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
-			rte_mempool_put(mempool, (void *)ic_session);
 			return -ENOTSUP;
 		}
 	}
@@ -416,12 +407,9 @@ ixgbe_crypto_create_session(void *device,
 	ic_session->spi = conf->ipsec.spi;
 	ic_session->dev = eth_dev;
 
-	set_sec_session_private_data(session, ic_session);
-
 	if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
 		if (ixgbe_crypto_add_sa(ic_session)) {
 			PMD_DRV_LOG(ERR, "Failed to add SA\n");
-			rte_mempool_put(mempool, (void *)ic_session);
 			return -EPERM;
 		}
 	}
@@ -436,14 +424,11 @@ ixgbe_crypto_session_get_size(__rte_unused void *device)
 }
 
 static int
-ixgbe_crypto_remove_session(void *device,
-		struct rte_security_session *session)
+ixgbe_crypto_remove_session(void *device, void *session)
 {
 	struct rte_eth_dev *eth_dev = device;
 	struct ixgbe_crypto_session *ic_session =
-		(struct ixgbe_crypto_session *)
-		get_sec_session_private_data(session);
-	struct rte_mempool *mempool = rte_mempool_from_obj(ic_session);
+		(struct ixgbe_crypto_session *)session;
 
 	if (eth_dev != ic_session->dev) {
 		PMD_DRV_LOG(ERR, "Session not bound to this device\n");
@@ -455,8 +440,6 @@ ixgbe_crypto_remove_session(void *device,
 		return -EFAULT;
 	}
 
-	rte_mempool_put(mempool, (void *)ic_session);
-
 	return 0;
 }
 
@@ -476,12 +459,11 @@ ixgbe_crypto_compute_pad_len(struct rte_mbuf *m)
 }
 
 static int
-ixgbe_crypto_update_mb(void *device __rte_unused,
-		struct rte_security_session *session,
+ixgbe_crypto_update_mb(void *device __rte_unused, void *session,
 		       struct rte_mbuf *m, void *params __rte_unused)
 {
-	struct ixgbe_crypto_session *ic_session =
-			get_sec_session_private_data(session);
+	struct ixgbe_crypto_session *ic_session = session;
+
 	if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
 		union ixgbe_crypto_tx_desc_md *mdata =
 			(union ixgbe_crypto_tx_desc_md *)
@@ -685,8 +667,10 @@ ixgbe_crypto_add_ingress_sa_from_flow(const void *sess,
 				      const void *ip_spec,
 				      uint8_t is_ipv6)
 {
-	struct ixgbe_crypto_session *ic_session
-		= get_sec_session_private_data(sess);
+	uint64_t sess_ptr = (uint64_t)sess;
+	struct ixgbe_crypto_session *ic_session =
+			(struct ixgbe_crypto_session *)sess_ptr;
+	/* TODO: A proper fix needs to be added to remove the above typecast. */
 
 	if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
 		if (is_ipv6) {
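
The (uint64_t) detour in the hunk above exists only to shed the const
qualifier on 'sess'; should the TODO be addressed, the conventional
pointer round-trip is via uintptr_t from <stdint.h>, e.g. (sketch, and
the same applies to the txgbe copy further down):

	struct ixgbe_crypto_session *ic_session =
		(struct ixgbe_crypto_session *)(uintptr_t)sess;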
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index bcf488f203..7a09f7183d 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -12,7 +12,7 @@ drivers = [
         'bnx2x',
         'bnxt',
         'bonding',
-        'cnxk',
+#        'cnxk',
         'cxgbe',
         'dpaa',
         'dpaa2',
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.c b/drivers/net/octeontx2/otx2_ethdev_sec.c
index c2a36883cb..ef851fe52c 100644
--- a/drivers/net/octeontx2/otx2_ethdev_sec.c
+++ b/drivers/net/octeontx2/otx2_ethdev_sec.c
@@ -350,7 +350,7 @@ static int
 eth_sec_ipsec_out_sess_create(struct rte_eth_dev *eth_dev,
 			      struct rte_security_ipsec_xform *ipsec,
 			      struct rte_crypto_sym_xform *crypto_xform,
-			      struct rte_security_session *sec_sess)
+			      struct otx2_sec_session *sec_sess)
 {
 	struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
 	struct otx2_sec_session_ipsec_ip *sess;
@@ -363,7 +363,7 @@ eth_sec_ipsec_out_sess_create(struct rte_eth_dev *eth_dev,
 	struct otx2_cpt_inst_s inst;
 	struct otx2_cpt_qp *qp;
 
-	priv = get_sec_session_private_data(sec_sess);
+	priv = sec_sess;
 	priv->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
 	sess = &priv->ipsec.ip;
 
@@ -468,7 +468,7 @@ static int
 eth_sec_ipsec_in_sess_create(struct rte_eth_dev *eth_dev,
 			     struct rte_security_ipsec_xform *ipsec,
 			     struct rte_crypto_sym_xform *crypto_xform,
-			     struct rte_security_session *sec_sess)
+			     struct otx2_sec_session *sec_sess)
 {
 	struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
 	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
@@ -495,7 +495,7 @@ eth_sec_ipsec_in_sess_create(struct rte_eth_dev *eth_dev,
 
 	ctl = &sa->ctl;
 
-	priv = get_sec_session_private_data(sec_sess);
+	priv = sec_sess;
 	priv->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
 	sess = &priv->ipsec.ip;
 
@@ -619,7 +619,7 @@ static int
 eth_sec_ipsec_sess_create(struct rte_eth_dev *eth_dev,
 			  struct rte_security_ipsec_xform *ipsec,
 			  struct rte_crypto_sym_xform *crypto_xform,
-			  struct rte_security_session *sess)
+			  struct otx2_sec_session *sess)
 {
 	int ret;
 
@@ -638,22 +638,14 @@ eth_sec_ipsec_sess_create(struct rte_eth_dev *eth_dev,
 static int
 otx2_eth_sec_session_create(void *device,
 			    struct rte_security_session_conf *conf,
-			    struct rte_security_session *sess,
-			    struct rte_mempool *mempool)
+			    void *sess)
 {
-	struct otx2_sec_session *priv;
+	struct otx2_sec_session *priv = sess;
 	int ret;
 
 	if (conf->action_type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
 		return -ENOTSUP;
 
-	if (rte_mempool_get(mempool, (void **)&priv)) {
-		otx2_err("Could not allocate security session private data");
-		return -ENOMEM;
-	}
-
-	set_sec_session_private_data(sess, priv);
-
 	/*
 	 * Save userdata provided by the application. For ingress packets, this
 	 * could be used to identify the SA.
@@ -663,19 +655,14 @@ otx2_eth_sec_session_create(void *device,
 	if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC)
 		ret = eth_sec_ipsec_sess_create(device, &conf->ipsec,
 						conf->crypto_xform,
-						sess);
+						priv);
 	else
 		ret = -ENOTSUP;
 
 	if (ret)
-		goto mempool_put;
+		return ret;
 
 	return 0;
-
-mempool_put:
-	rte_mempool_put(mempool, priv);
-	set_sec_session_private_data(sess, NULL);
-	return ret;
 }
 
 static void
@@ -688,20 +675,14 @@ otx2_eth_sec_free_anti_replay(struct otx2_ipsec_fp_in_sa *sa)
 }
 
 static int
-otx2_eth_sec_session_destroy(void *device,
-			     struct rte_security_session *sess)
+otx2_eth_sec_session_destroy(void *device, void *sess)
 {
 	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(device);
 	struct otx2_sec_session_ipsec_ip *sess_ip;
 	struct otx2_ipsec_fp_in_sa *sa;
-	struct otx2_sec_session *priv;
-	struct rte_mempool *sess_mp;
+	struct otx2_sec_session *priv = sess;
 	int ret;
 
-	priv = get_sec_session_private_data(sess);
-	if (priv == NULL)
-		return -EINVAL;
-
 	sess_ip = &priv->ipsec.ip;
 
 	if (priv->ipsec.dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
@@ -727,11 +708,6 @@ otx2_eth_sec_session_destroy(void *device,
 			return ret;
 	}
 
-	sess_mp = rte_mempool_from_obj(priv);
-
-	set_sec_session_private_data(sess, NULL);
-	rte_mempool_put(sess_mp, priv);
-
 	return 0;
 }
 
@@ -742,9 +718,8 @@ otx2_eth_sec_session_get_size(void *device __rte_unused)
 }
 
 static int
-otx2_eth_sec_set_pkt_mdata(void *device __rte_unused,
-			    struct rte_security_session *session,
-			    struct rte_mbuf *m, void *params __rte_unused)
+otx2_eth_sec_set_pkt_mdata(void *device __rte_unused, void *session,
+		struct rte_mbuf *m, void *params __rte_unused)
 {
 	/* Set security session as the pkt metadata */
 	*rte_security_dynfield(m) = (rte_security_dynfield_t)session;
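
Storing the raw handle in the dynfield here pairs with the direct
read-back in otx2_ethdev_sec_tx.h below; as a self-contained sketch
(sess_from_mbuf is a hypothetical helper):

#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_security.h>

struct otx2_sec_session;

static inline struct otx2_sec_session *
sess_from_mbuf(struct rte_mbuf *m)
{
	/* Recover the handle stored by the set_pkt_mdata op above. */
	return (struct otx2_sec_session *)(uintptr_t)
			(*rte_security_dynfield(m));
}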
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h b/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
index 623a2a841e..9ecb786947 100644
--- a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
+++ b/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
@@ -54,7 +54,7 @@ otx2_sec_event_tx(uint64_t base, struct rte_event *ev, struct rte_mbuf *m,
 		struct nix_iova_s nix_iova;
 	} *sd;
 
-	priv = get_sec_session_private_data((void *)(*rte_security_dynfield(m)));
+	priv = (void *)(*rte_security_dynfield(m));
 	sess = &priv->ipsec.ip;
 	sa = &sess->out_sa;
 
diff --git a/drivers/net/txgbe/txgbe_ipsec.c b/drivers/net/txgbe/txgbe_ipsec.c
index ccd747973b..444da5b8f3 100644
--- a/drivers/net/txgbe/txgbe_ipsec.c
+++ b/drivers/net/txgbe/txgbe_ipsec.c
@@ -349,24 +349,17 @@ txgbe_crypto_remove_sa(struct rte_eth_dev *dev,
 static int
 txgbe_crypto_create_session(void *device,
 		struct rte_security_session_conf *conf,
-		struct rte_security_session *session,
-		struct rte_mempool *mempool)
+		void *session)
 {
 	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
-	struct txgbe_crypto_session *ic_session = NULL;
+	struct txgbe_crypto_session *ic_session = session;
 	struct rte_crypto_aead_xform *aead_xform;
 	struct rte_eth_conf *dev_conf = &eth_dev->data->dev_conf;
 
-	if (rte_mempool_get(mempool, (void **)&ic_session)) {
-		PMD_DRV_LOG(ERR, "Cannot get object from ic_session mempool");
-		return -ENOMEM;
-	}
-
 	if (conf->crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AEAD ||
 			conf->crypto_xform->aead.algo !=
 					RTE_CRYPTO_AEAD_AES_GCM) {
 		PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode\n");
-		rte_mempool_put(mempool, (void *)ic_session);
 		return -ENOTSUP;
 	}
 	aead_xform = &conf->crypto_xform->aead;
@@ -376,7 +369,6 @@ txgbe_crypto_create_session(void *device,
 			ic_session->op = TXGBE_OP_AUTHENTICATED_DECRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
-			rte_mempool_put(mempool, (void *)ic_session);
 			return -ENOTSUP;
 		}
 	} else {
@@ -384,7 +376,6 @@ txgbe_crypto_create_session(void *device,
 			ic_session->op = TXGBE_OP_AUTHENTICATED_ENCRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
-			rte_mempool_put(mempool, (void *)ic_session);
 			return -ENOTSUP;
 		}
 	}
@@ -396,12 +387,9 @@ txgbe_crypto_create_session(void *device,
 	ic_session->spi = conf->ipsec.spi;
 	ic_session->dev = eth_dev;
 
-	set_sec_session_private_data(session, ic_session);
-
 	if (ic_session->op == TXGBE_OP_AUTHENTICATED_ENCRYPTION) {
 		if (txgbe_crypto_add_sa(ic_session)) {
 			PMD_DRV_LOG(ERR, "Failed to add SA\n");
-			rte_mempool_put(mempool, (void *)ic_session);
 			return -EPERM;
 		}
 	}
@@ -416,14 +404,11 @@ txgbe_crypto_session_get_size(__rte_unused void *device)
 }
 
 static int
-txgbe_crypto_remove_session(void *device,
-		struct rte_security_session *session)
+txgbe_crypto_remove_session(void *device, void *session)
 {
 	struct rte_eth_dev *eth_dev = device;
 	struct txgbe_crypto_session *ic_session =
-		(struct txgbe_crypto_session *)
-		get_sec_session_private_data(session);
-	struct rte_mempool *mempool = rte_mempool_from_obj(ic_session);
+		(struct txgbe_crypto_session *)session;
 
 	if (eth_dev != ic_session->dev) {
 		PMD_DRV_LOG(ERR, "Session not bound to this device\n");
@@ -435,8 +420,6 @@ txgbe_crypto_remove_session(void *device,
 		return -EFAULT;
 	}
 
-	rte_mempool_put(mempool, (void *)ic_session);
-
 	return 0;
 }
 
@@ -456,12 +439,11 @@ txgbe_crypto_compute_pad_len(struct rte_mbuf *m)
 }
 
 static int
-txgbe_crypto_update_mb(void *device __rte_unused,
-		struct rte_security_session *session,
-		       struct rte_mbuf *m, void *params __rte_unused)
+txgbe_crypto_update_mb(void *device __rte_unused, void *session,
+		struct rte_mbuf *m, void *params __rte_unused)
 {
-	struct txgbe_crypto_session *ic_session =
-			get_sec_session_private_data(session);
+	struct txgbe_crypto_session *ic_session = session;
+
 	if (ic_session->op == TXGBE_OP_AUTHENTICATED_ENCRYPTION) {
 		union txgbe_crypto_tx_desc_md *mdata =
 			(union txgbe_crypto_tx_desc_md *)
@@ -661,8 +643,10 @@ txgbe_crypto_add_ingress_sa_from_flow(const void *sess,
 				      const void *ip_spec,
 				      uint8_t is_ipv6)
 {
+	uint64_t sess_ptr = (uint64_t)sess;
 	struct txgbe_crypto_session *ic_session =
-			get_sec_session_private_data(sess);
+			(struct txgbe_crypto_session *)sess_ptr;
+	/* TODO: A proper fix needs to be added to remove the above typecast. */
 
 	if (ic_session->op == TXGBE_OP_AUTHENTICATED_DECRYPTION) {
 		if (is_ipv6) {
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index 6817139663..03d907cba8 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -117,8 +117,7 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
 			set_ipsec_conf(sa, &(sess_conf.ipsec));
 
 			ips->security.ses = rte_security_session_create(ctx,
-					&sess_conf, ipsec_ctx->session_pool,
-					ipsec_ctx->session_priv_pool);
+					&sess_conf, ipsec_ctx->session_pool);
 			if (ips->security.ses == NULL) {
 				RTE_LOG(ERR, IPSEC,
 				"SEC Session init failed: err: %d\n", ret);
@@ -199,8 +198,7 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
 		}
 
 		ips->security.ses = rte_security_session_create(sec_ctx,
-				&sess_conf, skt_ctx->session_pool,
-				skt_ctx->session_priv_pool);
+				&sess_conf, skt_ctx->session_pool);
 		if (ips->security.ses == NULL) {
 			RTE_LOG(ERR, IPSEC,
 				"SEC Session init failed: err: %d\n", ret);
@@ -380,8 +378,7 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
 		sess_conf.userdata = (void *) sa;
 
 		ips->security.ses = rte_security_session_create(sec_ctx,
-					&sess_conf, skt_ctx->session_pool,
-					skt_ctx->session_priv_pool);
+					&sess_conf, skt_ctx->session_pool);
 		if (ips->security.ses == NULL) {
 			RTE_LOG(ERR, IPSEC,
 				"SEC Session init failed: err: %d\n", ret);
diff --git a/lib/security/rte_security.c b/lib/security/rte_security.c
index fe81ed3e4c..06560b9cba 100644
--- a/lib/security/rte_security.c
+++ b/lib/security/rte_security.c
@@ -39,35 +39,37 @@ rte_security_dynfield_register(void)
 	return rte_security_dynfield_offset;
 }
 
-struct rte_security_session *
+void *
 rte_security_session_create(struct rte_security_ctx *instance,
 			    struct rte_security_session_conf *conf,
-			    struct rte_mempool *mp,
-			    struct rte_mempool *priv_mp)
+			    struct rte_mempool *mp)
 {
 	struct rte_security_session *sess = NULL;
 
 	RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, session_create, NULL, NULL);
 	RTE_PTR_OR_ERR_RET(conf, NULL);
 	RTE_PTR_OR_ERR_RET(mp, NULL);
-	RTE_PTR_OR_ERR_RET(priv_mp, NULL);
+
+	if (mp->elt_size < sizeof(struct rte_security_session) +
+			instance->ops->session_get_size(instance->device))
+		return NULL;
 
 	if (rte_mempool_get(mp, (void **)&sess))
 		return NULL;
 
 	if (instance->ops->session_create(instance->device, conf,
-				sess, priv_mp)) {
+				sess->sess_private_data)) {
 		rte_mempool_put(mp, (void *)sess);
 		return NULL;
 	}
 	instance->sess_cnt++;
 
-	return sess;
+	return sess->sess_private_data;
 }
 
 int
 rte_security_session_update(struct rte_security_ctx *instance,
-			    struct rte_security_session *sess,
+			    void *sess,
 			    struct rte_security_session_conf *conf)
 {
 	RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, session_update, -EINVAL,
@@ -88,8 +90,7 @@ rte_security_session_get_size(struct rte_security_ctx *instance)
 
 int
 rte_security_session_stats_get(struct rte_security_ctx *instance,
-			       struct rte_security_session *sess,
-			       struct rte_security_stats *stats)
+			       void *sess, struct rte_security_stats *stats)
 {
 	RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, session_stats_get, -EINVAL,
 			-ENOTSUP);
@@ -100,9 +101,9 @@ rte_security_session_stats_get(struct rte_security_ctx *instance,
 }
 
 int
-rte_security_session_destroy(struct rte_security_ctx *instance,
-			     struct rte_security_session *sess)
+rte_security_session_destroy(struct rte_security_ctx *instance, void *sess)
 {
+	struct rte_security_session *s;
 	int ret;
 
 	RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, session_destroy, -EINVAL,
@@ -113,7 +114,8 @@ rte_security_session_destroy(struct rte_security_ctx *instance,
 	if (ret != 0)
 		return ret;
 
-	rte_mempool_put(rte_mempool_from_obj(sess), (void *)sess);
+	s = container_of(sess, struct rte_security_session, sess_private_data);
+	rte_mempool_put(rte_mempool_from_obj(s), (void *)s);
 
 	if (instance->sess_cnt)
 		instance->sess_cnt--;
@@ -123,7 +125,7 @@ rte_security_session_destroy(struct rte_security_ctx *instance,
 
 int
 __rte_security_set_pkt_metadata(struct rte_security_ctx *instance,
-				struct rte_security_session *sess,
+				void *sess,
 				struct rte_mbuf *m, void *params)
 {
 #ifdef RTE_DEBUG
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 4c55dcd744..c5ceb3b588 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -509,10 +509,12 @@ struct rte_security_session_conf {
 };
 
 struct rte_security_session {
-	void *sess_private_data;
-	/**< Private session material */
 	uint64_t opaque_data;
 	/**< Opaque user defined data */
+	uint64_t fast_mdata;
+	/**< Fast metadata to be used for inline path */
+	__extension__ void *sess_private_data[0];
+	/**< Private session material */
 };
 
 /**
@@ -526,11 +528,10 @@ struct rte_security_session {
  *  - On success, pointer to session
  *  - On failure, NULL
  */
-struct rte_security_session *
+void *
 rte_security_session_create(struct rte_security_ctx *instance,
 			    struct rte_security_session_conf *conf,
-			    struct rte_mempool *mp,
-			    struct rte_mempool *priv_mp);
+			    struct rte_mempool *mp);
 
 /**
  * Update security session as specified by the session configuration
@@ -545,7 +546,7 @@ rte_security_session_create(struct rte_security_ctx *instance,
 __rte_experimental
 int
 rte_security_session_update(struct rte_security_ctx *instance,
-			    struct rte_security_session *sess,
+			    void *sess,
 			    struct rte_security_session_conf *conf);
 
 /**
@@ -576,7 +577,7 @@ rte_security_session_get_size(struct rte_security_ctx *instance);
  */
 int
 rte_security_session_destroy(struct rte_security_ctx *instance,
-			     struct rte_security_session *sess);
+			     void *sess);
 
 /** Device-specific metadata field type */
 typedef uint64_t rte_security_dynfield_t;
@@ -622,7 +623,7 @@ static inline bool rte_security_dynfield_is_registered(void)
 /** Function to call PMD specific function pointer set_pkt_metadata() */
 __rte_experimental
 extern int __rte_security_set_pkt_metadata(struct rte_security_ctx *instance,
-					   struct rte_security_session *sess,
+					   void *sess,
 					   struct rte_mbuf *m, void *params);
 
 /**
@@ -640,13 +641,13 @@ extern int __rte_security_set_pkt_metadata(struct rte_security_ctx *instance,
  */
 static inline int
 rte_security_set_pkt_metadata(struct rte_security_ctx *instance,
-			      struct rte_security_session *sess,
+			      void *sess,
 			      struct rte_mbuf *mb, void *params)
 {
 	/* Fast Path */
 	if (instance->flags & RTE_SEC_CTX_F_FAST_SET_MDATA) {
 		*rte_security_dynfield(mb) =
-			(rte_security_dynfield_t)(sess->sess_private_data);
+			(rte_security_dynfield_t)(sess);
 		return 0;
 	}
 
@@ -696,26 +697,13 @@ rte_security_get_userdata(struct rte_security_ctx *instance, uint64_t md)
  */
 static inline int
 __rte_security_attach_session(struct rte_crypto_sym_op *sym_op,
-			      struct rte_security_session *sess)
+			      void *sess)
 {
 	sym_op->sec_session = sess;
 
 	return 0;
 }
 
-static inline void *
-get_sec_session_private_data(const struct rte_security_session *sess)
-{
-	return sess->sess_private_data;
-}
-
-static inline void
-set_sec_session_private_data(struct rte_security_session *sess,
-			     void *private_data)
-{
-	sess->sess_private_data = private_data;
-}
-
 /**
  * Attach a session to a crypto operation.
  * This API is needed only in case of RTE_SECURITY_SESS_CRYPTO_PROTO_OFFLOAD
@@ -726,8 +714,7 @@ set_sec_session_private_data(struct rte_security_session *sess,
  * @param	sess	security session
  */
 static inline int
-rte_security_attach_session(struct rte_crypto_op *op,
-			    struct rte_security_session *sess)
+rte_security_attach_session(struct rte_crypto_op *op, void *sess)
 {
 	if (unlikely(op->type != RTE_CRYPTO_OP_TYPE_SYMMETRIC))
 		return -EINVAL;
@@ -789,7 +776,7 @@ struct rte_security_stats {
 __rte_experimental
 int
 rte_security_session_stats_get(struct rte_security_ctx *instance,
-			       struct rte_security_session *sess,
+			       void *sess,
 			       struct rte_security_stats *stats);
 
 /**
diff --git a/lib/security/rte_security_driver.h b/lib/security/rte_security_driver.h
index b0253e962e..5a177d72d7 100644
--- a/lib/security/rte_security_driver.h
+++ b/lib/security/rte_security_driver.h
@@ -35,8 +35,7 @@ extern "C" {
  */
 typedef int (*security_session_create_t)(void *device,
 		struct rte_security_session_conf *conf,
-		struct rte_security_session *sess,
-		struct rte_mempool *mp);
+		void *sess);
 
 /**
  * Free driver private session data.
@@ -44,8 +43,7 @@ typedef int (*security_session_create_t)(void *device,
  * @param	device		Crypto/eth device pointer
  * @param	sess		Security session structure
  */
-typedef int (*security_session_destroy_t)(void *device,
-		struct rte_security_session *sess);
+typedef int (*security_session_destroy_t)(void *device, void *sess);
 
 /**
  * Update driver private session data.
@@ -60,8 +58,7 @@ typedef int (*security_session_destroy_t)(void *device,
  *  - Returns -ENOTSUP if crypto device does not support the crypto transform.
  */
 typedef int (*security_session_update_t)(void *device,
-		struct rte_security_session *sess,
-		struct rte_security_session_conf *conf);
+		void *sess, struct rte_security_session_conf *conf);
 
 /**
  * Get the size of a security session
@@ -86,8 +83,7 @@ typedef unsigned int (*security_session_get_size)(void *device);
  *  - Returns -EINVAL if session parameters are invalid.
  */
 typedef int (*security_session_stats_get_t)(void *device,
-		struct rte_security_session *sess,
-		struct rte_security_stats *stats);
+		void *sess, struct rte_security_stats *stats);
 
 __rte_internal
 int rte_security_dynfield_register(void);
@@ -96,7 +92,7 @@ int rte_security_dynfield_register(void);
  * Update the mbuf with provided metadata.
  *
  * @param	device		Crypto/eth device pointer
- * @param	sess		Security session structure
+ * @param	sess		Security session
  * @param	mb		Packet buffer
  * @param	params		Metadata
  *
@@ -105,7 +101,7 @@ int rte_security_dynfield_register(void);
  *  - Returns -ve value for errors.
  */
 typedef int (*security_set_pkt_metadata_t)(void *device,
-		struct rte_security_session *sess, struct rte_mbuf *mb,
+		void *sess, struct rte_mbuf *mb,
 		void *params);
 
 /**
-- 
2.25.1


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v3 3/7] eal/interrupts: avoid direct access to interrupt handle
  2021-10-18 19:37  4% ` [dpdk-dev] [PATCH v3 " Harman Kalra
  @ 2021-10-18 19:37  1%   ` Harman Kalra
  1 sibling, 0 replies; 200+ results
From: Harman Kalra @ 2021-10-18 19:37 UTC (permalink / raw)
  To: dev, Harman Kalra, Bruce Richardson
  Cc: david.marchand, dmitry.kozliuk, mdr, thomas

Change the interrupt framework to use the interrupt handle APIs to
get/set any field. Direct access to any of the fields should be
avoided to prevent any ABI breakage in the future.
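
For illustration, the conversion replaces direct field reads with the
new accessor calls; a minimal before/after sketch (the check itself is
hypothetical, the accessors are the ones used throughout this patch):

    /* before: direct access to struct rte_intr_handle fields */
    if (intr_handle->fd < 0 ||
        intr_handle->type == RTE_INTR_HANDLE_ALARM)
        return -EINVAL;

    /* after: accessor APIs keep the structure layout private */
    if (rte_intr_fd_get(intr_handle) < 0 ||
        rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_ALARM)
        return -EINVAL;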

Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
 lib/eal/freebsd/eal_interrupts.c |  92 ++++++----
 lib/eal/linux/eal_interrupts.c   | 287 +++++++++++++++++++------------
 2 files changed, 234 insertions(+), 145 deletions(-)

diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 86810845fe..846ca4aa89 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -40,7 +40,7 @@ struct rte_intr_callback {
 
 struct rte_intr_source {
 	TAILQ_ENTRY(rte_intr_source) next;
-	struct rte_intr_handle intr_handle; /**< interrupt handle */
+	struct rte_intr_handle *intr_handle; /**< interrupt handle */
 	struct rte_intr_cb_list callbacks;  /**< user callbacks */
 	uint32_t active;
 };
@@ -60,7 +60,7 @@ static int
 intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
 {
 	/* alarm callbacks are special case */
-	if (ih->type == RTE_INTR_HANDLE_ALARM) {
+	if (rte_intr_type_get(ih) == RTE_INTR_HANDLE_ALARM) {
 		uint64_t timeout_ns;
 
 		/* get soonest alarm timeout */
@@ -75,7 +75,7 @@ intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
 	} else {
 		ke->filter = EVFILT_READ;
 	}
-	ke->ident = ih->fd;
+	ke->ident = rte_intr_fd_get(ih);
 
 	return 0;
 }
@@ -89,7 +89,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	int ret = 0, add_event = 0;
 
 	/* first do parameter checking */
-	if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0 ||
+	    cb == NULL) {
 		RTE_LOG(ERR, EAL,
 			"Registering with invalid input parameter\n");
 		return -EINVAL;
@@ -103,7 +104,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 
 	/* find the source for this intr_handle */
 	TAILQ_FOREACH(src, &intr_sources, next) {
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_fd_get(src->intr_handle) ==
+		    rte_intr_fd_get(intr_handle))
 			break;
 	}
 
@@ -112,8 +114,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	 * thing on the list should be eal_alarm_callback() and we may
 	 * be called just to reset the timer.
 	 */
-	if (src != NULL && src->intr_handle.type == RTE_INTR_HANDLE_ALARM &&
-		 !TAILQ_EMPTY(&src->callbacks)) {
+	if (src != NULL && rte_intr_type_get(src->intr_handle) ==
+		RTE_INTR_HANDLE_ALARM && !TAILQ_EMPTY(&src->callbacks)) {
 		callback = NULL;
 	} else {
 		/* allocate a new interrupt callback entity */
@@ -135,9 +137,18 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 				ret = -ENOMEM;
 				goto fail;
 			} else {
-				src->intr_handle = *intr_handle;
-				TAILQ_INIT(&src->callbacks);
-				TAILQ_INSERT_TAIL(&intr_sources, src, next);
+				src->intr_handle = rte_intr_instance_alloc();
+				if (src->intr_handle == NULL) {
+					RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+					free(callback);
+					ret = -ENOMEM;
+				} else {
+					rte_intr_instance_copy(src->intr_handle,
+							       intr_handle);
+					TAILQ_INIT(&src->callbacks);
+					TAILQ_INSERT_TAIL(&intr_sources, src,
+							  next);
+				}
 			}
 		}
 
@@ -151,7 +162,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	/* add events to the queue. timer events are special as we need to
 	 * re-set the timer.
 	 */
-	if (add_event || src->intr_handle.type == RTE_INTR_HANDLE_ALARM) {
+	if (add_event || rte_intr_type_get(src->intr_handle) ==
+							RTE_INTR_HANDLE_ALARM) {
 		struct kevent ke;
 
 		memset(&ke, 0, sizeof(ke));
@@ -173,12 +185,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 			 */
 			if (errno == ENODEV)
 				RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n",
-					src->intr_handle.fd);
+				rte_intr_fd_get(src->intr_handle));
 			else
 				RTE_LOG(ERR, EAL, "Error adding fd %d "
-						"kevent, %s\n",
-						src->intr_handle.fd,
-						strerror(errno));
+					"kevent, %s\n",
+					rte_intr_fd_get(
+							src->intr_handle),
+					strerror(errno));
 			ret = -errno;
 			goto fail;
 		}
@@ -213,7 +226,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 	struct rte_intr_callback *cb, *next;
 
 	/* do parameter checking first */
-	if (intr_handle == NULL || intr_handle->fd < 0) {
+	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
 		RTE_LOG(ERR, EAL,
 		"Unregistering with invalid input parameter\n");
 		return -EINVAL;
@@ -228,7 +241,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 
 	/* check if the insterrupt source for the fd is existent */
 	TAILQ_FOREACH(src, &intr_sources, next)
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_fd_get(src->intr_handle) ==
+					rte_intr_fd_get(intr_handle))
 			break;
 
 	/* No interrupt source registered for the fd */
@@ -268,7 +282,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 	struct rte_intr_callback *cb, *next;
 
 	/* do parameter checking first */
-	if (intr_handle == NULL || intr_handle->fd < 0) {
+	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
 		RTE_LOG(ERR, EAL,
 		"Unregistering with invalid input parameter\n");
 		return -EINVAL;
@@ -282,7 +296,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 
 	/* check if the insterrupt source for the fd is existent */
 	TAILQ_FOREACH(src, &intr_sources, next)
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_fd_get(src->intr_handle) ==
+					rte_intr_fd_get(intr_handle))
 			break;
 
 	/* No interrupt source registered for the fd */
@@ -314,7 +329,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 		 */
 		if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
 			RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
-				src->intr_handle.fd, strerror(errno));
+				rte_intr_fd_get(src->intr_handle),
+				strerror(errno));
 			/* removing non-existent even is an expected condition
 			 * in some circumstances (e.g. oneshot events).
 			 */
@@ -365,17 +381,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
 	if (intr_handle == NULL)
 		return -1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
 		rc = 0;
 		goto out;
 	}
 
-	if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+	if (rte_intr_fd_get(intr_handle) < 0 ||
+				rte_intr_dev_fd_get(intr_handle) < 0) {
 		rc = -1;
 		goto out;
 	}
 
-	switch (intr_handle->type) {
+	switch (rte_intr_type_get(intr_handle)) {
 	/* not used at this moment */
 	case RTE_INTR_HANDLE_ALARM:
 		rc = -1;
@@ -388,7 +405,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
 	default:
 		RTE_LOG(ERR, EAL,
 			"Unknown handle type of fd %d\n",
-					intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		rc = -1;
 		break;
 	}
@@ -406,17 +423,18 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
 	if (intr_handle == NULL)
 		return -1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
 		rc = 0;
 		goto out;
 	}
 
-	if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+	if (rte_intr_fd_get(intr_handle) < 0 ||
+				rte_intr_dev_fd_get(intr_handle) < 0) {
 		rc = -1;
 		goto out;
 	}
 
-	switch (intr_handle->type) {
+	switch (rte_intr_type_get(intr_handle)) {
 	/* not used at this moment */
 	case RTE_INTR_HANDLE_ALARM:
 		rc = -1;
@@ -429,7 +447,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
 	default:
 		RTE_LOG(ERR, EAL,
 			"Unknown handle type of fd %d\n",
-					intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		rc = -1;
 		break;
 	}
@@ -441,7 +459,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
 int
 rte_intr_ack(const struct rte_intr_handle *intr_handle)
 {
-	if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+	if (intr_handle &&
+	    rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
 		return 0;
 
 	return -1;
@@ -463,7 +482,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 
 		rte_spinlock_lock(&intr_lock);
 		TAILQ_FOREACH(src, &intr_sources, next)
-			if (src->intr_handle.fd == event_fd)
+			if (rte_intr_fd_get(src->intr_handle) ==
+								event_fd)
 				break;
 		if (src == NULL) {
 			rte_spinlock_unlock(&intr_lock);
@@ -475,7 +495,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 		rte_spinlock_unlock(&intr_lock);
 
 		/* set the length to be read dor different handle type */
-		switch (src->intr_handle.type) {
+		switch (rte_intr_type_get(src->intr_handle)) {
 		case RTE_INTR_HANDLE_ALARM:
 			bytes_read = 0;
 			call = true;
@@ -546,7 +566,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 				/* mark for deletion from the queue */
 				ke.flags = EV_DELETE;
 
-				if (intr_source_to_kevent(&src->intr_handle, &ke) < 0) {
+				if (intr_source_to_kevent(src->intr_handle,
+							  &ke) < 0) {
 					RTE_LOG(ERR, EAL, "Cannot convert to kevent\n");
 					rte_spinlock_unlock(&intr_lock);
 					return;
@@ -557,7 +578,9 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 				 */
 				if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
 					RTE_LOG(ERR, EAL, "Error removing fd %d kevent, "
-						"%s\n", src->intr_handle.fd,
+						"%s\n",
+						rte_intr_fd_get(
+							src->intr_handle),
 						strerror(errno));
 					/* removing non-existent even is an expected
 					 * condition in some circumstances
@@ -567,7 +590,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 
 				TAILQ_REMOVE(&src->callbacks, cb, next);
 				if (cb->ucb_fn)
-					cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+					cb->ucb_fn(src->intr_handle,
+						   cb->cb_arg);
 				free(cb);
 			}
 		}
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index 22b3b7bcd9..a250a9df66 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -20,6 +20,7 @@
 #include <stdbool.h>
 
 #include <rte_common.h>
+#include <rte_epoll.h>
 #include <rte_interrupts.h>
 #include <rte_memory.h>
 #include <rte_launch.h>
@@ -82,7 +83,7 @@ struct rte_intr_callback {
 
 struct rte_intr_source {
 	TAILQ_ENTRY(rte_intr_source) next;
-	struct rte_intr_handle intr_handle; /**< interrupt handle */
+	struct rte_intr_handle *intr_handle; /**< interrupt handle */
 	struct rte_intr_cb_list callbacks;  /**< user callbacks */
 	uint32_t active;
 };
@@ -112,7 +113,7 @@ static int
 vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 	int *fd_ptr;
 
 	len = sizeof(irq_set_buf);
@@ -125,13 +126,14 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set->start = 0;
 	fd_ptr = (int *) &irq_set->data;
-	*fd_ptr = intr_handle->fd;
+	*fd_ptr = rte_intr_fd_get(intr_handle);
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -144,11 +146,11 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 	return 0;
@@ -159,7 +161,7 @@ static int
 vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 
 	len = sizeof(struct vfio_irq_set);
 
@@ -171,11 +173,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -187,11 +190,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL,
-			"Error disabling INTx interrupts for fd %d\n", intr_handle->fd);
+			"Error disabling INTx interrupts for fd %d\n",
+			rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 	return 0;
@@ -202,6 +206,7 @@ static int
 vfio_ack_intx(const struct rte_intr_handle *intr_handle)
 {
 	struct vfio_irq_set irq_set;
+	int vfio_dev_fd;
 
 	/* unmask INTx */
 	memset(&irq_set, 0, sizeof(irq_set));
@@ -211,9 +216,10 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle)
 	irq_set.index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set.start = 0;
 
-	if (ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
 		RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
-			intr_handle->fd);
+			rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 	return 0;
@@ -225,7 +231,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
 	int len, ret;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
 	struct vfio_irq_set *irq_set;
-	int *fd_ptr;
+	int *fd_ptr, vfio_dev_fd;
 
 	len = sizeof(irq_set_buf);
 
@@ -236,13 +242,14 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
 	irq_set->start = 0;
 	fd_ptr = (int *) &irq_set->data;
-	*fd_ptr = intr_handle->fd;
+	*fd_ptr = rte_intr_fd_get(intr_handle);
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 	return 0;
@@ -253,7 +260,7 @@ static int
 vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 
 	len = sizeof(struct vfio_irq_set);
 
@@ -264,11 +271,13 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret)
 		RTE_LOG(ERR, EAL,
-			"Error disabling MSI interrupts for fd %d\n", intr_handle->fd);
+			"Error disabling MSI interrupts for fd %d\n",
+			rte_intr_fd_get(intr_handle));
 
 	return ret;
 }
@@ -279,30 +288,34 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) {
 	int len, ret;
 	char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
 	struct vfio_irq_set *irq_set;
-	int *fd_ptr;
+	int *fd_ptr, vfio_dev_fd, i;
 
 	len = sizeof(irq_set_buf);
 
 	irq_set = (struct vfio_irq_set *) irq_set_buf;
 	irq_set->argsz = len;
 	/* 0 < irq_set->count < RTE_MAX_RXTX_INTR_VEC_ID + 1 */
-	irq_set->count = intr_handle->max_intr ?
-		(intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID + 1 ?
-		RTE_MAX_RXTX_INTR_VEC_ID + 1 : intr_handle->max_intr) : 1;
+	irq_set->count = rte_intr_max_intr_get(intr_handle) ?
+		(rte_intr_max_intr_get(intr_handle) >
+		 RTE_MAX_RXTX_INTR_VEC_ID + 1 ?	RTE_MAX_RXTX_INTR_VEC_ID + 1 :
+		 rte_intr_max_intr_get(intr_handle)) : 1;
+
 	irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
 	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
 	irq_set->start = 0;
 	fd_ptr = (int *) &irq_set->data;
 	/* INTR vector offset 0 reserve for non-efds mapping */
-	fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = intr_handle->fd;
-	memcpy(&fd_ptr[RTE_INTR_VEC_RXTX_OFFSET], intr_handle->efds,
-		sizeof(*intr_handle->efds) * intr_handle->nb_efd);
+	fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = rte_intr_fd_get(intr_handle);
+	for (i = 0; i < rte_intr_nb_efd_get(intr_handle); i++)
+		fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] =
+			rte_intr_efds_index_get(intr_handle, i);
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -314,7 +327,7 @@ static int
 vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 
 	len = sizeof(struct vfio_irq_set);
 
@@ -325,11 +338,13 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret)
 		RTE_LOG(ERR, EAL,
-			"Error disabling MSI-X interrupts for fd %d\n", intr_handle->fd);
+			"Error disabling MSI-X interrupts for fd %d\n",
+			rte_intr_fd_get(intr_handle));
 
 	return ret;
 }
@@ -342,7 +357,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
 	int len, ret;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
 	struct vfio_irq_set *irq_set;
-	int *fd_ptr;
+	int *fd_ptr, vfio_dev_fd;
 
 	len = sizeof(irq_set_buf);
 
@@ -354,13 +369,14 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
 	irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
 	irq_set->start = 0;
 	fd_ptr = (int *) &irq_set->data;
-	*fd_ptr = intr_handle->fd;
+	*fd_ptr = rte_intr_fd_get(intr_handle);
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -373,7 +389,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
 {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 
 	len = sizeof(struct vfio_irq_set);
 
@@ -384,11 +400,12 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
 	irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret)
 		RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n",
-			intr_handle->fd);
+			rte_intr_fd_get(intr_handle));
 
 	return ret;
 }
@@ -399,20 +416,22 @@ static int
 uio_intx_intr_disable(const struct rte_intr_handle *intr_handle)
 {
 	unsigned char command_high;
+	int uio_cfg_fd;
 
 	/* use UIO config file descriptor for uio_pci_generic */
-	if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+	uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+	if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
 		RTE_LOG(ERR, EAL,
 			"Error reading interrupts status for fd %d\n",
-			intr_handle->uio_cfg_fd);
+			uio_cfg_fd);
 		return -1;
 	}
 	/* disable interrupts */
 	command_high |= 0x4;
-	if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+	if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
 		RTE_LOG(ERR, EAL,
 			"Error disabling interrupts for fd %d\n",
-			intr_handle->uio_cfg_fd);
+			uio_cfg_fd);
 		return -1;
 	}
 
@@ -423,20 +442,22 @@ static int
 uio_intx_intr_enable(const struct rte_intr_handle *intr_handle)
 {
 	unsigned char command_high;
+	int uio_cfg_fd;
 
 	/* use UIO config file descriptor for uio_pci_generic */
-	if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+	uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+	if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
 		RTE_LOG(ERR, EAL,
 			"Error reading interrupts status for fd %d\n",
-			intr_handle->uio_cfg_fd);
+			uio_cfg_fd);
 		return -1;
 	}
 	/* enable interrupts */
 	command_high &= ~0x4;
-	if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+	if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
 		RTE_LOG(ERR, EAL,
 			"Error enabling interrupts for fd %d\n",
-			intr_handle->uio_cfg_fd);
+			uio_cfg_fd);
 		return -1;
 	}
 
@@ -448,10 +469,11 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle)
 {
 	const int value = 0;
 
-	if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+	if (write(rte_intr_fd_get(intr_handle), &value,
+		  sizeof(value)) < 0) {
 		RTE_LOG(ERR, EAL,
 			"Error disabling interrupts for fd %d (%s)\n",
-			intr_handle->fd, strerror(errno));
+			rte_intr_fd_get(intr_handle), strerror(errno));
 		return -1;
 	}
 	return 0;
@@ -462,10 +484,11 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
 {
 	const int value = 1;
 
-	if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+	if (write(rte_intr_fd_get(intr_handle), &value,
+		  sizeof(value)) < 0) {
 		RTE_LOG(ERR, EAL,
 			"Error enabling interrupts for fd %d (%s)\n",
-			intr_handle->fd, strerror(errno));
+			rte_intr_fd_get(intr_handle), strerror(errno));
 		return -1;
 	}
 	return 0;
@@ -482,7 +505,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	wake_thread = 0;
 
 	/* first do parameter checking */
-	if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0 ||
+	    cb == NULL) {
 		RTE_LOG(ERR, EAL,
 			"Registering with invalid input parameter\n");
 		return -EINVAL;
@@ -503,7 +527,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 
 	/* check if there is at least one callback registered for the fd */
 	TAILQ_FOREACH(src, &intr_sources, next) {
-		if (src->intr_handle.fd == intr_handle->fd) {
+		if (rte_intr_fd_get(src->intr_handle) ==
+					rte_intr_fd_get(intr_handle)) {
 			/* we had no interrupts for this */
 			if (TAILQ_EMPTY(&src->callbacks))
 				wake_thread = 1;
@@ -522,12 +547,21 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 			free(callback);
 			ret = -ENOMEM;
 		} else {
-			src->intr_handle = *intr_handle;
-			TAILQ_INIT(&src->callbacks);
-			TAILQ_INSERT_TAIL(&(src->callbacks), callback, next);
-			TAILQ_INSERT_TAIL(&intr_sources, src, next);
-			wake_thread = 1;
-			ret = 0;
+			src->intr_handle = rte_intr_instance_alloc();
+			if (src->intr_handle == NULL) {
+				RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+				free(callback);
+				ret = -ENOMEM;
+			} else {
+				rte_intr_instance_copy(src->intr_handle,
+						       intr_handle);
+				TAILQ_INIT(&src->callbacks);
+				TAILQ_INSERT_TAIL(&(src->callbacks), callback,
+						  next);
+				TAILQ_INSERT_TAIL(&intr_sources, src, next);
+				wake_thread = 1;
+				ret = 0;
+			}
 		}
 	}
 
@@ -555,7 +589,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 	struct rte_intr_callback *cb, *next;
 
 	/* do parameter checking first */
-	if (intr_handle == NULL || intr_handle->fd < 0) {
+	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
 		RTE_LOG(ERR, EAL,
 		"Unregistering with invalid input parameter\n");
 		return -EINVAL;
@@ -565,7 +599,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 
 	/* check if the insterrupt source for the fd is existent */
 	TAILQ_FOREACH(src, &intr_sources, next)
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_fd_get(src->intr_handle) ==
+					rte_intr_fd_get(intr_handle))
 			break;
 
 	/* No interrupt source registered for the fd */
@@ -605,7 +640,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 	struct rte_intr_callback *cb, *next;
 
 	/* do parameter checking first */
-	if (intr_handle == NULL || intr_handle->fd < 0) {
+	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
 		RTE_LOG(ERR, EAL,
 		"Unregistering with invalid input parameter\n");
 		return -EINVAL;
@@ -615,7 +650,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 
 	/* check if the insterrupt source for the fd is existent */
 	TAILQ_FOREACH(src, &intr_sources, next)
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_fd_get(src->intr_handle) ==
+					rte_intr_fd_get(intr_handle))
 			break;
 
 	/* No interrupt source registered for the fd */
@@ -646,6 +682,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 		/* all callbacks for that source are removed. */
 		if (TAILQ_EMPTY(&src->callbacks)) {
 			TAILQ_REMOVE(&intr_sources, src, next);
+			rte_intr_instance_free(src->intr_handle);
 			free(src);
 		}
 	}
@@ -677,22 +714,23 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
 int
 rte_intr_enable(const struct rte_intr_handle *intr_handle)
 {
-	int rc = 0;
+	int rc = 0, uio_cfg_fd;
 
 	if (intr_handle == NULL)
 		return -1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
 		rc = 0;
 		goto out;
 	}
 
-	if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+	uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+	if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
 		rc = -1;
 		goto out;
 	}
 
-	switch (intr_handle->type){
+	switch (rte_intr_type_get(intr_handle)) {
 	/* write to the uio fd to enable the interrupt */
 	case RTE_INTR_HANDLE_UIO:
 		if (uio_intr_enable(intr_handle))
@@ -734,7 +772,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
 	default:
 		RTE_LOG(ERR, EAL,
 			"Unknown handle type of fd %d\n",
-					intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		rc = -1;
 		break;
 	}
@@ -757,13 +795,17 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
 int
 rte_intr_ack(const struct rte_intr_handle *intr_handle)
 {
-	if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+	int uio_cfg_fd;
+
+	if (intr_handle && rte_intr_type_get(intr_handle) ==
+							RTE_INTR_HANDLE_VDEV)
 		return 0;
 
-	if (!intr_handle || intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0)
+	uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+	if (!intr_handle || rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0)
 		return -1;
 
-	switch (intr_handle->type) {
+	switch (rte_intr_type_get(intr_handle)) {
 	/* Both acking and enabling are same for UIO */
 	case RTE_INTR_HANDLE_UIO:
 		if (uio_intr_enable(intr_handle))
@@ -796,7 +838,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
 	/* unknown handle type */
 	default:
 		RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
-			intr_handle->fd);
+			rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -806,22 +848,23 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
 int
 rte_intr_disable(const struct rte_intr_handle *intr_handle)
 {
-	int rc = 0;
+	int rc = 0, uio_cfg_fd;
 
 	if (intr_handle == NULL)
 		return -1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
 		rc = 0;
 		goto out;
 	}
 
-	if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+	uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+	if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
 		rc = -1;
 		goto out;
 	}
 
-	switch (intr_handle->type){
+	switch (rte_intr_type_get(intr_handle)) {
 	/* write to the uio fd to disable the interrupt */
 	case RTE_INTR_HANDLE_UIO:
 		if (uio_intr_disable(intr_handle))
@@ -863,7 +906,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
 	default:
 		RTE_LOG(ERR, EAL,
 			"Unknown handle type of fd %d\n",
-					intr_handle->fd);
+			rte_intr_fd_get(intr_handle));
 		rc = -1;
 		break;
 	}
@@ -896,7 +939,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 		}
 		rte_spinlock_lock(&intr_lock);
 		TAILQ_FOREACH(src, &intr_sources, next)
-			if (src->intr_handle.fd ==
+			if (rte_intr_fd_get(src->intr_handle) ==
 					events[n].data.fd)
 				break;
 		if (src == NULL){
@@ -909,7 +952,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 		rte_spinlock_unlock(&intr_lock);
 
 		/* set the length to be read dor different handle type */
-		switch (src->intr_handle.type) {
+		switch (rte_intr_type_get(src->intr_handle)) {
 		case RTE_INTR_HANDLE_UIO:
 		case RTE_INTR_HANDLE_UIO_INTX:
 			bytes_read = sizeof(buf.uio_intr_count);
@@ -973,6 +1016,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 					TAILQ_REMOVE(&src->callbacks, cb, next);
 					free(cb);
 				}
+				rte_intr_instance_free(src->intr_handle);
 				free(src);
 				return -1;
 			} else if (bytes_read == 0)
@@ -1012,7 +1056,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 			if (cb->pending_delete) {
 				TAILQ_REMOVE(&src->callbacks, cb, next);
 				if (cb->ucb_fn)
-					cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+					cb->ucb_fn(src->intr_handle,
+						   cb->cb_arg);
 				free(cb);
 				rv++;
 			}
@@ -1021,6 +1066,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 		/* all callbacks for that source are removed. */
 		if (TAILQ_EMPTY(&src->callbacks)) {
 			TAILQ_REMOVE(&intr_sources, src, next);
+			rte_intr_instance_free(src->intr_handle);
 			free(src);
 		}
 
@@ -1123,16 +1169,18 @@ eal_intr_thread_main(__rte_unused void *arg)
 				continue; /* skip those with no callbacks */
 			memset(&ev, 0, sizeof(ev));
 			ev.events = EPOLLIN | EPOLLPRI | EPOLLRDHUP | EPOLLHUP;
-			ev.data.fd = src->intr_handle.fd;
+			ev.data.fd = rte_intr_fd_get(src->intr_handle);
 
 			/**
 			 * add all the uio device file descriptor
 			 * into wait list.
 			 */
 			if (epoll_ctl(pfd, EPOLL_CTL_ADD,
-					src->intr_handle.fd, &ev) < 0){
+				rte_intr_fd_get(src->intr_handle),
+								&ev) < 0) {
 				rte_panic("Error adding fd %d epoll_ctl, %s\n",
-					src->intr_handle.fd, strerror(errno));
+				rte_intr_fd_get(src->intr_handle),
+				strerror(errno));
 			}
 			else
 				numfds++;
@@ -1185,7 +1233,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
 	int bytes_read = 0;
 	int nbytes;
 
-	switch (intr_handle->type) {
+	switch (rte_intr_type_get(intr_handle)) {
 	case RTE_INTR_HANDLE_UIO:
 	case RTE_INTR_HANDLE_UIO_INTX:
 		bytes_read = sizeof(buf.uio_intr_count);
@@ -1198,7 +1246,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
 		break;
 #endif
 	case RTE_INTR_HANDLE_VDEV:
-		bytes_read = intr_handle->efd_counter_size;
+		bytes_read = rte_intr_efd_counter_size_get(intr_handle);
 		/* For vdev, number of bytes to read is set by driver */
 		break;
 	case RTE_INTR_HANDLE_EXT:
@@ -1419,8 +1467,8 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 	efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
 		(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
 
-	if (!intr_handle || intr_handle->nb_efd == 0 ||
-	    efd_idx >= intr_handle->nb_efd) {
+	if (!intr_handle || rte_intr_nb_efd_get(intr_handle) == 0 ||
+	    efd_idx >= (unsigned int)rte_intr_nb_efd_get(intr_handle)) {
 		RTE_LOG(ERR, EAL, "Wrong intr vector number.\n");
 		return -EPERM;
 	}
@@ -1428,7 +1476,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 	switch (op) {
 	case RTE_INTR_EVENT_ADD:
 		epfd_op = EPOLL_CTL_ADD;
-		rev = &intr_handle->elist[efd_idx];
+		rev = rte_intr_elist_index_get(intr_handle, efd_idx);
 		if (__atomic_load_n(&rev->status,
 				__ATOMIC_RELAXED) != RTE_EPOLL_INVALID) {
 			RTE_LOG(INFO, EAL, "Event already been added.\n");
@@ -1442,7 +1490,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 		epdata->cb_fun = (rte_intr_event_cb_t)eal_intr_proc_rxtx_intr;
 		epdata->cb_arg = (void *)intr_handle;
 		rc = rte_epoll_ctl(epfd, epfd_op,
-				   intr_handle->efds[efd_idx], rev);
+				   rte_intr_efds_index_get(intr_handle,
+								  efd_idx),
+				   rev);
 		if (!rc)
 			RTE_LOG(DEBUG, EAL,
 				"efd %d associated with vec %d added on epfd %d"
@@ -1452,7 +1502,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 		break;
 	case RTE_INTR_EVENT_DEL:
 		epfd_op = EPOLL_CTL_DEL;
-		rev = &intr_handle->elist[efd_idx];
+		rev = rte_intr_elist_index_get(intr_handle, efd_idx);
 		if (__atomic_load_n(&rev->status,
 				__ATOMIC_RELAXED) == RTE_EPOLL_INVALID) {
 			RTE_LOG(INFO, EAL, "Event does not exist.\n");
@@ -1477,8 +1527,9 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
 	uint32_t i;
 	struct rte_epoll_event *rev;
 
-	for (i = 0; i < intr_handle->nb_efd; i++) {
-		rev = &intr_handle->elist[i];
+	for (i = 0; i < (uint32_t)rte_intr_nb_efd_get(intr_handle);
+									i++) {
+		rev = rte_intr_elist_index_get(intr_handle, i);
 		if (__atomic_load_n(&rev->status,
 				__ATOMIC_RELAXED) == RTE_EPOLL_INVALID)
 			continue;
@@ -1498,7 +1549,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 
 	assert(nb_efd != 0);
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX) {
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX) {
 		for (i = 0; i < n; i++) {
 			fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
 			if (fd < 0) {
@@ -1507,21 +1558,32 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 					errno, strerror(errno));
 				return -errno;
 			}
-			intr_handle->efds[i] = fd;
+
+			if (rte_intr_efds_index_set(intr_handle, i, fd))
+				return -rte_errno;
 		}
-		intr_handle->nb_efd   = n;
-		intr_handle->max_intr = NB_OTHER_INTR + n;
-	} else if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+
+		if (rte_intr_nb_efd_set(intr_handle, n))
+			return -rte_errno;
+
+		if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR + n))
+			return -rte_errno;
+	} else if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
 		/* only check, initialization would be done in vdev driver.*/
-		if (intr_handle->efd_counter_size >
+		if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) >
 		    sizeof(union rte_intr_read_buffer)) {
 			RTE_LOG(ERR, EAL, "the efd_counter_size is oversized");
 			return -EINVAL;
 		}
 	} else {
-		intr_handle->efds[0]  = intr_handle->fd;
-		intr_handle->nb_efd   = RTE_MIN(nb_efd, 1U);
-		intr_handle->max_intr = NB_OTHER_INTR;
+		if (rte_intr_efds_index_set(intr_handle, 0,
+					    rte_intr_fd_get(intr_handle)))
+			return -rte_errno;
+		if (rte_intr_nb_efd_set(intr_handle,
+					RTE_MIN(nb_efd, 1U)))
+			return -rte_errno;
+		if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR))
+			return -rte_errno;
 	}
 
 	return 0;
@@ -1533,18 +1595,20 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
 	uint32_t i;
 
 	rte_intr_free_epoll_fd(intr_handle);
-	if (intr_handle->max_intr > intr_handle->nb_efd) {
-		for (i = 0; i < intr_handle->nb_efd; i++)
-			close(intr_handle->efds[i]);
+	if (rte_intr_max_intr_get(intr_handle) >
+				rte_intr_nb_efd_get(intr_handle)) {
+		for (i = 0; i <
+			(uint32_t)rte_intr_nb_efd_get(intr_handle); i++)
+			close(rte_intr_efds_index_get(intr_handle, i));
 	}
-	intr_handle->nb_efd = 0;
-	intr_handle->max_intr = 0;
+	rte_intr_nb_efd_set(intr_handle, 0);
+	rte_intr_max_intr_set(intr_handle, 0);
 }
 
 int
 rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
 {
-	return !(!intr_handle->nb_efd);
+	return !(!rte_intr_nb_efd_get(intr_handle));
 }
 
 int
@@ -1553,16 +1617,17 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
 	if (!rte_intr_dp_is_en(intr_handle))
 		return 1;
 	else
-		return !!(intr_handle->max_intr - intr_handle->nb_efd);
+		return !!(rte_intr_max_intr_get(intr_handle) -
+				rte_intr_nb_efd_get(intr_handle));
 }
 
 int
 rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
 {
-	if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX)
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX)
 		return 1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV)
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
 		return 1;
 
 	return 0;
-- 
2.18.0


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v3 0/7] make rte_intr_handle internal
  @ 2021-10-18 19:37  4% ` Harman Kalra
    2021-10-18 19:37  1%   ` [dpdk-dev] [PATCH v3 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
  2021-10-19 18:35  4% ` [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal Harman Kalra
  2021-10-22 20:49  4% ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
  2 siblings, 2 replies; 200+ results
From: Harman Kalra @ 2021-10-18 19:37 UTC (permalink / raw)
  To: dev; +Cc: david.marchand, dmitry.kozliuk, mdr, thomas, Harman Kalra

Move struct rte_intr_handle to an internal structure to avoid any ABI
breakage in the future, since this structure defines some static arrays
and changing the respective macros breaks the ABI.
E.g.:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
MSI-X interrupts that can be defined for a PCI device, while the PCI
specification allows a maximum of 2048 MSI-X interrupts to be used.
If some PCI device requires more than 512 vectors, either the
RTE_MAX_RXTX_INTR_VEC_ID limit must be changed or the arrays must be
allocated dynamically, based on the PCI device's MSI-X size, at probe
time. Either way it is an ABI breakage.

Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0

This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get/set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get/set APIs.
A new eal_common_interrupts.c is introduced, where all these APIs are
defined and which also hides the struct rte_intr_handle definition.
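
A rough sketch of the resulting usage model (the alloc/free calls and
the getters appear verbatim in the diffs of this series; the setter
names are assumed to mirror the getters, and fd is a placeholder):

    struct rte_intr_handle *intr_handle;

    /* opaque allocation; the layout is known only inside EAL */
    intr_handle = rte_intr_instance_alloc();
    if (intr_handle == NULL)
        return -ENOMEM;

    /* fields are reachable only through the wrapper APIs */
    rte_intr_fd_set(intr_handle, fd);   /* setter names assumed */
    rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_UIO);

    /* ... use via rte_intr_fd_get()/rte_intr_type_get() ... */

    rte_intr_instance_free(intr_handle);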

Details on each patch of the series:
Patch 1: malloc: introduce malloc is ready API
This patch introduces a new API which tells whether the DPDK memory
subsystem is initialized and the rte_malloc* APIs are ready to be
used. If rte_malloc* is set up, memory for an interrupt instance
is allocated using rte_malloc*, otherwise using the traditional heap
APIs.
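
A minimal sketch of the intent (rte_malloc_is_ready() is a guess at the
new API's name based on the patch title; the patch defines the actual
prototype):

    /* pick the allocator depending on the memory subsystem state */
    if (rte_malloc_is_ready())
        intr_handle = rte_zmalloc(NULL, size, 0);   /* DPDK heap */
    else
        intr_handle = calloc(1, size);              /* libc heap */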

Patch 2: eal/interrupts: implement get set APIs
This patch provides the prototypes and implementation of all the new
get/set APIs. Alloc APIs are implemented to allocate memory for an
interrupt handle instance. Currently most drivers define the
interrupt handle instance as static, but it can no longer be static
because the size of rte_intr_handle is unknown to the drivers. Drivers
are expected to allocate interrupt instances during initialization
and free these instances during the cleanup phase.
This patch also rearranges the headers related to the interrupt
framework. Epoll-related definitions and prototypes are moved into a
new header, i.e. rte_epoll.h, and the APIs defined in
rte_eal_interrupts.h which were driver-specific are moved to
rte_interrupts.h (as they were accessible and used outside the DPDK
library anyway). Later in the series rte_eal_interrupts.h is removed.

Patch 3: eal/interrupts: avoid direct access to interrupt handle
The interrupt framework for Linux and FreeBSD is modified to use these
get/set/alloc APIs as required and to avoid accessing the fields
directly.

Patch 4: test/interrupt: apply get set interrupt handle APIs
The interrupt test suite is updated to use the interrupt handle APIs.

Patch 5: drivers: remove direct access to interrupt handle fields
All the drivers and libraries which currently access the interrupt
handle fields directly are modified. Drivers are expected to allocate
the interrupt instance, use the get/set APIs with the allocated
interrupt handle, and free it on cleanup.
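
The expected driver lifecycle, roughly (dev and uio_fd are
placeholders; the alloc/free calls are the ones used in the diffs, the
setter name is assumed):

    /* on probe: allocate and populate the interrupt instance */
    dev->intr_handle = rte_intr_instance_alloc();
    if (dev->intr_handle == NULL)
        return -ENOMEM;
    rte_intr_fd_set(dev->intr_handle, uio_fd);  /* setter name assumed */

    /* on cleanup: release the instance */
    rte_intr_instance_free(dev->intr_handle);
    dev->intr_handle = NULL;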

Patch 6: eal/interrupts: make interrupt handle structure opaque
In this patch rte_eal_interrupts.h is removed and the struct
rte_intr_handle definition is moved to a .c file to make it completely
opaque. As part of interrupt handle allocation, arrays like efds and
elist (which are currently static) are dynamically allocated with a
default size (RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be
reallocated as per device requirement using the new API
rte_intr_handle_event_list_update(). E.g., on PCI device probe the
MSI-X size can be queried and these arrays reallocated accordingly.
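
A probe-time resize could look roughly like this (only
rte_intr_handle_event_list_update() is named by the patch; its exact
signature and the MSI-X query helper are assumptions):

    int nb_vec = pci_msix_table_size(pdev);     /* hypothetical query */

    if (nb_vec > RTE_MAX_RXTX_INTR_VEC_ID &&
        rte_intr_handle_event_list_update(intr_handle, nb_vec) < 0)
        return -ENOMEM;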

Patch 7: eal/alarm: introduce alarm fini routine
An alarm fini routine is introduced, so that the memory allocated for
the alarm interrupt instance can be freed in alarm fini.

Testing performed:
1. Validated the series by running the interrupt and alarm test suites.
2. Validated l3fwd-power functionality with octeontx2 and Intel i40e
   cards, where interrupts are expected on packet arrival.

v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif

v2:
* Merged the prototype and implementation patches into one.
* Restricted allocation to a single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library-specific APIs as internal.

v3:
* Removed the flag from the instance alloc API; instead auto-detect
whether memory should be allocated using glibc malloc APIs or
rte_malloc*.
* Added APIs to get/set the Windows handle.
* Defined macros for repeated checks.

Harman Kalra (7):
  malloc: introduce malloc is ready API
  eal/interrupts: implement get set APIs
  eal/interrupts: avoid direct access to interrupt handle
  test/interrupt: apply get set interrupt handle APIs
  drivers: remove direct access to interrupt handle
  eal/interrupts: make interrupt handle structure opaque
  eal/alarm: introduce alarm fini routine

 MAINTAINERS                                   |   1 +
 app/test/test_interrupts.c                    | 162 +++--
 drivers/baseband/acc100/rte_acc100_pmd.c      |  18 +-
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |  21 +-
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |  21 +-
 drivers/bus/auxiliary/auxiliary_common.c      |   2 +
 drivers/bus/auxiliary/linux/auxiliary.c       |   9 +
 drivers/bus/auxiliary/rte_bus_auxiliary.h     |   2 +-
 drivers/bus/dpaa/dpaa_bus.c                   |  26 +-
 drivers/bus/dpaa/rte_dpaa_bus.h               |   2 +-
 drivers/bus/fslmc/fslmc_bus.c                 |  15 +-
 drivers/bus/fslmc/fslmc_vfio.c                |  32 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |  19 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |   2 +-
 drivers/bus/fslmc/rte_fslmc.h                 |   2 +-
 drivers/bus/ifpga/ifpga_bus.c                 |  14 +-
 drivers/bus/ifpga/rte_bus_ifpga.h             |   2 +-
 drivers/bus/pci/bsd/pci.c                     |  21 +-
 drivers/bus/pci/linux/pci.c                   |   4 +-
 drivers/bus/pci/linux/pci_uio.c               |  73 +-
 drivers/bus/pci/linux/pci_vfio.c              | 115 +++-
 drivers/bus/pci/pci_common.c                  |  27 +-
 drivers/bus/pci/pci_common_uio.c              |  21 +-
 drivers/bus/pci/rte_bus_pci.h                 |   4 +-
 drivers/bus/vmbus/linux/vmbus_bus.c           |   5 +
 drivers/bus/vmbus/linux/vmbus_uio.c           |  37 +-
 drivers/bus/vmbus/rte_bus_vmbus.h             |   2 +-
 drivers/bus/vmbus/vmbus_common_uio.c          |  24 +-
 drivers/common/cnxk/roc_cpt.c                 |   8 +-
 drivers/common/cnxk/roc_dev.c                 |  14 +-
 drivers/common/cnxk/roc_irq.c                 | 108 +--
 drivers/common/cnxk/roc_nix_inl_dev_irq.c     |   8 +-
 drivers/common/cnxk/roc_nix_irq.c             |  36 +-
 drivers/common/cnxk/roc_npa.c                 |   2 +-
 drivers/common/cnxk/roc_platform.h            |  49 +-
 drivers/common/cnxk/roc_sso.c                 |   4 +-
 drivers/common/cnxk/roc_tim.c                 |   4 +-
 drivers/common/octeontx2/otx2_dev.c           |  14 +-
 drivers/common/octeontx2/otx2_irq.c           | 117 ++--
 .../octeontx2/otx2_cryptodev_hw_access.c      |   4 +-
 drivers/event/octeontx2/otx2_evdev_irq.c      |  12 +-
 drivers/mempool/octeontx2/otx2_mempool.c      |   2 +-
 drivers/net/atlantic/atl_ethdev.c             |  20 +-
 drivers/net/avp/avp_ethdev.c                  |   8 +-
 drivers/net/axgbe/axgbe_ethdev.c              |  12 +-
 drivers/net/axgbe/axgbe_mdio.c                |   6 +-
 drivers/net/bnx2x/bnx2x_ethdev.c              |  10 +-
 drivers/net/bnxt/bnxt_ethdev.c                |  33 +-
 drivers/net/bnxt/bnxt_irq.c                   |   4 +-
 drivers/net/dpaa/dpaa_ethdev.c                |  47 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  10 +-
 drivers/net/e1000/em_ethdev.c                 |  23 +-
 drivers/net/e1000/igb_ethdev.c                |  79 +--
 drivers/net/ena/ena_ethdev.c                  |  35 +-
 drivers/net/enic/enic_main.c                  |  26 +-
 drivers/net/failsafe/failsafe.c               |  22 +-
 drivers/net/failsafe/failsafe_intr.c          |  43 +-
 drivers/net/failsafe/failsafe_ops.c           |  21 +-
 drivers/net/failsafe/failsafe_private.h       |   2 +-
 drivers/net/fm10k/fm10k_ethdev.c              |  32 +-
 drivers/net/hinic/hinic_pmd_ethdev.c          |  10 +-
 drivers/net/hns3/hns3_ethdev.c                |  57 +-
 drivers/net/hns3/hns3_ethdev_vf.c             |  64 +-
 drivers/net/hns3/hns3_rxtx.c                  |   2 +-
 drivers/net/i40e/i40e_ethdev.c                |  53 +-
 drivers/net/iavf/iavf_ethdev.c                |  42 +-
 drivers/net/iavf/iavf_vchnl.c                 |   4 +-
 drivers/net/ice/ice_dcf.c                     |  10 +-
 drivers/net/ice/ice_dcf_ethdev.c              |  21 +-
 drivers/net/ice/ice_ethdev.c                  |  49 +-
 drivers/net/igc/igc_ethdev.c                  |  45 +-
 drivers/net/ionic/ionic_ethdev.c              |  17 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |  66 +-
 drivers/net/memif/memif_socket.c              | 108 ++-
 drivers/net/memif/memif_socket.h              |   4 +-
 drivers/net/memif/rte_eth_memif.c             |  59 +-
 drivers/net/memif/rte_eth_memif.h             |   2 +-
 drivers/net/mlx4/mlx4.c                       |  18 +-
 drivers/net/mlx4/mlx4.h                       |   2 +-
 drivers/net/mlx4/mlx4_intr.c                  |  47 +-
 drivers/net/mlx5/linux/mlx5_os.c              |  51 +-
 drivers/net/mlx5/linux/mlx5_socket.c          |  24 +-
 drivers/net/mlx5/mlx5.h                       |   6 +-
 drivers/net/mlx5/mlx5_rxq.c                   |  42 +-
 drivers/net/mlx5/mlx5_trigger.c               |   4 +-
 drivers/net/mlx5/mlx5_txpp.c                  |  25 +-
 drivers/net/netvsc/hn_ethdev.c                |   4 +-
 drivers/net/nfp/nfp_common.c                  |  34 +-
 drivers/net/nfp/nfp_ethdev.c                  |  13 +-
 drivers/net/nfp/nfp_ethdev_vf.c               |  13 +-
 drivers/net/ngbe/ngbe_ethdev.c                |  29 +-
 drivers/net/octeontx2/otx2_ethdev_irq.c       |  35 +-
 drivers/net/qede/qede_ethdev.c                |  16 +-
 drivers/net/sfc/sfc_intr.c                    |  30 +-
 drivers/net/tap/rte_eth_tap.c                 |  35 +-
 drivers/net/tap/rte_eth_tap.h                 |   2 +-
 drivers/net/tap/tap_intr.c                    |  32 +-
 drivers/net/thunderx/nicvf_ethdev.c           |  11 +
 drivers/net/thunderx/nicvf_struct.h           |   2 +-
 drivers/net/txgbe/txgbe_ethdev.c              |  34 +-
 drivers/net/txgbe/txgbe_ethdev_vf.c           |  33 +-
 drivers/net/vhost/rte_eth_vhost.c             |  75 +-
 drivers/net/virtio/virtio_ethdev.c            |  21 +-
 .../net/virtio/virtio_user/virtio_user_dev.c  |  47 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.c          |  43 +-
 drivers/raw/ifpga/ifpga_rawdev.c              |  61 +-
 drivers/raw/ntb/ntb.c                         |   9 +-
 .../regex/octeontx2/otx2_regexdev_hw_access.c |   4 +-
 drivers/vdpa/ifc/ifcvf_vdpa.c                 |   5 +-
 drivers/vdpa/mlx5/mlx5_vdpa.c                 |   9 +
 drivers/vdpa/mlx5/mlx5_vdpa.h                 |   4 +-
 drivers/vdpa/mlx5/mlx5_vdpa_event.c           |  22 +-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c           |  44 +-
 lib/bbdev/rte_bbdev.c                         |   4 +-
 lib/eal/common/eal_common_interrupts.c        | 586 ++++++++++++++++
 lib/eal/common/eal_private.h                  |  11 +
 lib/eal/common/malloc_heap.c                  |  16 +-
 lib/eal/common/malloc_heap.h                  |   3 +
 lib/eal/common/meson.build                    |   1 +
 lib/eal/freebsd/eal.c                         |   1 +
 lib/eal/freebsd/eal_alarm.c                   |  52 +-
 lib/eal/freebsd/eal_interrupts.c              |  92 ++-
 lib/eal/include/meson.build                   |   2 +-
 lib/eal/include/rte_eal_interrupts.h          | 269 --------
 lib/eal/include/rte_eal_trace.h               |  24 +-
 lib/eal/include/rte_epoll.h                   | 118 ++++
 lib/eal/include/rte_interrupts.h              | 650 +++++++++++++++++-
 lib/eal/linux/eal.c                           |   1 +
 lib/eal/linux/eal_alarm.c                     |  37 +-
 lib/eal/linux/eal_dev.c                       |  63 +-
 lib/eal/linux/eal_interrupts.c                | 287 +++++---
 lib/eal/version.map                           |  47 +-
 lib/ethdev/ethdev_pci.h                       |   2 +-
 lib/ethdev/rte_ethdev.c                       |  14 +-
 134 files changed, 3568 insertions(+), 1709 deletions(-)
 create mode 100644 lib/eal/common/eal_common_interrupts.c
 delete mode 100644 lib/eal/include/rte_eal_interrupts.h
 create mode 100644 lib/eal/include/rte_epoll.h

-- 
2.18.0


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v6 0/6] hide eth dev related structures
  2021-10-18 16:04  0%     ` Ali Alnubani
@ 2021-10-18 16:47  0%       ` Ferruh Yigit
  2021-10-18 23:47  0%         ` Ajit Khaparde
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-10-18 16:47 UTC (permalink / raw)
  To: Ali Alnubani, Konstantin Ananyev, dev, jerinj, Ajit Khaparde,
	Raslan Darawsheh, Andrew Rybchenko, Qi Zhang,
	Honnappa Nagarahalli
  Cc: xiaoyun.li, anoobj, ndabilpuram, adwivedi, shepard.siegel,
	ed.czeck, john.miller, irusskikh, somnath.kotur,
	rahul.lakkireddy, hemant.agrawal, sachin.saxena, haiyue.wang,
	johndale, hyonkim, xiao.w.wang, humin29, yisen.zhuang, oulijun,
	beilei.xing, jingjing.wu, qiming.yang, Matan Azrad,
	Slava Ovsiienko, sthemmin, NBU-Contact-longli, heinrich.kuhn,
	kirankumark, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, NBU-Contact-Thomas Monjalon, mdr, jay.jayatheerthan

On 10/18/2021 5:04 PM, Ali Alnubani wrote:
>> -----Original Message-----
>> From: dev <dev-bounces@dpdk.org> On Behalf Of Ferruh Yigit
>> Sent: Wednesday, October 13, 2021 11:16 PM
>> To: Konstantin Ananyev <konstantin.ananyev@intel.com>; dev@dpdk.org;
>> jerinj@marvell.com; Ajit Khaparde <ajit.khaparde@broadcom.com>; Raslan
>> Darawsheh <rasland@nvidia.com>; Andrew Rybchenko
>> <andrew.rybchenko@oktetlabs.ru>; Qi Zhang <qi.z.zhang@intel.com>;
>> Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
>> Cc: xiaoyun.li@intel.com; anoobj@marvell.com; jerinj@marvell.com;
>> ndabilpuram@marvell.com; adwivedi@marvell.com;
>> shepard.siegel@atomicrules.com; ed.czeck@atomicrules.com;
>> john.miller@atomicrules.com; irusskikh@marvell.com;
>> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
>> rahul.lakkireddy@chelsio.com; hemant.agrawal@nxp.com;
>> sachin.saxena@oss.nxp.com; haiyue.wang@intel.com; johndale@cisco.com;
>> hyonkim@cisco.com; qi.z.zhang@intel.com; xiao.w.wang@intel.com;
>> humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com;
>> beilei.xing@intel.com; jingjing.wu@intel.com; qiming.yang@intel.com;
>> Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
>> <viacheslavo@nvidia.com>; sthemmin@microsoft.com; NBU-Contact-longli
>> <longli@microsoft.com>; heinrich.kuhn@corigine.com;
>> kirankumark@marvell.com; andrew.rybchenko@oktetlabs.ru;
>> mczekaj@marvell.com; jiawenwu@trustnetic.com;
>> jianwang@trustnetic.com; maxime.coquelin@redhat.com;
>> chenbo.xia@intel.com; NBU-Contact-Thomas Monjalon
>> <thomas@monjalon.net>; mdr@ashroe.eu; jay.jayatheerthan@intel.com
>> Subject: Re: [dpdk-dev] [PATCH v6 0/6] hide eth dev related structures
>>
>> On 10/13/2021 2:36 PM, Konstantin Ananyev wrote:
>>> v6 changes:
>>> - Update comments (Andrew)
>>> - Move callback related variables under corresponding ifdefs (Andrew)
>>> - Few nits in rte_eth_macaddrs_get (Andrew)
>>> - Rebased on top of next-net tree
>>>
>>> v5 changes:
>>> - Fix spelling (Thomas/David)
>>> - Rename internal helper functions (David)
>>> - Reorder patches and update commit messages (Thomas)
>>> - Update comments (Thomas)
>>> - Changed layout in rte_eth_fp_ops, to group functions and
>>>      related data based on their functionality:
>>>      first 64B line for Rx, second one for Tx.
>>>      Didn't observe any real performance difference compared to the
>>>      original layout. Though decided to keep the new one, as it seems
>>>      a bit more plausible.
>>>
>>> v4 changes:
>>>    - Fix secondary process attach (Pavan)
>>>    - Fix build failure (Ferruh)
>>>    - Update lib/ethdev/version.map (Ferruh)
>>>      Note that moving newly added symbols from EXPERIMENTAL to DPDK_22
>>>      section makes checkpatch.sh complain.
>>>
>>> v3 changes:
>>>    - Changes in public struct naming (Jerin/Haiyue)
>>>    - Split patches
>>>    - Update docs
>>>    - Shamelessly included Andrew's patch:
>>>      https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-
>> 1-andrew.rybchenko@oktetlabs.ru/
>>>      into these series.
>>>      I have to do a similar thing here, so decided to avoid duplicated effort.
>>>
>>> The aim of these patch series is to make rte_ethdev core data structures
>>> (rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
>>> DPDK and not visible to the user.
>>> That should allow future possible changes to core ethdev related structures
>>> to be transparent to the user and help to improve ABI/API stability.
>>> Note that current ethdev API is preserved, but it is a formal ABI break.
>>>
>>> The work is based on previous discussions at:
>>> https://www.mail-archive.com/dev@dpdk.org/msg211405.html
>>> https://www.mail-archive.com/dev@dpdk.org/msg216685.html
>>> and consists of the following main points:
>>> 1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
>>>      related data pointer from rte_eth_dev into a separate flat array.
>>>      We keep it public to still be able to use inline functions for these
>>>      'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>>>      Note that apart from the function pointers themselves, each element
>>>      of this flat array also contains two opaque pointers for each ethdev:
>>>      1) a pointer to an array of internal queue data pointers
>>>      2) a pointer to an array of queue callback data pointers.
>>>      Note that exposing this extra information allows us to avoid extra
>>>      changes at the PMD level, plus it should help to avoid possible
>>>      performance degradation.
>>> 2. Change implementation of 'fast' inline ethdev functions
>>>      (rte_eth_rx_burst(), etc.) to use new public flat array.
>>>      While it is an ABI breakage, this change is intended to be transparent
>>>      for both users (no changes in user apps are required) and PMD developers
>>>      (no changes in PMDs are required).
>>>      One extra note - with the new implementation, RX/TX callback invocation
>>>      will cost one extra function call. That might cause some slowdown for
>>>      code paths with RX/TX callbacks heavily involved.
>>>      Hope such a trade-off is acceptable for the community.
>>> 3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and
>> related
>>>      things into internal header: <ethdev_driver.h>.
>>>
>>> That approach was selected to:
>>>     - Avoid(/minimize) possible performance losses.
>>>     - Minimize required changes inside PMDs.
>>>
>>> Performance testing results (ICX 2.0GHz, E810 (ice)):
>>>    - testpmd macswap fwd mode, plus
>>>      a) no RX/TX callbacks:
>>>         no actual slowdown observed
>>>      b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
>>>         ~2% slowdown
>>>    - l3fwd: no actual slowdown observed
>>>
>>> Would like to thank everyone who already reviewed and tested previous
>>> versions of these series. All other interested parties please don't be shy
>>> and provide your feedback.
>>>
>>> Konstantin Ananyev (6):
>>>     ethdev: allocate max space for internal queue array
>>>     ethdev: change input parameters for rx_queue_count
>>>     ethdev: copy fast-path API into separate structure
>>>     ethdev: make fast-path functions to use new flat array
>>>     ethdev: add API to retrieve multiple ethernet addresses
>>>     ethdev: hide eth dev related structures
>>>
>>
>> For series,
>> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>
>> No performance regression detected on my testing.
>>
>> I am merging the series to next-net now which helps testing,
>> but before merging to main repo it will be good to get more
>> ack and test results (I can squash new tags later).
>>
>> @Jerin, @Ajit, @Raslan, @Andrew, @Qi, @Honnappa,
>> Can you please test this set for any possible regression?
>>
>> Series applied to dpdk-next-net/main, thanks.
>>
> 
> Tested (on dpdk-next-net/main) single and multi-core packet forwarding performance with testpmd on both ConnectX-5 and ConnectX-6 Dx. I didn't see any noticeable regressions.
> 

Thanks!

At this stage I am putting the set into the pull request for the main repo.
Last day for anyone who wants to test the set.
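
To make the dispatch concrete, here is a minimal sketch of the flat-array
fast path described in the quoted cover letter. Names and layout are
simplified and partly hypothetical (only rte_eth_fp_ops is named by the
series; the "_sketch" identifiers below are illustrative) -- the
authoritative definitions are in lib/ethdev in the patches themselves:

	#include <stdint.h>

	struct rte_mbuf; /* opaque here; only pointers are passed */

	typedef uint16_t (*eth_rx_burst_t)(void *rxq,
					   struct rte_mbuf **rx_pkts,
					   uint16_t nb_pkts);

	/* One entry per port. Kept public so the inline wrapper below
	 * needs no indirection through struct rte_eth_dev. */
	struct fp_ops_sketch {
		eth_rx_burst_t rx_pkt_burst; /* 'fast' Rx entry point  */
		void **rxq_data;             /* per-queue private data */
		/* ... Tx entry point and the two opaque callback-data
		 * pointers are grouped the same way ... */
	};

	extern struct fp_ops_sketch fp_ops_sketch[];

	static inline uint16_t
	sketch_rx_burst(uint16_t port_id, uint16_t queue_id,
			struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
	{
		struct fp_ops_sketch *p = &fp_ops_sketch[port_id];

		return p->rx_pkt_burst(p->rxq_data[queue_id],
				       rx_pkts, nb_pkts);
	}

With RX/TX callbacks enabled, the wrapper jumps through one extra
function instead, which is consistent with the ~2% slowdown reported
above for the callback-heavy case.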

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v6 0/6] hide eth dev related structures
  @ 2021-10-18 16:04  0%     ` Ali Alnubani
  2021-10-18 16:47  0%       ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Ali Alnubani @ 2021-10-18 16:04 UTC (permalink / raw)
  To: Ferruh Yigit, Konstantin Ananyev, dev, jerinj, Ajit Khaparde,
	Raslan Darawsheh, Andrew Rybchenko, Qi Zhang,
	Honnappa Nagarahalli
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	Matan Azrad, Slava Ovsiienko, sthemmin, NBU-Contact-longli,
	heinrich.kuhn, kirankumark, andrew.rybchenko, mczekaj, jiawenwu,
	jianwang, maxime.coquelin, chenbo.xia,
	NBU-Contact-Thomas Monjalon, mdr, jay.jayatheerthan

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Ferruh Yigit
> Sent: Wednesday, October 13, 2021 11:16 PM
> To: Konstantin Ananyev <konstantin.ananyev@intel.com>; dev@dpdk.org;
> jerinj@marvell.com; Ajit Khaparde <ajit.khaparde@broadcom.com>; Raslan
> Darawsheh <rasland@nvidia.com>; Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>; Qi Zhang <qi.z.zhang@intel.com>;
> Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Cc: xiaoyun.li@intel.com; anoobj@marvell.com; jerinj@marvell.com;
> ndabilpuram@marvell.com; adwivedi@marvell.com;
> shepard.siegel@atomicrules.com; ed.czeck@atomicrules.com;
> john.miller@atomicrules.com; irusskikh@marvell.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
> rahul.lakkireddy@chelsio.com; hemant.agrawal@nxp.com;
> sachin.saxena@oss.nxp.com; haiyue.wang@intel.com; johndale@cisco.com;
> hyonkim@cisco.com; qi.z.zhang@intel.com; xiao.w.wang@intel.com;
> humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com;
> beilei.xing@intel.com; jingjing.wu@intel.com; qiming.yang@intel.com;
> Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; sthemmin@microsoft.com; NBU-Contact-longli
> <longli@microsoft.com>; heinrich.kuhn@corigine.com;
> kirankumark@marvell.com; andrew.rybchenko@oktetlabs.ru;
> mczekaj@marvell.com; jiawenwu@trustnetic.com;
> jianwang@trustnetic.com; maxime.coquelin@redhat.com;
> chenbo.xia@intel.com; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; mdr@ashroe.eu; jay.jayatheerthan@intel.com
> Subject: Re: [dpdk-dev] [PATCH v6 0/6] hide eth dev related structures
> 
> On 10/13/2021 2:36 PM, Konstantin Ananyev wrote:
> > v6 changes:
> > - Update comments (Andrew)
> > - Move callback related variables under corresponding ifdefs (Andrew)
> > - Few nits in rte_eth_macaddrs_get (Andrew)
> > - Rebased on top of next-net tree
> >
> > v5 changes:
> > - Fix spelling (Thomas/David)
> > - Rename internal helper functions (David)
> > - Reorder patches and update commit messages (Thomas)
> > - Update comments (Thomas)
> > - Changed layout in rte_eth_fp_ops, to group functions and
> >     related data based on their functionality:
> >     first 64B line for Rx, second one for Tx.
> >     Didn't observe any real performance difference compared to the
> >     original layout. Though decided to keep the new one, as it seems
> >     a bit more plausible.
> >
> > v4 changes:
> >   - Fix secondary process attach (Pavan)
> >   - Fix build failure (Ferruh)
> >   - Update lib/ethdev/version.map (Ferruh)
> >     Note that moving newly added symbols from EXPERIMENTAL to DPDK_22
> >     section makes checkpatch.sh complain.
> >
> > v3 changes:
> >   - Changes in public struct naming (Jerin/Haiyue)
> >   - Split patches
> >   - Update docs
> >   - Shamelessly included Andrew's patch:
> >     https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-
> 1-andrew.rybchenko@oktetlabs.ru/
> >     into these series.
> >     I have to do a similar thing here, so decided to avoid duplicated effort.
> >
> > The aim of these patch series is to make rte_ethdev core data structures
> > (rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
> > DPDK and not visible to the user.
> > That should allow future possible changes to core ethdev related structures
> > to be transparent to the user and help to improve ABI/API stability.
> > Note that current ethdev API is preserved, but it is a formal ABI break.
> >
> > The work is based on previous discussions at:
> > https://www.mail-archive.com/dev@dpdk.org/msg211405.html
> > https://www.mail-archive.com/dev@dpdk.org/msg216685.html
> > and consists of the following main points:
> > 1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
> >     related data pointer from rte_eth_dev into a separate flat array.
> >     We keep it public to still be able to use inline functions for these
> >     'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> >     Note that apart from the function pointers themselves, each element
> >     of this flat array also contains two opaque pointers for each ethdev:
> >     1) a pointer to an array of internal queue data pointers
> >     2) a pointer to an array of queue callback data pointers.
> >     Note that exposing this extra information allows us to avoid extra
> >     changes at the PMD level, plus it should help to avoid possible
> >     performance degradation.
> > 2. Change implementation of 'fast' inline ethdev functions
> >     (rte_eth_rx_burst(), etc.) to use new public flat array.
> >     While it is an ABI breakage, this change is intended to be transparent
> >     for both users (no changes in user apps are required) and PMD developers
> >     (no changes in PMDs are required).
> >     One extra note - with the new implementation, RX/TX callback invocation
> >     will cost one extra function call. That might cause some slowdown for
> >     code paths with RX/TX callbacks heavily involved.
> >     Hope such a trade-off is acceptable for the community.
> > 3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and
> related
> >     things into internal header: <ethdev_driver.h>.
> >
> > That approach was selected to:
> >    - Avoid(/minimize) possible performance losses.
> >    - Minimize required changes inside PMDs.
> >
> > Performance testing results (ICX 2.0GHz, E810 (ice)):
> >   - testpmd macswap fwd mode, plus
> >     a) no RX/TX callbacks:
> >        no actual slowdown observed
> >     b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
> >        ~2% slowdown
> >   - l3fwd: no actual slowdown observed
> >
> > Would like to thank everyone who already reviewed and tested previous
> > versions of these series. All other interested parties please don't be shy
> > and provide your feedback.
> >
> > Konstantin Ananyev (6):
> >    ethdev: allocate max space for internal queue array
> >    ethdev: change input parameters for rx_queue_count
> >    ethdev: copy fast-path API into separate structure
> >    ethdev: make fast-path functions to use new flat array
> >    ethdev: add API to retrieve multiple ethernet addresses
> >    ethdev: hide eth dev related structures
> >
> 
> For series,
> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
> 
> No performance regression detected on my testing.
> 
> I am merging the series to next-net now which helps testing,
> but before merging to main repo it will be good to get more
> ack and test results (I can squash new tags later).
> 
> @Jerin, @Ajit, @Raslan, @Andrew, @Qi, @Honnappa,
> Can you please test this set for any possible regression?
> 
> Series applied to dpdk-next-net/main, thanks.
> 

Tested (on dpdk-next-net/main) single and multi-core packet forwarding performance with testpmd on both ConnectX-5 and ConnectX-6 Dx. I didn't see any noticeable regressions.

Thanks,
Ali

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v4] ethdev: add namespace
  @ 2021-10-18 15:43  1% ` Ferruh Yigit
  2021-10-20 19:23  1%   ` [dpdk-dev] [PATCH v5] " Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-10-18 15:43 UTC (permalink / raw)
  To: Maryam Tahhan, Reshma Pattan, Jerin Jacob, Wisam Jaddo,
	Cristian Dumitrescu, Xiaoyun Li, Thomas Monjalon,
	Andrew Rybchenko, Jay Jayatheerthan, Chas Williams,
	Min Hu (Connor),
	Pavan Nikhilesh, Shijith Thotton, Ajit Khaparde, Somnath Kotur,
	John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, Beilei Xing,
	Haiyue Wang, Matan Azrad, Viacheslav Ovsiienko, Keith Wiles,
	Jiayu Hu, Olivier Matz, Ori Kam, Akhil Goyal, Declan Doherty,
	Ray Kinsella, Radu Nicolau, Hemant Agrawal, Sachin Saxena,
	Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	John W. Linville, Ciara Loftus, Shepard Siegel, Ed Czeck,
	John Miller, Igor Russkikh, Steven Webster, Matt Peters,
	Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh,
	Bruce Richardson, Konstantin Ananyev, Ruifeng Wang,
	Rahul Lakkireddy, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
	Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, Gaetan Rivet,
	Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou,
	Jingjing Wu, Qiming Yang, Andrew Boyer, Rosen Xu,
	Srisivasubramanian Srinivasan, Jakub Grajciar, Zyta Szpak,
	Liron Himi, Stephen Hemminger, Long Li, Martin Spinler,
	Heinrich Kuhn, Jiawen Wu, Tetsuya Mukawa, Harman Kalra,
	Anoob Joseph, Nalla Pradeep, Radha Mohan Chintakuntla,
	Veerasenareddy Burru, Devendra Singh Rawat, Jasvinder Singh,
	Maciej Czekaj, Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang,
	Nicolas Chautru, David Hunt, Harry van Haaren, Bernard Iremonger,
	Anatoly Burakov, John McNamara, Kirill Rybalchenko, Byron Marohn,
	Yipeng Wang
  Cc: Ferruh Yigit, dev, Tyler Retzlaff, David Marchand

Add 'RTE_ETH' namespace to all enums & macros in a backward compatible
way. The macros for backward compatibility can be removed in next LTS.
Also updated some struct names to have 'rte_eth' prefix.

All internal components switched to using new names.

Syntax fixed on lines that this patch touches.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
Cc: David Marchand <david.marchand@redhat.com>
Cc: Thomas Monjalon <thomas@monjalon.net>

v2:
* Updated internal components
* Removed deprecation notice

v3:
* Updated missing macros / structs that David highlighted
* Added release notes update

v4:
* rebased on latest next-net
* depends on https://patches.dpdk.org/user/todo/dpdk/?series=19744
* Not able to complete scripts to update user code, although some were
  shared by Aman:
  https://patches.dpdk.org/project/dpdk/patch/20211008102949.70716-1-aman.deep.singh@intel.com/
  Sending a new version as a possible option to get this patch into -rc1
  and work on the scripts later, before the release.
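
As a concrete illustration of the backward-compatible rename, a sketch
only -- the real definitions and flag values live in
lib/ethdev/rte_ethdev.h:

	/* The new RTE_ETH_-prefixed name becomes canonical ... */
	#define RTE_ETH_RX_OFFLOAD_CHECKSUM \
		(RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
		 RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
		 RTE_ETH_RX_OFFLOAD_TCP_CKSUM)

	/* ... and the legacy name stays as a plain alias, removable
	 * in the next LTS. */
	#define DEV_RX_OFFLOAD_CHECKSUM	RTE_ETH_RX_OFFLOAD_CHECKSUM

Applications still using the old DEV_/ETH_ names keep compiling
unchanged, while all in-tree code in this patch is switched to the new
names.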
---
 app/proc-info/main.c                          |    8 +-
 app/test-eventdev/test_perf_common.c          |    4 +-
 app/test-eventdev/test_pipeline_common.c      |   10 +-
 app/test-flow-perf/config.h                   |    2 +-
 app/test-pipeline/init.c                      |    8 +-
 app/test-pmd/cmdline.c                        |  286 ++---
 app/test-pmd/config.c                         |  200 ++--
 app/test-pmd/csumonly.c                       |   28 +-
 app/test-pmd/flowgen.c                        |    6 +-
 app/test-pmd/macfwd.c                         |    6 +-
 app/test-pmd/macswap_common.h                 |    6 +-
 app/test-pmd/parameters.c                     |   54 +-
 app/test-pmd/testpmd.c                        |   52 +-
 app/test-pmd/testpmd.h                        |    2 +-
 app/test-pmd/txonly.c                         |    6 +-
 app/test/test_ethdev_link.c                   |   68 +-
 app/test/test_event_eth_rx_adapter.c          |    4 +-
 app/test/test_kni.c                           |    2 +-
 app/test/test_link_bonding.c                  |    4 +-
 app/test/test_link_bonding_mode4.c            |    4 +-
 app/test/test_link_bonding_rssconf.c          |   28 +-
 app/test/test_pmd_perf.c                      |   12 +-
 app/test/virtual_pmd.c                        |   10 +-
 doc/guides/eventdevs/cnxk.rst                 |    2 +-
 doc/guides/eventdevs/octeontx2.rst            |    2 +-
 doc/guides/nics/af_packet.rst                 |    2 +-
 doc/guides/nics/bnxt.rst                      |   24 +-
 doc/guides/nics/enic.rst                      |    2 +-
 doc/guides/nics/features.rst                  |  114 +-
 doc/guides/nics/fm10k.rst                     |    6 +-
 doc/guides/nics/intel_vf.rst                  |   10 +-
 doc/guides/nics/ixgbe.rst                     |   12 +-
 doc/guides/nics/mlx5.rst                      |    4 +-
 doc/guides/nics/tap.rst                       |    2 +-
 .../generic_segmentation_offload_lib.rst      |    8 +-
 doc/guides/prog_guide/mbuf_lib.rst            |   18 +-
 doc/guides/prog_guide/poll_mode_drv.rst       |    8 +-
 doc/guides/prog_guide/rte_flow.rst            |   34 +-
 doc/guides/prog_guide/rte_security.rst        |    2 +-
 doc/guides/rel_notes/deprecation.rst          |   10 +-
 doc/guides/rel_notes/release_21_11.rst        |    3 +
 doc/guides/sample_app_ug/ipsec_secgw.rst      |    4 +-
 doc/guides/testpmd_app_ug/run_app.rst         |    2 +-
 drivers/bus/dpaa/include/process.h            |   16 +-
 drivers/common/cnxk/roc_npc.h                 |    2 +-
 drivers/net/af_packet/rte_eth_af_packet.c     |   20 +-
 drivers/net/af_xdp/rte_eth_af_xdp.c           |   12 +-
 drivers/net/ark/ark_ethdev.c                  |   16 +-
 drivers/net/atlantic/atl_ethdev.c             |   88 +-
 drivers/net/atlantic/atl_ethdev.h             |   18 +-
 drivers/net/atlantic/atl_rxtx.c               |    6 +-
 drivers/net/avp/avp_ethdev.c                  |   26 +-
 drivers/net/axgbe/axgbe_dev.c                 |    6 +-
 drivers/net/axgbe/axgbe_ethdev.c              |  104 +-
 drivers/net/axgbe/axgbe_ethdev.h              |   12 +-
 drivers/net/axgbe/axgbe_mdio.c                |    2 +-
 drivers/net/axgbe/axgbe_rxtx.c                |    6 +-
 drivers/net/bnx2x/bnx2x_ethdev.c              |   12 +-
 drivers/net/bnxt/bnxt.h                       |   62 +-
 drivers/net/bnxt/bnxt_ethdev.c                |  172 +--
 drivers/net/bnxt/bnxt_flow.c                  |    6 +-
 drivers/net/bnxt/bnxt_hwrm.c                  |  112 +-
 drivers/net/bnxt/bnxt_reps.c                  |    2 +-
 drivers/net/bnxt/bnxt_ring.c                  |    4 +-
 drivers/net/bnxt/bnxt_rxq.c                   |   28 +-
 drivers/net/bnxt/bnxt_rxr.c                   |    4 +-
 drivers/net/bnxt/bnxt_rxtx_vec_avx2.c         |    2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_common.h       |    2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_neon.c         |    2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_sse.c          |    2 +-
 drivers/net/bnxt/bnxt_txr.c                   |    4 +-
 drivers/net/bnxt/bnxt_vnic.c                  |   30 +-
 drivers/net/bnxt/rte_pmd_bnxt.c               |    8 +-
 drivers/net/bonding/eth_bond_private.h        |    4 +-
 drivers/net/bonding/rte_eth_bond_8023ad.c     |   16 +-
 drivers/net/bonding/rte_eth_bond_api.c        |    6 +-
 drivers/net/bonding/rte_eth_bond_pmd.c        |   50 +-
 drivers/net/cnxk/cn10k_ethdev.c               |   42 +-
 drivers/net/cnxk/cn10k_rx.c                   |    4 +-
 drivers/net/cnxk/cn10k_tx.c                   |    4 +-
 drivers/net/cnxk/cn9k_ethdev.c                |   60 +-
 drivers/net/cnxk/cn9k_rx.c                    |    4 +-
 drivers/net/cnxk/cn9k_tx.c                    |    4 +-
 drivers/net/cnxk/cnxk_ethdev.c                |  112 +-
 drivers/net/cnxk/cnxk_ethdev.h                |   49 +-
 drivers/net/cnxk/cnxk_ethdev_devargs.c        |    6 +-
 drivers/net/cnxk/cnxk_ethdev_ops.c            |  106 +-
 drivers/net/cnxk/cnxk_link.c                  |   14 +-
 drivers/net/cnxk/cnxk_ptp.c                   |    4 +-
 drivers/net/cnxk/cnxk_rte_flow.c              |    2 +-
 drivers/net/cxgbe/cxgbe.h                     |   46 +-
 drivers/net/cxgbe/cxgbe_ethdev.c              |   42 +-
 drivers/net/cxgbe/cxgbe_main.c                |   12 +-
 drivers/net/dpaa/dpaa_ethdev.c                |  180 +--
 drivers/net/dpaa/dpaa_ethdev.h                |   10 +-
 drivers/net/dpaa/dpaa_flow.c                  |   32 +-
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c        |   47 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  138 +--
 drivers/net/dpaa2/dpaa2_ethdev.h              |   22 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |    8 +-
 drivers/net/e1000/e1000_ethdev.h              |   18 +-
 drivers/net/e1000/em_ethdev.c                 |   64 +-
 drivers/net/e1000/em_rxtx.c                   |   38 +-
 drivers/net/e1000/igb_ethdev.c                |  158 +--
 drivers/net/e1000/igb_pf.c                    |    2 +-
 drivers/net/e1000/igb_rxtx.c                  |  116 +-
 drivers/net/ena/ena_ethdev.c                  |   66 +-
 drivers/net/ena/ena_ethdev.h                  |    4 +-
 drivers/net/ena/ena_rss.c                     |   74 +-
 drivers/net/enetc/enetc_ethdev.c              |   30 +-
 drivers/net/enic/enic.h                       |    2 +-
 drivers/net/enic/enic_ethdev.c                |   88 +-
 drivers/net/enic/enic_main.c                  |   40 +-
 drivers/net/enic/enic_res.c                   |   50 +-
 drivers/net/failsafe/failsafe.c               |    8 +-
 drivers/net/failsafe/failsafe_intr.c          |    4 +-
 drivers/net/failsafe/failsafe_ops.c           |   78 +-
 drivers/net/fm10k/fm10k.h                     |    4 +-
 drivers/net/fm10k/fm10k_ethdev.c              |  146 +--
 drivers/net/fm10k/fm10k_rxtx_vec.c            |    6 +-
 drivers/net/hinic/base/hinic_pmd_hwdev.c      |   22 +-
 drivers/net/hinic/hinic_pmd_ethdev.c          |  136 +--
 drivers/net/hinic/hinic_pmd_rx.c              |   36 +-
 drivers/net/hinic/hinic_pmd_rx.h              |   22 +-
 drivers/net/hns3/hns3_dcb.c                   |   14 +-
 drivers/net/hns3/hns3_ethdev.c                |  352 +++---
 drivers/net/hns3/hns3_ethdev.h                |   12 +-
 drivers/net/hns3/hns3_ethdev_vf.c             |  100 +-
 drivers/net/hns3/hns3_flow.c                  |    6 +-
 drivers/net/hns3/hns3_ptp.c                   |    2 +-
 drivers/net/hns3/hns3_rss.c                   |  108 +-
 drivers/net/hns3/hns3_rss.h                   |   28 +-
 drivers/net/hns3/hns3_rxtx.c                  |   30 +-
 drivers/net/hns3/hns3_rxtx.h                  |    2 +-
 drivers/net/hns3/hns3_rxtx_vec.c              |   10 +-
 drivers/net/i40e/i40e_ethdev.c                |  272 ++---
 drivers/net/i40e/i40e_ethdev.h                |   24 +-
 drivers/net/i40e/i40e_flow.c                  |   32 +-
 drivers/net/i40e/i40e_hash.c                  |  156 +--
 drivers/net/i40e/i40e_pf.c                    |   14 +-
 drivers/net/i40e/i40e_rxtx.c                  |    8 +-
 drivers/net/i40e/i40e_rxtx.h                  |    4 +-
 drivers/net/i40e/i40e_rxtx_vec_avx512.c       |    2 +-
 drivers/net/i40e/i40e_rxtx_vec_common.h       |    8 +-
 drivers/net/i40e/i40e_vf_representor.c        |   48 +-
 drivers/net/iavf/iavf.h                       |   24 +-
 drivers/net/iavf/iavf_ethdev.c                |  178 +--
 drivers/net/iavf/iavf_hash.c                  |  320 ++---
 drivers/net/iavf/iavf_rxtx.c                  |    2 +-
 drivers/net/iavf/iavf_rxtx.h                  |   24 +-
 drivers/net/iavf/iavf_rxtx_vec_avx2.c         |    4 +-
 drivers/net/iavf/iavf_rxtx_vec_avx512.c       |    6 +-
 drivers/net/iavf/iavf_rxtx_vec_sse.c          |    2 +-
 drivers/net/ice/ice_dcf.c                     |    2 +-
 drivers/net/ice/ice_dcf_ethdev.c              |   86 +-
 drivers/net/ice/ice_dcf_vf_representor.c      |   56 +-
 drivers/net/ice/ice_ethdev.c                  |  180 +--
 drivers/net/ice/ice_ethdev.h                  |   26 +-
 drivers/net/ice/ice_hash.c                    |  290 ++---
 drivers/net/ice/ice_rxtx.c                    |   16 +-
 drivers/net/ice/ice_rxtx_vec_avx2.c           |    2 +-
 drivers/net/ice/ice_rxtx_vec_avx512.c         |    4 +-
 drivers/net/ice/ice_rxtx_vec_common.h         |   28 +-
 drivers/net/ice/ice_rxtx_vec_sse.c            |    2 +-
 drivers/net/igc/igc_ethdev.c                  |  138 +--
 drivers/net/igc/igc_ethdev.h                  |   54 +-
 drivers/net/igc/igc_txrx.c                    |   48 +-
 drivers/net/ionic/ionic_ethdev.c              |  138 +--
 drivers/net/ionic/ionic_ethdev.h              |   12 +-
 drivers/net/ionic/ionic_lif.c                 |   36 +-
 drivers/net/ionic/ionic_rxtx.c                |   10 +-
 drivers/net/ipn3ke/ipn3ke_representor.c       |   64 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |  285 +++--
 drivers/net/ixgbe/ixgbe_ethdev.h              |   18 +-
 drivers/net/ixgbe/ixgbe_fdir.c                |   24 +-
 drivers/net/ixgbe/ixgbe_flow.c                |    2 +-
 drivers/net/ixgbe/ixgbe_ipsec.c               |   12 +-
 drivers/net/ixgbe/ixgbe_pf.c                  |   34 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                |  249 ++--
 drivers/net/ixgbe/ixgbe_rxtx.h                |    4 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h     |    2 +-
 drivers/net/ixgbe/ixgbe_tm.c                  |   16 +-
 drivers/net/ixgbe/ixgbe_vf_representor.c      |   16 +-
 drivers/net/ixgbe/rte_pmd_ixgbe.c             |   14 +-
 drivers/net/ixgbe/rte_pmd_ixgbe.h             |    4 +-
 drivers/net/kni/rte_eth_kni.c                 |    8 +-
 drivers/net/liquidio/lio_ethdev.c             |  114 +-
 drivers/net/memif/memif_socket.c              |    2 +-
 drivers/net/memif/rte_eth_memif.c             |   16 +-
 drivers/net/mlx4/mlx4_ethdev.c                |   32 +-
 drivers/net/mlx4/mlx4_flow.c                  |   30 +-
 drivers/net/mlx4/mlx4_intr.c                  |    8 +-
 drivers/net/mlx4/mlx4_rxq.c                   |   18 +-
 drivers/net/mlx4/mlx4_txq.c                   |   24 +-
 drivers/net/mlx5/linux/mlx5_ethdev_os.c       |   54 +-
 drivers/net/mlx5/linux/mlx5_os.c              |    6 +-
 drivers/net/mlx5/mlx5.c                       |    4 +-
 drivers/net/mlx5/mlx5.h                       |    2 +-
 drivers/net/mlx5/mlx5_defs.h                  |    6 +-
 drivers/net/mlx5/mlx5_ethdev.c                |    6 +-
 drivers/net/mlx5/mlx5_flow.c                  |   54 +-
 drivers/net/mlx5/mlx5_flow.h                  |   12 +-
 drivers/net/mlx5/mlx5_flow_dv.c               |   44 +-
 drivers/net/mlx5/mlx5_flow_verbs.c            |    4 +-
 drivers/net/mlx5/mlx5_rss.c                   |   10 +-
 drivers/net/mlx5/mlx5_rxq.c                   |   40 +-
 drivers/net/mlx5/mlx5_rxtx_vec.h              |    8 +-
 drivers/net/mlx5/mlx5_tx.c                    |   30 +-
 drivers/net/mlx5/mlx5_txq.c                   |   58 +-
 drivers/net/mlx5/mlx5_vlan.c                  |    4 +-
 drivers/net/mlx5/windows/mlx5_os.c            |    4 +-
 drivers/net/mvneta/mvneta_ethdev.c            |   32 +-
 drivers/net/mvneta/mvneta_ethdev.h            |   10 +-
 drivers/net/mvneta/mvneta_rxtx.c              |    2 +-
 drivers/net/mvpp2/mrvl_ethdev.c               |  112 +-
 drivers/net/netvsc/hn_ethdev.c                |   70 +-
 drivers/net/netvsc/hn_rndis.c                 |   50 +-
 drivers/net/nfb/nfb_ethdev.c                  |   20 +-
 drivers/net/nfb/nfb_rx.c                      |    2 +-
 drivers/net/nfp/nfp_common.c                  |  122 +-
 drivers/net/nfp/nfp_ethdev.c                  |    2 +-
 drivers/net/nfp/nfp_ethdev_vf.c               |    2 +-
 drivers/net/ngbe/ngbe_ethdev.c                |   50 +-
 drivers/net/null/rte_eth_null.c               |   28 +-
 drivers/net/octeontx/octeontx_ethdev.c        |   74 +-
 drivers/net/octeontx/octeontx_ethdev.h        |   30 +-
 drivers/net/octeontx/octeontx_ethdev_ops.c    |   26 +-
 drivers/net/octeontx2/otx2_ethdev.c           |   96 +-
 drivers/net/octeontx2/otx2_ethdev.h           |   64 +-
 drivers/net/octeontx2/otx2_ethdev_devargs.c   |   12 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c       |   14 +-
 drivers/net/octeontx2/otx2_ethdev_sec.c       |    8 +-
 drivers/net/octeontx2/otx2_flow.c             |    2 +-
 drivers/net/octeontx2/otx2_flow_ctrl.c        |   36 +-
 drivers/net/octeontx2/otx2_flow_parse.c       |    4 +-
 drivers/net/octeontx2/otx2_link.c             |   40 +-
 drivers/net/octeontx2/otx2_mcast.c            |    2 +-
 drivers/net/octeontx2/otx2_ptp.c              |    4 +-
 drivers/net/octeontx2/otx2_rss.c              |   70 +-
 drivers/net/octeontx2/otx2_rx.c               |    4 +-
 drivers/net/octeontx2/otx2_tx.c               |    2 +-
 drivers/net/octeontx2/otx2_vlan.c             |   42 +-
 drivers/net/octeontx_ep/otx_ep_ethdev.c       |    6 +-
 drivers/net/octeontx_ep/otx_ep_rxtx.c         |    6 +-
 drivers/net/pcap/pcap_ethdev.c                |   12 +-
 drivers/net/pfe/pfe_ethdev.c                  |   18 +-
 drivers/net/qede/base/mcp_public.h            |    4 +-
 drivers/net/qede/qede_ethdev.c                |  156 +--
 drivers/net/qede/qede_filter.c                |   42 +-
 drivers/net/qede/qede_rxtx.c                  |    2 +-
 drivers/net/qede/qede_rxtx.h                  |   16 +-
 drivers/net/ring/rte_eth_ring.c               |   20 +-
 drivers/net/sfc/sfc.c                         |   30 +-
 drivers/net/sfc/sfc_ef100_rx.c                |   10 +-
 drivers/net/sfc/sfc_ef100_tx.c                |   20 +-
 drivers/net/sfc/sfc_ef10_essb_rx.c            |    4 +-
 drivers/net/sfc/sfc_ef10_rx.c                 |    8 +-
 drivers/net/sfc/sfc_ef10_tx.c                 |   32 +-
 drivers/net/sfc/sfc_ethdev.c                  |   50 +-
 drivers/net/sfc/sfc_flow.c                    |    2 +-
 drivers/net/sfc/sfc_port.c                    |   52 +-
 drivers/net/sfc/sfc_repr.c                    |   10 +-
 drivers/net/sfc/sfc_rx.c                      |   50 +-
 drivers/net/sfc/sfc_tx.c                      |   50 +-
 drivers/net/softnic/rte_eth_softnic.c         |   12 +-
 drivers/net/szedata2/rte_eth_szedata2.c       |   14 +-
 drivers/net/tap/rte_eth_tap.c                 |  104 +-
 drivers/net/tap/tap_rss.h                     |    2 +-
 drivers/net/thunderx/nicvf_ethdev.c           |  102 +-
 drivers/net/thunderx/nicvf_ethdev.h           |   40 +-
 drivers/net/txgbe/txgbe_ethdev.c              |  242 ++--
 drivers/net/txgbe/txgbe_ethdev.h              |   18 +-
 drivers/net/txgbe/txgbe_ethdev_vf.c           |   24 +-
 drivers/net/txgbe/txgbe_fdir.c                |   20 +-
 drivers/net/txgbe/txgbe_flow.c                |    2 +-
 drivers/net/txgbe/txgbe_ipsec.c               |   12 +-
 drivers/net/txgbe/txgbe_pf.c                  |   34 +-
 drivers/net/txgbe/txgbe_rxtx.c                |  308 ++---
 drivers/net/txgbe/txgbe_rxtx.h                |    4 +-
 drivers/net/txgbe/txgbe_tm.c                  |   16 +-
 drivers/net/vhost/rte_eth_vhost.c             |   16 +-
 drivers/net/virtio/virtio_ethdev.c            |  124 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.c          |   72 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.h          |   16 +-
 drivers/net/vmxnet3/vmxnet3_rxtx.c            |   16 +-
 examples/bbdev_app/main.c                     |    6 +-
 examples/bond/main.c                          |   14 +-
 examples/distributor/main.c                   |   12 +-
 examples/ethtool/ethtool-app/main.c           |    2 +-
 examples/ethtool/lib/rte_ethtool.c            |   18 +-
 .../pipeline_worker_generic.c                 |   16 +-
 .../eventdev_pipeline/pipeline_worker_tx.c    |   12 +-
 examples/flow_classify/flow_classify.c        |    4 +-
 examples/flow_filtering/main.c                |   16 +-
 examples/ioat/ioatfwd.c                       |    8 +-
 examples/ip_fragmentation/main.c              |   12 +-
 examples/ip_pipeline/link.c                   |   20 +-
 examples/ip_reassembly/main.c                 |   18 +-
 examples/ipsec-secgw/ipsec-secgw.c            |   32 +-
 examples/ipsec-secgw/sa.c                     |    8 +-
 examples/ipv4_multicast/main.c                |    6 +-
 examples/kni/main.c                           |    8 +-
 examples/l2fwd-crypto/main.c                  |   10 +-
 examples/l2fwd-event/l2fwd_common.c           |   10 +-
 examples/l2fwd-event/main.c                   |    2 +-
 examples/l2fwd-jobstats/main.c                |    8 +-
 examples/l2fwd-keepalive/main.c               |    8 +-
 examples/l2fwd/main.c                         |    8 +-
 examples/l3fwd-acl/main.c                     |   18 +-
 examples/l3fwd-graph/main.c                   |   14 +-
 examples/l3fwd-power/main.c                   |   16 +-
 examples/l3fwd/l3fwd_event.c                  |    4 +-
 examples/l3fwd/main.c                         |   18 +-
 examples/link_status_interrupt/main.c         |   10 +-
 .../client_server_mp/mp_server/init.c         |    4 +-
 examples/multi_process/symmetric_mp/main.c    |   14 +-
 examples/ntb/ntb_fwd.c                        |    6 +-
 examples/packet_ordering/main.c               |    4 +-
 .../performance-thread/l3fwd-thread/main.c    |   16 +-
 examples/pipeline/obj.c                       |   20 +-
 examples/ptpclient/ptpclient.c                |   10 +-
 examples/qos_meter/main.c                     |   16 +-
 examples/qos_sched/init.c                     |    6 +-
 examples/rxtx_callbacks/main.c                |    8 +-
 examples/server_node_efd/server/init.c        |    8 +-
 examples/skeleton/basicfwd.c                  |    4 +-
 examples/vhost/main.c                         |   26 +-
 examples/vm_power_manager/main.c              |    6 +-
 examples/vmdq/main.c                          |   20 +-
 examples/vmdq_dcb/main.c                      |   40 +-
 lib/ethdev/ethdev_driver.h                    |   36 +-
 lib/ethdev/rte_ethdev.c                       |  181 ++-
 lib/ethdev/rte_ethdev.h                       | 1029 +++++++++++------
 lib/ethdev/rte_flow.h                         |    2 +-
 lib/gso/rte_gso.c                             |   20 +-
 lib/gso/rte_gso.h                             |    4 +-
 lib/mbuf/rte_mbuf_core.h                      |    8 +-
 lib/mbuf/rte_mbuf_dyn.h                       |    2 +-
 338 files changed, 6639 insertions(+), 6382 deletions(-)

diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index a8e928fa9ff3..963b6aa5c589 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -757,11 +757,11 @@ show_port(void)
 		}
 
 		ret = rte_eth_dev_flow_ctrl_get(i, &fc_conf);
-		if (ret == 0 && fc_conf.mode != RTE_FC_NONE)  {
+		if (ret == 0 && fc_conf.mode != RTE_ETH_FC_NONE)  {
 			printf("\t  -- flow control mode %s%s high %u low %u pause %u%s%s\n",
-			       fc_conf.mode == RTE_FC_RX_PAUSE ? "rx " :
-			       fc_conf.mode == RTE_FC_TX_PAUSE ? "tx " :
-			       fc_conf.mode == RTE_FC_FULL ? "full" : "???",
+			       fc_conf.mode == RTE_ETH_FC_RX_PAUSE ? "rx " :
+			       fc_conf.mode == RTE_ETH_FC_TX_PAUSE ? "tx " :
+			       fc_conf.mode == RTE_ETH_FC_FULL ? "full" : "???",
 			       fc_conf.autoneg ? " auto" : "",
 			       fc_conf.high_water,
 			       fc_conf.low_water,
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index 660d5a0364b6..31d1b0e14653 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -668,13 +668,13 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 	struct test_perf *t = evt_test_priv(test);
 	struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 			.split_hdr_size = 0,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 			},
 		},
 	};
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 2775e72c580d..d202091077a6 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -176,12 +176,12 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 	struct rte_eth_rxconf rx_conf;
 	struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 			},
 		},
 	};
@@ -223,7 +223,7 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 
 		if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT))
 			local_port_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_RSS_HASH;
+				RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 		ret = rte_eth_dev_info_get(i, &dev_info);
 		if (ret != 0) {
@@ -233,9 +233,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 		}
 
 		/* Enable mbuf fast free if PMD has the capability. */
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		rx_conf = dev_info.default_rxconf;
 		rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h
index a14d4e05e185..4249b6175b82 100644
--- a/app/test-flow-perf/config.h
+++ b/app/test-flow-perf/config.h
@@ -5,7 +5,7 @@
 #define FLOW_ITEM_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ACTION_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ATTR_MASK(_x) (UINT64_C(1) << _x)
-#define GET_RSS_HF() (ETH_RSS_IP)
+#define GET_RSS_HF() (RTE_ETH_RSS_IP)
 
 /* Configuration */
 #define RXQ_NUM 4
diff --git a/app/test-pipeline/init.c b/app/test-pipeline/init.c
index fe37d63730c6..c73801904103 100644
--- a/app/test-pipeline/init.c
+++ b/app/test-pipeline/init.c
@@ -70,16 +70,16 @@ struct app_params app = {
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -178,7 +178,7 @@ app_ports_check_link(void)
 		RTE_LOG(INFO, USER1, "Port %u %s\n",
 			port,
 			link_status_text);
-		if (link.link_status == ETH_LINK_DOWN)
+		if (link.link_status == RTE_ETH_LINK_DOWN)
 			all_ports_up = 0;
 	}
 
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 88354ccab9d4..02011f668034 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1478,51 +1478,51 @@ parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
 	int duplex;
 
 	if (!strcmp(duplexstr, "half")) {
-		duplex = ETH_LINK_HALF_DUPLEX;
+		duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	} else if (!strcmp(duplexstr, "full")) {
-		duplex = ETH_LINK_FULL_DUPLEX;
+		duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	} else if (!strcmp(duplexstr, "auto")) {
-		duplex = ETH_LINK_FULL_DUPLEX;
+		duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	} else {
 		fprintf(stderr, "Unknown duplex parameter\n");
 		return -1;
 	}
 
 	if (!strcmp(speedstr, "10")) {
-		*speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
-				ETH_LINK_SPEED_10M_HD : ETH_LINK_SPEED_10M;
+		*speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+				RTE_ETH_LINK_SPEED_10M_HD : RTE_ETH_LINK_SPEED_10M;
 	} else if (!strcmp(speedstr, "100")) {
-		*speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
-				ETH_LINK_SPEED_100M_HD : ETH_LINK_SPEED_100M;
+		*speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+				RTE_ETH_LINK_SPEED_100M_HD : RTE_ETH_LINK_SPEED_100M;
 	} else {
-		if (duplex != ETH_LINK_FULL_DUPLEX) {
+		if (duplex != RTE_ETH_LINK_FULL_DUPLEX) {
 			fprintf(stderr, "Invalid speed/duplex parameters\n");
 			return -1;
 		}
 		if (!strcmp(speedstr, "1000")) {
-			*speed = ETH_LINK_SPEED_1G;
+			*speed = RTE_ETH_LINK_SPEED_1G;
 		} else if (!strcmp(speedstr, "10000")) {
-			*speed = ETH_LINK_SPEED_10G;
+			*speed = RTE_ETH_LINK_SPEED_10G;
 		} else if (!strcmp(speedstr, "25000")) {
-			*speed = ETH_LINK_SPEED_25G;
+			*speed = RTE_ETH_LINK_SPEED_25G;
 		} else if (!strcmp(speedstr, "40000")) {
-			*speed = ETH_LINK_SPEED_40G;
+			*speed = RTE_ETH_LINK_SPEED_40G;
 		} else if (!strcmp(speedstr, "50000")) {
-			*speed = ETH_LINK_SPEED_50G;
+			*speed = RTE_ETH_LINK_SPEED_50G;
 		} else if (!strcmp(speedstr, "100000")) {
-			*speed = ETH_LINK_SPEED_100G;
+			*speed = RTE_ETH_LINK_SPEED_100G;
 		} else if (!strcmp(speedstr, "200000")) {
-			*speed = ETH_LINK_SPEED_200G;
+			*speed = RTE_ETH_LINK_SPEED_200G;
 		} else if (!strcmp(speedstr, "auto")) {
-			*speed = ETH_LINK_SPEED_AUTONEG;
+			*speed = RTE_ETH_LINK_SPEED_AUTONEG;
 		} else {
 			fprintf(stderr, "Unknown speed parameter\n");
 			return -1;
 		}
 	}
 
-	if (*speed != ETH_LINK_SPEED_AUTONEG)
-		*speed |= ETH_LINK_SPEED_FIXED;
+	if (*speed != RTE_ETH_LINK_SPEED_AUTONEG)
+		*speed |= RTE_ETH_LINK_SPEED_FIXED;
 
 	return 0;
 }
@@ -2166,33 +2166,33 @@ cmd_config_rss_parsed(void *parsed_result,
 	int ret;
 
 	if (!strcmp(res->value, "all"))
-		rss_conf.rss_hf = ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP |
-			ETH_RSS_TCP | ETH_RSS_UDP | ETH_RSS_SCTP |
-			ETH_RSS_L2_PAYLOAD | ETH_RSS_L2TPV3 | ETH_RSS_ESP |
-			ETH_RSS_AH | ETH_RSS_PFCP | ETH_RSS_GTPU |
-			ETH_RSS_ECPRI;
+		rss_conf.rss_hf = RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP |
+			RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP |
+			RTE_ETH_RSS_L2_PAYLOAD | RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP |
+			RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP | RTE_ETH_RSS_GTPU |
+			RTE_ETH_RSS_ECPRI;
 	else if (!strcmp(res->value, "eth"))
-		rss_conf.rss_hf = ETH_RSS_ETH;
+		rss_conf.rss_hf = RTE_ETH_RSS_ETH;
 	else if (!strcmp(res->value, "vlan"))
-		rss_conf.rss_hf = ETH_RSS_VLAN;
+		rss_conf.rss_hf = RTE_ETH_RSS_VLAN;
 	else if (!strcmp(res->value, "ip"))
-		rss_conf.rss_hf = ETH_RSS_IP;
+		rss_conf.rss_hf = RTE_ETH_RSS_IP;
 	else if (!strcmp(res->value, "udp"))
-		rss_conf.rss_hf = ETH_RSS_UDP;
+		rss_conf.rss_hf = RTE_ETH_RSS_UDP;
 	else if (!strcmp(res->value, "tcp"))
-		rss_conf.rss_hf = ETH_RSS_TCP;
+		rss_conf.rss_hf = RTE_ETH_RSS_TCP;
 	else if (!strcmp(res->value, "sctp"))
-		rss_conf.rss_hf = ETH_RSS_SCTP;
+		rss_conf.rss_hf = RTE_ETH_RSS_SCTP;
 	else if (!strcmp(res->value, "ether"))
-		rss_conf.rss_hf = ETH_RSS_L2_PAYLOAD;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2_PAYLOAD;
 	else if (!strcmp(res->value, "port"))
-		rss_conf.rss_hf = ETH_RSS_PORT;
+		rss_conf.rss_hf = RTE_ETH_RSS_PORT;
 	else if (!strcmp(res->value, "vxlan"))
-		rss_conf.rss_hf = ETH_RSS_VXLAN;
+		rss_conf.rss_hf = RTE_ETH_RSS_VXLAN;
 	else if (!strcmp(res->value, "geneve"))
-		rss_conf.rss_hf = ETH_RSS_GENEVE;
+		rss_conf.rss_hf = RTE_ETH_RSS_GENEVE;
 	else if (!strcmp(res->value, "nvgre"))
-		rss_conf.rss_hf = ETH_RSS_NVGRE;
+		rss_conf.rss_hf = RTE_ETH_RSS_NVGRE;
 	else if (!strcmp(res->value, "l3-pre32"))
 		rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE32;
 	else if (!strcmp(res->value, "l3-pre40"))
@@ -2206,46 +2206,46 @@ cmd_config_rss_parsed(void *parsed_result,
 	else if (!strcmp(res->value, "l3-pre96"))
 		rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE96;
 	else if (!strcmp(res->value, "l3-src-only"))
-		rss_conf.rss_hf = ETH_RSS_L3_SRC_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L3_SRC_ONLY;
 	else if (!strcmp(res->value, "l3-dst-only"))
-		rss_conf.rss_hf = ETH_RSS_L3_DST_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L3_DST_ONLY;
 	else if (!strcmp(res->value, "l4-src-only"))
-		rss_conf.rss_hf = ETH_RSS_L4_SRC_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L4_SRC_ONLY;
 	else if (!strcmp(res->value, "l4-dst-only"))
-		rss_conf.rss_hf = ETH_RSS_L4_DST_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L4_DST_ONLY;
 	else if (!strcmp(res->value, "l2-src-only"))
-		rss_conf.rss_hf = ETH_RSS_L2_SRC_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2_SRC_ONLY;
 	else if (!strcmp(res->value, "l2-dst-only"))
-		rss_conf.rss_hf = ETH_RSS_L2_DST_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2_DST_ONLY;
 	else if (!strcmp(res->value, "l2tpv3"))
-		rss_conf.rss_hf = ETH_RSS_L2TPV3;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2TPV3;
 	else if (!strcmp(res->value, "esp"))
-		rss_conf.rss_hf = ETH_RSS_ESP;
+		rss_conf.rss_hf = RTE_ETH_RSS_ESP;
 	else if (!strcmp(res->value, "ah"))
-		rss_conf.rss_hf = ETH_RSS_AH;
+		rss_conf.rss_hf = RTE_ETH_RSS_AH;
 	else if (!strcmp(res->value, "pfcp"))
-		rss_conf.rss_hf = ETH_RSS_PFCP;
+		rss_conf.rss_hf = RTE_ETH_RSS_PFCP;
 	else if (!strcmp(res->value, "pppoe"))
-		rss_conf.rss_hf = ETH_RSS_PPPOE;
+		rss_conf.rss_hf = RTE_ETH_RSS_PPPOE;
 	else if (!strcmp(res->value, "gtpu"))
-		rss_conf.rss_hf = ETH_RSS_GTPU;
+		rss_conf.rss_hf = RTE_ETH_RSS_GTPU;
 	else if (!strcmp(res->value, "ecpri"))
-		rss_conf.rss_hf = ETH_RSS_ECPRI;
+		rss_conf.rss_hf = RTE_ETH_RSS_ECPRI;
 	else if (!strcmp(res->value, "mpls"))
-		rss_conf.rss_hf = ETH_RSS_MPLS;
+		rss_conf.rss_hf = RTE_ETH_RSS_MPLS;
 	else if (!strcmp(res->value, "ipv4-chksum"))
-		rss_conf.rss_hf = ETH_RSS_IPV4_CHKSUM;
+		rss_conf.rss_hf = RTE_ETH_RSS_IPV4_CHKSUM;
 	else if (!strcmp(res->value, "none"))
 		rss_conf.rss_hf = 0;
 	else if (!strcmp(res->value, "level-default")) {
-		rss_hf &= (~ETH_RSS_LEVEL_MASK);
-		rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_PMD_DEFAULT);
+		rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+		rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_PMD_DEFAULT);
 	} else if (!strcmp(res->value, "level-outer")) {
-		rss_hf &= (~ETH_RSS_LEVEL_MASK);
-		rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_OUTERMOST);
+		rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+		rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_OUTERMOST);
 	} else if (!strcmp(res->value, "level-inner")) {
-		rss_hf &= (~ETH_RSS_LEVEL_MASK);
-		rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_INNERMOST);
+		rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+		rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_INNERMOST);
 	} else if (!strcmp(res->value, "default"))
 		use_default = 1;
 	else if (isdigit(res->value[0]) && atoi(res->value) > 0 &&
@@ -2982,8 +2982,8 @@ parse_reta_config(const char *str,
 			return -1;
 		}
 
-		idx = hash_index / RTE_RETA_GROUP_SIZE;
-		shift = hash_index % RTE_RETA_GROUP_SIZE;
+		idx = hash_index / RTE_ETH_RETA_GROUP_SIZE;
+		shift = hash_index % RTE_ETH_RETA_GROUP_SIZE;
 		reta_conf[idx].mask |= (1ULL << shift);
 		reta_conf[idx].reta[shift] = nb_queue;
 	}
@@ -3012,10 +3012,10 @@ cmd_set_rss_reta_parsed(void *parsed_result,
 	} else
 		printf("The reta size of port %d is %u\n",
 			res->port_id, dev_info.reta_size);
-	if (dev_info.reta_size > ETH_RSS_RETA_SIZE_512) {
+	if (dev_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
 		fprintf(stderr,
 			"Currently do not support more than %u entries of redirection table\n",
-			ETH_RSS_RETA_SIZE_512);
+			RTE_ETH_RSS_RETA_SIZE_512);
 		return;
 	}
 
@@ -3086,8 +3086,8 @@ showport_parse_reta_config(struct rte_eth_rss_reta_entry64 *conf,
 	char *end;
 	char *str_fld[8];
 	uint16_t i;
-	uint16_t num = (nb_entries + RTE_RETA_GROUP_SIZE - 1) /
-			RTE_RETA_GROUP_SIZE;
+	uint16_t num = (nb_entries + RTE_ETH_RETA_GROUP_SIZE - 1) /
+			RTE_ETH_RETA_GROUP_SIZE;
 	int ret;
 
 	p = strchr(p0, '(');
@@ -3132,7 +3132,7 @@ cmd_showport_reta_parsed(void *parsed_result,
 	if (ret != 0)
 		return;
 
-	max_reta_size = RTE_MIN(dev_info.reta_size, ETH_RSS_RETA_SIZE_512);
+	max_reta_size = RTE_MIN(dev_info.reta_size, RTE_ETH_RSS_RETA_SIZE_512);
 	if (res->size == 0 || res->size > max_reta_size) {
 		fprintf(stderr, "Invalid redirection table size: %u (1-%u)\n",
 			res->size, max_reta_size);
@@ -3272,7 +3272,7 @@ cmd_config_dcb_parsed(void *parsed_result,
 		return;
 	}
 
-	if ((res->num_tcs != ETH_4_TCS) && (res->num_tcs != ETH_8_TCS)) {
+	if ((res->num_tcs != RTE_ETH_4_TCS) && (res->num_tcs != RTE_ETH_8_TCS)) {
 		fprintf(stderr,
 			"The invalid number of traffic class, only 4 or 8 allowed.\n");
 		return;
@@ -4276,9 +4276,9 @@ cmd_vlan_tpid_parsed(void *parsed_result,
 	enum rte_vlan_type vlan_type;
 
 	if (!strcmp(res->vlan_type, "inner"))
-		vlan_type = ETH_VLAN_TYPE_INNER;
+		vlan_type = RTE_ETH_VLAN_TYPE_INNER;
 	else if (!strcmp(res->vlan_type, "outer"))
-		vlan_type = ETH_VLAN_TYPE_OUTER;
+		vlan_type = RTE_ETH_VLAN_TYPE_OUTER;
 	else {
 		fprintf(stderr, "Unknown vlan type\n");
 		return;
@@ -4615,55 +4615,55 @@ csum_show(int port_id)
 	printf("Parse tunnel is %s\n",
 		(ports[port_id].parse_tunnel) ? "on" : "off");
 	printf("IP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
 	printf("UDP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
 	printf("TCP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
 	printf("SCTP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
 	printf("Outer-Ip checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
 	printf("Outer-Udp checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
 
 	/* display warnings if configuration is not supported by the NIC */
 	ret = eth_dev_info_get_print_err(port_id, &dev_info);
 	if (ret != 0)
 		return;
 
-	if ((tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware IP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware UDP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware TCP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_SCTP_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware SCTP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware outer IP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 			== 0) {
 		fprintf(stderr,
 			"Warning: hardware outer UDP checksum enabled but not supported by port %d\n",
@@ -4713,8 +4713,8 @@ cmd_csum_parsed(void *parsed_result,
 
 		if (!strcmp(res->proto, "ip")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_IPV4_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+						RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 			} else {
 				fprintf(stderr,
 					"IP checksum offload is not supported by port %u\n",
@@ -4722,8 +4722,8 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "udp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_UDP_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"UDP checksum offload is not supported by port %u\n",
@@ -4731,8 +4731,8 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "tcp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_TCP_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"TCP checksum offload is not supported by port %u\n",
@@ -4740,8 +4740,8 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "sctp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_SCTP_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_SCTP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"SCTP checksum offload is not supported by port %u\n",
@@ -4749,9 +4749,9 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "outer-ip")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-					DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+					RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 				csum_offloads |=
-						DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+						RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 			} else {
 				fprintf(stderr,
 					"Outer IP checksum offload is not supported by port %u\n",
@@ -4759,9 +4759,9 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "outer-udp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-					DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+					RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
 				csum_offloads |=
-						DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"Outer UDP checksum offload is not supported by port %u\n",
@@ -4916,7 +4916,7 @@ cmd_tso_set_parsed(void *parsed_result,
 		return;
 
 	if ((ports[res->port_id].tso_segsz != 0) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
 		fprintf(stderr, "Error: TSO is not supported by port %d\n",
 			res->port_id);
 		return;
@@ -4924,11 +4924,11 @@ cmd_tso_set_parsed(void *parsed_result,
 
 	if (ports[res->port_id].tso_segsz == 0) {
 		ports[res->port_id].dev_conf.txmode.offloads &=
-						~DEV_TX_OFFLOAD_TCP_TSO;
+						~RTE_ETH_TX_OFFLOAD_TCP_TSO;
 		printf("TSO for non-tunneled packets is disabled\n");
 	} else {
 		ports[res->port_id].dev_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_TCP_TSO;
+						RTE_ETH_TX_OFFLOAD_TCP_TSO;
 		printf("TSO segment size for non-tunneled packets is %d\n",
 			ports[res->port_id].tso_segsz);
 	}
@@ -4940,7 +4940,7 @@ cmd_tso_set_parsed(void *parsed_result,
 		return;
 
 	if ((ports[res->port_id].tso_segsz != 0) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
 		fprintf(stderr,
 			"Warning: TSO enabled but not supported by port %d\n",
 			res->port_id);
@@ -5011,27 +5011,27 @@ check_tunnel_tso_nic_support(portid_t port_id)
 	if (eth_dev_info_get_print_err(port_id, &dev_info) != 0)
 		return dev_info;
 
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VXLAN_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO))
 		fprintf(stderr,
 			"Warning: VXLAN TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		fprintf(stderr,
 			"Warning: GRE TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPIP_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO))
 		fprintf(stderr,
 			"Warning: IPIP TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
 		fprintf(stderr,
 			"Warning: GENEVE TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IP_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IP_TNL_TSO))
 		fprintf(stderr,
 			"Warning: IP TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO))
 		fprintf(stderr,
 			"Warning: UDP TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
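
The warnings above follow a general rule of the offload API: a bit may only be set in txmode.offloads if the port advertises the same bit in dev_info.tx_offload_capa. A hedged sketch of that check as a standalone helper (request_tx_offload() is illustrative, not from this patch):

  #include <errno.h>
  #include <rte_ethdev.h>

  /* Request a Tx offload bit, e.g. RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO,
   * only when the port advertises it. */
  static int
  request_tx_offload(uint16_t port_id, struct rte_eth_conf *conf,
  		   uint64_t offload)
  {
  	struct rte_eth_dev_info dev_info;
  	int ret = rte_eth_dev_info_get(port_id, &dev_info);

  	if (ret != 0)
  		return ret;
  	if ((dev_info.tx_offload_capa & offload) != offload)
  		return -ENOTSUP;
  	conf->txmode.offloads |= offload;
  	return 0;
  }
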
@@ -5059,20 +5059,20 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
 	dev_info = check_tunnel_tso_nic_support(res->port_id);
 	if (ports[res->port_id].tunnel_tso_segsz == 0) {
 		ports[res->port_id].dev_conf.txmode.offloads &=
-			~(DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			  DEV_TX_OFFLOAD_GRE_TNL_TSO |
-			  DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-			  DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-			  DEV_TX_OFFLOAD_IP_TNL_TSO |
-			  DEV_TX_OFFLOAD_UDP_TNL_TSO);
+			~(RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 		printf("TSO for tunneled packets is disabled\n");
 	} else {
-		uint64_t tso_offloads = (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-					 DEV_TX_OFFLOAD_GRE_TNL_TSO |
-					 DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-					 DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-					 DEV_TX_OFFLOAD_IP_TNL_TSO |
-					 DEV_TX_OFFLOAD_UDP_TNL_TSO);
+		uint64_t tso_offloads = (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 
 		ports[res->port_id].dev_conf.txmode.offloads |=
 			(tso_offloads & dev_info.tx_offload_capa);
@@ -5095,7 +5095,7 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
 			fprintf(stderr,
 				"Warning: csum parse_tunnel must be set so that tunneled packets are recognized\n");
 		if (!(ports[res->port_id].dev_conf.txmode.offloads &
-		      DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+		      RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
 			fprintf(stderr,
 				"Warning: csum set outer-ip must be set to hw if outer L3 is IPv4; not necessary for IPv6\n");
 	}
@@ -7227,9 +7227,9 @@ cmd_link_flow_ctrl_show_parsed(void *parsed_result,
 		return;
 	}
 
-	if (fc_conf.mode == RTE_FC_RX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+	if (fc_conf.mode == RTE_ETH_FC_RX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
 		rx_fc_en = true;
-	if (fc_conf.mode == RTE_FC_TX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+	if (fc_conf.mode == RTE_ETH_FC_TX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
 		tx_fc_en = true;
 
 	printf("\n%s Flow control infos for port %-2d %s\n",
@@ -7507,12 +7507,12 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
 
 	/*
 	 * Rx on/off, flow control is enabled/disabled on RX side. This can indicate
-	 * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+	 * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
 	 * Tx on/off, flow control is enabled/disabled on TX side. This can indicate
-	 * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+	 * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
 	 */
 	static enum rte_eth_fc_mode rx_tx_onoff_2_lfc_mode[2][2] = {
-			{RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+			{RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
 	};
 
 	/* Partial command line, retrieve current configuration */
@@ -7525,11 +7525,11 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
 			return;
 		}
 
-		if ((fc_conf.mode == RTE_FC_RX_PAUSE) ||
-		    (fc_conf.mode == RTE_FC_FULL))
+		if ((fc_conf.mode == RTE_ETH_FC_RX_PAUSE) ||
+		    (fc_conf.mode == RTE_ETH_FC_FULL))
 			rx_fc_en = 1;
-		if ((fc_conf.mode == RTE_FC_TX_PAUSE) ||
-		    (fc_conf.mode == RTE_FC_FULL))
+		if ((fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ||
+		    (fc_conf.mode == RTE_ETH_FC_FULL))
 			tx_fc_en = 1;
 	}
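
The 2x2 table above is worth a standalone illustration, because the naming trips people up: the row index is the Rx setting, the column index is the Tx setting, and enabling both yields RTE_ETH_FC_FULL. A minimal sketch, assuming the port supports link flow control (set_link_fc() is hypothetical):

  #include <stdbool.h>
  #include <rte_ethdev.h>

  static int
  set_link_fc(uint16_t port_id, bool rx_on, bool tx_on)
  {
  	static const enum rte_eth_fc_mode mode[2][2] = {
  		{RTE_ETH_FC_NONE,     RTE_ETH_FC_TX_PAUSE},
  		{RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL},
  	};
  	struct rte_eth_fc_conf fc_conf;
  	int ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);

  	if (ret != 0)
  		return ret;
  	fc_conf.mode = mode[rx_on][tx_on];
  	return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
  }
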
 
@@ -7597,12 +7597,12 @@ cmd_priority_flow_ctrl_set_parsed(void *parsed_result,
 
 	/*
 	 * Rx on/off, flow control is enabled/disabled on RX side. This can indicate
-	 * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+	 * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
 	 * Tx on/off, flow control is enabled/disabled on TX side. This can indicate
-	 * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+	 * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
 	 */
 	static enum rte_eth_fc_mode rx_tx_onoff_2_pfc_mode[2][2] = {
-		{RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+		{RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
 	};
 
 	memset(&pfc_conf, 0, sizeof(struct rte_eth_pfc_conf));
@@ -9250,13 +9250,13 @@ cmd_set_vf_rxmode_parsed(void *parsed_result,
 	int is_on = (strcmp(res->on, "on") == 0) ? 1 : 0;
 	if (!strcmp(res->what,"rxmode")) {
 		if (!strcmp(res->mode, "AUPE"))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_UNTAG;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_UNTAG;
 		else if (!strcmp(res->mode, "ROPE"))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_HASH_UC;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_HASH_UC;
 		else if (!strcmp(res->mode, "BAM"))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_BROADCAST;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_BROADCAST;
 		else if (!strncmp(res->mode, "MPE",3))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_MULTICAST;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_MULTICAST;
 	}
 
 	RTE_SET_USED(is_on);
@@ -9656,7 +9656,7 @@ cmd_tunnel_udp_config_parsed(void *parsed_result,
 	int ret;
 
 	tunnel_udp.udp_port = res->udp_port;
-	tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+	tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
 
 	if (!strcmp(res->what, "add"))
 		ret = rte_eth_dev_udp_tunnel_port_add(res->port_id,
@@ -9722,13 +9722,13 @@ cmd_cfg_tunnel_udp_port_parsed(void *parsed_result,
 	tunnel_udp.udp_port = res->udp_port;
 
 	if (!strcmp(res->tunnel_type, "vxlan")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
 	} else if (!strcmp(res->tunnel_type, "geneve")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_GENEVE;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_GENEVE;
 	} else if (!strcmp(res->tunnel_type, "vxlan-gpe")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN_GPE;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN_GPE;
 	} else if (!strcmp(res->tunnel_type, "ecpri")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_ECPRI;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_ECPRI;
 	} else {
 		fprintf(stderr, "Invalid tunnel type\n");
 		return;
@@ -11859,7 +11859,7 @@ cmd_set_macsec_offload_on_parsed(
 	if (ret != 0)
 		return;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
 #ifdef RTE_NET_IXGBE
 		ret = rte_pmd_ixgbe_macsec_enable(port_id, en, rp);
 #endif
@@ -11870,7 +11870,7 @@ cmd_set_macsec_offload_on_parsed(
 	switch (ret) {
 	case 0:
 		ports[port_id].dev_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_MACSEC_INSERT;
+						RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 		cmd_reconfig_device_queue(port_id, 1, 1);
 		break;
 	case -ENODEV:
@@ -11956,7 +11956,7 @@ cmd_set_macsec_offload_off_parsed(
 	if (ret != 0)
 		return;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
 #ifdef RTE_NET_IXGBE
 		ret = rte_pmd_ixgbe_macsec_disable(port_id);
 #endif
@@ -11964,7 +11964,7 @@ cmd_set_macsec_offload_off_parsed(
 	switch (ret) {
 	case 0:
 		ports[port_id].dev_conf.txmode.offloads &=
-						~DEV_TX_OFFLOAD_MACSEC_INSERT;
+						~RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 		cmd_reconfig_device_queue(port_id, 1, 1);
 		break;
 	case -ENODEV:
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index bdcd826490d1..47ff307e39c0 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -86,62 +86,62 @@ static const struct {
 };
 
 const struct rss_type_info rss_type_table[] = {
-	{ "all", ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP | ETH_RSS_TCP |
-		ETH_RSS_UDP | ETH_RSS_SCTP | ETH_RSS_L2_PAYLOAD |
-		ETH_RSS_L2TPV3 | ETH_RSS_ESP | ETH_RSS_AH | ETH_RSS_PFCP |
-		ETH_RSS_GTPU | ETH_RSS_ECPRI | ETH_RSS_MPLS},
+	{ "all", RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP |
+		RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_L2_PAYLOAD |
+		RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP | RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP |
+		RTE_ETH_RSS_GTPU | RTE_ETH_RSS_ECPRI | RTE_ETH_RSS_MPLS},
 	{ "none", 0 },
-	{ "eth", ETH_RSS_ETH },
-	{ "l2-src-only", ETH_RSS_L2_SRC_ONLY },
-	{ "l2-dst-only", ETH_RSS_L2_DST_ONLY },
-	{ "vlan", ETH_RSS_VLAN },
-	{ "s-vlan", ETH_RSS_S_VLAN },
-	{ "c-vlan", ETH_RSS_C_VLAN },
-	{ "ipv4", ETH_RSS_IPV4 },
-	{ "ipv4-frag", ETH_RSS_FRAG_IPV4 },
-	{ "ipv4-tcp", ETH_RSS_NONFRAG_IPV4_TCP },
-	{ "ipv4-udp", ETH_RSS_NONFRAG_IPV4_UDP },
-	{ "ipv4-sctp", ETH_RSS_NONFRAG_IPV4_SCTP },
-	{ "ipv4-other", ETH_RSS_NONFRAG_IPV4_OTHER },
-	{ "ipv6", ETH_RSS_IPV6 },
-	{ "ipv6-frag", ETH_RSS_FRAG_IPV6 },
-	{ "ipv6-tcp", ETH_RSS_NONFRAG_IPV6_TCP },
-	{ "ipv6-udp", ETH_RSS_NONFRAG_IPV6_UDP },
-	{ "ipv6-sctp", ETH_RSS_NONFRAG_IPV6_SCTP },
-	{ "ipv6-other", ETH_RSS_NONFRAG_IPV6_OTHER },
-	{ "l2-payload", ETH_RSS_L2_PAYLOAD },
-	{ "ipv6-ex", ETH_RSS_IPV6_EX },
-	{ "ipv6-tcp-ex", ETH_RSS_IPV6_TCP_EX },
-	{ "ipv6-udp-ex", ETH_RSS_IPV6_UDP_EX },
-	{ "port", ETH_RSS_PORT },
-	{ "vxlan", ETH_RSS_VXLAN },
-	{ "geneve", ETH_RSS_GENEVE },
-	{ "nvgre", ETH_RSS_NVGRE },
-	{ "ip", ETH_RSS_IP },
-	{ "udp", ETH_RSS_UDP },
-	{ "tcp", ETH_RSS_TCP },
-	{ "sctp", ETH_RSS_SCTP },
-	{ "tunnel", ETH_RSS_TUNNEL },
+	{ "eth", RTE_ETH_RSS_ETH },
+	{ "l2-src-only", RTE_ETH_RSS_L2_SRC_ONLY },
+	{ "l2-dst-only", RTE_ETH_RSS_L2_DST_ONLY },
+	{ "vlan", RTE_ETH_RSS_VLAN },
+	{ "s-vlan", RTE_ETH_RSS_S_VLAN },
+	{ "c-vlan", RTE_ETH_RSS_C_VLAN },
+	{ "ipv4", RTE_ETH_RSS_IPV4 },
+	{ "ipv4-frag", RTE_ETH_RSS_FRAG_IPV4 },
+	{ "ipv4-tcp", RTE_ETH_RSS_NONFRAG_IPV4_TCP },
+	{ "ipv4-udp", RTE_ETH_RSS_NONFRAG_IPV4_UDP },
+	{ "ipv4-sctp", RTE_ETH_RSS_NONFRAG_IPV4_SCTP },
+	{ "ipv4-other", RTE_ETH_RSS_NONFRAG_IPV4_OTHER },
+	{ "ipv6", RTE_ETH_RSS_IPV6 },
+	{ "ipv6-frag", RTE_ETH_RSS_FRAG_IPV6 },
+	{ "ipv6-tcp", RTE_ETH_RSS_NONFRAG_IPV6_TCP },
+	{ "ipv6-udp", RTE_ETH_RSS_NONFRAG_IPV6_UDP },
+	{ "ipv6-sctp", RTE_ETH_RSS_NONFRAG_IPV6_SCTP },
+	{ "ipv6-other", RTE_ETH_RSS_NONFRAG_IPV6_OTHER },
+	{ "l2-payload", RTE_ETH_RSS_L2_PAYLOAD },
+	{ "ipv6-ex", RTE_ETH_RSS_IPV6_EX },
+	{ "ipv6-tcp-ex", RTE_ETH_RSS_IPV6_TCP_EX },
+	{ "ipv6-udp-ex", RTE_ETH_RSS_IPV6_UDP_EX },
+	{ "port", RTE_ETH_RSS_PORT },
+	{ "vxlan", RTE_ETH_RSS_VXLAN },
+	{ "geneve", RTE_ETH_RSS_GENEVE },
+	{ "nvgre", RTE_ETH_RSS_NVGRE },
+	{ "ip", RTE_ETH_RSS_IP },
+	{ "udp", RTE_ETH_RSS_UDP },
+	{ "tcp", RTE_ETH_RSS_TCP },
+	{ "sctp", RTE_ETH_RSS_SCTP },
+	{ "tunnel", RTE_ETH_RSS_TUNNEL },
 	{ "l3-pre32", RTE_ETH_RSS_L3_PRE32 },
 	{ "l3-pre40", RTE_ETH_RSS_L3_PRE40 },
 	{ "l3-pre48", RTE_ETH_RSS_L3_PRE48 },
 	{ "l3-pre56", RTE_ETH_RSS_L3_PRE56 },
 	{ "l3-pre64", RTE_ETH_RSS_L3_PRE64 },
 	{ "l3-pre96", RTE_ETH_RSS_L3_PRE96 },
-	{ "l3-src-only", ETH_RSS_L3_SRC_ONLY },
-	{ "l3-dst-only", ETH_RSS_L3_DST_ONLY },
-	{ "l4-src-only", ETH_RSS_L4_SRC_ONLY },
-	{ "l4-dst-only", ETH_RSS_L4_DST_ONLY },
-	{ "esp", ETH_RSS_ESP },
-	{ "ah", ETH_RSS_AH },
-	{ "l2tpv3", ETH_RSS_L2TPV3 },
-	{ "pfcp", ETH_RSS_PFCP },
-	{ "pppoe", ETH_RSS_PPPOE },
-	{ "gtpu", ETH_RSS_GTPU },
-	{ "ecpri", ETH_RSS_ECPRI },
-	{ "mpls", ETH_RSS_MPLS },
-	{ "ipv4-chksum", ETH_RSS_IPV4_CHKSUM },
-	{ "l4-chksum", ETH_RSS_L4_CHKSUM },
+	{ "l3-src-only", RTE_ETH_RSS_L3_SRC_ONLY },
+	{ "l3-dst-only", RTE_ETH_RSS_L3_DST_ONLY },
+	{ "l4-src-only", RTE_ETH_RSS_L4_SRC_ONLY },
+	{ "l4-dst-only", RTE_ETH_RSS_L4_DST_ONLY },
+	{ "esp", RTE_ETH_RSS_ESP },
+	{ "ah", RTE_ETH_RSS_AH },
+	{ "l2tpv3", RTE_ETH_RSS_L2TPV3 },
+	{ "pfcp", RTE_ETH_RSS_PFCP },
+	{ "pppoe", RTE_ETH_RSS_PPPOE },
+	{ "gtpu", RTE_ETH_RSS_GTPU },
+	{ "ecpri", RTE_ETH_RSS_ECPRI },
+	{ "mpls", RTE_ETH_RSS_MPLS },
+	{ "ipv4-chksum", RTE_ETH_RSS_IPV4_CHKSUM },
+	{ "l4-chksum", RTE_ETH_RSS_L4_CHKSUM },
 	{ NULL, 0 },
 };
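
Since this table is the user-facing catalogue of the RTE_ETH_RSS_* names, a short usage sketch may help: the flags OR together into rss_hf and should be masked by what the port can actually hash on. (enable_l4_rss() is illustrative, not part of the patch.)

  #include <rte_ethdev.h>

  /* Enable RSS on IPv4/IPv6 plus TCP/UDP using the renamed flags;
   * flow_type_rss_offloads masks out what the port cannot hash on. */
  static int
  enable_l4_rss(uint16_t port_id)
  {
  	struct rte_eth_dev_info dev_info;
  	struct rte_eth_rss_conf rss_conf = {
  		.rss_key = NULL, /* keep the driver's default hash key */
  		.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP,
  	};
  	int ret = rte_eth_dev_info_get(port_id, &dev_info);

  	if (ret != 0)
  		return ret;
  	rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
  	return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
  }
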
 
@@ -538,39 +538,39 @@ static void
 device_infos_display_speeds(uint32_t speed_capa)
 {
 	printf("\n\tDevice speed capability:");
-	if (speed_capa == ETH_LINK_SPEED_AUTONEG)
+	if (speed_capa == RTE_ETH_LINK_SPEED_AUTONEG)
 		printf(" Autonegotiate (all speeds)");
-	if (speed_capa & ETH_LINK_SPEED_FIXED)
+	if (speed_capa & RTE_ETH_LINK_SPEED_FIXED)
 		printf(" Disable autonegotiate (fixed speed)  ");
-	if (speed_capa & ETH_LINK_SPEED_10M_HD)
+	if (speed_capa & RTE_ETH_LINK_SPEED_10M_HD)
 		printf(" 10 Mbps half-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_10M)
+	if (speed_capa & RTE_ETH_LINK_SPEED_10M)
 		printf(" 10 Mbps full-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_100M_HD)
+	if (speed_capa & RTE_ETH_LINK_SPEED_100M_HD)
 		printf(" 100 Mbps half-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_100M)
+	if (speed_capa & RTE_ETH_LINK_SPEED_100M)
 		printf(" 100 Mbps full-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_1G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_1G)
 		printf(" 1 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_2_5G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_2_5G)
 		printf(" 2.5 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_5G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_5G)
 		printf(" 5 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_10G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_10G)
 		printf(" 10 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_20G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_20G)
 		printf(" 20 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_25G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_25G)
 		printf(" 25 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_40G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_40G)
 		printf(" 40 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_50G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_50G)
 		printf(" 50 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_56G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_56G)
 		printf(" 56 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_100G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_100G)
 		printf(" 100 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_200G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_200G)
 		printf(" 200 Gbps  ");
 }
 
@@ -700,9 +700,9 @@ port_infos_display(portid_t port_id)
 
 	printf("\nLink status: %s\n", (link.link_status) ? ("up") : ("down"));
 	printf("Link speed: %s\n", rte_eth_link_speed_to_str(link.link_speed));
-	printf("Link duplex: %s\n", (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+	printf("Link duplex: %s\n", (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 	       ("full-duplex") : ("half-duplex"));
-	printf("Autoneg status: %s\n", (link.link_autoneg == ETH_LINK_AUTONEG) ?
+	printf("Autoneg status: %s\n", (link.link_autoneg == RTE_ETH_LINK_AUTONEG) ?
 	       ("On") : ("Off"));
 
 	if (!rte_eth_dev_get_mtu(port_id, &mtu))
@@ -720,22 +720,22 @@ port_infos_display(portid_t port_id)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 	if (vlan_offload >= 0){
 		printf("VLAN offload: \n");
-		if (vlan_offload & ETH_VLAN_STRIP_OFFLOAD)
+		if (vlan_offload & RTE_ETH_VLAN_STRIP_OFFLOAD)
 			printf("  strip on, ");
 		else
 			printf("  strip off, ");
 
-		if (vlan_offload & ETH_VLAN_FILTER_OFFLOAD)
+		if (vlan_offload & RTE_ETH_VLAN_FILTER_OFFLOAD)
 			printf("filter on, ");
 		else
 			printf("filter off, ");
 
-		if (vlan_offload & ETH_VLAN_EXTEND_OFFLOAD)
+		if (vlan_offload & RTE_ETH_VLAN_EXTEND_OFFLOAD)
 			printf("extend on, ");
 		else
 			printf("extend off, ");
 
-		if (vlan_offload & ETH_QINQ_STRIP_OFFLOAD)
+		if (vlan_offload & RTE_ETH_QINQ_STRIP_OFFLOAD)
 			printf("qinq strip on\n");
 		else
 			printf("qinq strip off\n");
@@ -2904,8 +2904,8 @@ port_rss_reta_info(portid_t port_id,
 	}
 
 	for (i = 0; i < nb_entries; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (!(reta_conf[idx].mask & (1ULL << shift)))
 			continue;
 		printf("RSS RETA configuration: hash index=%u, queue=%u\n",
@@ -3273,7 +3273,7 @@ dcb_fwd_config_setup(void)
 	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
 		fwd_lcores[lc_id]->stream_nb = 0;
 		fwd_lcores[lc_id]->stream_idx = sm_id;
-		for (i = 0; i < ETH_MAX_VMDQ_POOL; i++) {
+		for (i = 0; i < RTE_ETH_MAX_VMDQ_POOL; i++) {
 			/* if the nb_queue is zero, means this tc is
 			 * not enabled on the POOL
 			 */
@@ -4336,11 +4336,11 @@ vlan_extend_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_VLAN_EXTEND_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+		vlan_offload |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	} else {
-		vlan_offload &= ~ETH_VLAN_EXTEND_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
+		vlan_offload &= ~RTE_ETH_VLAN_EXTEND_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4366,11 +4366,11 @@ rx_vlan_strip_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
-		vlan_offload &= ~ETH_VLAN_STRIP_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		vlan_offload &= ~RTE_ETH_VLAN_STRIP_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4411,11 +4411,11 @@ rx_vlan_filter_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_VLAN_FILTER_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+		vlan_offload |= RTE_ETH_VLAN_FILTER_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	} else {
-		vlan_offload &= ~ETH_VLAN_FILTER_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+		vlan_offload &= ~RTE_ETH_VLAN_FILTER_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4441,11 +4441,11 @@ rx_vlan_qinq_strip_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_QINQ_STRIP_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+		vlan_offload |= RTE_ETH_QINQ_STRIP_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 	} else {
-		vlan_offload &= ~ETH_QINQ_STRIP_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+		vlan_offload &= ~RTE_ETH_QINQ_STRIP_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
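
All four helpers above share one pattern: read the current offload mask, flip one RTE_ETH_*_OFFLOAD bit, write it back (testpmd additionally mirrors the change into its per-port rxmode.offloads bookkeeping). Condensed into a hedged standalone sketch:

  #include <rte_ethdev.h>

  /* Turn VLAN stripping on for a port, using the renamed flag names. */
  static int
  vlan_strip_on(uint16_t port_id)
  {
  	int mask = rte_eth_dev_get_vlan_offload(port_id);

  	if (mask < 0)
  		return mask;
  	mask |= RTE_ETH_VLAN_STRIP_OFFLOAD;
  	return rte_eth_dev_set_vlan_offload(port_id, mask);
  }
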
@@ -4515,7 +4515,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
 		return;
 
 	if (ports[port_id].dev_conf.txmode.offloads &
-	    DEV_TX_OFFLOAD_QINQ_INSERT) {
+	    RTE_ETH_TX_OFFLOAD_QINQ_INSERT) {
 		fprintf(stderr, "Error, as QinQ has been enabled.\n");
 		return;
 	}
@@ -4524,7 +4524,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
 	if (ret != 0)
 		return;
 
-	if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VLAN_INSERT) == 0) {
+	if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) == 0) {
 		fprintf(stderr,
 			"Error: vlan insert is not supported by port %d\n",
 			port_id);
@@ -4532,7 +4532,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
 	}
 
 	tx_vlan_reset(port_id);
-	ports[port_id].dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
+	ports[port_id].dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	ports[port_id].tx_vlan_id = vlan_id;
 }
 
@@ -4551,7 +4551,7 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
 	if (ret != 0)
 		return;
 
-	if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_QINQ_INSERT) == 0) {
+	if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) == 0) {
 		fprintf(stderr,
 			"Error: qinq insert not supported by port %d\n",
 			port_id);
@@ -4559,8 +4559,8 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
 	}
 
 	tx_vlan_reset(port_id);
-	ports[port_id].dev_conf.txmode.offloads |= (DEV_TX_OFFLOAD_VLAN_INSERT |
-						    DEV_TX_OFFLOAD_QINQ_INSERT);
+	ports[port_id].dev_conf.txmode.offloads |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+						    RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
 	ports[port_id].tx_vlan_id = vlan_id;
 	ports[port_id].tx_vlan_id_outer = vlan_id_outer;
 }
@@ -4569,8 +4569,8 @@ void
 tx_vlan_reset(portid_t port_id)
 {
 	ports[port_id].dev_conf.txmode.offloads &=
-				~(DEV_TX_OFFLOAD_VLAN_INSERT |
-				  DEV_TX_OFFLOAD_QINQ_INSERT);
+				~(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				  RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
 	ports[port_id].tx_vlan_id = 0;
 	ports[port_id].tx_vlan_id_outer = 0;
 }
@@ -4976,7 +4976,7 @@ set_queue_rate_limit(portid_t port_id, uint16_t queue_idx, uint16_t rate)
 	ret = eth_link_get_nowait_print_err(port_id, &link);
 	if (ret < 0)
 		return 1;
-	if (link.link_speed != ETH_SPEED_NUM_UNKNOWN &&
+	if (link.link_speed != RTE_ETH_SPEED_NUM_UNKNOWN &&
 	    rate > link.link_speed) {
 		fprintf(stderr,
 			"Invalid rate value:%u bigger than link speed: %u\n",
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 090797318a35..75b24487e72e 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -485,7 +485,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		if (info->l4_proto == IPPROTO_TCP && tso_segsz) {
 			ol_flags |= PKT_TX_IP_CKSUM;
 		} else {
-			if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+			if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
 				ol_flags |= PKT_TX_IP_CKSUM;
 			} else {
 				ipv4_hdr->hdr_checksum = 0;
@@ -502,7 +502,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		udp_hdr = (struct rte_udp_hdr *)((char *)l3_hdr + info->l3_len);
 		/* do not recalculate udp cksum if it was 0 */
 		if (udp_hdr->dgram_cksum != 0) {
-			if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+			if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
 				ol_flags |= PKT_TX_UDP_CKSUM;
 			} else {
 				udp_hdr->dgram_cksum = 0;
@@ -517,7 +517,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		tcp_hdr = (struct rte_tcp_hdr *)((char *)l3_hdr + info->l3_len);
 		if (tso_segsz)
 			ol_flags |= PKT_TX_TCP_SEG;
-		else if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+		else if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
 			ol_flags |= PKT_TX_TCP_CKSUM;
 		} else {
 			tcp_hdr->cksum = 0;
@@ -532,7 +532,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 			((char *)l3_hdr + info->l3_len);
 		/* sctp payload must be a multiple of 4 to be
 		 * offloaded */
-		if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
+		if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
 			((ipv4_hdr->total_length & 0x3) == 0)) {
 			ol_flags |= PKT_TX_SCTP_CKSUM;
 		} else {
@@ -559,7 +559,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 		ipv4_hdr->hdr_checksum = 0;
 		ol_flags |= PKT_TX_OUTER_IPV4;
 
-		if (tx_offloads	& DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+		if (tx_offloads	& RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 			ol_flags |= PKT_TX_OUTER_IP_CKSUM;
 		else
 			ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
@@ -576,7 +576,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 		ol_flags |= PKT_TX_TCP_SEG;
 
 	/* Skip SW outer UDP checksum generation if HW supports it */
-	if (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) {
 		if (info->outer_ethertype == _htons(RTE_ETHER_TYPE_IPV4))
 			udp_hdr->dgram_cksum
 				= rte_ipv4_phdr_cksum(ipv4_hdr, ol_flags);
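
The pseudo-header seeding just above is the part of the checksum contract that is easy to miss: even with Tx checksum offload enabled, software still zeroes the IP checksum and pre-seeds the L4 checksum field. A minimal non-tunneled IPv4/TCP sketch (prep_tx_cksum() is hypothetical; it assumes both headers live in the mbuf):

  #include <rte_ether.h>
  #include <rte_ip.h>
  #include <rte_tcp.h>
  #include <rte_mbuf.h>

  static void
  prep_tx_cksum(struct rte_mbuf *m, struct rte_ipv4_hdr *ip,
  	      struct rte_tcp_hdr *tcp)
  {
  	m->l2_len = sizeof(struct rte_ether_hdr);
  	m->l3_len = sizeof(struct rte_ipv4_hdr);
  	m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
  	ip->hdr_checksum = 0;
  	/* HW completes the L4 checksum from the pseudo-header seed. */
  	tcp->cksum = rte_ipv4_phdr_cksum(ip, m->ol_flags);
  }
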
@@ -959,9 +959,9 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		if (info.is_tunnel == 1) {
 			if (info.tunnel_tso_segsz ||
 			    (tx_offloads &
-			     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+			     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
 			    (tx_offloads &
-			     DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+			     RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
 				m->outer_l2_len = info.outer_l2_len;
 				m->outer_l3_len = info.outer_l3_len;
 				m->l2_len = info.l2_len;
@@ -1022,19 +1022,19 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 					rte_be_to_cpu_16(info.outer_ethertype),
 					info.outer_l3_len);
 			/* dump tx packet info */
-			if ((tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-					    DEV_TX_OFFLOAD_UDP_CKSUM |
-					    DEV_TX_OFFLOAD_TCP_CKSUM |
-					    DEV_TX_OFFLOAD_SCTP_CKSUM)) ||
+			if ((tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+					    RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+					    RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+					    RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) ||
 				info.tso_segsz != 0)
 				printf("tx: m->l2_len=%d m->l3_len=%d "
 					"m->l4_len=%d\n",
 					m->l2_len, m->l3_len, m->l4_len);
 			if (info.is_tunnel == 1) {
 				if ((tx_offloads &
-				    DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+				    RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
 				    (tx_offloads &
-				    DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
+				    RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
 				    (tx_ol_flags & PKT_TX_OUTER_IPV6))
 					printf("tx: m->outer_l2_len=%d "
 						"m->outer_l3_len=%d\n",
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 7ebed9fed334..03d026dec169 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -99,11 +99,11 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 	vlan_tci_outer = ports[fs->tx_port].tx_vlan_id_outer;
 
 	tx_offloads = ports[fs->tx_port].dev_conf.txmode.offloads;
-	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ol_flags |= PKT_TX_VLAN_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		ol_flags |= PKT_TX_QINQ_PKT;
-	if (tx_offloads	& DEV_TX_OFFLOAD_MACSEC_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 
 	for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index ee76df7f0323..57e00bca20e7 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -72,11 +72,11 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 	fs->rx_packets += nb_rx;
 	txp = &ports[fs->tx_port];
 	tx_offloads = txp->dev_conf.txmode.offloads;
-	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ol_flags = PKT_TX_VLAN_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		ol_flags |= PKT_TX_QINQ_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 	for (i = 0; i < nb_rx; i++) {
 		if (likely(i < nb_rx - 1))
diff --git a/app/test-pmd/macswap_common.h b/app/test-pmd/macswap_common.h
index 7e9a3590a436..7ade9a686b7c 100644
--- a/app/test-pmd/macswap_common.h
+++ b/app/test-pmd/macswap_common.h
@@ -10,11 +10,11 @@ ol_flags_init(uint64_t tx_offload)
 {
 	uint64_t ol_flags = 0;
 
-	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_VLAN_INSERT) ?
+	ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) ?
 			PKT_TX_VLAN : 0;
-	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_QINQ_INSERT) ?
+	ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) ?
 			PKT_TX_QINQ : 0;
-	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_MACSEC_INSERT) ?
+	ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) ?
 			PKT_TX_MACSEC : 0;
 
 	return ol_flags;
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index ab8e8f7e694a..693e77eff2c0 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -546,29 +546,29 @@ parse_xstats_list(const char *in_str, struct rte_eth_xstat_name **xstats,
 static int
 parse_link_speed(int n)
 {
-	uint32_t speed = ETH_LINK_SPEED_FIXED;
+	uint32_t speed = RTE_ETH_LINK_SPEED_FIXED;
 
 	switch (n) {
 	case 1000:
-		speed |= ETH_LINK_SPEED_1G;
+		speed |= RTE_ETH_LINK_SPEED_1G;
 		break;
 	case 10000:
-		speed |= ETH_LINK_SPEED_10G;
+		speed |= RTE_ETH_LINK_SPEED_10G;
 		break;
 	case 25000:
-		speed |= ETH_LINK_SPEED_25G;
+		speed |= RTE_ETH_LINK_SPEED_25G;
 		break;
 	case 40000:
-		speed |= ETH_LINK_SPEED_40G;
+		speed |= RTE_ETH_LINK_SPEED_40G;
 		break;
 	case 50000:
-		speed |= ETH_LINK_SPEED_50G;
+		speed |= RTE_ETH_LINK_SPEED_50G;
 		break;
 	case 100000:
-		speed |= ETH_LINK_SPEED_100G;
+		speed |= RTE_ETH_LINK_SPEED_100G;
 		break;
 	case 200000:
-		speed |= ETH_LINK_SPEED_200G;
+		speed |= RTE_ETH_LINK_SPEED_200G;
 		break;
 	case 100:
 	case 10:
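
One assumption behind parse_link_speed() deserves spelling out: RTE_ETH_LINK_SPEED_* are capability bit flags destined for rte_eth_conf.link_speeds, while the similarly named RTE_ETH_SPEED_NUM_* are plain Mbps values reported back in rte_eth_link.link_speed. A small sketch of the reporting side:

  #include <stdio.h>
  #include <rte_ethdev.h>

  /* Print a note if the port negotiated exactly 10G. */
  static void
  report_if_10g(uint16_t port_id)
  {
  	struct rte_eth_link link;

  	if (rte_eth_link_get_nowait(port_id, &link) == 0 &&
  	    link.link_speed == RTE_ETH_SPEED_NUM_10G)
  		printf("port %u negotiated 10G\n", port_id);
  }
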
@@ -1000,13 +1000,13 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "pkt-filter-size")) {
 				if (!strcmp(optarg, "64K"))
 					fdir_conf.pballoc =
-						RTE_FDIR_PBALLOC_64K;
+						RTE_ETH_FDIR_PBALLOC_64K;
 				else if (!strcmp(optarg, "128K"))
 					fdir_conf.pballoc =
-						RTE_FDIR_PBALLOC_128K;
+						RTE_ETH_FDIR_PBALLOC_128K;
 				else if (!strcmp(optarg, "256K"))
 					fdir_conf.pballoc =
-						RTE_FDIR_PBALLOC_256K;
+						RTE_ETH_FDIR_PBALLOC_256K;
 				else
 					rte_exit(EXIT_FAILURE, "pkt-filter-size %s invalid -"
 						 " must be: 64K or 128K or 256K\n",
@@ -1048,34 +1048,34 @@ launch_args_parse(int argc, char** argv)
 			}
 #endif
 			if (!strcmp(lgopts[opt_idx].name, "disable-crc-strip"))
-				rx_offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 			if (!strcmp(lgopts[opt_idx].name, "enable-lro"))
-				rx_offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 			if (!strcmp(lgopts[opt_idx].name, "enable-scatter"))
-				rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 			if (!strcmp(lgopts[opt_idx].name, "enable-rx-cksum"))
-				rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-rx-timestamp"))
-				rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 			if (!strcmp(lgopts[opt_idx].name, "enable-hw-vlan"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-vlan-filter"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-vlan-strip"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-vlan-extend"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-qinq-strip"))
-				rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 
 			if (!strcmp(lgopts[opt_idx].name, "enable-drop-en"))
 				rx_drop_en = 1;
@@ -1097,13 +1097,13 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "forward-mode"))
 				set_pkt_forwarding_mode(optarg);
 			if (!strcmp(lgopts[opt_idx].name, "rss-ip"))
-				rss_hf = ETH_RSS_IP;
+				rss_hf = RTE_ETH_RSS_IP;
 			if (!strcmp(lgopts[opt_idx].name, "rss-udp"))
-				rss_hf = ETH_RSS_UDP;
+				rss_hf = RTE_ETH_RSS_UDP;
 			if (!strcmp(lgopts[opt_idx].name, "rss-level-inner"))
-				rss_hf |= ETH_RSS_LEVEL_INNERMOST;
+				rss_hf |= RTE_ETH_RSS_LEVEL_INNERMOST;
 			if (!strcmp(lgopts[opt_idx].name, "rss-level-outer"))
-				rss_hf |= ETH_RSS_LEVEL_OUTERMOST;
+				rss_hf |= RTE_ETH_RSS_LEVEL_OUTERMOST;
 			if (!strcmp(lgopts[opt_idx].name, "rxq")) {
 				n = atoi(optarg);
 				if (n >= 0 && check_nb_rxq((queueid_t)n) == 0)
@@ -1482,12 +1482,12 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "rx-mq-mode")) {
 				char *end = NULL;
 				n = strtoul(optarg, &end, 16);
-				if (n >= 0 && n <= ETH_MQ_RX_VMDQ_DCB_RSS)
+				if (n >= 0 && n <= RTE_ETH_MQ_RX_VMDQ_DCB_RSS)
 					rx_mq_mode = (enum rte_eth_rx_mq_mode)n;
 				else
 					rte_exit(EXIT_FAILURE,
 						 "rx-mq-mode must be >= 0 and <= %d\n",
-						 ETH_MQ_RX_VMDQ_DCB_RSS);
+						 RTE_ETH_MQ_RX_VMDQ_DCB_RSS);
 			}
 			if (!strcmp(lgopts[opt_idx].name, "record-core-cycles"))
 				record_core_cycles = 1;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index de7a8c295527..df7d16fee71e 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -348,7 +348,7 @@ uint64_t noisy_lkup_num_reads_writes;
 /*
  * Receive Side Scaling (RSS) configuration.
  */
-uint64_t rss_hf = ETH_RSS_IP; /* RSS IP by default. */
+uint64_t rss_hf = RTE_ETH_RSS_IP; /* RSS IP by default. */
 
 /*
  * Port topology configuration
@@ -459,12 +459,12 @@ lcoreid_t latencystats_lcore_id = -1;
 struct rte_eth_rxmode rx_mode;
 
 struct rte_eth_txmode tx_mode = {
-	.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
+	.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
 };
 
-struct rte_fdir_conf fdir_conf = {
+struct rte_eth_fdir_conf fdir_conf = {
 	.mode = RTE_FDIR_MODE_NONE,
-	.pballoc = RTE_FDIR_PBALLOC_64K,
+	.pballoc = RTE_ETH_FDIR_PBALLOC_64K,
 	.status = RTE_FDIR_REPORT_STATUS,
 	.mask = {
 		.vlan_tci_mask = 0xFFEF,
@@ -518,7 +518,7 @@ uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
 /*
  * hexadecimal bitmask of RX mq mode can be enabled.
  */
-enum rte_eth_rx_mq_mode rx_mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
+enum rte_eth_rx_mq_mode rx_mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
 
 /*
  * Used to set forced link speed
@@ -1572,9 +1572,9 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
 	if (ret != 0)
 		rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
 
-	if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(port->dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		port->dev_conf.txmode.offloads &=
-			~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Apply Rx offloads configuration */
 	for (i = 0; i < port->dev_info.max_rx_queues; i++)
@@ -1711,8 +1711,8 @@ init_config(void)
 
 	init_port_config();
 
-	gso_types = DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_UDP_TSO;
+	gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO;
 	/*
 	 * Records which Mbuf pool to use by each logical core, if needed.
 	 */
@@ -3456,7 +3456,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -3750,17 +3750,17 @@ init_port_config(void)
 			if (port->dev_conf.rx_adv_conf.rss_conf.rss_hf != 0) {
 				port->dev_conf.rxmode.mq_mode =
 					(enum rte_eth_rx_mq_mode)
-						(rx_mq_mode & ETH_MQ_RX_RSS);
+						(rx_mq_mode & RTE_ETH_MQ_RX_RSS);
 			} else {
-				port->dev_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+				port->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
 				port->dev_conf.rxmode.offloads &=
-						~DEV_RX_OFFLOAD_RSS_HASH;
+						~RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 				for (i = 0;
 				     i < port->dev_info.nb_rx_queues;
 				     i++)
 					port->rx_conf[i].offloads &=
-						~DEV_RX_OFFLOAD_RSS_HASH;
+						~RTE_ETH_RX_OFFLOAD_RSS_HASH;
 			}
 		}
 
@@ -3848,9 +3848,9 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 		vmdq_rx_conf->enable_default_pool = 0;
 		vmdq_rx_conf->default_pool = 0;
 		vmdq_rx_conf->nb_queue_pools =
-			(num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+			(num_tcs ==  RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
 		vmdq_tx_conf->nb_queue_pools =
-			(num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+			(num_tcs ==  RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
 
 		vmdq_rx_conf->nb_pool_maps = vmdq_rx_conf->nb_queue_pools;
 		for (i = 0; i < vmdq_rx_conf->nb_pool_maps; i++) {
@@ -3858,7 +3858,7 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 			vmdq_rx_conf->pool_map[i].pools =
 				1 << (i % vmdq_rx_conf->nb_queue_pools);
 		}
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			vmdq_rx_conf->dcb_tc[i] = i % num_tcs;
 			vmdq_tx_conf->dcb_tc[i] = i % num_tcs;
 		}
@@ -3866,8 +3866,8 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 		/* set DCB mode of RX and TX of multiple queues */
 		eth_conf->rxmode.mq_mode =
 				(enum rte_eth_rx_mq_mode)
-					(rx_mq_mode & ETH_MQ_RX_VMDQ_DCB);
-		eth_conf->txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+					(rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB);
+		eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
 	} else {
 		struct rte_eth_dcb_rx_conf *rx_conf =
 				&eth_conf->rx_adv_conf.dcb_rx_conf;
@@ -3883,23 +3883,23 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 		rx_conf->nb_tcs = num_tcs;
 		tx_conf->nb_tcs = num_tcs;
 
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			rx_conf->dcb_tc[i] = i % num_tcs;
 			tx_conf->dcb_tc[i] = i % num_tcs;
 		}
 
 		eth_conf->rxmode.mq_mode =
 				(enum rte_eth_rx_mq_mode)
-					(rx_mq_mode & ETH_MQ_RX_DCB_RSS);
+					(rx_mq_mode & RTE_ETH_MQ_RX_DCB_RSS);
 		eth_conf->rx_adv_conf.rss_conf = rss_conf;
-		eth_conf->txmode.mq_mode = ETH_MQ_TX_DCB;
+		eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_DCB;
 	}
 
 	if (pfc_en)
 		eth_conf->dcb_capability_en =
-				ETH_DCB_PG_SUPPORT | ETH_DCB_PFC_SUPPORT;
+				RTE_ETH_DCB_PG_SUPPORT | RTE_ETH_DCB_PFC_SUPPORT;
 	else
-		eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
+		eth_conf->dcb_capability_en = RTE_ETH_DCB_PG_SUPPORT;
 
 	return 0;
 }
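
For reference, the structure get_eth_dcb_conf() builds can be reduced to a few lines. A minimal sketch, assuming a port that supports DCB with four traffic classes; the real function above also fills the dcb_tc[] priority-to-TC maps and handles the VMDq+DCB variants:

  #include <rte_ethdev.h>

  static int
  configure_dcb_4tc(uint16_t port_id, uint16_t nb_q)
  {
  	struct rte_eth_conf conf = {
  		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_DCB_RSS },
  		.txmode = { .mq_mode = RTE_ETH_MQ_TX_DCB },
  		.dcb_capability_en = RTE_ETH_DCB_PG_SUPPORT,
  	};

  	conf.rx_adv_conf.dcb_rx_conf.nb_tcs = RTE_ETH_4_TCS;
  	conf.tx_adv_conf.dcb_tx_conf.nb_tcs = RTE_ETH_4_TCS;
  	return rte_eth_dev_configure(port_id, nb_q, nb_q, &conf);
  }
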
@@ -3928,7 +3928,7 @@ init_port_dcb_config(portid_t pid,
 	retval = get_eth_dcb_conf(pid, &port_conf, dcb_mode, num_tcs, pfc_en);
 	if (retval < 0)
 		return retval;
-	port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	/* re-configure the device . */
 	retval = rte_eth_dev_configure(pid, nb_rxq, nb_rxq, &port_conf);
@@ -3978,7 +3978,7 @@ init_port_dcb_config(portid_t pid,
 
 	rxtx_port_config(rte_port);
 	/* VLAN filter */
-	rte_port->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	rte_port->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	for (i = 0; i < RTE_DIM(vlan_tags); i++)
 		rx_vft_set(pid, vlan_tags[i], 1);
 
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index dd8f27a296b6..697f1bf8cac6 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -465,7 +465,7 @@ extern lcoreid_t bitrate_lcore_id;
 extern uint8_t bitrate_enabled;
 #endif
 
-extern struct rte_fdir_conf fdir_conf;
+extern struct rte_eth_fdir_conf fdir_conf;
 
 extern uint32_t max_rx_pkt_len;
 
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index e45f8840c91c..9eb7992815e8 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -354,11 +354,11 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	tx_offloads = txp->dev_conf.txmode.offloads;
 	vlan_tci = txp->tx_vlan_id;
 	vlan_tci_outer = txp->tx_vlan_id_outer;
-	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ol_flags = PKT_TX_VLAN_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		ol_flags |= PKT_TX_QINQ_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 
 	/*
diff --git a/app/test/test_ethdev_link.c b/app/test/test_ethdev_link.c
index ee11987bae28..6248aea49abd 100644
--- a/app/test/test_ethdev_link.c
+++ b/app/test/test_ethdev_link.c
@@ -14,10 +14,10 @@ test_link_status_up_default(void)
 {
 	int ret = 0;
 	struct rte_eth_link link_status = {
-		.link_speed = ETH_SPEED_NUM_2_5G,
-		.link_status = ETH_LINK_UP,
-		.link_autoneg = ETH_LINK_AUTONEG,
-		.link_duplex = ETH_LINK_FULL_DUPLEX
+		.link_speed = RTE_ETH_SPEED_NUM_2_5G,
+		.link_status = RTE_ETH_LINK_UP,
+		.link_autoneg = RTE_ETH_LINK_AUTONEG,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -27,9 +27,9 @@ test_link_status_up_default(void)
 	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg",
 		text, strlen(text), "Invalid default link status string");
 
-	link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link_status.link_autoneg = ETH_LINK_FIXED;
-	link_status.link_speed = ETH_SPEED_NUM_10M,
+	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link_status.link_autoneg = RTE_ETH_LINK_FIXED;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_10M;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #2: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -37,7 +37,7 @@ test_link_status_up_default(void)
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
-	link_status.link_speed = ETH_SPEED_NUM_UNKNOWN;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -45,7 +45,7 @@ test_link_status_up_default(void)
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
-	link_status.link_speed = ETH_SPEED_NUM_NONE;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -54,9 +54,9 @@ test_link_status_up_default(void)
 		"string with HDX");
 
 	/* test max str len */
-	link_status.link_speed = ETH_SPEED_NUM_200G;
-	link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link_status.link_autoneg = ETH_LINK_AUTONEG;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_200G;
+	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link_status.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #4:len = %d, %s\n", ret, text);
 	RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
@@ -69,10 +69,10 @@ test_link_status_down_default(void)
 {
 	int ret = 0;
 	struct rte_eth_link link_status = {
-		.link_speed = ETH_SPEED_NUM_2_5G,
-		.link_status = ETH_LINK_DOWN,
-		.link_autoneg = ETH_LINK_AUTONEG,
-		.link_duplex = ETH_LINK_FULL_DUPLEX
+		.link_speed = RTE_ETH_SPEED_NUM_2_5G,
+		.link_status = RTE_ETH_LINK_DOWN,
+		.link_autoneg = RTE_ETH_LINK_AUTONEG,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -90,9 +90,9 @@ test_link_status_invalid(void)
 	int ret = 0;
 	struct rte_eth_link link_status = {
 		.link_speed = 55555,
-		.link_status = ETH_LINK_UP,
-		.link_autoneg = ETH_LINK_AUTONEG,
-		.link_duplex = ETH_LINK_FULL_DUPLEX
+		.link_status = RTE_ETH_LINK_UP,
+		.link_autoneg = RTE_ETH_LINK_AUTONEG,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -116,21 +116,21 @@ test_link_speed_all_values(void)
 		const char *value;
 		uint32_t link_speed;
 	} speed_str_map[] = {
-		{ "None",   ETH_SPEED_NUM_NONE },
-		{ "10 Mbps",  ETH_SPEED_NUM_10M },
-		{ "100 Mbps", ETH_SPEED_NUM_100M },
-		{ "1 Gbps",   ETH_SPEED_NUM_1G },
-		{ "2.5 Gbps", ETH_SPEED_NUM_2_5G },
-		{ "5 Gbps",   ETH_SPEED_NUM_5G },
-		{ "10 Gbps",  ETH_SPEED_NUM_10G },
-		{ "20 Gbps",  ETH_SPEED_NUM_20G },
-		{ "25 Gbps",  ETH_SPEED_NUM_25G },
-		{ "40 Gbps",  ETH_SPEED_NUM_40G },
-		{ "50 Gbps",  ETH_SPEED_NUM_50G },
-		{ "56 Gbps",  ETH_SPEED_NUM_56G },
-		{ "100 Gbps", ETH_SPEED_NUM_100G },
-		{ "200 Gbps", ETH_SPEED_NUM_200G },
-		{ "Unknown",  ETH_SPEED_NUM_UNKNOWN },
+		{ "None",   RTE_ETH_SPEED_NUM_NONE },
+		{ "10 Mbps",  RTE_ETH_SPEED_NUM_10M },
+		{ "100 Mbps", RTE_ETH_SPEED_NUM_100M },
+		{ "1 Gbps",   RTE_ETH_SPEED_NUM_1G },
+		{ "2.5 Gbps", RTE_ETH_SPEED_NUM_2_5G },
+		{ "5 Gbps",   RTE_ETH_SPEED_NUM_5G },
+		{ "10 Gbps",  RTE_ETH_SPEED_NUM_10G },
+		{ "20 Gbps",  RTE_ETH_SPEED_NUM_20G },
+		{ "25 Gbps",  RTE_ETH_SPEED_NUM_25G },
+		{ "40 Gbps",  RTE_ETH_SPEED_NUM_40G },
+		{ "50 Gbps",  RTE_ETH_SPEED_NUM_50G },
+		{ "56 Gbps",  RTE_ETH_SPEED_NUM_56G },
+		{ "100 Gbps", RTE_ETH_SPEED_NUM_100G },
+		{ "200 Gbps", RTE_ETH_SPEED_NUM_200G },
+		{ "Unknown",  RTE_ETH_SPEED_NUM_UNKNOWN },
 		{ "Invalid",   50505 }
 	};
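
As a companion to these unit tests, typical production usage of the API under test looks like this (a sketch, assuming an already started port):

  #include <stdio.h>
  #include <rte_ethdev.h>

  static void
  print_link(uint16_t port_id)
  {
  	struct rte_eth_link link;
  	char text[RTE_ETH_LINK_MAX_STR_LEN];

  	if (rte_eth_link_get_nowait(port_id, &link) == 0 &&
  	    rte_eth_link_to_str(text, sizeof(text), &link) > 0)
  		printf("Port %u: %s\n", port_id, text);
  }
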
 
diff --git a/app/test/test_event_eth_rx_adapter.c b/app/test/test_event_eth_rx_adapter.c
index add4d8a67821..a09253e91814 100644
--- a/app/test/test_event_eth_rx_adapter.c
+++ b/app/test/test_event_eth_rx_adapter.c
@@ -103,7 +103,7 @@ port_init_rx_intr(uint16_t port, struct rte_mempool *mp)
 {
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_NONE,
+			.mq_mode = RTE_ETH_MQ_RX_NONE,
 		},
 		.intr_conf = {
 			.rxq = 1,
@@ -118,7 +118,7 @@ port_init(uint16_t port, struct rte_mempool *mp)
 {
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_NONE,
+			.mq_mode = RTE_ETH_MQ_RX_NONE,
 		},
 	};
 
diff --git a/app/test/test_kni.c b/app/test/test_kni.c
index 96733554b6c4..40ab0d5c4ca4 100644
--- a/app/test/test_kni.c
+++ b/app/test/test_kni.c
@@ -74,7 +74,7 @@ static const struct rte_eth_txconf tx_conf = {
 
 static const struct rte_eth_conf port_conf = {
 	.txmode = {
-		.mq_mode = ETH_DCB_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 5388d18125a6..8a9ef851789f 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -134,11 +134,11 @@ static uint16_t vlan_id = 0x100;
 
 static struct rte_eth_conf default_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 189d2430f27e..351129de2f9b 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -107,11 +107,11 @@ static struct link_bonding_unittest_params test_params  = {
 
 static struct rte_eth_conf default_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index e7bb0497b663..f9eae9397386 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -52,7 +52,7 @@ struct slave_conf {
 
 	struct rte_eth_rss_conf rss_conf;
 	uint8_t rss_key[40];
-	struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
 
 	uint8_t is_slave;
 	struct rte_ring *rxtx_queue[RXTX_QUEUE_COUNT];
@@ -61,7 +61,7 @@ struct slave_conf {
 struct link_bonding_rssconf_unittest_params {
 	uint8_t bond_port_id;
 	struct rte_eth_dev_info bond_dev_info;
-	struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
 	struct slave_conf slave_ports[SLAVE_COUNT];
 
 	struct rte_mempool *mbuf_pool;
@@ -80,27 +80,27 @@ static struct link_bonding_rssconf_unittest_params test_params  = {
  */
 static struct rte_eth_conf default_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
 
 static struct rte_eth_conf rss_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IPV6,
+			.rss_hf = RTE_ETH_RSS_IPV6,
 		},
 	},
 	.lpbk_mode = 0,
@@ -207,13 +207,13 @@ bond_slaves(void)
 static int
 reta_set(uint16_t port_id, uint8_t value, int reta_size)
 {
-	struct rte_eth_rss_reta_entry64 reta_conf[512/RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[512/RTE_ETH_RETA_GROUP_SIZE];
 	int i, j;
 
-	for (i = 0; i < reta_size / RTE_RETA_GROUP_SIZE; i++) {
+	for (i = 0; i < reta_size / RTE_ETH_RETA_GROUP_SIZE; i++) {
 		/* select all fields to set */
 		reta_conf[i].mask = ~0LL;
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			reta_conf[i].reta[j] = value;
 	}
 
@@ -232,8 +232,8 @@ reta_check_synced(struct slave_conf *port)
 	for (i = 0; i < test_params.bond_dev_info.reta_size;
 			i++) {
 
-		int index = i / RTE_RETA_GROUP_SIZE;
-		int shift = i % RTE_RETA_GROUP_SIZE;
+		int index = i / RTE_ETH_RETA_GROUP_SIZE;
+		int shift = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (port->reta_conf[index].reta[shift] !=
 				test_params.bond_reta_conf[index].reta[shift])
@@ -251,7 +251,7 @@ static int
 bond_reta_fetch(void) {
 	unsigned j;
 
-	for (j = 0; j < test_params.bond_dev_info.reta_size / RTE_RETA_GROUP_SIZE;
+	for (j = 0; j < test_params.bond_dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE;
 			j++)
 		test_params.bond_reta_conf[j].mask = ~0LL;
 
@@ -268,7 +268,7 @@ static int
 slave_reta_fetch(struct slave_conf *port) {
 	unsigned j;
 
-	for (j = 0; j < port->dev_info.reta_size / RTE_RETA_GROUP_SIZE; j++)
+	for (j = 0; j < port->dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE; j++)
 		port->reta_conf[j].mask = ~0LL;
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_rss_reta_query(port->port_id,
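
The same RETA pattern as it would look in application code after the rename; a sketch only, with a round-robin queue assignment chosen for illustration (assumes the usual ethdev and string.h headers):

/* Spread the redirection table round-robin across nb_queues Rx queues. */
static int
reta_spread(uint16_t port_id, uint16_t reta_size, uint16_t nb_queues)
{
	struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < reta_size; i++) {
		/* select every field of the group */
		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = ~0LL;
		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
				i % nb_queues;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
}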
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index a3b4f52c65e6..1df86ce080e5 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -62,11 +62,11 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 1,  /* enable loopback */
 };
@@ -155,7 +155,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -822,7 +822,7 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
 		/* bulk alloc rx, full-featured tx */
 		tx_conf.tx_rs_thresh = 32;
 		tx_conf.tx_free_thresh = 32;
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 		return 0;
 	} else if (!strcmp(mode, "hybrid")) {
 		/* bulk alloc rx, vector tx
@@ -831,13 +831,13 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
 		 */
 		tx_conf.tx_rs_thresh = 32;
 		tx_conf.tx_free_thresh = 32;
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 		return 0;
 	} else if (!strcmp(mode, "full")) {
 		/* full feature rx,tx pair */
 		tx_conf.tx_rs_thresh = 32;
 		tx_conf.tx_free_thresh = 32;
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 		return 0;
 	}
 
diff --git a/app/test/virtual_pmd.c b/app/test/virtual_pmd.c
index 7e15b47eb0fb..d9f2e4f66bde 100644
--- a/app/test/virtual_pmd.c
+++ b/app/test/virtual_pmd.c
@@ -53,7 +53,7 @@ static int  virtual_ethdev_stop(struct rte_eth_dev *eth_dev __rte_unused)
 	void *pkt = NULL;
 	struct virtual_ethdev_private *prv = eth_dev->data->dev_private;
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 0;
 	while (rte_ring_dequeue(prv->rx_queue, &pkt) != -ENOENT)
 		rte_pktmbuf_free(pkt);
@@ -168,7 +168,7 @@ virtual_ethdev_link_update_success(struct rte_eth_dev *bonded_eth_dev,
 		int wait_to_complete __rte_unused)
 {
 	if (!bonded_eth_dev->data->dev_started)
-		bonded_eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		bonded_eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -562,9 +562,9 @@ virtual_ethdev_create(const char *name, struct rte_ether_addr *mac_addr,
 	eth_dev->data->nb_rx_queues = (uint16_t)1;
 	eth_dev->data->nb_tx_queues = (uint16_t)1;
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
-	eth_dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
-	eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	eth_dev->data->mac_addrs = rte_zmalloc(name, RTE_ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL)
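
And the application-side view of the same link fields, as a short sketch with the renamed constants (port_id is a placeholder):

uint16_t port_id = 0;	/* placeholder */
struct rte_eth_link link;

if (rte_eth_link_get_nowait(port_id, &link) == 0 &&
		link.link_status == RTE_ETH_LINK_UP)
	printf("Port %u: up, %u Mbps, %s-duplex\n", port_id,
			link.link_speed,
			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
			"full" : "half");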
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 53560d3830d7..1c0ea988f239 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -42,7 +42,7 @@ Features of the OCTEON cnxk SSO PMD are:
 - HW managed packets enqueued from ethdev to eventdev exposed through event eth
   RX adapter.
 - N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
   capability while maintaining receive packet order.
 - Full Rx/Tx offload support defined through ethdev queue configuration.
 - HW managed event vectorization on CN10K for packets enqueued from ethdev to
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 11fbebfcd243..0fa57abfa3e0 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -35,7 +35,7 @@ Features of the OCTEON TX2 SSO PMD are:
 - HW managed packets enqueued from ethdev to eventdev exposed through event eth
   RX adapter.
 - N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
   capability while maintaining receive packet order.
 - Full Rx/Tx offload support defined through ethdev queue config.
 
diff --git a/doc/guides/nics/af_packet.rst b/doc/guides/nics/af_packet.rst
index bdd6e7263c85..54feffdef4bd 100644
--- a/doc/guides/nics/af_packet.rst
+++ b/doc/guides/nics/af_packet.rst
@@ -70,5 +70,5 @@ Features and Limitations
 ------------------------
 
 The PMD will re-insert the VLAN tag transparently to the packet if the kernel
-strips it, as long as the ``DEV_RX_OFFLOAD_VLAN_STRIP`` is not enabled by the
+strips it, as long as the ``RTE_ETH_RX_OFFLOAD_VLAN_STRIP`` is not enabled by the
 application.
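
In application terms, that offload is requested at configure time; a hedged sketch (a single queue pair is assumed for brevity):

/* Enable VLAN stripping so the PMD does not re-insert kernel-stripped tags. */
uint16_t port_id = 0;	/* placeholder */
struct rte_eth_conf conf = {0};

conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
if (rte_eth_dev_configure(port_id, 1, 1, &conf) != 0)
	printf("port %u configure failed\n", port_id);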
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index aa6032889a55..b3d10f30dc77 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -877,21 +877,21 @@ processing. This improved performance is derived from a number of optimizations:
     * TX: only the following reduced set of transmit offloads is supported in
       vector mode::
 
-       DEV_TX_OFFLOAD_MBUF_FAST_FREE
+       RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
 
     * RX: only the following reduced set of receive offloads is supported in
       vector mode (note that jumbo MTU is allowed only when the MTU setting
-      does not require `DEV_RX_OFFLOAD_SCATTER` to be enabled)::
-
-       DEV_RX_OFFLOAD_VLAN_STRIP
-       DEV_RX_OFFLOAD_KEEP_CRC
-       DEV_RX_OFFLOAD_IPV4_CKSUM
-       DEV_RX_OFFLOAD_UDP_CKSUM
-       DEV_RX_OFFLOAD_TCP_CKSUM
-       DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
-       DEV_RX_OFFLOAD_OUTER_UDP_CKSUM
-       DEV_RX_OFFLOAD_RSS_HASH
-       DEV_RX_OFFLOAD_VLAN_FILTER
+      does not require `RTE_ETH_RX_OFFLOAD_SCATTER` to be enabled)::
+
+       RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+       RTE_ETH_RX_OFFLOAD_KEEP_CRC
+       RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+       RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+       RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+       RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+       RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+       RTE_ETH_RX_OFFLOAD_RSS_HASH
+       RTE_ETH_RX_OFFLOAD_VLAN_FILTER
 
 The BNXT Vector PMD is enabled in DPDK builds by default. The decision to enable
 vector processing is made at run-time when the port is started; if no transmit
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index 91bdcd065a95..0209730b904a 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -432,7 +432,7 @@ Limitations
 .. code-block:: console
 
      vlan_offload = rte_eth_dev_get_vlan_offload(port);
-     vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
+     vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
      rte_eth_dev_set_vlan_offload(port, vlan_offload);
 
 Another alternative is to modify the adapter's ingress VLAN rewrite mode so that
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 8dd421ca013b..b48d9dcb9591 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -30,7 +30,7 @@ Speed capabilities
 
 Supports getting the speed capabilities that the current device is capable of.
 
-* **[provides] rte_eth_dev_info**: ``speed_capa:ETH_LINK_SPEED_*``.
+* **[provides] rte_eth_dev_info**: ``speed_capa:RTE_ETH_LINK_SPEED_*``.
 * **[related]  API**: ``rte_eth_dev_info_get()``.
 
 
@@ -101,11 +101,11 @@ Supports Rx interrupts.
 Lock-free Tx queue
 ------------------
 
-If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+If a PMD advertises itself as RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
 invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
 
-* **[uses]    rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
+* **[uses]    rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
 * **[related]  API**: ``rte_eth_tx_burst()``.
 
 
@@ -117,8 +117,8 @@ Fast mbuf free
 Supports optimization for fast release of mbufs following successful Tx.
 Requires that, per queue, all mbufs come from the same mempool and have refcnt = 1.
 
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
-* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
 
 
 .. _nic_features_free_tx_mbuf_on_demand:
@@ -177,7 +177,7 @@ Scattered Rx
 
 Supports receiving segmented mbufs.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SCATTER``.
 * **[implements] datapath**: ``Scattered Rx function``.
 * **[implements] rte_eth_dev_data**: ``scattered_rx``.
 * **[provides]   eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -205,12 +205,12 @@ LRO
 
 Supports Large Receive Offload.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
   ``dev_conf.rxmode.max_lro_pkt_size``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
 * **[provides]   rte_eth_dev_info**: ``max_lro_pkt_size``.
 
 
@@ -221,12 +221,12 @@ TSO
 
 Supports TCP Segmentation Offloading.
 
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_TCP_TSO``.
 * **[uses]       rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:`` ``PKT_TX_TCP_SEG``, ``PKT_TX_IPV4``, ``PKT_TX_IPV6``, ``PKT_TX_IP_CKSUM``.
 * **[uses]       mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
 * **[implements] datapath**: ``TSO functionality``.
-* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_TCP_TSO,RTE_ETH_TX_OFFLOAD_UDP_TSO``.
 
 
 .. _nic_features_promiscuous_mode:
@@ -287,9 +287,9 @@ RSS hash
 
 Supports RSS hashing on RX.
 
-* **[uses]     user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_RSS_FLAG``.
+* **[uses]     user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_RSS_FLAG``.
 * **[uses]     user config**: ``dev_conf.rx_adv_conf.rss_conf``.
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
 * **[provides] rte_eth_dev_info**: ``flow_type_rss_offloads``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
 
@@ -302,7 +302,7 @@ Inner RSS
 Supports RX RSS hashing on Inner headers.
 
 * **[uses]    rte_flow_action_rss**: ``level``.
-* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
 
 
@@ -339,7 +339,7 @@ VMDq
 
 Supports Virtual Machine Device Queues (VMDq).
 
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_VMDQ_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_VMDQ_FLAG``.
 * **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
 * **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_rx_conf``.
 * **[uses] user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -362,7 +362,7 @@ DCB
 
 Supports Data Center Bridging (DCB).
 
-* **[uses]       user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_DCB_FLAG``.
+* **[uses]       user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_DCB_FLAG``.
 * **[uses]       user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
 * **[uses]       user config**: ``dev_conf.rx_adv_conf.dcb_rx_conf``.
 * **[uses]       user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -378,7 +378,7 @@ VLAN filter
 
 Supports filtering of a VLAN Tag identifier.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_FILTER``.
 * **[implements] eth_dev_ops**: ``vlan_filter_set``.
 * **[related]    API**: ``rte_eth_dev_vlan_filter()``.
 
@@ -416,13 +416,13 @@ Supports inline crypto processing defined by rte_security library to perform cry
 operations of the security protocol while the packet is received in the NIC. The NIC is not aware
 of the protocol operations. See the Security library and PMD documentation for more details.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[uses]       mbuf**: ``mbuf.l2_len``.
 * **[implements] rte_security_ops**: ``session_create``, ``session_update``,
   ``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
   ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO``
@@ -438,14 +438,14 @@ protocol processing for the security protocol (e.g. IPsec, MACSEC) while the
 packet is received at the NIC. The NIC is capable of understanding the security
 protocol operations. See security library and PMD documentation for more details.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[uses]       mbuf**: ``mbuf.l2_len``.
 * **[implements] rte_security_ops**: ``session_create``, ``session_update``,
   ``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``get_userdata``,
   ``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
   ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL``
@@ -459,7 +459,7 @@ CRC offload
 Supports CRC stripping by hardware.
 A PMD is assumed to support CRC stripping by default. A PMD should advertise if it supports keeping the CRC.
 
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_KEEP_CRC``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_KEEP_CRC``.
 
 
 .. _nic_features_vlan_offload:
@@ -469,13 +469,13 @@ VLAN offload
 
 Supports VLAN offload to hardware.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_STRIP,RTE_ETH_RX_OFFLOAD_VLAN_FILTER,RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:PKT_TX_VLAN``, ``mbuf.vlan_tci``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN`` ``mbuf.vlan_tci``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_VLAN_STRIP``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
   ``rte_eth_dev_get_vlan_offload()``.
 
@@ -487,14 +487,14 @@ QinQ offload
 
 Supports QinQ (queue in queue) offload.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ``, ``mbuf.vlan_tci_outer``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.ol_flags:PKT_RX_QINQ``,
   ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN``
   ``mbuf.vlan_tci``, ``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
 
 
 .. _nic_features_fec:
@@ -508,7 +508,7 @@ information to correct the bit errors generated during data packet transmission
 improves signal quality but also brings a delay to signals. This function can be enabled or disabled as required.
 
 * **[implements] eth_dev_ops**: ``fec_get_capability``, ``fec_get``, ``fec_set``.
-* **[provides]   rte_eth_fec_capa**: ``speed:ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
+* **[provides]   rte_eth_fec_capa**: ``speed:RTE_ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
 * **[related]    API**: ``rte_eth_fec_get_capability()``, ``rte_eth_fec_get()``, ``rte_eth_fec_set()``.
 
 
@@ -519,16 +519,16 @@ L3 checksum offload
 
 Supports L3 checksum offload.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[uses]     mbuf**: ``mbuf.l2_len``, ``mbuf.l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
   ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
   ``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
 
 
 .. _nic_features_l4_checksum_offload:
@@ -538,8 +538,8 @@ L4 checksum offload
 
 Supports L4 checksum offload.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -547,8 +547,8 @@ Supports L4 checksum offload.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
   ``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
 
 .. _nic_features_hw_timestamp:
 
@@ -557,10 +557,10 @@ Timestamp offload
 
 Supports Timestamp.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_TIMESTAMP``.
 * **[provides] mbuf**: ``mbuf.timestamp``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
 * **[related] eth_dev_ops**: ``read_clock``.
 
 .. _nic_features_macsec_offload:
@@ -570,11 +570,11 @@ MACsec offload
 
 Supports MACsec.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
 
 
 .. _nic_features_inner_l3_checksum:
@@ -584,16 +584,16 @@ Inner L3 checksum
 
 Supports inner packet L3 checksum.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_IP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
 
 .. _nic_features_inner_l4_checksum:
@@ -603,15 +603,15 @@ Inner L4 checksum
 
 Supports inner packet L4 checksum.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_OUTER_L4_CKSUM_BAD`` | ``PKT_RX_OUTER_L4_CKSUM_GOOD`` | ``PKT_RX_OUTER_L4_CKSUM_INVALID``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
   ``mbuf.ol_flags:PKT_TX_OUTER_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
 
 
 .. _nic_features_packet_type_parsing:
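
Across the features above, the probe-then-enable pattern is unchanged apart from the prefix; a brief sketch, where the two offloads picked are arbitrary examples:

/* Collect only the offloads the port actually advertises. */
static void
pick_offloads(uint16_t port_id, struct rte_eth_conf *conf)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return;
	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_CHECKSUM)
		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
}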
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index ed6afd62703d..bba53f5a64ee 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -78,11 +78,11 @@ To enable via ``RX_OLFLAGS`` use ``RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y``.
 To guarantee the constraint, the following capabilities in ``dev_conf.rxmode.offloads``
 will be checked:
 
-*   ``DEV_RX_OFFLOAD_VLAN_EXTEND``
+*   ``RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``
 
-*   ``DEV_RX_OFFLOAD_CHECKSUM``
+*   ``RTE_ETH_RX_OFFLOAD_CHECKSUM``
 
-*   ``DEV_RX_OFFLOAD_HEADER_SPLIT``
+*   ``RTE_ETH_RX_OFFLOAD_HEADER_SPLIT``
 
 *   ``fdir_conf->mode``
 
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 2efdd1a41bb4..a1e236ad75e5 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -216,21 +216,21 @@ For example,
     *   If the max number of VFs (max_vfs) is set in the range of 1 to 32:
 
         If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then there are a total of 32
-        pools (ETH_32_POOLS), and each VF could have 4 Rx queues;
+        pools (RTE_ETH_32_POOLS), and each VF could have 4 Rx queues;
 
         If the number of Rx queues is specified as 2 (``--rxq=2`` in testpmd), then there are a total of 32
-        pools (ETH_32_POOLS), and each VF could have 2 Rx queues;
+        pools (RTE_ETH_32_POOLS), and each VF could have 2 Rx queues;
 
     *   If the max number of VFs (max_vfs) is in the range of 33 to 64:
 
         If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then an error message is expected
         as ``rxq`` is not valid in this case;
 
-        If the number of rxq is 2 (``--rxq=2`` in testpmd), then there is totally 64 pools (ETH_64_POOLS),
+        If the number of rxq is 2 (``--rxq=2`` in testpmd), then there are a total of 64 pools (RTE_ETH_64_POOLS),
         and each VF has 2 Rx queues;
 
-    On host, to enable VF RSS functionality, rx mq mode should be set as ETH_MQ_RX_VMDQ_RSS
-    or ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
+    On the host, to enable VF RSS functionality, the Rx mq mode should be set to RTE_ETH_MQ_RX_VMDQ_RSS
+    or RTE_ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
     The VF RSS information, such as the hash function, RSS key and RSS key length, also needs to be configured.
 
 .. note::
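
A sketch of the host-side configuration described in the paragraph above; the hash selection is illustrative, and a NULL key keeps the PMD default:

/* Enable VF RSS: VMDq+RSS Rx mode plus basic RSS hash configuration. */
struct rte_eth_conf conf = {
	.rxmode = { .mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS },
	.rx_adv_conf = {
		.rss_conf = {
			.rss_key = NULL,	/* keep the default key */
			.rss_hf = RTE_ETH_RSS_IP,
		},
	},
};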
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index 20a74b9b5bcd..148d2f5fc2be 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -89,13 +89,13 @@ Other features are supported using optional MACRO configuration. They include:
 
 To guarantee the constraint, capabilities in dev_conf.rxmode.offloads will be checked:
 
-*   DEV_RX_OFFLOAD_VLAN_STRIP
+*   RTE_ETH_RX_OFFLOAD_VLAN_STRIP
 
-*   DEV_RX_OFFLOAD_VLAN_EXTEND
+*   RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
 
-*   DEV_RX_OFFLOAD_CHECKSUM
+*   RTE_ETH_RX_OFFLOAD_CHECKSUM
 
-*   DEV_RX_OFFLOAD_HEADER_SPLIT
+*   RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
 
 *   dev_conf
 
@@ -163,13 +163,13 @@ l3fwd
 ~~~~~
 
 When running l3fwd with vPMD, there is one thing to note.
-In the configuration, ensure that DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
+In the configuration, ensure that RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
 Otherwise, by default, RX vPMD is disabled.
 
 load_balancer
 ~~~~~~~~~~~~~
 
-As in the case of l3fwd, to enable vPMD, do NOT set DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
+As in the case of l3fwd, to enable vPMD, do NOT set RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
 In addition, for improved performance, use -bsz "(32,32),(64,64),(32,32)" in load_balancer to avoid using the default burst size of 144.
 
 
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index e4f58c899031..cc1726207f6c 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -371,7 +371,7 @@ Limitations
 
 - CRC:
 
-  - ``DEV_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
+  - ``RTE_ETH_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
     for some NICs (such as ConnectX-6 Dx, ConnectX-6 Lx, and BlueField-2).
     The capability bit ``scatter_fcs_w_decap_disable`` shows NIC support.
 
@@ -607,7 +607,7 @@ Driver options
   small-packet traffic.
 
   When MPRQ is enabled, MTU can be larger than the size of
-  user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
+  user-provided mbuf even if RTE_ETH_RX_OFFLOAD_SCATTER isn't enabled. PMD will
   configure a stride size large enough to accommodate the MTU as long as the
   device allows. Note that this can waste system memory compared to enabling Rx
   scatter and multi-segment packet.
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 3ce696b605d1..681010d9ed7d 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -275,7 +275,7 @@ An example utility for eBPF instruction generation in the format of C arrays wil
 be added in next releases
 
 TAP reports on supported RSS functions as part of dev_infos_get callback:
-``ETH_RSS_IP``, ``ETH_RSS_UDP`` and ``ETH_RSS_TCP``.
+``RTE_ETH_RSS_IP``, ``RTE_ETH_RSS_UDP`` and ``RTE_ETH_RSS_TCP``.
 **Known limitation:** TAP supports all of the above hash functions together
 and not in partial combinations.
 
diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
index 7bff0aef0b74..9b2c31a2f0bc 100644
--- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
@@ -194,11 +194,11 @@ To segment an outgoing packet, an application must:
 
    - the bit mask of required GSO types. The GSO library uses the same macros as
      those that describe a physical device's TX offloading capabilities (i.e.
-     ``DEV_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
+     ``RTE_ETH_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
      wants to segment TCP/IPv4 packets, it should set gso_types to
-     ``DEV_TX_OFFLOAD_TCP_TSO``. The only other supported values currently
-     supported for gso_types are ``DEV_TX_OFFLOAD_VXLAN_TNL_TSO``, and
-     ``DEV_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
+     ``RTE_ETH_TX_OFFLOAD_TCP_TSO``. The only other values currently
+     supported for gso_types are ``RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO`` and
+     ``RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
      allowed.
 
    - a flag, that indicates whether the IPv4 headers of output segments should
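
A sketch of a GSO context filled per the list above (rte_gso.h); the mempool handles, the MSS of 1400 and the choice of TSO types are illustrative assumptions:

/* Software-segment TCP/IPv4 and VXLAN-encapsulated TCP packets. */
static int
gso_segment_pkt(struct rte_mbuf *pkt, struct rte_mempool *direct_pool,
		struct rte_mempool *indirect_pool, struct rte_mbuf **segs,
		uint16_t nb_segs_max)
{
	struct rte_gso_ctx gso_ctx = {
		.direct_pool = direct_pool,
		.indirect_pool = indirect_pool,
		.gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO |
				RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO,
		.gso_size = 1400,		/* illustrative MSS */
		.flag = RTE_GSO_FLAG_IPID_FIXED, /* fixed IPv4 IDs in segments */
	};

	return rte_gso_segment(pkt, &gso_ctx, segs, nb_segs_max);
}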
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 2f190b40e43a..dc6186a44ae2 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -137,7 +137,7 @@ a vxlan-encapsulated tcp packet:
     mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM
     set out_ip checksum to 0 in the packet
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
 
 - calculate checksum of out_ip and out_udp::
 
@@ -147,8 +147,8 @@ a vxlan-encapsulated tcp packet:
     set out_ip checksum to 0 in the packet
     set out_udp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM
-  and DEV_TX_OFFLOAD_UDP_CKSUM.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+  and RTE_ETH_TX_OFFLOAD_UDP_CKSUM.
 
 - calculate checksum of in_ip::
 
@@ -158,7 +158,7 @@ a vxlan-encapsulated tcp packet:
     set in_ip checksum to 0 in the packet
 
   This is similar to case 1), but l2_len is different. It is supported
-  on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+  on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
   Note that it can only work if outer L4 checksum is 0.
 
 - calculate checksum of in_ip and in_tcp::
@@ -170,8 +170,8 @@ a vxlan-encapsulated tcp packet:
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
   This is similar to case 2), but l2_len is different. It is supported
-  on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM and
-  DEV_TX_OFFLOAD_TCP_CKSUM.
+  on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM and
+  RTE_ETH_TX_OFFLOAD_TCP_CKSUM.
   Note that it can only work if outer L4 checksum is 0.
 
 - segment inner TCP::
@@ -185,7 +185,7 @@ a vxlan-encapsulated tcp packet:
     set in_tcp checksum to pseudo header without including the IP
       payload length using rte_ipv4_phdr_cksum()
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_TCP_TSO.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_TCP_TSO.
   Note that it can only work if outer L4 checksum is 0.
 
 - calculate checksum of out_ip, in_ip, in_tcp::
@@ -200,8 +200,8 @@ a vxlan-encapsulated tcp packet:
     set in_ip checksum to 0 in the packet
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM,
-  DEV_TX_OFFLOAD_UDP_CKSUM and DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
+  RTE_ETH_TX_OFFLOAD_UDP_CKSUM and RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM.
 
 The list of flags and their precise meaning is described in the mbuf API
 documentation (rte_mbuf.h). Also refer to the testpmd source code
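
For instance, the first case above ("calculate checksum of out_ip") in code, with the renamed capability flag; m and out_ip stand for the mbuf and its outer IPv4 header, and the mbuf flag names themselves are untouched by this patch:

/* Hardware IPv4 checksum: needs RTE_ETH_TX_OFFLOAD_IPV4_CKSUM on the port. */
m->l2_len = sizeof(struct rte_ether_hdr);
m->l3_len = sizeof(struct rte_ipv4_hdr);
m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM;
out_ip->hdr_checksum = 0;	/* the NIC fills in the checksum */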
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 0d4ac77a7ccf..68312898448c 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -57,7 +57,7 @@ Whenever needed and appropriate, asynchronous communication should be introduced
 
 Avoiding lock contention is a key issue in a multi-core environment.
 To address this issue, PMDs are designed to work with per-core private resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable.
+For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
 In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
 
 To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
@@ -119,7 +119,7 @@ This is also true for the pipe-line model provided all logical cores used are lo
 
 Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
 
-If the PMD is ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
+If the PMD is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
 concurrently on the same tx queue without SW lock. This PMD feature, found in some NICs, is useful in the following use cases:
 
 *  Remove explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
@@ -127,7 +127,7 @@ concurrently on the same tx queue without SW lock. This PMD feature found in som
 *  In the eventdev use case, avoid dedicating a separate TX core for transmitting, thus
    enabling more scaling as all workers can send the packets.
 
-See `Hardware Offload`_ for ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
+See `Hardware Offload`_ for ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
 
 Device Identification, Ownership and Configuration
 --------------------------------------------------
@@ -311,7 +311,7 @@ The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get(
 The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offloading capabilities.
 Supported offloads can be either per-port or per-queue.
 
-Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
+Offloads are enabled using the existing ``RTE_ETH_TX_OFFLOAD_*`` or ``RTE_ETH_RX_OFFLOAD_*`` flags.
 Any requested offloading by an application must be within the device capabilities.
 Any offloading is disabled by default if it is not set in the parameter
 ``dev_conf->[rt]xmode.offloads`` to ``rte_eth_dev_configure()`` and
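
A compact sketch of that flow: a per-port offload requested at configure time, a per-queue one at queue setup, both masked by the advertised capabilities (queue sizes and offload choices are placeholders):

uint16_t port_id = 0;	/* placeholder */
struct rte_eth_dev_info dev_info;
struct rte_eth_conf conf = {0};
struct rte_eth_txconf txconf;

rte_eth_dev_info_get(port_id, &dev_info);
conf.txmode.offloads = dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO;
rte_eth_dev_configure(port_id, 1, 1, &conf);

txconf = dev_info.default_txconf;
txconf.offloads = dev_info.tx_queue_offload_capa &
		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
rte_eth_tx_queue_setup(port_id, 0, 512,
		rte_eth_dev_socket_id(port_id), &txconf);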
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index fa05fe084500..b507396fb4d7 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1943,23 +1943,23 @@ only matching traffic goes through.
 
 .. table:: RSS
 
-   +---------------+---------------------------------------------+
-   | Field         | Value                                       |
-   +===============+=============================================+
-   | ``func``      | RSS hash function to apply                  |
-   +---------------+---------------------------------------------+
-   | ``level``     | encapsulation level for ``types``           |
-   +---------------+---------------------------------------------+
-   | ``types``     | specific RSS hash types (see ``ETH_RSS_*``) |
-   +---------------+---------------------------------------------+
-   | ``key_len``   | hash key length in bytes                    |
-   +---------------+---------------------------------------------+
-   | ``queue_num`` | number of entries in ``queue``              |
-   +---------------+---------------------------------------------+
-   | ``key``       | hash key                                    |
-   +---------------+---------------------------------------------+
-   | ``queue``     | queue indices to use                        |
-   +---------------+---------------------------------------------+
+   +---------------+-------------------------------------------------+
+   | Field         | Value                                           |
+   +===============+=================================================+
+   | ``func``      | RSS hash function to apply                      |
+   +---------------+-------------------------------------------------+
+   | ``level``     | encapsulation level for ``types``               |
+   +---------------+-------------------------------------------------+
+   | ``types``     | specific RSS hash types (see ``RTE_ETH_RSS_*``) |
+   +---------------+-------------------------------------------------+
+   | ``key_len``   | hash key length in bytes                        |
+   +---------------+-------------------------------------------------+
+   | ``queue_num`` | number of entries in ``queue``                  |
+   +---------------+-------------------------------------------------+
+   | ``key``       | hash key                                        |
+   +---------------+-------------------------------------------------+
+   | ``queue``     | queue indices to use                            |
+   +---------------+-------------------------------------------------+
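+
+   A sketch of the action filled in per this table, spreading matched traffic
+   over two placeholder queues, with the renamed RTE_ETH_RSS_* type bits:
+
+   .. code-block:: c
+
+      uint16_t queues[] = {0, 1};
+      struct rte_flow_action_rss rss = {
+          .func = RTE_ETH_HASH_FUNCTION_DEFAULT,
+          .level = 0,               /* outermost headers */
+          .types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+          .key_len = 0,             /* default key length */
+          .key = NULL,              /* default key */
+          .queue_num = RTE_DIM(queues),
+          .queue = queues,
+      };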
 
 Action: ``PF``
 ^^^^^^^^^^^^^^
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index ad92c16868c1..46c9b51d1bf9 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -569,7 +569,7 @@ created by the application is attached to the security session by the API
 
 For Inline Crypto and Inline protocol offload, device specific defined metadata is
 updated in the mbuf using ``rte_security_set_pkt_metadata()`` if
-``DEV_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
+``RTE_ETH_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
 
 For inline protocol offloaded ingress traffic, the application can register a
 pointer, ``userdata`` , in the security session. When the packet is received,
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 0b4d03fb961f..199c3fa0bd70 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -58,22 +58,16 @@ Deprecation Notices
   ``RTE_ETH_FLOW_MAX`` is one example of the mentioned case: adding a new flow
   type will break the ABI because of the ``flex_mask[RTE_ETH_FLOW_MAX]`` array
   usage in the following public struct hierarchy:
-  ``rte_eth_fdir_flex_conf -> rte_fdir_conf -> rte_eth_conf (in the middle)``.
+  ``rte_eth_fdir_flex_conf -> rte_eth_fdir_conf -> rte_eth_conf (in the middle)``.
   Need to identify this kind of usage and fix it in 20.11, otherwise this blocks
   us from extending the existing enums/defines.
   One solution can be using a fixed size array instead of ``.*MAX.*`` value.
 
-* ethdev: Will add ``RTE_ETH_`` prefix to all ethdev macros/enums in v21.11.
-  Macros will be added for backward compatibility.
-  Backward compatibility macros will be removed on v22.11.
-  A few old backward compatibility macros from 2013 that does not have
-  proper prefix will be removed on v21.11.
-
 * ethdev: The flow director API, including ``rte_eth_conf.fdir_conf`` field,
   and the related structures (``rte_fdir_*`` and ``rte_eth_fdir_*``),
   will be removed in DPDK 20.11.
 
-* ethdev: New offload flags ``DEV_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
+* ethdev: New offload flags ``RTE_ETH_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
   This will allow the application to enable or disable PMDs from updating
   ``rte_mbuf::hash::fdir``.
   This scheme will allow PMDs to avoid writes to ``rte_mbuf`` fields on Rx and
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index ec2a788789f7..9a50c3281dd2 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -350,6 +350,9 @@ ABI Changes
   to be transparent for both users (no changes in user app is required) and
   PMD developers (no changes in PMD is required).
 
+* ethdev: All enums & macros updated to have ``RTE_ETH`` prefix and structures
+  updated to have ``rte_eth`` prefix. DPDK components updated to use new names.
+
 
 Known Issues
 ------------
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 78171b25f96e..782574dd39d5 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -209,12 +209,12 @@ Where:
     device will ensure the ordering. Ordering will be lost when tried in PARALLEL.
 
 *   ``--rxoffload MASK``: RX HW offload capabilities to enable/use on this port
-    (bitmask of DEV_RX_OFFLOAD_* values). It is an optional parameter and
+    (bitmask of RTE_ETH_RX_OFFLOAD_* values). It is an optional parameter and
     allows the user to disable some of the RX HW offload capabilities.
     By default all HW RX offloads are enabled.
 
 *   ``--txoffload MASK``: TX HW offload capabilities to enable/use on this port
-    (bitmask of DEV_TX_OFFLOAD_* values). It is an optional parameter and
+    (bitmask of RTE_ETH_TX_OFFLOAD_* values). It is an optional parameter and
     allows the user to disable some of the TX HW offload capabilities.
     By default all HW TX offloads are enabled.
 
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 8ff7ab85369c..2e1446ee461b 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -537,7 +537,7 @@ The command line options are:
     Set the hexadecimal bitmask of RX multi queue mode which can be enabled.
     The default value is 0x7::
 
-       ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG | ETH_MQ_RX_VMDQ_FLAG
+       RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG
 
 *   ``--record-core-cycles``
 
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index be52e6f72dab..a922988607ef 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -90,20 +90,20 @@ int dpaa_intr_disable(char *if_name);
 struct usdpaa_ioctl_link_status_args_old {
 	/* network device node name */
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link status(ETH_LINK_UP/DOWN) */
+	/* link status(RTE_ETH_LINK_UP/DOWN) */
 	int     link_status;
 };
 
 struct usdpaa_ioctl_link_status_args {
 	/* network device node name */
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link status(ETH_LINK_UP/DOWN) */
+	/* link status(RTE_ETH_LINK_UP/DOWN) */
 	int     link_status;
-	/* link speed (ETH_SPEED_NUM_)*/
+	/* link speed (RTE_ETH_SPEED_NUM_)*/
 	int     link_speed;
-	/* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+	/* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
 	int     link_duplex;
-	/* link autoneg (ETH_LINK_AUTONEG/FIXED)*/
+	/* link autoneg (RTE_ETH_LINK_AUTONEG/FIXED)*/
 	int     link_autoneg;
 
 };
@@ -111,16 +111,16 @@ struct usdpaa_ioctl_link_status_args {
 struct usdpaa_ioctl_update_link_status_args {
 	/* network device node name */
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link status(ETH_LINK_UP/DOWN) */
+	/* link status(RTE_ETH_LINK_UP/DOWN) */
 	int     link_status;
 };
 
 struct usdpaa_ioctl_update_link_speed {
 	/* network device node name*/
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link speed (ETH_SPEED_NUM_)*/
+	/* link speed (RTE_ETH_SPEED_NUM_)*/
 	int     link_speed;
-	/* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+	/* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
 	int     link_duplex;
 };
 
diff --git a/drivers/common/cnxk/roc_npc.h b/drivers/common/cnxk/roc_npc.h
index 65d4bd6edcec..c12400ff5110 100644
--- a/drivers/common/cnxk/roc_npc.h
+++ b/drivers/common/cnxk/roc_npc.h
@@ -154,7 +154,7 @@ enum roc_npc_rss_hash_function {
 struct roc_npc_action_rss {
 	enum roc_npc_rss_hash_function func;
 	uint32_t level;
-	uint64_t types;	       /**< Specific RSS hash types (see ETH_RSS_*). */
+	uint64_t types;	       /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
 	uint32_t key_len;      /**< Hash key length in bytes. */
 	uint32_t queue_num;    /**< Number of entries in @p queue. */
 	const uint8_t *key;    /**< Hash key. */
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index a077376dc0fb..8f778f0c2419 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -93,10 +93,10 @@ static const char *valid_arguments[] = {
 };
 
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(af_packet_logtype, NOTICE);
@@ -290,7 +290,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 static int
 eth_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -320,7 +320,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
 		internals->tx_queue[i].sockfd = -1;
 	}
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
@@ -331,7 +331,7 @@ eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 	const struct rte_eth_rxmode *rxmode = &dev_conf->rxmode;
 	struct pmd_internals *internals = dev->data->dev_private;
 
-	internals->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	internals->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 	return 0;
 }
 
@@ -346,9 +346,9 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_queues = (uint16_t)internals->nb_queues;
 	dev_info->max_tx_queues = (uint16_t)internals->nb_queues;
 	dev_info->min_rx_bufsize = 0;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_VLAN_INSERT;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	return 0;
 }
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index b362ccdcd38c..e156246f24df 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -163,10 +163,10 @@ static const char * const valid_arguments[] = {
 };
 
 static const struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_AUTONEG
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_AUTONEG
 };
 
 /* List which tracks PMDs to facilitate sharing UMEMs across them. */
@@ -652,7 +652,7 @@ eth_af_xdp_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 static int
 eth_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -661,7 +661,7 @@ eth_dev_start(struct rte_eth_dev *dev)
 static int
 eth_dev_stop(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index 377299b14c7a..b618cba3f023 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -736,14 +736,14 @@ eth_ark_dev_info_get(struct rte_eth_dev *dev,
 		.nb_align = ARK_TX_MIN_QUEUE}; /* power of 2 */
 
 	/* ARK PMD supports all line rates, how do we indicate that here ?? */
-	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
-				ETH_LINK_SPEED_10G |
-				ETH_LINK_SPEED_25G |
-				ETH_LINK_SPEED_40G |
-				ETH_LINK_SPEED_50G |
-				ETH_LINK_SPEED_100G);
-
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_TIMESTAMP;
+	dev_info->speed_capa = (RTE_ETH_LINK_SPEED_1G |
+				RTE_ETH_LINK_SPEED_10G |
+				RTE_ETH_LINK_SPEED_25G |
+				RTE_ETH_LINK_SPEED_40G |
+				RTE_ETH_LINK_SPEED_50G |
+				RTE_ETH_LINK_SPEED_100G);
+
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	return 0;
 }
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 5a198f53fce7..f7bfac796c07 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -154,20 +154,20 @@ static struct rte_pci_driver rte_atl_pmd = {
 	.remove = eth_atl_pci_remove,
 };
 
-#define ATL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP \
-			| DEV_RX_OFFLOAD_IPV4_CKSUM \
-			| DEV_RX_OFFLOAD_UDP_CKSUM \
-			| DEV_RX_OFFLOAD_TCP_CKSUM \
-			| DEV_RX_OFFLOAD_MACSEC_STRIP \
-			| DEV_RX_OFFLOAD_VLAN_FILTER)
-
-#define ATL_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT \
-			| DEV_TX_OFFLOAD_IPV4_CKSUM \
-			| DEV_TX_OFFLOAD_UDP_CKSUM \
-			| DEV_TX_OFFLOAD_TCP_CKSUM \
-			| DEV_TX_OFFLOAD_TCP_TSO \
-			| DEV_TX_OFFLOAD_MACSEC_INSERT \
-			| DEV_TX_OFFLOAD_MULTI_SEGS)
+#define ATL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP \
+			| RTE_ETH_RX_OFFLOAD_IPV4_CKSUM \
+			| RTE_ETH_RX_OFFLOAD_UDP_CKSUM \
+			| RTE_ETH_RX_OFFLOAD_TCP_CKSUM \
+			| RTE_ETH_RX_OFFLOAD_MACSEC_STRIP \
+			| RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+
+#define ATL_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT \
+			| RTE_ETH_TX_OFFLOAD_IPV4_CKSUM \
+			| RTE_ETH_TX_OFFLOAD_UDP_CKSUM \
+			| RTE_ETH_TX_OFFLOAD_TCP_CKSUM \
+			| RTE_ETH_TX_OFFLOAD_TCP_TSO \
+			| RTE_ETH_TX_OFFLOAD_MACSEC_INSERT \
+			| RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define SFP_EEPROM_SIZE 0x100
 
@@ -488,7 +488,7 @@ atl_dev_start(struct rte_eth_dev *dev)
 	/* set adapter started */
 	hw->adapter_stopped = 0;
 
-	if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(ERR,
 		"Invalid link_speeds for port %u, fix speed not supported",
 				dev->data->port_id);
@@ -655,18 +655,18 @@ atl_dev_set_link_up(struct rte_eth_dev *dev)
 	uint32_t link_speeds = dev->data->dev_conf.link_speeds;
 	uint32_t speed_mask = 0;
 
-	if (link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		speed_mask = hw->aq_nic_cfg->link_speed_msk;
 	} else {
-		if (link_speeds & ETH_LINK_SPEED_10G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 			speed_mask |= AQ_NIC_RATE_10G;
-		if (link_speeds & ETH_LINK_SPEED_5G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_5G)
 			speed_mask |= AQ_NIC_RATE_5G;
-		if (link_speeds & ETH_LINK_SPEED_1G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed_mask |= AQ_NIC_RATE_1G;
-		if (link_speeds & ETH_LINK_SPEED_2_5G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_2_5G)
 			speed_mask |=  AQ_NIC_RATE_2G5;
-		if (link_speeds & ETH_LINK_SPEED_100M)
+		if (link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed_mask |= AQ_NIC_RATE_100M;
 	}
 
@@ -1127,10 +1127,10 @@ atl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->reta_size = HW_ATL_B0_RSS_REDIRECTION_MAX;
 	dev_info->flow_type_rss_offloads = ATL_RSS_OFFLOAD_ALL;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
-	dev_info->speed_capa |= ETH_LINK_SPEED_100M;
-	dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
-	dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
 
 	return 0;
 }
@@ -1175,10 +1175,10 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
 	u32 fc = AQ_NIC_FC_OFF;
 	int err = 0;
 
-	link.link_status = ETH_LINK_DOWN;
+	link.link_status = RTE_ETH_LINK_DOWN;
 	link.link_speed = 0;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_autoneg = hw->is_autoneg ? ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_autoneg = hw->is_autoneg ? RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
 	memset(&old, 0, sizeof(old));
 
 	/* load old link status */
@@ -1198,8 +1198,8 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
 		return 0;
 	}
 
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_speed = hw->aq_link_status.mbps;
 
 	rte_eth_linkstatus_set(dev, &link);
@@ -1333,7 +1333,7 @@ atl_dev_link_status_print(struct rte_eth_dev *dev)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned int)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -1532,13 +1532,13 @@ atl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	hw->aq_fw_ops->get_flow_control(hw, &fc);
 
 	if (fc == AQ_NIC_FC_OFF)
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	else if ((fc & AQ_NIC_FC_RX) && (fc & AQ_NIC_FC_TX))
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (fc & AQ_NIC_FC_RX)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (fc & AQ_NIC_FC_TX)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 
 	return 0;
 }
@@ -1553,13 +1553,13 @@ atl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	if (hw->aq_fw_ops->set_flow_control == NULL)
 		return -ENOTSUP;
 
-	if (fc_conf->mode == RTE_FC_NONE)
+	if (fc_conf->mode == RTE_ETH_FC_NONE)
 		hw->aq_nic_cfg->flow_control = AQ_NIC_FC_OFF;
-	else if (fc_conf->mode == RTE_FC_RX_PAUSE)
+	else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
 		hw->aq_nic_cfg->flow_control = AQ_NIC_FC_RX;
-	else if (fc_conf->mode == RTE_FC_TX_PAUSE)
+	else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
 		hw->aq_nic_cfg->flow_control = AQ_NIC_FC_TX;
-	else if (fc_conf->mode == RTE_FC_FULL)
+	else if (fc_conf->mode == RTE_ETH_FC_FULL)
 		hw->aq_nic_cfg->flow_control = (AQ_NIC_FC_RX | AQ_NIC_FC_TX);
 
 	if (old_flow_control != hw->aq_nic_cfg->flow_control)
@@ -1727,14 +1727,14 @@ atl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	PMD_INIT_FUNC_TRACE();
 
-	ret = atl_enable_vlan_filter(dev, mask & ETH_VLAN_FILTER_MASK);
+	ret = atl_enable_vlan_filter(dev, mask & RTE_ETH_VLAN_FILTER_MASK);
 
-	cfg->vlan_strip = !!(mask & ETH_VLAN_STRIP_MASK);
+	cfg->vlan_strip = !!(mask & RTE_ETH_VLAN_STRIP_MASK);
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++)
 		hw_atl_rpo_rx_desc_vlan_stripping_set(hw, cfg->vlan_strip, i);
 
-	if (mask & ETH_VLAN_EXTEND_MASK)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK)
 		ret = -ENOTSUP;
 
 	return ret;
@@ -1750,10 +1750,10 @@ atl_vlan_tpid_set(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 	PMD_INIT_FUNC_TRACE();
 
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
+	case RTE_ETH_VLAN_TYPE_INNER:
 		hw_atl_rpf_vlan_inner_etht_set(hw, tpid);
 		break;
-	case ETH_VLAN_TYPE_OUTER:
+	case RTE_ETH_VLAN_TYPE_OUTER:
 		hw_atl_rpf_vlan_outer_etht_set(hw, tpid);
 		break;
 	default:
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index fbc9917ed30d..ed9ef9f0cc52 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -11,15 +11,15 @@
 #include "hw_atl/hw_atl_utils.h"
 
 #define ATL_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define ATL_DEV_PRIVATE_TO_HW(adapter) \
 	(&((struct atl_adapter *)adapter)->hw)
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 0d3460383a50..2ff426892df2 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -145,10 +145,10 @@ atl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
 
 	rxq->l3_csum_enabled = dev->data->dev_conf.rxmode.offloads &
-		DEV_RX_OFFLOAD_IPV4_CKSUM;
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 	rxq->l4_csum_enabled = dev->data->dev_conf.rxmode.offloads &
-		(DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM);
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		(RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		PMD_DRV_LOG(ERR, "PMD does not support KEEP_CRC offload");
 
 	/* allocate memory for the software ring */
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 932ec90265cf..5d94db02c506 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1998,9 +1998,9 @@ avp_dev_configure(struct rte_eth_dev *eth_dev)
 	/* Setup required number of queues */
 	_avp_set_queue_counts(eth_dev);
 
-	mask = (ETH_VLAN_STRIP_MASK |
-		ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK);
+	mask = (RTE_ETH_VLAN_STRIP_MASK |
+		RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK);
 	ret = avp_vlan_offload_set(eth_dev, mask);
 	if (ret < 0) {
 		PMD_DRV_LOG(ERR, "VLAN offload set failed by host, ret=%d\n",
@@ -2140,8 +2140,8 @@ avp_dev_link_update(struct rte_eth_dev *eth_dev,
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	struct rte_eth_link *link = &eth_dev->data->dev_link;
 
-	link->link_speed = ETH_SPEED_NUM_10G;
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_speed = RTE_ETH_SPEED_NUM_10G;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link->link_status = !!(avp->flags & AVP_F_LINKUP);
 
 	return -1;
@@ -2191,8 +2191,8 @@ avp_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->max_rx_pktlen = avp->max_rx_pkt_len;
 	dev_info->max_mac_addrs = AVP_MAX_MAC_ADDRS;
 	if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
-		dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
-		dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+		dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	}
 
 	return 0;
@@ -2205,9 +2205,9 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 	struct rte_eth_conf *dev_conf = &eth_dev->data->dev_conf;
 	uint64_t offloads = dev_conf->rxmode.offloads;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
-			if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 				avp->features |= RTE_AVP_FEATURE_VLAN_OFFLOAD;
 			else
 				avp->features &= ~RTE_AVP_FEATURE_VLAN_OFFLOAD;
@@ -2216,13 +2216,13 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 		}
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			PMD_DRV_LOG(ERR, "VLAN filter offload not supported\n");
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			PMD_DRV_LOG(ERR, "VLAN extend offload not supported\n");
 	}
 
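
Note that applications pass the RTE_ETH_VLAN_*_OFFLOAD flags to rte_eth_dev_set_vlan_offload(), while the vlan_offload_set driver callback above is handed the corresponding RTE_ETH_VLAN_*_MASK bits indicating which settings changed. A hedged sketch of enabling Rx VLAN stripping from an application, assuming the port reports RTE_ETH_RX_OFFLOAD_VLAN_STRIP support (enable_vlan_strip is illustrative only):

#include <rte_ethdev.h>

static int
enable_vlan_strip(uint16_t port_id)
{
	int mask = rte_eth_dev_get_vlan_offload(port_id);

	if (mask < 0)
		return mask;
	mask |= RTE_ETH_VLAN_STRIP_OFFLOAD;	/* was ETH_VLAN_STRIP_OFFLOAD */
	return rte_eth_dev_set_vlan_offload(port_id, mask);
}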
diff --git a/drivers/net/axgbe/axgbe_dev.c b/drivers/net/axgbe/axgbe_dev.c
index ca32ad641873..3aaa2193272f 100644
--- a/drivers/net/axgbe/axgbe_dev.c
+++ b/drivers/net/axgbe/axgbe_dev.c
@@ -840,11 +840,11 @@ static void axgbe_rss_options(struct axgbe_port *pdata)
 	pdata->rss_hf = rss_conf->rss_hf;
 	rss_hf = rss_conf->rss_hf;
 
-	if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+	if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
-	if (rss_hf & (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+	if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
-	if (rss_hf & (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+	if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
 }
 
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 0250256830ac..dab0c6775d1d 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -326,7 +326,7 @@ axgbe_dev_configure(struct rte_eth_dev *dev)
 	struct axgbe_port *pdata =  dev->data->dev_private;
 	/* Checksum offload to hardware */
 	pdata->rx_csum_enable = dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_CHECKSUM;
+				RTE_ETH_RX_OFFLOAD_CHECKSUM;
 	return 0;
 }
 
@@ -335,9 +335,9 @@ axgbe_dev_rx_mq_config(struct rte_eth_dev *dev)
 {
 	struct axgbe_port *pdata = dev->data->dev_private;
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 		pdata->rss_enable = 1;
-	else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+	else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
 		pdata->rss_enable = 0;
 	else
 		return  -1;
@@ -385,7 +385,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
 	rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
 
 	max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 				max_pkt_len > pdata->rx_buf_size)
 		dev_data->scattered_rx = 1;
 
@@ -521,8 +521,8 @@ axgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & (1ULL << shift)) == 0)
 			continue;
 		pdata->rss_table[i] = reta_conf[idx].reta[shift];
@@ -552,8 +552,8 @@ axgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & (1ULL << shift)) == 0)
 			continue;
 		reta_conf[idx].reta[shift] = pdata->rss_table[i];
@@ -590,13 +590,13 @@ axgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 
 	pdata->rss_hf = rss_conf->rss_hf & AXGBE_RSS_OFFLOAD;
 
-	if (pdata->rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+	if (pdata->rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
 	if (pdata->rss_hf &
-	    (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+	    (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
 	if (pdata->rss_hf &
-	    (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+	    (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
 
 	/* Set the RSS options */
@@ -765,7 +765,7 @@ axgbe_dev_link_update(struct rte_eth_dev *dev,
 	link.link_status = pdata->phy_link;
 	link.link_speed = pdata->phy_speed;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			      ETH_LINK_SPEED_FIXED);
+			      RTE_ETH_LINK_SPEED_FIXED);
 	ret = rte_eth_linkstatus_set(dev, &link);
 	if (ret == -1)
 		PMD_DRV_LOG(ERR, "No change in link status\n");
@@ -1208,24 +1208,24 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_pktlen = AXGBE_RX_MAX_BUF_SIZE;
 	dev_info->max_mac_addrs = pdata->hw_feat.addn_mac + 1;
 	dev_info->max_hash_mac_addrs = pdata->hw_feat.hash_table_size;
-	dev_info->speed_capa =  ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM  |
-		DEV_RX_OFFLOAD_TCP_CKSUM  |
-		DEV_RX_OFFLOAD_SCATTER	  |
-		DEV_RX_OFFLOAD_KEEP_CRC;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_SCATTER	  |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if (pdata->hw_feat.rss) {
 		dev_info->flow_type_rss_offloads = AXGBE_RSS_OFFLOAD;
@@ -1262,13 +1262,13 @@ axgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	fc.autoneg = pdata->pause_autoneg;
 
 	if (pdata->rx_pause && pdata->tx_pause)
-		fc.mode = RTE_FC_FULL;
+		fc.mode = RTE_ETH_FC_FULL;
 	else if (pdata->rx_pause)
-		fc.mode = RTE_FC_RX_PAUSE;
+		fc.mode = RTE_ETH_FC_RX_PAUSE;
 	else if (pdata->tx_pause)
-		fc.mode = RTE_FC_TX_PAUSE;
+		fc.mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc.mode = RTE_FC_NONE;
+		fc.mode = RTE_ETH_FC_NONE;
 
 	fc_conf->high_water =  (1024 + (fc.low_water[0] << 9)) / 1024;
 	fc_conf->low_water =  (1024 + (fc.high_water[0] << 9)) / 1024;
@@ -1298,13 +1298,13 @@ axgbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	AXGMAC_IOWRITE(pdata, reg, reg_val);
 	fc.mode = fc_conf->mode;
 
-	if (fc.mode == RTE_FC_FULL) {
+	if (fc.mode == RTE_ETH_FC_FULL) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 1;
-	} else if (fc.mode == RTE_FC_RX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
 		pdata->tx_pause = 0;
 		pdata->rx_pause = 1;
-	} else if (fc.mode == RTE_FC_TX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 0;
 	} else {
@@ -1386,15 +1386,15 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
 
 	fc.mode = pfc_conf->fc.mode;
 
-	if (fc.mode == RTE_FC_FULL) {
+	if (fc.mode == RTE_ETH_FC_FULL) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 1;
 		AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
-	} else if (fc.mode == RTE_FC_RX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
 		pdata->tx_pause = 0;
 		pdata->rx_pause = 1;
 		AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
-	} else if (fc.mode == RTE_FC_TX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 0;
 		AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 0);
@@ -1830,8 +1830,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 	PMD_DRV_LOG(DEBUG, "EDVLP: qinq = 0x%x\n", qinq);
 
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
-		PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_INNER\n");
+	case RTE_ETH_VLAN_TYPE_INNER:
+		PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_INNER\n");
 		if (qinq) {
 			if (tpid != 0x8100 && tpid != 0x88a8)
 				PMD_DRV_LOG(ERR,
@@ -1848,8 +1848,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 				    "Inner type not supported in single tag\n");
 		}
 		break;
-	case ETH_VLAN_TYPE_OUTER:
-		PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_OUTER\n");
+	case RTE_ETH_VLAN_TYPE_OUTER:
+		PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_OUTER\n");
 		if (qinq) {
 			PMD_DRV_LOG(DEBUG, "double tagging is enabled\n");
 			/*Enable outer VLAN tag*/
@@ -1866,11 +1866,11 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 					    "tag supported 0x8100/0x88A8\n");
 		}
 		break;
-	case ETH_VLAN_TYPE_MAX:
-		PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_MAX\n");
+	case RTE_ETH_VLAN_TYPE_MAX:
+		PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_MAX\n");
 		break;
-	case ETH_VLAN_TYPE_UNKNOWN:
-		PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_UNKNOWN\n");
+	case RTE_ETH_VLAN_TYPE_UNKNOWN:
+		PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_UNKNOWN\n");
 		break;
 	}
 	return 0;
@@ -1904,8 +1904,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, CSVL, 0);
 	AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, VLTI, 1);
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 			PMD_DRV_LOG(DEBUG, "Strip ON for device = %s\n",
 				    pdata->eth_dev->device->name);
 			pdata->hw_if.enable_rx_vlan_stripping(pdata);
@@ -1915,8 +1915,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			pdata->hw_if.disable_rx_vlan_stripping(pdata);
 		}
 	}
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 			PMD_DRV_LOG(DEBUG, "Filter ON for device = %s\n",
 				    pdata->eth_dev->device->name);
 			pdata->hw_if.enable_rx_vlan_filtering(pdata);
@@ -1926,14 +1926,14 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			pdata->hw_if.disable_rx_vlan_filtering(pdata);
 		}
 	}
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
 			PMD_DRV_LOG(DEBUG, "enabling vlan extended mode\n");
 			axgbe_vlan_extend_enable(pdata);
 			/* Set global registers with default ethertype*/
-			axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+			axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					    RTE_ETHER_TYPE_VLAN);
-			axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+			axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
 					    RTE_ETHER_TYPE_VLAN);
 		} else {
 			PMD_DRV_LOG(DEBUG, "disabling vlan extended mode\n");
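
The flow control enum moved from RTE_FC_* to RTE_ETH_FC_* in the same sweep. A small sketch of forcing full pause from an application; a read-modify-write keeps the driver's watermark defaults intact (force_full_pause is a hypothetical helper):

#include <rte_ethdev.h>

static int
force_full_pause(uint16_t port_id)
{
	struct rte_eth_fc_conf fc_conf;
	int ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);

	if (ret != 0)
		return ret;
	fc_conf.mode = RTE_ETH_FC_FULL;	/* was RTE_FC_FULL */
	return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}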
diff --git a/drivers/net/axgbe/axgbe_ethdev.h b/drivers/net/axgbe/axgbe_ethdev.h
index a6226729fe4d..0a3e1c59df1a 100644
--- a/drivers/net/axgbe/axgbe_ethdev.h
+++ b/drivers/net/axgbe/axgbe_ethdev.h
@@ -97,12 +97,12 @@
 
 /* Receive Side Scaling */
 #define AXGBE_RSS_OFFLOAD  ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define AXGBE_RSS_HASH_KEY_SIZE		40
 #define AXGBE_RSS_MAX_TABLE_SIZE	256
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 4f98e695ae74..59fa9175aded 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -597,7 +597,7 @@ static void axgbe_an73_state_machine(struct axgbe_port *pdata)
 		pdata->an_int = 0;
 		axgbe_an73_clear_interrupts(pdata);
 		pdata->eth_dev->data->dev_link.link_status =
-			ETH_LINK_DOWN;
+			RTE_ETH_LINK_DOWN;
 	} else if (pdata->an_state == AXGBE_AN_ERROR) {
 		PMD_DRV_LOG(ERR, "error during auto-negotiation, state=%u\n",
 			    cur_state);
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index c8618d2d6daa..aa2c27ebaa49 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -75,7 +75,7 @@ int axgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		(DMA_CH_INC * rxq->queue_id));
 	rxq->dma_tail_reg = (volatile uint32_t *)((uint8_t *)rxq->dma_regs +
 						  DMA_CH_RDTR_LO);
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -286,7 +286,7 @@ axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
-				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+				if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
 				else
 					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
@@ -430,7 +430,7 @@ uint16_t eth_axgbe_recv_scattered_pkts(void *rx_queue,
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
-				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+				if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
 				else
 					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 567ea2382864..78fc717ec44a 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -94,14 +94,14 @@ bnx2x_link_update(struct rte_eth_dev *dev)
 	link.link_speed = sc->link_vars.line_speed;
 	switch (sc->link_vars.duplex) {
 		case DUPLEX_FULL:
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			break;
 		case DUPLEX_HALF:
-			link.link_duplex = ETH_LINK_HALF_DUPLEX;
+			link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 			break;
 	}
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+		 RTE_ETH_LINK_SPEED_FIXED);
 	link.link_status = sc->link_vars.link_up;
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -408,7 +408,7 @@ bnx2xvf_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_comple
 	if (sc->old_bulletin.valid_bitmap & (1 << CHANNEL_DOWN)) {
 		PMD_DRV_LOG(ERR, sc, "PF indicated channel is down."
 				"VF device is no longer operational");
-		dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	}
 
 	return ret;
@@ -534,7 +534,7 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_rx_bufsize = BNX2X_MIN_RX_BUF_SIZE;
 	dev_info->max_rx_pktlen  = BNX2X_MAX_RX_PKT_LEN;
 	dev_info->max_mac_addrs  = BNX2X_MAX_MAC_ADDRS;
-	dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G;
 
 	dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
 	dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
@@ -669,7 +669,7 @@ bnx2x_common_dev_init(struct rte_eth_dev *eth_dev, int is_vf)
 	bnx2x_load_firmware(sc);
 	assert(sc->firmware);
 
-	if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		sc->udp_rss = 1;
 
 	sc->rx_budget = BNX2X_RX_BUDGET;
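
On the link status side, RTE_ETH_LINK_UP/DOWN and the duplex defines keep their old values, so callers only need the new spelling. A hypothetical reporting helper mirroring the bnx2x update above:

#include <stdio.h>
#include <rte_ethdev.h>

static void
report_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;
	if (link.link_status == RTE_ETH_LINK_UP)
		printf("port %u up, %u Mbps, %s\n", port_id,
		       link.link_speed,
		       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
		       "full-duplex" : "half-duplex");
	else
		printf("port %u down\n", port_id);
}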
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 6743cf92b0e6..39bd739c7bc9 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -569,37 +569,37 @@ struct bnxt_rep_info {
 #define BNXT_FW_STATUS_SHUTDOWN		0x100000
 
 #define BNXT_ETH_RSS_SUPPORT (	\
-	ETH_RSS_IPV4 |		\
-	ETH_RSS_NONFRAG_IPV4_TCP |	\
-	ETH_RSS_NONFRAG_IPV4_UDP |	\
-	ETH_RSS_IPV6 |		\
-	ETH_RSS_NONFRAG_IPV6_TCP |	\
-	ETH_RSS_NONFRAG_IPV6_UDP |	\
-	ETH_RSS_LEVEL_MASK)
-
-#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_CKSUM | \
-				     DEV_TX_OFFLOAD_UDP_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_TSO | \
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GRE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_QINQ_INSERT | \
-				     DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
-				     DEV_RX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_UDP_CKSUM | \
-				     DEV_RX_OFFLOAD_TCP_CKSUM | \
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
-				     DEV_RX_OFFLOAD_KEEP_CRC | \
-				     DEV_RX_OFFLOAD_VLAN_EXTEND | \
-				     DEV_RX_OFFLOAD_TCP_LRO | \
-				     DEV_RX_OFFLOAD_SCATTER | \
-				     DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RSS_IPV4 |		\
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP |	\
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP |	\
+	RTE_ETH_RSS_IPV6 |		\
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP |	\
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP |	\
+	RTE_ETH_RSS_LEVEL_MASK)
+
+#define BNXT_DEV_TX_OFFLOAD_SUPPORT (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+				     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+				     RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define BNXT_DEV_RX_OFFLOAD_SUPPORT (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+				     RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_KEEP_CRC | \
+				     RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+				     RTE_ETH_RX_OFFLOAD_TCP_LRO | \
+				     RTE_ETH_RX_OFFLOAD_SCATTER | \
+				     RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
 
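
The RSS hash-field defines follow the same pattern, so a capability set like BNXT_ETH_RSS_SUPPORT above is now built from RTE_ETH_RSS_* bits. A sketch of requesting IPv4 RSS at configure time under the new names (configure_rss and its queue counts are assumptions of this example; rss_hf bits the port does not support are rejected by rte_eth_dev_configure()):

#include <string.h>
#include <rte_ethdev.h>

static int
configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf port_conf;

	memset(&port_conf, 0, sizeof(port_conf));
	port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;	/* was ETH_MQ_RX_RSS */
	port_conf.rx_adv_conf.rss_conf.rss_hf =
		RTE_ETH_RSS_IPV4 |
		RTE_ETH_RSS_NONFRAG_IPV4_TCP |
		RTE_ETH_RSS_NONFRAG_IPV4_UDP;
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
}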
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index f385723a9f65..2791a5c62db1 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -426,7 +426,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 		goto err_out;
 
 	/* Alloc RSS context only if RSS mode is enabled */
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
 		int j, nr_ctxs = bnxt_rss_ctxts(bp);
 
 		/* RSS table size in Thor is 512.
@@ -458,7 +458,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 	 * setting is not available at this time, it will not be
 	 * configured correctly in the CFA.
 	 */
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 		vnic->vlan_strip = true;
 	else
 		vnic->vlan_strip = false;
@@ -493,7 +493,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 	bnxt_hwrm_vnic_plcmode_cfg(bp, vnic);
 
 	rc = bnxt_hwrm_vnic_tpa_cfg(bp, vnic,
-				    (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) ?
+				    (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) ?
 				    true : false);
 	if (rc)
 		goto err_out;
@@ -923,35 +923,35 @@ uint32_t bnxt_get_speed_capabilities(struct bnxt *bp)
 		link_speed = bp->link_info->support_pam4_speeds;
 
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB)
-		speed_capa |= ETH_LINK_SPEED_100M;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100MBHD)
-		speed_capa |= ETH_LINK_SPEED_100M_HD;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_1GB)
-		speed_capa |= ETH_LINK_SPEED_1G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_2_5GB)
-		speed_capa |= ETH_LINK_SPEED_2_5G;
+		speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_10GB)
-		speed_capa |= ETH_LINK_SPEED_10G;
+		speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_20GB)
-		speed_capa |= ETH_LINK_SPEED_20G;
+		speed_capa |= RTE_ETH_LINK_SPEED_20G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_25GB)
-		speed_capa |= ETH_LINK_SPEED_25G;
+		speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_40GB)
-		speed_capa |= ETH_LINK_SPEED_40G;
+		speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_50GB)
-		speed_capa |= ETH_LINK_SPEED_50G;
+		speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100GB)
-		speed_capa |= ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_50G)
-		speed_capa |= ETH_LINK_SPEED_50G;
+		speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_100G)
-		speed_capa |= ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_200G)
-		speed_capa |= ETH_LINK_SPEED_200G;
+		speed_capa |= RTE_ETH_LINK_SPEED_200G;
 
 	if (bp->link_info->auto_mode ==
 	    HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE)
-		speed_capa |= ETH_LINK_SPEED_FIXED;
+		speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
 
 	return speed_capa;
 }
@@ -995,14 +995,14 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 
 	dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
 	if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	if (bp->vnic_cap_flags & BNXT_VNIC_CAP_VLAN_RX_STRIP)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_STRIP;
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT |
 				    dev_info->tx_queue_offload_capa;
 	if (bp->fw_cap & BNXT_FW_CAP_VLAN_TX_INSERT)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_VLAN_INSERT;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
 
 	dev_info->speed_capa = bnxt_get_speed_capabilities(bp);
@@ -1049,8 +1049,8 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	 */
 
 	/* VMDq resources */
-	vpool = 64; /* ETH_64_POOLS */
-	vrxq = 128; /* ETH_VMDQ_DCB_NUM_QUEUES */
+	vpool = 64; /* RTE_ETH_64_POOLS */
+	vrxq = 128; /* RTE_ETH_VMDQ_DCB_NUM_QUEUES */
 	for (i = 0; i < 4; vpool >>= 1, i++) {
 		if (max_vnics > vpool) {
 			for (j = 0; j < 5; vrxq >>= 1, j++) {
@@ -1145,15 +1145,15 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
 	    (uint32_t)(eth_dev->data->nb_rx_queues) > bp->max_ring_grps)
 		goto resource_error;
 
-	if (!(eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) &&
+	if (!(eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) &&
 	    bp->max_vnics < eth_dev->data->nb_rx_queues)
 		goto resource_error;
 
 	bp->rx_cp_nr_rings = bp->rx_nr_rings;
 	bp->tx_cp_nr_rings = bp->tx_nr_rings;
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rx_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
 
 	bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
@@ -1182,7 +1182,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
 		PMD_DRV_LOG(INFO, "Port %d Link Up - speed %u Mbps - %s\n",
 			eth_dev->data->port_id,
 			(uint32_t)link->link_speed,
-			(link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+			(link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 			("full-duplex") : ("half-duplex\n"));
 	else
 		PMD_DRV_LOG(INFO, "Port %d Link Down\n",
@@ -1199,10 +1199,10 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
 	uint16_t buf_size;
 	int i;
 
-	if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		return 1;
 
-	if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO)
+	if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		return 1;
 
 	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1247,15 +1247,15 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
 	 * a limited subset have been enabled.
 	 */
 	if (eth_dev->data->dev_conf.rxmode.offloads &
-		~(DEV_RX_OFFLOAD_VLAN_STRIP |
-		  DEV_RX_OFFLOAD_KEEP_CRC |
-		  DEV_RX_OFFLOAD_IPV4_CKSUM |
-		  DEV_RX_OFFLOAD_UDP_CKSUM |
-		  DEV_RX_OFFLOAD_TCP_CKSUM |
-		  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		  DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-		  DEV_RX_OFFLOAD_RSS_HASH |
-		  DEV_RX_OFFLOAD_VLAN_FILTER))
+		~(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		  RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		  RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_RSS_HASH |
+		  RTE_ETH_RX_OFFLOAD_VLAN_FILTER))
 		goto use_scalar_rx;
 
 #if defined(RTE_ARCH_X86) && defined(CC_AVX2_SUPPORT)
@@ -1307,7 +1307,7 @@ bnxt_transmit_function(struct rte_eth_dev *eth_dev)
 	 * or tx offloads.
 	 */
 	if (eth_dev->data->scattered_rx ||
-	    (offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) ||
+	    (offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) ||
 	    BNXT_TRUFLOW_EN(bp))
 		goto use_scalar_tx;
 
@@ -1608,10 +1608,10 @@ static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
 
 	bnxt_link_update_op(eth_dev, 1);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
-		vlan_mask |= ETH_VLAN_FILTER_MASK;
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-		vlan_mask |= ETH_VLAN_STRIP_MASK;
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+		vlan_mask |= RTE_ETH_VLAN_FILTER_MASK;
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+		vlan_mask |= RTE_ETH_VLAN_STRIP_MASK;
 	rc = bnxt_vlan_offload_set_op(eth_dev, vlan_mask);
 	if (rc)
 		goto error;
@@ -1833,8 +1833,8 @@ int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete)
 		/* Retrieve link info from hardware */
 		rc = bnxt_get_hwrm_link_config(bp, &new);
 		if (rc) {
-			new.link_speed = ETH_LINK_SPEED_100M;
-			new.link_duplex = ETH_LINK_FULL_DUPLEX;
+			new.link_speed = RTE_ETH_LINK_SPEED_100M;
+			new.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR,
 				"Failed to retrieve link rc = 0x%x!\n", rc);
 			goto out;
@@ -2028,7 +2028,7 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
 	if (!vnic->rss_table)
 		return -EINVAL;
 
-	if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+	if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 		return -EINVAL;
 
 	if (reta_size != tbl_size) {
@@ -2041,8 +2041,8 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
 	for (i = 0; i < reta_size; i++) {
 		struct bnxt_rx_queue *rxq;
 
-		idx = i / RTE_RETA_GROUP_SIZE;
-		sft = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		sft = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (!(reta_conf[idx].mask & (1ULL << sft)))
 			continue;
@@ -2095,8 +2095,8 @@ static int bnxt_reta_query_op(struct rte_eth_dev *eth_dev,
 	}
 
 	for (idx = 0, i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		sft = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		sft = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (reta_conf[idx].mask & (1ULL << sft)) {
 			uint16_t qid;
@@ -2134,7 +2134,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
 	 * If RSS enablement were different than dev_configure,
 	 * then return -EINVAL
 	 */
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (!rss_conf->rss_hf)
 			PMD_DRV_LOG(ERR, "Hash type NONE\n");
 	} else {
@@ -2152,7 +2152,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
 	vnic->hash_type = bnxt_rte_to_hwrm_hash_types(rss_conf->rss_hf);
 	vnic->hash_mode =
 		bnxt_rte_to_hwrm_hash_level(bp, rss_conf->rss_hf,
-					    ETH_RSS_LEVEL(rss_conf->rss_hf));
+					    RTE_ETH_RSS_LEVEL(rss_conf->rss_hf));
 
 	/*
 	 * If hashkey is not specified, use the previously configured
@@ -2197,30 +2197,30 @@ static int bnxt_rss_hash_conf_get_op(struct rte_eth_dev *eth_dev,
 		hash_types = vnic->hash_type;
 		rss_conf->rss_hf = 0;
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4) {
-			rss_conf->rss_hf |= ETH_RSS_IPV4;
+			rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
 			hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6) {
-			rss_conf->rss_hf |= ETH_RSS_IPV6;
+			rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
 			hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
 		}
@@ -2260,17 +2260,17 @@ static int bnxt_flow_ctrl_get_op(struct rte_eth_dev *dev,
 		fc_conf->autoneg = 1;
 	switch (bp->link_info->pause) {
 	case 0:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case (HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX |
 			HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX):
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	}
 	return 0;
@@ -2293,11 +2293,11 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		bp->link_info->auto_pause = 0;
 		bp->link_info->force_pause = 0;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		if (fc_conf->autoneg) {
 			bp->link_info->auto_pause =
 					HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_RX;
@@ -2308,7 +2308,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
 					HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_RX;
 		}
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		if (fc_conf->autoneg) {
 			bp->link_info->auto_pause =
 					HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX;
@@ -2319,7 +2319,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
 					HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_TX;
 		}
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		if (fc_conf->autoneg) {
 			bp->link_info->auto_pause =
 					HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX |
@@ -2350,7 +2350,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
 		return rc;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (bp->vxlan_port_cnt) {
 			PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
 				udp_tunnel->udp_port);
@@ -2364,7 +2364,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
 		tunnel_type =
 			HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_VXLAN;
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (bp->geneve_port_cnt) {
 			PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
 				udp_tunnel->udp_port);
@@ -2413,7 +2413,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
 		return rc;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (!bp->vxlan_port_cnt) {
 			PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
 			return -EINVAL;
@@ -2430,7 +2430,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
 			HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_VXLAN;
 		port = bp->vxlan_fw_dst_port_id;
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (!bp->geneve_port_cnt) {
 			PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
 			return -EINVAL;
@@ -2608,7 +2608,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
 	int rc;
 
 	vnic = BNXT_GET_DEFAULT_VNIC(bp);
-	if (!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)) {
+	if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
 		/* Remove any VLAN filters programmed */
 		for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
 			bnxt_del_vlan_filter(bp, i);
@@ -2628,7 +2628,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
 		bnxt_add_vlan_filter(bp, 0);
 	}
 	PMD_DRV_LOG(DEBUG, "VLAN Filtering: %d\n",
-		    !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER));
+		    !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER));
 
 	return 0;
 }
@@ -2641,7 +2641,7 @@ static int bnxt_free_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 
 	/* Destroy vnic filters and vnic */
 	if (bp->eth_dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_VLAN_FILTER) {
+	    RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
 			bnxt_del_vlan_filter(bp, i);
 	}
@@ -2680,7 +2680,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
 		return rc;
 
 	if (bp->eth_dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_VLAN_FILTER) {
+	    RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		rc = bnxt_add_vlan_filter(bp, 0);
 		if (rc)
 			return rc;
@@ -2698,7 +2698,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
 		return rc;
 
 	PMD_DRV_LOG(DEBUG, "VLAN Strip Offload: %d\n",
-		    !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP));
+		    !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP));
 
 	return rc;
 }
@@ -2718,22 +2718,22 @@ bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask)
 	if (!dev->data->dev_started)
 		return 0;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* Enable or disable VLAN filtering */
 		rc = bnxt_config_vlan_hw_filter(bp, rx_offloads);
 		if (rc)
 			return rc;
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
 		rc = bnxt_config_vlan_hw_stripping(bp, rx_offloads);
 		if (rc)
 			return rc;
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			PMD_DRV_LOG(DEBUG, "Extend VLAN supported\n");
 		else
 			PMD_DRV_LOG(INFO, "Extend VLAN unsupported\n");
@@ -2748,10 +2748,10 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 {
 	struct bnxt *bp = dev->data->dev_private;
 	int qinq = dev->data->dev_conf.rxmode.offloads &
-		   DEV_RX_OFFLOAD_VLAN_EXTEND;
+		   RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
-	if (vlan_type != ETH_VLAN_TYPE_INNER &&
-	    vlan_type != ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+	    vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
 		PMD_DRV_LOG(ERR,
 			    "Unsupported vlan type.");
 		return -EINVAL;
@@ -2763,7 +2763,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 		return -EINVAL;
 	}
 
-	if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		switch (tpid) {
 		case RTE_ETHER_TYPE_QINQ:
 			bp->outer_tpid_bd =
@@ -2791,7 +2791,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 		}
 		bp->outer_tpid_bd |= tpid;
 		PMD_DRV_LOG(INFO, "outer_tpid_bd = %x\n", bp->outer_tpid_bd);
-	} else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+	} else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
 		PMD_DRV_LOG(ERR,
 			    "Can accelerate only outer vlan in QinQ\n");
 		return -EINVAL;
@@ -2831,7 +2831,7 @@ bnxt_set_default_mac_addr_op(struct rte_eth_dev *dev,
 	bnxt_del_dflt_mac_filter(bp, vnic);
 
 	memcpy(bp->mac_addr, addr, RTE_ETHER_ADDR_LEN);
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		/* This filter will allow only untagged packets */
 		rc = bnxt_add_vlan_filter(bp, 0);
 	} else {
@@ -6556,4 +6556,4 @@ bool is_bnxt_supported(struct rte_eth_dev *dev)
 RTE_LOG_REGISTER_SUFFIX(bnxt_logtype_driver, driver, NOTICE);
 RTE_PMD_REGISTER_PCI(net_bnxt, bnxt_rte_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_bnxt, bnxt_pci_id_map);
-RTE_PMD_REGISTER_KMOD_DEP(net_bnxt, "* igb_uio | uio_pci_generic | vfio-pci");
+
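
The RETA hunks above switch to RTE_ETH_RETA_GROUP_SIZE, which is still 64 and matches the 64-bit mask in struct rte_eth_rss_reta_entry64. A sketch of spreading the redirection table round-robin over a queue set (spread_reta is a hypothetical helper; reta_size should come from dev_info.reta_size and is capped at 512 here):

#include <errno.h>
#include <string.h>
#include <rte_ethdev.h>

static int
spread_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_queues)
{
	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
						  RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	if (reta_size > RTE_ETH_RSS_RETA_SIZE_512 || nb_queues == 0)
		return -EINVAL;
	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < reta_size; i++) {
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

		reta_conf[idx].mask |= 1ULL << shift;
		reta_conf[idx].reta[shift] = i % nb_queues;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
}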
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index b2ebb5634e3a..ced697a73980 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -978,7 +978,7 @@ static int bnxt_vnic_prep(struct bnxt *bp, struct bnxt_vnic_info *vnic,
 		}
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 		vnic->vlan_strip = true;
 	else
 		vnic->vlan_strip = false;
@@ -1177,7 +1177,7 @@ bnxt_vnic_rss_cfg_update(struct bnxt *bp,
 	}
 
 	/* If RSS types is 0, use a best effort configuration */
-	types = rss->types ? rss->types : ETH_RSS_IPV4;
+	types = rss->types ? rss->types : RTE_ETH_RSS_IPV4;
 
 	hash_type = bnxt_rte_to_hwrm_hash_types(types);
 
@@ -1322,7 +1322,7 @@ bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
 
 		rxq = bp->rx_queues[act_q->index];
 
-		if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) && rxq &&
+		if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) && rxq &&
 		    vnic->fw_vnic_id != INVALID_HW_RING_ID)
 			goto use_vnic;
 
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 181e607d7bf8..82e89b7c8af7 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -628,7 +628,7 @@ int bnxt_hwrm_set_l2_filter(struct bnxt *bp,
 	uint16_t j = dst_id - 1;
 
 	//TODO: Is there a better way to add VLANs to each VNIC in case of VMDQ
-	if ((dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) &&
+	if ((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) &&
 	    conf->pool_map[j].pools & (1UL << j)) {
 		PMD_DRV_LOG(DEBUG,
 			"Add vlan %u to vmdq pool %u\n",
@@ -2979,12 +2979,12 @@ static uint16_t bnxt_parse_eth_link_duplex(uint32_t conf_link_speed)
 {
 	uint8_t hw_link_duplex = HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
 
-	if ((conf_link_speed & ETH_LINK_SPEED_FIXED) == ETH_LINK_SPEED_AUTONEG)
+	if ((conf_link_speed & RTE_ETH_LINK_SPEED_FIXED) == RTE_ETH_LINK_SPEED_AUTONEG)
 		return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
 
 	switch (conf_link_speed) {
-	case ETH_LINK_SPEED_10M_HD:
-	case ETH_LINK_SPEED_100M_HD:
+	case RTE_ETH_LINK_SPEED_10M_HD:
+	case RTE_ETH_LINK_SPEED_100M_HD:
 		/* FALLTHROUGH */
 		return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF;
 	}
@@ -3001,51 +3001,51 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
 {
 	uint16_t eth_link_speed = 0;
 
-	if (conf_link_speed == ETH_LINK_SPEED_AUTONEG)
-		return ETH_LINK_SPEED_AUTONEG;
+	if (conf_link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
+		return RTE_ETH_LINK_SPEED_AUTONEG;
 
-	switch (conf_link_speed & ~ETH_LINK_SPEED_FIXED) {
-	case ETH_LINK_SPEED_100M:
-	case ETH_LINK_SPEED_100M_HD:
+	switch (conf_link_speed & ~RTE_ETH_LINK_SPEED_FIXED) {
+	case RTE_ETH_LINK_SPEED_100M:
+	case RTE_ETH_LINK_SPEED_100M_HD:
 		/* FALLTHROUGH */
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_100MB;
 		break;
-	case ETH_LINK_SPEED_1G:
+	case RTE_ETH_LINK_SPEED_1G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_1GB;
 		break;
-	case ETH_LINK_SPEED_2_5G:
+	case RTE_ETH_LINK_SPEED_2_5G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_2_5GB;
 		break;
-	case ETH_LINK_SPEED_10G:
+	case RTE_ETH_LINK_SPEED_10G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_10GB;
 		break;
-	case ETH_LINK_SPEED_20G:
+	case RTE_ETH_LINK_SPEED_20G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_20GB;
 		break;
-	case ETH_LINK_SPEED_25G:
+	case RTE_ETH_LINK_SPEED_25G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_25GB;
 		break;
-	case ETH_LINK_SPEED_40G:
+	case RTE_ETH_LINK_SPEED_40G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_40GB;
 		break;
-	case ETH_LINK_SPEED_50G:
+	case RTE_ETH_LINK_SPEED_50G:
 		eth_link_speed = pam4_link ?
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_50GB :
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_50GB;
 		break;
-	case ETH_LINK_SPEED_100G:
+	case RTE_ETH_LINK_SPEED_100G:
 		eth_link_speed = pam4_link ?
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_100GB :
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_100GB;
 		break;
-	case ETH_LINK_SPEED_200G:
+	case RTE_ETH_LINK_SPEED_200G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
 		break;
@@ -3058,11 +3058,11 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
 	return eth_link_speed;
 }
 
-#define BNXT_SUPPORTED_SPEEDS (ETH_LINK_SPEED_100M | ETH_LINK_SPEED_100M_HD | \
-		ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G | \
-		ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G | ETH_LINK_SPEED_25G | \
-		ETH_LINK_SPEED_40G | ETH_LINK_SPEED_50G | \
-		ETH_LINK_SPEED_100G | ETH_LINK_SPEED_200G)
+#define BNXT_SUPPORTED_SPEEDS (RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_100M_HD | \
+		RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G | \
+		RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G | RTE_ETH_LINK_SPEED_25G | \
+		RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_50G | \
+		RTE_ETH_LINK_SPEED_100G | RTE_ETH_LINK_SPEED_200G)
 
 static int bnxt_validate_link_speed(struct bnxt *bp)
 {
@@ -3071,13 +3071,13 @@ static int bnxt_validate_link_speed(struct bnxt *bp)
 	uint32_t link_speed_capa;
 	uint32_t one_speed;
 
-	if (link_speed == ETH_LINK_SPEED_AUTONEG)
+	if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
 		return 0;
 
 	link_speed_capa = bnxt_get_speed_capabilities(bp);
 
-	if (link_speed & ETH_LINK_SPEED_FIXED) {
-		one_speed = link_speed & ~ETH_LINK_SPEED_FIXED;
+	if (link_speed & RTE_ETH_LINK_SPEED_FIXED) {
+		one_speed = link_speed & ~RTE_ETH_LINK_SPEED_FIXED;
 
 		if (one_speed & (one_speed - 1)) {
 			PMD_DRV_LOG(ERR,
@@ -3107,71 +3107,71 @@ bnxt_parse_eth_link_speed_mask(struct bnxt *bp, uint32_t link_speed)
 {
 	uint16_t ret = 0;
 
-	if (link_speed == ETH_LINK_SPEED_AUTONEG) {
+	if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG) {
 		if (bp->link_info->support_speeds)
 			return bp->link_info->support_speeds;
 		link_speed = BNXT_SUPPORTED_SPEEDS;
 	}
 
-	if (link_speed & ETH_LINK_SPEED_100M)
+	if (link_speed & RTE_ETH_LINK_SPEED_100M)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
-	if (link_speed & ETH_LINK_SPEED_100M_HD)
+	if (link_speed & RTE_ETH_LINK_SPEED_100M_HD)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
-	if (link_speed & ETH_LINK_SPEED_1G)
+	if (link_speed & RTE_ETH_LINK_SPEED_1G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_1GB;
-	if (link_speed & ETH_LINK_SPEED_2_5G)
+	if (link_speed & RTE_ETH_LINK_SPEED_2_5G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_2_5GB;
-	if (link_speed & ETH_LINK_SPEED_10G)
+	if (link_speed & RTE_ETH_LINK_SPEED_10G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_10GB;
-	if (link_speed & ETH_LINK_SPEED_20G)
+	if (link_speed & RTE_ETH_LINK_SPEED_20G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_20GB;
-	if (link_speed & ETH_LINK_SPEED_25G)
+	if (link_speed & RTE_ETH_LINK_SPEED_25G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_25GB;
-	if (link_speed & ETH_LINK_SPEED_40G)
+	if (link_speed & RTE_ETH_LINK_SPEED_40G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_40GB;
-	if (link_speed & ETH_LINK_SPEED_50G)
+	if (link_speed & RTE_ETH_LINK_SPEED_50G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_50GB;
-	if (link_speed & ETH_LINK_SPEED_100G)
+	if (link_speed & RTE_ETH_LINK_SPEED_100G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100GB;
-	if (link_speed & ETH_LINK_SPEED_200G)
+	if (link_speed & RTE_ETH_LINK_SPEED_200G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
 	return ret;
 }
 
 static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
 {
-	uint32_t eth_link_speed = ETH_SPEED_NUM_NONE;
+	uint32_t eth_link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	switch (hw_link_speed) {
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB:
-		eth_link_speed = ETH_SPEED_NUM_100M;
+		eth_link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_1GB:
-		eth_link_speed = ETH_SPEED_NUM_1G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2_5GB:
-		eth_link_speed = ETH_SPEED_NUM_2_5G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_10GB:
-		eth_link_speed = ETH_SPEED_NUM_10G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_20GB:
-		eth_link_speed = ETH_SPEED_NUM_20G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_25GB:
-		eth_link_speed = ETH_SPEED_NUM_25G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_40GB:
-		eth_link_speed = ETH_SPEED_NUM_40G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_50GB:
-		eth_link_speed = ETH_SPEED_NUM_50G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100GB:
-		eth_link_speed = ETH_SPEED_NUM_100G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_200GB:
-		eth_link_speed = ETH_SPEED_NUM_200G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_200G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2GB:
 	default:
@@ -3184,16 +3184,16 @@ static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
 
 static uint16_t bnxt_parse_hw_link_duplex(uint16_t hw_link_duplex)
 {
-	uint16_t eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+	uint16_t eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (hw_link_duplex) {
 	case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH:
 	case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_FULL:
 		/* FALLTHROUGH */
-		eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+		eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF:
-		eth_link_duplex = ETH_LINK_HALF_DUPLEX;
+		eth_link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "HWRM link duplex %d not defined\n",
@@ -3222,12 +3222,12 @@ int bnxt_get_hwrm_link_config(struct bnxt *bp, struct rte_eth_link *link)
 		link->link_speed =
 			bnxt_parse_hw_link_speed(link_info->link_speed);
 	else
-		link->link_speed = ETH_SPEED_NUM_NONE;
+		link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 	link->link_duplex = bnxt_parse_hw_link_duplex(link_info->duplex);
 	link->link_status = link_info->link_up;
 	link->link_autoneg = link_info->auto_mode ==
 		HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE ?
-		ETH_LINK_FIXED : ETH_LINK_AUTONEG;
+		RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
 exit:
 	return rc;
 }
@@ -3253,7 +3253,7 @@ int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up)
 
 	autoneg = bnxt_check_eth_link_autoneg(dev_conf->link_speeds);
 	if (BNXT_CHIP_P5(bp) &&
-	    dev_conf->link_speeds == ETH_LINK_SPEED_40G) {
+	    dev_conf->link_speeds == RTE_ETH_LINK_SPEED_40G) {
 		/* 40G is not supported as part of media auto detect.
 		 * The speed should be forced and autoneg disabled
 		 * to configure 40G speed.
@@ -3344,7 +3344,7 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 
 	HWRM_CHECK_RESULT();
 
-	bp->vlan = rte_le_to_cpu_16(resp->vlan) & ETH_VLAN_ID_MAX;
+	bp->vlan = rte_le_to_cpu_16(resp->vlan) & RTE_ETH_VLAN_ID_MAX;
 
 	svif_info = rte_le_to_cpu_16(resp->svif_info);
 	if (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID)
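
The speed and duplex parsers above change only in name; the pattern they
implement, reduced to a standalone sketch (HW_SPEED_* below are
hypothetical stand-ins for the HWRM output codes, which this patch does
not rename):

/* Sketch: map a device speed code to the renamed RTE_ETH_SPEED_NUM_*. */
#include <stdint.h>
#include <rte_ethdev.h>

enum hw_speed { HW_SPEED_1GB, HW_SPEED_10GB, HW_SPEED_100GB };

static uint32_t
hw_to_rte_speed(enum hw_speed s)
{
	switch (s) {
	case HW_SPEED_1GB:
		return RTE_ETH_SPEED_NUM_1G;
	case HW_SPEED_10GB:
		return RTE_ETH_SPEED_NUM_10G;
	case HW_SPEED_100GB:
		return RTE_ETH_SPEED_NUM_100G;
	default:
		return RTE_ETH_SPEED_NUM_NONE;
	}
}
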
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index b7e88e013a84..1c07db3ca9c5 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -537,7 +537,7 @@ int bnxt_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
 
 	dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
 	if (parent_bp->flags & BNXT_FLAG_PTP_SUPPORTED)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT;
 	dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
 
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 08cefa1baaef..7940d489a102 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -187,7 +187,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 			rx_ring_info->rx_ring_struct->ring_size *
 			AGG_RING_SIZE_FACTOR)) : 0;
 
-		if (rx_ring_info && (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+		if (rx_ring_info && (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 			int tpa_max = BNXT_TPA_MAX_AGGS(bp);
 
 			tpa_info_len = tpa_max * sizeof(struct bnxt_tpa_info);
@@ -283,7 +283,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 					    ag_bitmap_start, ag_bitmap_len);
 
 			/* TPA info */
-			if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+			if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 				rx_ring_info->tpa_info =
 					((struct bnxt_tpa_info *)
 					 ((char *)mz->addr + tpa_info_start));
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 38ec4aa14b77..1456f8b54ffa 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -52,13 +52,13 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 	bp->nr_vnics = 0;
 
 	/* Multi-queue mode */
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB_RSS) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
 		/* VMDq ONLY, VMDq+RSS, VMDq+DCB, VMDq+DCB+RSS */
 
 		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_RSS:
-		case ETH_MQ_RX_VMDQ_ONLY:
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
 			/* FALLTHROUGH */
 			/* ETH_8/64_POOLs */
 			pools = conf->nb_queue_pools;
@@ -66,14 +66,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 			max_pools = RTE_MIN(bp->max_vnics,
 					    RTE_MIN(bp->max_l2_ctx,
 					    RTE_MIN(bp->max_rsscos_ctx,
-						    ETH_64_POOLS)));
+						    RTE_ETH_64_POOLS)));
 			PMD_DRV_LOG(DEBUG,
 				    "pools = %u max_pools = %u\n",
 				    pools, max_pools);
 			if (pools > max_pools)
 				pools = max_pools;
 			break;
-		case ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_RSS:
 			pools = bp->rx_cosq_cnt ? bp->rx_cosq_cnt : 1;
 			break;
 		default:
@@ -111,7 +111,7 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 				    ring_idx, rxq, i, vnic);
 		}
 		if (i == 0) {
-			if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB) {
+			if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB) {
 				bp->eth_dev->data->promiscuous = 1;
 				vnic->flags |= BNXT_VNIC_INFO_PROMISC;
 			}
@@ -121,8 +121,8 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 		vnic->end_grp_id = end_grp_id;
 
 		if (i) {
-			if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB ||
-			    !(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS))
+			if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB ||
+			    !(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS))
 				vnic->rss_dflt_cr = true;
 			goto skip_filter_allocation;
 		}
@@ -147,14 +147,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 
 	bp->rx_num_qs_per_vnic = nb_q_per_grp;
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		struct rte_eth_rss_conf *rss = &dev_conf->rx_adv_conf.rss_conf;
 
 		if (bp->flags & BNXT_FLAG_UPDATE_HASH)
 			bp->flags &= ~BNXT_FLAG_UPDATE_HASH;
 
 		for (i = 0; i < bp->nr_vnics; i++) {
-			uint32_t lvl = ETH_RSS_LEVEL(rss->rss_hf);
+			uint32_t lvl = RTE_ETH_RSS_LEVEL(rss->rss_hf);
 
 			vnic = &bp->vnic_info[i];
 			vnic->hash_type =
@@ -363,7 +363,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 	PMD_DRV_LOG(DEBUG, "RX Buf size is %d\n", rxq->rx_buf_size);
 	rxq->queue_id = queue_idx;
 	rxq->port_id = eth_dev->data->port_id;
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -478,7 +478,7 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 	PMD_DRV_LOG(INFO, "Rx queue started %d\n", rx_queue_id);
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		vnic = rxq->vnic;
 
 		if (BNXT_HAS_RING_GRPS(bp)) {
@@ -549,7 +549,7 @@ int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	rxq->rx_started = false;
 	PMD_DRV_LOG(DEBUG, "Rx queue stopped\n");
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (BNXT_HAS_RING_GRPS(bp))
 			vnic->fw_grp_ids[rx_queue_id] = INVALID_HW_RING_ID;
 
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index aeacc60a0127..eb555c4545e6 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -566,8 +566,8 @@ bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
 	dev_conf = &rxq->bp->eth_dev->data->dev_conf;
 	offloads = dev_conf->rxmode.offloads;
 
-	outer_cksum_enabled = !!(offloads & (DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-					     DEV_RX_OFFLOAD_OUTER_UDP_CKSUM));
+	outer_cksum_enabled = !!(offloads & (RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+					     RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM));
 
 	/* Initialize ol_flags table. */
 	pt = rxr->ol_flags_table;
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
index d08854ff61e2..e4905b4fd169 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
@@ -416,7 +416,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_common.h b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
index 9b9489a695a2..0627fd212d0a 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_common.h
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
@@ -96,7 +96,7 @@ bnxt_rxq_rearm(struct bnxt_rx_queue *rxq, struct bnxt_rx_ring_info *rxr)
 }
 
 /*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
  * is enabled.
  */
 static inline void
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
index 13211060cf0e..f15e2d3b4ed4 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
@@ -352,7 +352,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
index 6e563053260a..ffd560166cac 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
@@ -333,7 +333,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 9e45ddd7a82e..f2fcaf53021c 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -353,7 +353,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 }
 
 /*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
  * is enabled.
  */
 static void bnxt_tx_cmp_fast(struct bnxt_tx_queue *txq, int nr_pkts)
@@ -479,7 +479,7 @@ static int bnxt_handle_tx_cp(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index 26253a7e17f2..c63cf4b943fa 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -239,17 +239,17 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
 {
 	uint16_t hwrm_type = 0;
 
-	if (rte_type & ETH_RSS_IPV4)
+	if (rte_type & RTE_ETH_RSS_IPV4)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
-	if (rte_type & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
-	if (rte_type & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
-	if (rte_type & ETH_RSS_IPV6)
+	if (rte_type & RTE_ETH_RSS_IPV6)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
-	if (rte_type & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
-	if (rte_type & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
 
 	return hwrm_type;
@@ -258,11 +258,11 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
 int bnxt_rte_to_hwrm_hash_level(struct bnxt *bp, uint64_t hash_f, uint32_t lvl)
 {
 	uint32_t mode = HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_DEFAULT;
-	bool l3 = (hash_f & (ETH_RSS_IPV4 | ETH_RSS_IPV6));
-	bool l4 = (hash_f & (ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV6_UDP |
-			     ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV6_TCP));
+	bool l3 = (hash_f & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6));
+	bool l4 = (hash_f & (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_TCP));
 	bool l3_only = l3 && !l4;
 	bool l3_and_l4 = l3 && l4;
 
@@ -307,16 +307,16 @@ uint64_t bnxt_hwrm_to_rte_rss_level(struct bnxt *bp, uint32_t mode)
 	 * return default hash mode.
 	 */
 	if (!(bp->vnic_cap_flags & BNXT_VNIC_CAP_OUTER_RSS))
-		return ETH_RSS_LEVEL_PMD_DEFAULT;
+		return RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
 
 	if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_2 ||
 	    mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_4)
-		rss_level |= ETH_RSS_LEVEL_OUTERMOST;
+		rss_level |= RTE_ETH_RSS_LEVEL_OUTERMOST;
 	else if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_2 ||
 		 mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_4)
-		rss_level |= ETH_RSS_LEVEL_INNERMOST;
+		rss_level |= RTE_ETH_RSS_LEVEL_INNERMOST;
 	else
-		rss_level |= ETH_RSS_LEVEL_PMD_DEFAULT;
+		rss_level |= RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
 
 	return rss_level;
 }
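
The level conversion above has an application-side counterpart in the
renamed RTE_ETH_RSS_LEVEL_* bits; a minimal sketch of requesting
inner-header hashing:

/* Sketch: request RSS on inner headers with the renamed level bits. */
#include <stdint.h>
#include <rte_ethdev.h>

static uint64_t
inner_tcp_rss_hf(void)
{
	uint64_t hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
		      RTE_ETH_RSS_NONFRAG_IPV6_TCP;

	hf |= RTE_ETH_RSS_LEVEL_INNERMOST; /* hash on inner headers */
	/* RTE_ETH_RSS_LEVEL(hf) now extracts 2, i.e. innermost. */
	return hf;
}
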
diff --git a/drivers/net/bnxt/rte_pmd_bnxt.c b/drivers/net/bnxt/rte_pmd_bnxt.c
index f71543810970..77ecbef04c3d 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt.c
+++ b/drivers/net/bnxt/rte_pmd_bnxt.c
@@ -421,18 +421,18 @@ int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf,
 	if (vf >= bp->pdev->max_vfs)
 		return -EINVAL;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG) {
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG) {
 		PMD_DRV_LOG(ERR, "Currently cannot toggle this setting\n");
 		return -ENOTSUP;
 	}
 
 	/* Is this really the correct mapping?  VFd seems to think it is. */
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 		flag |= BNXT_VNIC_INFO_PROMISC;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 		flag |= BNXT_VNIC_INFO_BCAST;
-	if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 		flag |= BNXT_VNIC_INFO_ALLMULTI | BNXT_VNIC_INFO_MCAST;
 
 	if (on)
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index fc179a2732ac..8b104b639184 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -167,8 +167,8 @@ struct bond_dev_private {
 	struct rte_eth_desc_lim tx_desc_lim;	/**< Tx descriptor limits */
 
 	uint16_t reta_size;
-	struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_512 /
-			RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
+			RTE_ETH_RETA_GROUP_SIZE];
 
 	uint8_t rss_key[52];				/**< 52-byte hash key buffer. */
 	uint8_t rss_key_len;				/**< hash key length in bytes. */
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 2029955c1092..ca50583d62d8 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -770,25 +770,25 @@ link_speed_key(uint16_t speed) {
 	uint16_t key_speed;
 
 	switch (speed) {
-	case ETH_SPEED_NUM_NONE:
+	case RTE_ETH_SPEED_NUM_NONE:
 		key_speed = 0x00;
 		break;
-	case ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_10M:
 		key_speed = BOND_LINK_SPEED_KEY_10M;
 		break;
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		key_speed = BOND_LINK_SPEED_KEY_100M;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		key_speed = BOND_LINK_SPEED_KEY_1000M;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		key_speed = BOND_LINK_SPEED_KEY_10G;
 		break;
-	case ETH_SPEED_NUM_20G:
+	case RTE_ETH_SPEED_NUM_20G:
 		key_speed = BOND_LINK_SPEED_KEY_20G;
 		break;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		key_speed = BOND_LINK_SPEED_KEY_40G;
 		break;
 	default:
@@ -887,7 +887,7 @@ bond_mode_8023ad_periodic_cb(void *arg)
 
 		if (ret >= 0 && link_info.link_status != 0) {
 			key = link_speed_key(link_info.link_speed) << 1;
-			if (link_info.link_duplex == ETH_LINK_FULL_DUPLEX)
+			if (link_info.link_duplex == RTE_ETH_LINK_FULL_DUPLEX)
 				key |= BOND_LINK_FULL_DUPLEX_KEY;
 		} else {
 			key = 0;
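
link_speed_key() feeds the LACP key built just above: speed key in the
upper bits, duplex in bit 0. As a standalone sketch (FULL_DUPLEX_KEY is
a stand-in for the driver-internal BOND_LINK_FULL_DUPLEX_KEY):

/* Sketch: compose an LACP key from link speed and duplex. */
#include <stdint.h>
#include <rte_ethdev.h>

#define FULL_DUPLEX_KEY 0x01 /* stand-in for BOND_LINK_FULL_DUPLEX_KEY */

static uint16_t
lacp_key(const struct rte_eth_link *link, uint16_t speed_key)
{
	uint16_t key = 0;

	if (link->link_status != 0) {
		key = speed_key << 1;
		if (link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX)
			key |= FULL_DUPLEX_KEY;
	}
	return key;
}
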
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 5140ef14c2ee..84943cffe2bb 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -204,7 +204,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
 
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	if ((bonded_eth_dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_VLAN_FILTER) == 0)
+			RTE_ETH_RX_OFFLOAD_VLAN_FILTER) == 0)
 		return 0;
 
 	internals = bonded_eth_dev->data->dev_private;
@@ -592,7 +592,7 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
 			return -1;
 		}
 
-		 if (link_props.link_status == ETH_LINK_UP) {
+		if (link_props.link_status == RTE_ETH_LINK_UP) {
 			if (internals->active_slave_count == 0 &&
 			    !internals->user_defined_primary_port)
 				bond_ethdev_primary_set(internals,
@@ -727,7 +727,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
 		internals->tx_offload_capa = 0;
 		internals->rx_queue_offload_capa = 0;
 		internals->tx_queue_offload_capa = 0;
-		internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+		internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
 		internals->reta_size = 0;
 		internals->candidate_max_rx_pktlen = 0;
 		internals->max_rx_pktlen = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 8d038ba6b6c4..834a5937b3aa 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1369,8 +1369,8 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
 		 * In any other mode the link properties are set to default
 		 * values of AUTONEG/DUPLEX
 		 */
-		ethdev->data->dev_link.link_autoneg = ETH_LINK_AUTONEG;
-		ethdev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		ethdev->data->dev_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
+		ethdev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	}
 }
 
@@ -1700,7 +1700,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 		slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
 
 	/* If RSS is enabled for bonding, try to enable it for slaves  */
-	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		/* rss_key won't be empty if RSS is configured in bonded dev */
 		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
 					internals->rss_key_len;
@@ -1714,12 +1714,12 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 	}
 
 	if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_VLAN_FILTER)
+			RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 		slave_eth_dev->data->dev_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_VLAN_FILTER;
+				RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	else
 		slave_eth_dev->data->dev_conf.rxmode.offloads &=
-				~DEV_RX_OFFLOAD_VLAN_FILTER;
+				~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	slave_eth_dev->data->dev_conf.rxmode.mtu =
 			bonded_eth_dev->data->dev_conf.rxmode.mtu;
@@ -1823,7 +1823,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 	}
 
 	/* If RSS is enabled for bonding, synchronize RETA */
-	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
 		int i;
 		struct bond_dev_private *internals;
 
@@ -1946,7 +1946,7 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 		return -1;
 	}
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 1;
 
 	internals = eth_dev->data->dev_private;
@@ -2086,7 +2086,7 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 			tlb_last_obytets[internals->active_slaves[i]] = 0;
 	}
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 0;
 
 	internals->link_status_polling_enabled = 0;
@@ -2416,15 +2416,15 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 
 	bond_ctx = ethdev->data->dev_private;
 
-	ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+	ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	if (ethdev->data->dev_started == 0 ||
 			bond_ctx->active_slave_count == 0) {
-		ethdev->data->dev_link.link_status = ETH_LINK_DOWN;
+		ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 		return 0;
 	}
 
-	ethdev->data->dev_link.link_status = ETH_LINK_UP;
+	ethdev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	if (wait_to_complete)
 		link_update = rte_eth_link_get;
@@ -2449,7 +2449,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 					  &slave_link);
 			if (ret < 0) {
 				ethdev->data->dev_link.link_speed =
-					ETH_SPEED_NUM_NONE;
+					RTE_ETH_SPEED_NUM_NONE;
 				RTE_BOND_LOG(ERR,
 					"Slave (port %u) link get failed: %s",
 					bond_ctx->active_slaves[idx],
@@ -2491,7 +2491,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 		 * In these modes the maximum theoretical link speed is the sum
 		 * of all the slaves
 		 */
-		ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		one_link_update_succeeded = false;
 
 		for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
@@ -2865,7 +2865,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 			goto link_update;
 
 		/* check link state properties if bonded link is up*/
-		if (bonded_eth_dev->data->dev_link.link_status == ETH_LINK_UP) {
+		if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
 			if (link_properties_valid(bonded_eth_dev, &link) != 0)
 				RTE_BOND_LOG(ERR, "Invalid link properties "
 					     "for slave %d in bonding mode %d",
@@ -2881,7 +2881,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 		if (internals->active_slave_count < 1) {
 			/* If first active slave, then change link status */
 			bonded_eth_dev->data->dev_link.link_status =
-								ETH_LINK_UP;
+								RTE_ETH_LINK_UP;
 			internals->current_primary_port = port_id;
 			lsc_flag = 1;
 
@@ -2973,12 +2973,12 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	 /* Copy RETA table */
-	reta_count = (reta_size + RTE_RETA_GROUP_SIZE - 1) /
-			RTE_RETA_GROUP_SIZE;
+	reta_count = (reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) /
+			RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < reta_count; i++) {
 		internals->reta_conf[i].mask = reta_conf[i].mask;
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				internals->reta_conf[i].reta[j] = reta_conf[i].reta[j];
 	}
@@ -3011,8 +3011,8 @@ bond_ethdev_rss_reta_query(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	 /* Copy RETA table */
-	for (i = 0; i < reta_size / RTE_RETA_GROUP_SIZE; i++)
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < reta_size / RTE_ETH_RETA_GROUP_SIZE; i++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = internals->reta_conf[i].reta[j];
 
@@ -3274,7 +3274,7 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
 	internals->max_rx_pktlen = 0;
 
 	/* Initially allow to choose any offload type */
-	internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+	internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
 
 	memset(&internals->default_rxconf, 0,
 	       sizeof(internals->default_rxconf));
@@ -3501,7 +3501,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 	 * set key to the value specified in port RSS configuration.
 	 * Fall back to default RSS key if the key is not specified
 	 */
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
 		struct rte_eth_rss_conf *rss_conf =
 			&dev->data->dev_conf.rx_adv_conf.rss_conf;
 		if (rss_conf->rss_key != NULL) {
@@ -3526,9 +3526,9 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 
 		for (i = 0; i < RTE_DIM(internals->reta_conf); i++) {
 			internals->reta_conf[i].mask = ~0LL;
-			for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+			for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 				internals->reta_conf[i].reta[j] =
-						(i * RTE_RETA_GROUP_SIZE + j) %
+						(i * RTE_ETH_RETA_GROUP_SIZE + j) %
 						dev->data->nb_rx_queues;
 		}
 	}
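
The default RETA initialisation above spreads entries round-robin over
the Rx queues; the same pattern driven from an application, as a sketch
(assumes reta_size is at most RTE_ETH_RSS_RETA_SIZE_512):

/* Sketch: program a round-robin RETA using the renamed group size. */
#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

static int
reta_round_robin(uint16_t port_id, uint16_t reta_size, uint16_t nb_rxq)
{
	struct rte_eth_rss_reta_entry64
		conf[RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i, g, b;

	if (reta_size > RTE_ETH_RSS_RETA_SIZE_512 || nb_rxq == 0)
		return -EINVAL;

	memset(conf, 0, sizeof(conf));
	for (i = 0; i < reta_size; i++) {
		g = i / RTE_ETH_RETA_GROUP_SIZE;
		b = i % RTE_ETH_RETA_GROUP_SIZE;
		conf[g].mask |= UINT64_C(1) << b;
		conf[g].reta[b] = i % nb_rxq;
	}
	return rte_eth_dev_rss_reta_update(port_id, conf, reta_size);
}
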
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 9dfea99db9b2..d52f8ffecf23 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -15,28 +15,28 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rxmode *rxmode = &conf->rxmode;
 	uint16_t flags = 0;
 
-	if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
-	    (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+	    (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		flags |= NIX_RX_OFFLOAD_RSS_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		flags |= NIX_RX_MULTI_SEG_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 
 	if (!dev->ptype_disable)
 		flags |= NIX_RX_OFFLOAD_PTYPE_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		flags |= NIX_RX_OFFLOAD_SECURITY_F;
 
 	return flags;
@@ -72,39 +72,39 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
 			 offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
 
-	if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
-	    conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+	    conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
 
-	if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= NIX_TX_MULTI_SEG_F;
 
 	/* Enable Inner checksum for TSO */
-	if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+	if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
 	/* Enable Inner and Outer checksum for Tunnel TSO */
-	if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		    DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
-	if (conf & DEV_TX_OFFLOAD_SECURITY)
+	if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
 		flags |= NIX_TX_OFFLOAD_SECURITY_F;
 
 	return flags;
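
nix_tx_offload_flags() consumes whatever offload bits were negotiated at
configure time; checking a request against the port capability first,
as a sketch:

/* Sketch: verify requested Tx offloads against the port capability. */
#include <errno.h>
#include <rte_ethdev.h>

static int
check_tx_offloads(uint16_t port_id, uint64_t wanted)
{
	struct rte_eth_dev_info info;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;
	/* e.g. wanted = RTE_ETH_TX_OFFLOAD_TCP_TSO |
	 *		 RTE_ETH_TX_OFFLOAD_MULTI_SEGS
	 */
	return (info.tx_offload_capa & wanted) == wanted ? 0 : -ENOTSUP;
}
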
diff --git a/drivers/net/cnxk/cn10k_rx.c b/drivers/net/cnxk/cn10k_rx.c
index d6af54b56de6..5d603514c045 100644
--- a/drivers/net/cnxk/cn10k_rx.c
+++ b/drivers/net/cnxk/cn10k_rx.c
@@ -77,12 +77,12 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 			nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
 
 	if (dev->scalar_ena) {
-		if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 			return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
 		return pick_rx_func(eth_dev, nix_eth_rx_burst);
 	}
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
 	return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
 }
diff --git a/drivers/net/cnxk/cn10k_tx.c b/drivers/net/cnxk/cn10k_tx.c
index eb962ef08cab..5e6c5ee11188 100644
--- a/drivers/net/cnxk/cn10k_tx.c
+++ b/drivers/net/cnxk/cn10k_tx.c
@@ -78,11 +78,11 @@ cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 
 	if (dev->scalar_ena) {
 		pick_tx_func(eth_dev, nix_eth_tx_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
 	} else {
 		pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
 	}
 
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 08c86f9e6b7b..17f8f6debbc8 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -15,28 +15,28 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rxmode *rxmode = &conf->rxmode;
 	uint16_t flags = 0;
 
-	if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
-	    (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+	    (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		flags |= NIX_RX_OFFLOAD_RSS_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		flags |= NIX_RX_MULTI_SEG_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 
 	if (!dev->ptype_disable)
 		flags |= NIX_RX_OFFLOAD_PTYPE_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		flags |= NIX_RX_OFFLOAD_SECURITY_F;
 
 	return flags;
@@ -72,39 +72,39 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
 			 offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
 
-	if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
-	    conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+	    conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
 
-	if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= NIX_TX_MULTI_SEG_F;
 
 	/* Enable Inner checksum for TSO */
-	if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+	if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
 	/* Enable Inner and Outer checksum for Tunnel TSO */
-	if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		    DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
 		flags |= NIX_TX_OFFLOAD_SECURITY_F;
 
 	return flags;
@@ -298,9 +298,9 @@ cn9k_nix_configure(struct rte_eth_dev *eth_dev)
 
 	/* Platform specific checks */
 	if ((roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) &&
-	    (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
-	    ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
-	     (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+	    ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+	     (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
 		plt_err("Outer IP and SCTP checksum unsupported");
 		return -EINVAL;
 	}
@@ -553,17 +553,17 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 	 * TSO not supported for earlier chip revisions
 	 */
 	if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0())
-		dev->tx_offload_capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
-					  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-					  DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-					  DEV_TX_OFFLOAD_GRE_TNL_TSO);
+		dev->tx_offload_capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+					  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+					  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+					  RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
 
 	/* 50G and 100G to be supported for board version C0
 	 * and above of CN9K.
 	 */
 	if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) {
-		dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_50G;
-		dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_100G;
+		dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_50G;
+		dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_100G;
 	}
 
 	dev->hwcap = 0;
diff --git a/drivers/net/cnxk/cn9k_rx.c b/drivers/net/cnxk/cn9k_rx.c
index 5c4387e74e0b..8d504c4a6d92 100644
--- a/drivers/net/cnxk/cn9k_rx.c
+++ b/drivers/net/cnxk/cn9k_rx.c
@@ -77,12 +77,12 @@ cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 			nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
 
 	if (dev->scalar_ena) {
-		if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 			return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
 		return pick_rx_func(eth_dev, nix_eth_rx_burst);
 	}
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
 	return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
 }
diff --git a/drivers/net/cnxk/cn9k_tx.c b/drivers/net/cnxk/cn9k_tx.c
index e5691a2a7e16..f3f19fed9780 100644
--- a/drivers/net/cnxk/cn9k_tx.c
+++ b/drivers/net/cnxk/cn9k_tx.c
@@ -77,11 +77,11 @@ cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 
 	if (dev->scalar_ena) {
 		pick_tx_func(eth_dev, nix_eth_tx_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
 	} else {
 		pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
 	}
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index c94fc505fef1..330256a0d34b 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -10,7 +10,7 @@ nix_get_rx_offload_capa(struct cnxk_eth_dev *dev)
 
 	if (roc_nix_is_vf_or_sdp(&dev->nix) ||
 	    dev->npc.switch_header_type == ROC_PRIV_FLAGS_HIGIG)
-		capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+		capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	return capa;
 }
@@ -28,11 +28,11 @@ nix_get_speed_capa(struct cnxk_eth_dev *dev)
 	uint32_t speed_capa;
 
 	/* Auto negotiation disabled */
-	speed_capa = ETH_LINK_SPEED_FIXED;
+	speed_capa = RTE_ETH_LINK_SPEED_FIXED;
 	if (!roc_nix_is_vf_or_sdp(&dev->nix) && !roc_nix_is_lbk(&dev->nix)) {
-		speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			      ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
-			      ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			      RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+			      RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
 	}
 
 	return speed_capa;
@@ -65,7 +65,7 @@ nix_security_setup(struct cnxk_eth_dev *dev)
 	struct roc_nix *nix = &dev->nix;
 	int i, rc = 0;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		/* Setup Inline Inbound */
 		rc = roc_nix_inl_inb_init(nix);
 		if (rc) {
@@ -80,8 +80,8 @@ nix_security_setup(struct cnxk_eth_dev *dev)
 		cnxk_nix_inb_mode_set(dev, true);
 	}
 
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
-	    dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY ||
+	    dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		struct plt_bitmap *bmap;
 		size_t bmap_sz;
 		void *mem;
@@ -100,8 +100,8 @@ nix_security_setup(struct cnxk_eth_dev *dev)
 
 		dev->outb.lf_base = roc_nix_inl_outb_lf_base_get(nix);
 
-		/* Skip the rest if DEV_TX_OFFLOAD_SECURITY is not enabled */
-		if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY))
+		/* Skip the rest if RTE_ETH_TX_OFFLOAD_SECURITY is not enabled */
+		if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY))
 			goto done;
 
 		rc = -ENOMEM;
@@ -136,7 +136,7 @@ nix_security_setup(struct cnxk_eth_dev *dev)
 done:
 	return 0;
 cleanup:
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		rc |= roc_nix_inl_inb_fini(nix);
 	return rc;
 }
@@ -150,7 +150,7 @@ nix_security_release(struct cnxk_eth_dev *dev)
 	int rc, ret = 0;
 
 	/* Cleanup Inline inbound */
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		/* Destroy inbound sessions */
 		tvar = NULL;
 		RTE_TAILQ_FOREACH_SAFE(eth_sec, &dev->inb.list, entry, tvar)
@@ -167,8 +167,8 @@ nix_security_release(struct cnxk_eth_dev *dev)
 	}
 
 	/* Cleanup Inline outbound */
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
-	    dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY ||
+	    dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		/* Destroy outbound sessions */
 		tvar = NULL;
 		RTE_TAILQ_FOREACH_SAFE(eth_sec, &dev->outb.list, entry, tvar)
@@ -210,8 +210,8 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
 	buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
 
 	if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD > buffsz) {
-		dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
-		dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+		dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	}
 }
 
@@ -241,7 +241,7 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
 	struct rte_eth_fc_conf fc_conf = {0};
 	int rc;
 
-	/* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+	/* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
 	 * by AF driver, update those info in PMD structure.
 	 */
 	rc = cnxk_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -249,10 +249,10 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
 		goto exit;
 
 	fc->mode = fc_conf.mode;
-	fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_RX_PAUSE);
-	fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_TX_PAUSE);
+	fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+	fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
 
 exit:
 	return rc;
@@ -273,11 +273,11 @@ nix_update_flow_ctrl_config(struct rte_eth_dev *eth_dev)
 	/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
 	if (roc_model_is_cn96_ax() &&
 	    dev->npc.switch_header_type != ROC_PRIV_FLAGS_HIGIG &&
-	    (fc_cfg.mode == RTE_FC_FULL || fc_cfg.mode == RTE_FC_RX_PAUSE)) {
+	    (fc_cfg.mode == RTE_ETH_FC_FULL || fc_cfg.mode == RTE_ETH_FC_RX_PAUSE)) {
 		fc_cfg.mode =
-				(fc_cfg.mode == RTE_FC_FULL ||
-				fc_cfg.mode == RTE_FC_TX_PAUSE) ?
-				RTE_FC_TX_PAUSE : RTE_FC_NONE;
+				(fc_cfg.mode == RTE_ETH_FC_FULL ||
+				fc_cfg.mode == RTE_ETH_FC_TX_PAUSE) ?
+				RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
 	}
 
 	return cnxk_nix_flow_ctrl_set(eth_dev, &fc_cfg);
@@ -320,7 +320,7 @@ nix_sq_max_sqe_sz(struct cnxk_eth_dev *dev)
 	 * Maximum three segments can be supported with W8; choose
 	 * NIX_MAXSQESZ_W16 for multi segment offload.
 	 */
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		return NIX_MAXSQESZ_W16;
 	else
 		return NIX_MAXSQESZ_W8;
@@ -348,7 +348,7 @@ cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	/* When Tx Security offload is enabled, increase tx desc count by
 	 * max possible outbound desc count.
 	 */
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
 		nb_desc += dev->outb.nb_desc;
 
 	/* Setup ROC SQ */
@@ -467,7 +467,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	 * to avoid meta packet drop as LBK does not currently support
 	 * backpressure.
 	 */
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY && roc_nix_is_lbk(nix)) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY && roc_nix_is_lbk(nix)) {
 		uint64_t pkt_pool_limit = roc_nix_inl_dev_rq_limit_get();
 
 		/* Use current RQ's aura limit if inl rq is not available */
@@ -529,7 +529,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rxq_sp->qconf.nb_desc = nb_desc;
 	rxq_sp->qconf.mp = mp;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		/* Setup rq reference for inline dev if present */
 		rc = roc_nix_inl_dev_rq_get(rq);
 		if (rc)
@@ -547,7 +547,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	 * These are needed in deriving raw clock value from tsc counter.
 	 * read_clock eth op returns raw clock value.
 	 */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
 		rc = cnxk_nix_tsc_convert(dev);
 		if (rc) {
 			plt_err("Failed to calculate delta and freq mult");
@@ -586,7 +586,7 @@ cnxk_nix_rx_queue_release(struct rte_eth_dev *eth_dev, uint16_t qid)
 	plt_nix_dbg("Releasing rxq %u", qid);
 
 	/* Release rq reference for inline dev if present */
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		roc_nix_inl_dev_rq_put(rq);
 
 	/* Cleanup ROC RQ */
@@ -625,24 +625,24 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
 
 	dev->ethdev_rss_hf = ethdev_rss;
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
 	    dev->npc.switch_header_type == ROC_PRIV_FLAGS_LEN_90B) {
 		flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
 	}
 
-	if (ethdev_rss & ETH_RSS_C_VLAN)
+	if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
 
-	if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
 
-	if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
 
-	if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
 
-	if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
 
 	if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -651,34 +651,34 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
 	if (ethdev_rss & RSS_IPV6_ENABLE)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
 
-	if (ethdev_rss & ETH_RSS_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_TCP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_UDP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_SCTP)
+	if (ethdev_rss & RTE_ETH_RSS_SCTP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
 
 	if (ethdev_rss & RSS_IPV6_EX_ENABLE)
 		flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
 
-	if (ethdev_rss & ETH_RSS_PORT)
+	if (ethdev_rss & RTE_ETH_RSS_PORT)
 		flowkey_cfg |= FLOW_KEY_TYPE_PORT;
 
-	if (ethdev_rss & ETH_RSS_NVGRE)
+	if (ethdev_rss & RTE_ETH_RSS_NVGRE)
 		flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
 
-	if (ethdev_rss & ETH_RSS_VXLAN)
+	if (ethdev_rss & RTE_ETH_RSS_VXLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
 
-	if (ethdev_rss & ETH_RSS_GENEVE)
+	if (ethdev_rss & RTE_ETH_RSS_GENEVE)
 		flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
 
-	if (ethdev_rss & ETH_RSS_GTPU)
+	if (ethdev_rss & RTE_ETH_RSS_GTPU)
 		flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
 
 	return flowkey_cfg;
@@ -704,7 +704,7 @@ nix_rss_default_setup(struct cnxk_eth_dev *dev)
 	uint64_t rss_hf;
 
 	rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
-	rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 
@@ -916,8 +916,8 @@ nix_lso_fmt_setup(struct cnxk_eth_dev *dev)
 
 	/* Nothing much to do if offload is not enabled */
 	if (!(dev->tx_offloads &
-	      (DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-	       DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO)))
+	      (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+	       RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))
 		return 0;
 
 	/* Setup LSO formats in AF. Its a no-op if other ethdev has
@@ -965,13 +965,13 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 		goto fail_configure;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-	    rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+	    rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		plt_err("Unsupported mq rx mode %d", rxmode->mq_mode);
 		goto fail_configure;
 	}
 
-	if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		plt_err("Unsupported mq tx mode %d", txmode->mq_mode);
 		goto fail_configure;
 	}
@@ -1007,7 +1007,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 	/* Prepare rx cfg */
 	rx_cfg = ROC_NIX_LF_RX_CFG_DIS_APAD;
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+	    (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
 		rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_OL4;
 		rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_IL4;
 	}
@@ -1015,7 +1015,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 		   ROC_NIX_LF_RX_CFG_LEN_IL4 | ROC_NIX_LF_RX_CFG_LEN_IL3 |
 		   ROC_NIX_LF_RX_CFG_LEN_OL4 | ROC_NIX_LF_RX_CFG_LEN_OL3);
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		rx_cfg |= ROC_NIX_LF_RX_CFG_IP6_UDP_OPT;
 		/* Disable drop re if rx offload security is enabled and
 		 * platform does not support it.
@@ -1401,12 +1401,12 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
 	 * enabled on PF owning this VF
 	 */
 	memset(&dev->tstamp, 0, sizeof(struct cnxk_timesync_info));
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
 		cnxk_eth_dev_ops.timesync_enable(eth_dev);
 	else
 		cnxk_eth_dev_ops.timesync_disable(eth_dev);
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 		rc = rte_mbuf_dyn_rx_timestamp_register
 			(&dev->tstamp.tstamp_dynfield_offset,
 			 &dev->tstamp.rx_tstamp_dynflag);
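
The RTE_FC_* to RTE_ETH_FC_* rename in this file mirrors the public
flow-control API; driving it from an application, as a sketch:

/* Sketch: enable pause frames in both directions with the new modes. */
#include <rte_ethdev.h>

static int
enable_flow_ctrl(uint16_t port_id)
{
	struct rte_eth_fc_conf fc;
	int ret;

	ret = rte_eth_dev_flow_ctrl_get(port_id, &fc);
	if (ret != 0)
		return ret;
	fc.mode = RTE_ETH_FC_FULL; /* Rx and Tx pause */
	return rte_eth_dev_flow_ctrl_set(port_id, &fc);
}
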
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 2304af6ffa8b..a4247e52523a 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -57,41 +57,44 @@
 	 CNXK_NIX_TX_NB_SEG_MAX)
 
 #define CNXK_NIX_RSS_L3_L4_SRC_DST                                             \
-	(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | ETH_RSS_L4_SRC_ONLY |     \
-	 ETH_RSS_L4_DST_ONLY)
+	(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |                   \
+	 RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
 
 #define CNXK_NIX_RSS_OFFLOAD                                                   \
-	(ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP |               \
-	 ETH_RSS_SCTP | ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD |                  \
-	 CNXK_NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | ETH_RSS_C_VLAN)
+	(RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |                 \
+	 RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_TUNNEL |             \
+	 RTE_ETH_RSS_L2_PAYLOAD | CNXK_NIX_RSS_L3_L4_SRC_DST |                 \
+	 RTE_ETH_RSS_LEVEL_MASK | RTE_ETH_RSS_C_VLAN)
 
 #define CNXK_NIX_TX_OFFLOAD_CAPA                                               \
-	(DEV_TX_OFFLOAD_MBUF_FAST_FREE | DEV_TX_OFFLOAD_MT_LOCKFREE |          \
-	 DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT |             \
-	 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |    \
-	 DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM |                 \
-	 DEV_TX_OFFLOAD_SCTP_CKSUM | DEV_TX_OFFLOAD_TCP_TSO |                  \
-	 DEV_TX_OFFLOAD_VXLAN_TNL_TSO | DEV_TX_OFFLOAD_GENEVE_TNL_TSO |        \
-	 DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_MULTI_SEGS |              \
-	 DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_SECURITY)
+	(RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |          \
+	 RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT |             \
+	 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |    \
+	 RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM |                 \
+	 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO |                  \
+	 RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |        \
+	 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS |              \
+	 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_SECURITY)
 
 #define CNXK_NIX_RX_OFFLOAD_CAPA                                               \
-	(DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM |                 \
-	 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER |            \
-	 DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | DEV_RX_OFFLOAD_RSS_HASH |            \
-	 DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_VLAN_STRIP |                \
-	 DEV_RX_OFFLOAD_SECURITY)
+	(RTE_ETH_RX_OFFLOAD_CHECKSUM | RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |         \
+	 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_SCATTER |    \
+	 RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_RSS_HASH |    \
+	 RTE_ETH_RX_OFFLOAD_TIMESTAMP | RTE_ETH_RX_OFFLOAD_VLAN_STRIP |        \
+	 RTE_ETH_RX_OFFLOAD_SECURITY)
 
 #define RSS_IPV4_ENABLE                                                        \
-	(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP |         \
-	 ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_SCTP)
+	(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |                            \
+	 RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV4_TCP |         \
+	 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
 #define RSS_IPV6_ENABLE                                                        \
-	(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP |         \
-	 ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_SCTP)
+	(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |                            \
+	 RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |         \
+	 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 #define RSS_IPV6_EX_ENABLE                                                     \
-	(ETH_RSS_IPV6_EX | ETH_RSS_IPV6_TCP_EX | ETH_RSS_IPV6_UDP_EX)
+	(RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define RSS_MAX_LEVELS 3
 
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index c0b949e21ab0..e068f553495c 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -104,11 +104,11 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
 
 	val = atoi(value);
 
-	if (val <= ETH_RSS_RETA_SIZE_64)
+	if (val <= RTE_ETH_RSS_RETA_SIZE_64)
 		val = ROC_NIX_RSS_RETA_SZ_64;
-	else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
+	else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
 		val = ROC_NIX_RSS_RETA_SZ_128;
-	else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
+	else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
 		val = ROC_NIX_RSS_RETA_SZ_256;
 	else
 		val = ROC_NIX_RSS_RETA_SZ_64;
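
parse_reta_size() above rounds the requested value up to the nearest
supported table size; the bucketing over the renamed constants, as a
standalone sketch (returning the RTE sizes rather than the driver's
ROC_NIX_RSS_RETA_SZ_* values):

/* Sketch: round a requested RETA size to a supported size. */
#include <rte_ethdev.h>

static uint16_t
round_reta_size(int val)
{
	if (val <= RTE_ETH_RSS_RETA_SIZE_64)
		return RTE_ETH_RSS_RETA_SIZE_64;
	if (val <= RTE_ETH_RSS_RETA_SIZE_128)
		return RTE_ETH_RSS_RETA_SIZE_128;
	if (val <= RTE_ETH_RSS_RETA_SIZE_256)
		return RTE_ETH_RSS_RETA_SIZE_256;
	return RTE_ETH_RSS_RETA_SIZE_64; /* out of range: driver default */
}
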
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index d0924df76152..67464302653d 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -81,24 +81,24 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
 		uint64_t flags;
 		const char *output;
 	} rx_offload_map[] = {
-		{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
-		{DEV_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
-		{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
-		{DEV_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
-		{DEV_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
-		{DEV_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
-		{DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
-		{DEV_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
-		{DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
-		{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
-		{DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
-		{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
-		{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
-		{DEV_RX_OFFLOAD_SECURITY, " Security,"},
-		{DEV_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
-		{DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
-		{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
-		{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+		{RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
+		{RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
+		{RTE_ETH_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
+		{RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
+		{RTE_ETH_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
+		{RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
+		{RTE_ETH_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
+		{RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+		{RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+		{RTE_ETH_RX_OFFLOAD_SECURITY, " Security,"},
+		{RTE_ETH_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
+		{RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
+		{RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
 	};
 	static const char *const burst_mode[] = {"Vector Neon, Rx Offloads:",
 						 "Scalar, Rx Offloads:"
@@ -142,28 +142,28 @@ cnxk_nix_tx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
 		uint64_t flags;
 		const char *output;
 	} tx_offload_map[] = {
-		{DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
-		{DEV_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
-		{DEV_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
-		{DEV_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
-		{DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
-		{DEV_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
-		{DEV_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
-		{DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
-		{DEV_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
-		{DEV_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
-		{DEV_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
-		{DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
-		{DEV_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
-		{DEV_TX_OFFLOAD_SECURITY, " Security,"},
-		{DEV_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
-		{DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
+		{RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+		{RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
+		{RTE_ETH_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
+		{RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
+		{RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
+		{RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
+		{RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
+		{RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
+		{RTE_ETH_TX_OFFLOAD_SECURITY, " Security,"},
+		{RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
 	};
 	static const char *const burst_mode[] = {"Vector Neon, Tx Offloads:",
 						 "Scalar, Tx Offloads:"
@@ -203,8 +203,8 @@ cnxk_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 	enum rte_eth_fc_mode mode_map[] = {
-					   RTE_FC_NONE, RTE_FC_RX_PAUSE,
-					   RTE_FC_TX_PAUSE, RTE_FC_FULL
+					   RTE_ETH_FC_NONE, RTE_ETH_FC_RX_PAUSE,
+					   RTE_ETH_FC_TX_PAUSE, RTE_ETH_FC_FULL
 					  };
 	struct roc_nix *nix = &dev->nix;
 	int mode;
@@ -264,10 +264,10 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	if (fc_conf->mode == fc->mode)
 		return 0;
 
-	rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_RX_PAUSE);
-	tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_TX_PAUSE);
+	rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+	tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
 
 	/* Check if TX pause frame is already enabled or not */
 	if (fc->tx_pause ^ tx_pause) {
@@ -408,13 +408,13 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	 * when this feature has not been enabled before.
 	 */
 	if (data->dev_started && frame_size > buffsz &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		plt_err("Scatter offload is not enabled for mtu");
 		goto exit;
 	}
 
 	/* Check <seg size> * <max_seg>  >= max_frame */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)	&&
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	&&
 	    frame_size > (buffsz * CNXK_NIX_RX_NB_SEG_MAX)) {
 		plt_err("Greater than maximum supported packet length");
 		goto exit;
@@ -734,8 +734,8 @@ cnxk_nix_reta_update(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Copy RETA table */
-	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta[idx] = reta_conf[i].reta[j];
 			idx++;
@@ -770,8 +770,8 @@ cnxk_nix_reta_query(struct rte_eth_dev *eth_dev,
 		goto fail;
 
 	/* Copy RETA table */
-	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = reta[idx];
 			idx++;
@@ -804,7 +804,7 @@ cnxk_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
 	if (rss_conf->rss_key)
 		roc_nix_rss_key_set(nix, rss_conf->rss_key);
 
-	rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 	flowkey_cfg =
diff --git a/drivers/net/cnxk/cnxk_link.c b/drivers/net/cnxk/cnxk_link.c
index 6a7080167598..f10a502826c6 100644
--- a/drivers/net/cnxk/cnxk_link.c
+++ b/drivers/net/cnxk/cnxk_link.c
@@ -38,7 +38,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
 		plt_info("Port %d: Link Up - speed %u Mbps - %s",
 			 (int)(eth_dev->data->port_id),
 			 (uint32_t)link->link_speed,
-			 link->link_duplex == ETH_LINK_FULL_DUPLEX
+			 link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX
 				 ? "full-duplex"
 				 : "half-duplex");
 	else
@@ -89,7 +89,7 @@ cnxk_eth_dev_link_status_cb(struct roc_nix *nix, struct roc_nix_link_info *link)
 
 	eth_link.link_status = link->status;
 	eth_link.link_speed = link->speed;
-	eth_link.link_autoneg = ETH_LINK_AUTONEG;
+	eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	eth_link.link_duplex = link->full_duplex;
 
 	/* Print link info */
@@ -117,17 +117,17 @@ cnxk_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
 		return 0;
 
 	if (roc_nix_is_lbk(&dev->nix)) {
-		link.link_status = ETH_LINK_UP;
-		link.link_speed = ETH_SPEED_NUM_100G;
-		link.link_autoneg = ETH_LINK_FIXED;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_status = RTE_ETH_LINK_UP;
+		link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	} else {
 		rc = roc_nix_mac_link_info_get(&dev->nix, &info);
 		if (rc)
 			return rc;
 		link.link_status = info.status;
 		link.link_speed = info.speed;
-		link.link_autoneg = ETH_LINK_AUTONEG;
+		link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 		if (info.full_duplex)
 			link.link_duplex = info.full_duplex;
 	}
diff --git a/drivers/net/cnxk/cnxk_ptp.c b/drivers/net/cnxk/cnxk_ptp.c
index 449489f599c4..139fea256ccd 100644
--- a/drivers/net/cnxk/cnxk_ptp.c
+++ b/drivers/net/cnxk/cnxk_ptp.c
@@ -227,7 +227,7 @@ cnxk_nix_timesync_enable(struct rte_eth_dev *eth_dev)
 	dev->rx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
 	dev->tx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
 
-	dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+	dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	rc = roc_nix_ptp_rx_ena_dis(nix, true);
 	if (!rc) {
@@ -257,7 +257,7 @@ int
 cnxk_nix_timesync_disable(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
-	uint64_t rx_offloads = DEV_RX_OFFLOAD_TIMESTAMP;
+	uint64_t rx_offloads = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	struct roc_nix *nix = &dev->nix;
 	int rc = 0;
 
diff --git a/drivers/net/cnxk/cnxk_rte_flow.c b/drivers/net/cnxk/cnxk_rte_flow.c
index 27defd2fa984..2dfc3730a0da 100644
--- a/drivers/net/cnxk/cnxk_rte_flow.c
+++ b/drivers/net/cnxk/cnxk_rte_flow.c
@@ -69,7 +69,7 @@ npc_rss_action_validate(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		plt_err("multi-queue mode is disabled");
 		return -ENOTSUP;
 	}
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 37625c5bfb69..dbcbfaf68a30 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -28,31 +28,31 @@
 #define CXGBE_LINK_STATUS_POLL_CNT 100 /* Max number of times to poll */
 
 #define CXGBE_DEFAULT_RSS_KEY_LEN     40 /* 320-bits */
-#define CXGBE_RSS_HF_IPV4_MASK (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
-				ETH_RSS_NONFRAG_IPV4_OTHER)
-#define CXGBE_RSS_HF_IPV6_MASK (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
-				ETH_RSS_NONFRAG_IPV6_OTHER | \
-				ETH_RSS_IPV6_EX)
-#define CXGBE_RSS_HF_TCP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_TCP | \
-				    ETH_RSS_IPV6_TCP_EX)
-#define CXGBE_RSS_HF_UDP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_UDP | \
-				    ETH_RSS_IPV6_UDP_EX)
-#define CXGBE_RSS_HF_ALL (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP)
+#define CXGBE_RSS_HF_IPV4_MASK (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+				RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
+#define CXGBE_RSS_HF_IPV6_MASK (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
+				RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+				RTE_ETH_RSS_IPV6_EX)
+#define CXGBE_RSS_HF_TCP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+				    RTE_ETH_RSS_IPV6_TCP_EX)
+#define CXGBE_RSS_HF_UDP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+				    RTE_ETH_RSS_IPV6_UDP_EX)
+#define CXGBE_RSS_HF_ALL (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP)
 
 /* Tx/Rx Offloads supported */
-#define CXGBE_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT | \
-			   DEV_TX_OFFLOAD_IPV4_CKSUM | \
-			   DEV_TX_OFFLOAD_UDP_CKSUM | \
-			   DEV_TX_OFFLOAD_TCP_CKSUM | \
-			   DEV_TX_OFFLOAD_TCP_TSO | \
-			   DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define CXGBE_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP | \
-			   DEV_RX_OFFLOAD_IPV4_CKSUM | \
-			   DEV_RX_OFFLOAD_UDP_CKSUM | \
-			   DEV_RX_OFFLOAD_TCP_CKSUM | \
-			   DEV_RX_OFFLOAD_SCATTER | \
-			   DEV_RX_OFFLOAD_RSS_HASH)
+#define CXGBE_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+			   RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+			   RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+			   RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+			   RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define CXGBE_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+			   RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+			   RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+			   RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+			   RTE_ETH_RX_OFFLOAD_SCATTER | \
+			   RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 /* Devargs filtermode and filtermask representation */
 enum cxgbe_devargs_filter_mode_flags {
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index f77b2976002c..4758321778d1 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -231,9 +231,9 @@ int cxgbe_dev_link_update(struct rte_eth_dev *eth_dev,
 	}
 
 	new_link.link_status = cxgbe_force_linkup(adapter) ?
-			       ETH_LINK_UP : pi->link_cfg.link_ok;
+			       RTE_ETH_LINK_UP : pi->link_cfg.link_ok;
 	new_link.link_autoneg = (lc->link_caps & FW_PORT_CAP32_ANEG) ? 1 : 0;
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	new_link.link_speed = t4_fwcap_to_speed(lc->link_caps);
 
 	return rte_eth_linkstatus_set(eth_dev, &new_link);
@@ -374,7 +374,7 @@ int cxgbe_dev_start(struct rte_eth_dev *eth_dev)
 			goto out;
 	}
 
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		eth_dev->data->scattered_rx = 1;
 	else
 		eth_dev->data->scattered_rx = 0;
@@ -438,9 +438,9 @@ int cxgbe_dev_configure(struct rte_eth_dev *eth_dev)
 
 	CXGBE_FUNC_TRACE();
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (!(adapter->flags & FW_QUEUE_BOUND)) {
 		err = cxgbe_setup_sge_fwevtq(adapter);
@@ -1080,13 +1080,13 @@ static int cxgbe_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 		rx_pause = 1;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	return 0;
 }
 
@@ -1099,12 +1099,12 @@ static int cxgbe_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	u8 tx_pause = 0, rx_pause = 0;
 	int ret;
 
-	if (fc_conf->mode == RTE_FC_FULL) {
+	if (fc_conf->mode == RTE_ETH_FC_FULL) {
 		tx_pause = 1;
 		rx_pause = 1;
-	} else if (fc_conf->mode == RTE_FC_TX_PAUSE) {
+	} else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE) {
 		tx_pause = 1;
-	} else if (fc_conf->mode == RTE_FC_RX_PAUSE) {
+	} else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE) {
 		rx_pause = 1;
 	}
 
@@ -1200,9 +1200,9 @@ static int cxgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 		rss_hf |= CXGBE_RSS_HF_IPV6_MASK;
 
 	if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN) {
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		if (flags & F_FW_RSS_VI_CONFIG_CMD_UDPEN)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	}
 
 	if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN)
@@ -1246,8 +1246,8 @@ static int cxgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 
 	rte_memcpy(rss, pi->rss, pi->rss_size * sizeof(u16));
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (!(reta_conf[idx].mask & (1ULL << shift)))
 			continue;
 
@@ -1277,8 +1277,8 @@ static int cxgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (!(reta_conf[idx].mask & (1ULL << shift)))
 			continue;
 
@@ -1479,7 +1479,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
 
 	if (lc->pcaps & FW_PORT_CAP32_SPEED_100G) {
 		if (capa_arr) {
-			capa_arr[num].speed = ETH_SPEED_NUM_100G;
+			capa_arr[num].speed = RTE_ETH_SPEED_NUM_100G;
 			capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(RS);
 		}
@@ -1488,7 +1488,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
 
 	if (lc->pcaps & FW_PORT_CAP32_SPEED_50G) {
 		if (capa_arr) {
-			capa_arr[num].speed = ETH_SPEED_NUM_50G;
+			capa_arr[num].speed = RTE_ETH_SPEED_NUM_50G;
 			capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(BASER);
 		}
@@ -1497,7 +1497,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
 
 	if (lc->pcaps & FW_PORT_CAP32_SPEED_25G) {
 		if (capa_arr) {
-			capa_arr[num].speed = ETH_SPEED_NUM_25G;
+			capa_arr[num].speed = RTE_ETH_SPEED_NUM_25G;
 			capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(RS);
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 91d6bb9bbcb0..f1ac32270961 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -1670,7 +1670,7 @@ int cxgbe_link_start(struct port_info *pi)
 	 * that step explicitly.
 	 */
 	ret = t4_set_rxmode(adapter, adapter->mbox, pi->viid, mtu, -1, -1, -1,
-			    !!(conf_offloads & DEV_RX_OFFLOAD_VLAN_STRIP),
+			    !!(conf_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP),
 			    true);
 	if (ret == 0) {
 		ret = cxgbe_mpstcam_modify(pi, (int)pi->xact_addr_filt,
@@ -1694,7 +1694,7 @@ int cxgbe_link_start(struct port_info *pi)
 	}
 
 	if (ret == 0 && cxgbe_force_linkup(adapter))
-		pi->eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+		pi->eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return ret;
 }
 
@@ -1725,10 +1725,10 @@ int cxgbe_write_rss_conf(const struct port_info *pi, uint64_t rss_hf)
 	if (rss_hf & CXGBE_RSS_HF_IPV4_MASK)
 		flags |= F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN |
 			 F_FW_RSS_VI_CONFIG_CMD_UDPEN;
 
@@ -1865,7 +1865,7 @@ static void fw_caps_to_speed_caps(enum fw_port_type port_type,
 {
 #define SET_SPEED(__speed_name) \
 	do { \
-		*speed_caps |= ETH_LINK_ ## __speed_name; \
+		*speed_caps |= RTE_ETH_LINK_ ## __speed_name; \
 	} while (0)
 
 #define FW_CAPS_TO_SPEED(__fw_name) \
@@ -1952,7 +1952,7 @@ void cxgbe_get_speed_caps(struct port_info *pi, u32 *speed_caps)
 			      speed_caps);
 
 	if (!(pi->link_cfg.pcaps & FW_PORT_CAP32_ANEG))
-		*speed_caps |= ETH_LINK_SPEED_FIXED;
+		*speed_caps |= RTE_ETH_LINK_SPEED_FIXED;
 }
 
 /**
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index c79cdb8d8ad7..89ea7dd47c0b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -54,29 +54,29 @@
 
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
-		DEV_RX_OFFLOAD_SCATTER;
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 /* Rx offloads which cannot be disabled */
 static uint64_t dev_rx_offloads_nodis =
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 /* Supported Tx offloads */
 static uint64_t dev_tx_offloads_sup =
-		DEV_TX_OFFLOAD_MT_LOCKFREE |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 /* Tx offloads which cannot be disabled */
 static uint64_t dev_tx_offloads_nodis =
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 /* Keep track of whether QMAN and BMAN have been globally initialized */
 static int is_global_init;
@@ -238,7 +238,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 
 	fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		DPAA_PMD_DEBUG("enabling scatter mode");
 		fman_if_set_sg(dev->process_private, 1);
 		dev->data->scattered_rx = 1;
@@ -283,43 +283,44 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 
 	/* Configure link only if link is UP*/
 	if (link->link_status) {
-		if (eth_conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
+		if (eth_conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 			/* Start autoneg only if link is not in autoneg mode */
 			if (!link->link_autoneg)
 				dpaa_restart_link_autoneg(__fif->node_name);
-		} else if (eth_conf->link_speeds & ETH_LINK_SPEED_FIXED) {
-			switch (eth_conf->link_speeds & ~ETH_LINK_SPEED_FIXED) {
-			case ETH_LINK_SPEED_10M_HD:
-				speed = ETH_SPEED_NUM_10M;
-				duplex = ETH_LINK_HALF_DUPLEX;
+		} else if (eth_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
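+			/* Mask out the fixed-speed flag so only the requested speed bit remains */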
+			switch (eth_conf->link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+			case RTE_ETH_LINK_SPEED_10M_HD:
+				speed = RTE_ETH_SPEED_NUM_10M;
+				duplex = RTE_ETH_LINK_HALF_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_10M:
-				speed = ETH_SPEED_NUM_10M;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_10M:
+				speed = RTE_ETH_SPEED_NUM_10M;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_100M_HD:
-				speed = ETH_SPEED_NUM_100M;
-				duplex = ETH_LINK_HALF_DUPLEX;
+			case RTE_ETH_LINK_SPEED_100M_HD:
+				speed = RTE_ETH_SPEED_NUM_100M;
+				duplex = RTE_ETH_LINK_HALF_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_100M:
-				speed = ETH_SPEED_NUM_100M;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_100M:
+				speed = RTE_ETH_SPEED_NUM_100M;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_1G:
-				speed = ETH_SPEED_NUM_1G;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_1G:
+				speed = RTE_ETH_SPEED_NUM_1G;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_2_5G:
-				speed = ETH_SPEED_NUM_2_5G;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_2_5G:
+				speed = RTE_ETH_SPEED_NUM_2_5G;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_10G:
-				speed = ETH_SPEED_NUM_10G;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_10G:
+				speed = RTE_ETH_SPEED_NUM_10G;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
 			default:
-				speed = ETH_SPEED_NUM_NONE;
-				duplex = ETH_LINK_FULL_DUPLEX;
+				speed = RTE_ETH_SPEED_NUM_NONE;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
 			}
 			/* Set link speed */
@@ -535,30 +535,30 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_mac_addrs = DPAA_MAX_MAC_FILTER;
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
-	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
 
 	if (fif->mac_type == fman_mac_1g) {
-		dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
-					| ETH_LINK_SPEED_10M
-					| ETH_LINK_SPEED_100M_HD
-					| ETH_LINK_SPEED_100M
-					| ETH_LINK_SPEED_1G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+					| RTE_ETH_LINK_SPEED_10M
+					| RTE_ETH_LINK_SPEED_100M_HD
+					| RTE_ETH_LINK_SPEED_100M
+					| RTE_ETH_LINK_SPEED_1G;
 	} else if (fif->mac_type == fman_mac_2_5g) {
-		dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
-					| ETH_LINK_SPEED_10M
-					| ETH_LINK_SPEED_100M_HD
-					| ETH_LINK_SPEED_100M
-					| ETH_LINK_SPEED_1G
-					| ETH_LINK_SPEED_2_5G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+					| RTE_ETH_LINK_SPEED_10M
+					| RTE_ETH_LINK_SPEED_100M_HD
+					| RTE_ETH_LINK_SPEED_100M
+					| RTE_ETH_LINK_SPEED_1G
+					| RTE_ETH_LINK_SPEED_2_5G;
 	} else if (fif->mac_type == fman_mac_10g) {
-		dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
-					| ETH_LINK_SPEED_10M
-					| ETH_LINK_SPEED_100M_HD
-					| ETH_LINK_SPEED_100M
-					| ETH_LINK_SPEED_1G
-					| ETH_LINK_SPEED_2_5G
-					| ETH_LINK_SPEED_10G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+					| RTE_ETH_LINK_SPEED_10M
+					| RTE_ETH_LINK_SPEED_100M_HD
+					| RTE_ETH_LINK_SPEED_100M
+					| RTE_ETH_LINK_SPEED_1G
+					| RTE_ETH_LINK_SPEED_2_5G
+					| RTE_ETH_LINK_SPEED_10G;
 	} else {
 		DPAA_PMD_ERR("invalid link_speed: %s, %d",
 			     dpaa_intf->name, fif->mac_type);
@@ -591,12 +591,12 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} rx_offload_map[] = {
-			{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
-			{DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
-			{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
-			{DEV_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
-			{DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+			{RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+			{RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+			{RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+			{RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+			{RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
 	};
 
 	/* Update Rx offload info */
@@ -623,14 +623,14 @@ dpaa_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} tx_offload_map[] = {
-			{DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
-			{DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
-			{DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
-			{DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
-			{DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
-			{DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
-			{DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+			{RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+			{RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+			{RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+			{RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+			{RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+			{RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
 	};
 
 	/* Update Tx offload info */
@@ -664,7 +664,7 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 			ret = dpaa_get_link_status(__fif->node_name, link);
 			if (ret)
 				return ret;
-			if (link->link_status == ETH_LINK_DOWN &&
+			if (link->link_status == RTE_ETH_LINK_DOWN &&
 			    wait_to_complete)
 				rte_delay_ms(CHECK_INTERVAL);
 			else
@@ -675,15 +675,15 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	}
 
 	if (ioctl_version < 2) {
-		link->link_duplex = ETH_LINK_FULL_DUPLEX;
-		link->link_autoneg = ETH_LINK_AUTONEG;
+		link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+		link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 		if (fif->mac_type == fman_mac_1g)
-			link->link_speed = ETH_SPEED_NUM_1G;
+			link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		else if (fif->mac_type == fman_mac_2_5g)
-			link->link_speed = ETH_SPEED_NUM_2_5G;
+			link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		else if (fif->mac_type == fman_mac_10g)
-			link->link_speed = ETH_SPEED_NUM_10G;
+			link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		else
 			DPAA_PMD_ERR("invalid link_speed: %s, %d",
 				     dpaa_intf->name, fif->mac_type);
@@ -962,7 +962,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (max_rx_pktlen <= buffsz) {
 		;
 	} else if (dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_SCATTER) {
+			RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
 			DPAA_PMD_ERR("Maximum Rx packet size %d too big to fit "
 				"MaxSGlist %d",
@@ -1268,7 +1268,7 @@ static int dpaa_link_down(struct rte_eth_dev *dev)
 	__fif = container_of(fif, struct __fman_if, __if);
 
 	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
-		dpaa_update_link_status(__fif->node_name, ETH_LINK_DOWN);
+		dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_DOWN);
 	else
 		return dpaa_eth_dev_stop(dev);
 	return 0;
@@ -1284,7 +1284,7 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
 	__fif = container_of(fif, struct __fman_if, __if);
 
 	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
-		dpaa_update_link_status(__fif->node_name, ETH_LINK_UP);
+		dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_UP);
 	else
 		dpaa_eth_dev_start(dev);
 	return 0;
@@ -1314,10 +1314,10 @@ dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	if (fc_conf->mode == RTE_FC_NONE) {
+	if (fc_conf->mode == RTE_ETH_FC_NONE) {
 		return 0;
-	} else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
-		 fc_conf->mode == RTE_FC_FULL) {
+	} else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE ||
+		 fc_conf->mode == RTE_ETH_FC_FULL) {
 		fman_if_set_fc_threshold(dev->process_private,
 					 fc_conf->high_water,
 					 fc_conf->low_water,
@@ -1361,11 +1361,11 @@ dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
 	}
 	ret = fman_if_get_fc_threshold(dev->process_private);
 	if (ret) {
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		fc_conf->pause_time =
 			fman_if_get_fc_quanta(dev->process_private);
 	} else {
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return 0;
@@ -1626,10 +1626,10 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf,
 	fc_conf = dpaa_intf->fc_conf;
 	ret = fman_if_get_fc_threshold(fman_intf);
 	if (ret) {
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		fc_conf->pause_time = fman_if_get_fc_quanta(fman_intf);
 	} else {
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return 0;
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b5728e09c29f..c868e9d5bd9b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -74,11 +74,11 @@
 #define DPAA_DEBUG_FQ_TX_ERROR   1
 
 #define DPAA_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_L2_PAYLOAD | \
-	ETH_RSS_IP | \
-	ETH_RSS_UDP | \
-	ETH_RSS_TCP | \
-	ETH_RSS_SCTP)
+	RTE_ETH_RSS_L2_PAYLOAD | \
+	RTE_ETH_RSS_IP | \
+	RTE_ETH_RSS_UDP | \
+	RTE_ETH_RSS_TCP | \
+	RTE_ETH_RSS_SCTP)
 
 #define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
 		PKT_TX_IP_CKSUM |                \
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index c5b5ec869519..1ccd03602790 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -394,7 +394,7 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 		if (req_dist_set % 2 != 0) {
 			dist_field = 1U << loop;
 			switch (dist_field) {
-			case ETH_RSS_L2_PAYLOAD:
+			case RTE_ETH_RSS_L2_PAYLOAD:
 
 				if (l2_configured)
 					break;
@@ -404,9 +404,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_ETH;
 				break;
 
-			case ETH_RSS_IPV4:
-			case ETH_RSS_FRAG_IPV4:
-			case ETH_RSS_NONFRAG_IPV4_OTHER:
+			case RTE_ETH_RSS_IPV4:
+			case RTE_ETH_RSS_FRAG_IPV4:
+			case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
 
 				if (ipv4_configured)
 					break;
@@ -415,10 +415,10 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_IPV4;
 				break;
 
-			case ETH_RSS_IPV6:
-			case ETH_RSS_FRAG_IPV6:
-			case ETH_RSS_NONFRAG_IPV6_OTHER:
-			case ETH_RSS_IPV6_EX:
+			case RTE_ETH_RSS_IPV6:
+			case RTE_ETH_RSS_FRAG_IPV6:
+			case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+			case RTE_ETH_RSS_IPV6_EX:
 
 				if (ipv6_configured)
 					break;
@@ -427,9 +427,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_IPV6;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_TCP:
-			case ETH_RSS_NONFRAG_IPV6_TCP:
-			case ETH_RSS_IPV6_TCP_EX:
+			case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+			case RTE_ETH_RSS_IPV6_TCP_EX:
 
 				if (tcp_configured)
 					break;
@@ -438,9 +438,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_TCP;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_UDP:
-			case ETH_RSS_NONFRAG_IPV6_UDP:
-			case ETH_RSS_IPV6_UDP_EX:
+			case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+			case RTE_ETH_RSS_IPV6_UDP_EX:
 
 				if (udp_configured)
 					break;
@@ -449,8 +449,8 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_UDP;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_SCTP:
-			case ETH_RSS_NONFRAG_IPV6_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
 
 				if (sctp_configured)
 					break;
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 08f49af7685d..3170694841df 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -220,9 +220,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
 		if (req_dist_set % 2 != 0) {
 			dist_field = 1ULL << loop;
 			switch (dist_field) {
-			case ETH_RSS_L2_PAYLOAD:
-			case ETH_RSS_ETH:
-
+			case RTE_ETH_RSS_L2_PAYLOAD:
+			case RTE_ETH_RSS_ETH:
 				if (l2_configured)
 					break;
 				l2_configured = 1;
@@ -238,7 +237,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_PPPOE:
+			case RTE_ETH_RSS_PPPOE:
 				if (pppoe_configured)
 					break;
 				kg_cfg->extracts[i].extract.from_hdr.prot =
@@ -252,7 +251,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_ESP:
+			case RTE_ETH_RSS_ESP:
 				if (esp_configured)
 					break;
 				esp_configured = 1;
@@ -268,7 +267,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_AH:
+			case RTE_ETH_RSS_AH:
 				if (ah_configured)
 					break;
 				ah_configured = 1;
@@ -284,8 +283,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_C_VLAN:
-			case ETH_RSS_S_VLAN:
+			case RTE_ETH_RSS_C_VLAN:
+			case RTE_ETH_RSS_S_VLAN:
 				if (vlan_configured)
 					break;
 				vlan_configured = 1;
@@ -301,7 +300,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_MPLS:
+			case RTE_ETH_RSS_MPLS:
 
 				if (mpls_configured)
 					break;
@@ -338,13 +337,13 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_IPV4:
-			case ETH_RSS_FRAG_IPV4:
-			case ETH_RSS_NONFRAG_IPV4_OTHER:
-			case ETH_RSS_IPV6:
-			case ETH_RSS_FRAG_IPV6:
-			case ETH_RSS_NONFRAG_IPV6_OTHER:
-			case ETH_RSS_IPV6_EX:
+			case RTE_ETH_RSS_IPV4:
+			case RTE_ETH_RSS_FRAG_IPV4:
+			case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
+			case RTE_ETH_RSS_IPV6:
+			case RTE_ETH_RSS_FRAG_IPV6:
+			case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+			case RTE_ETH_RSS_IPV6_EX:
 
 				if (l3_configured)
 					break;
@@ -382,12 +381,12 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 			break;
 
-			case ETH_RSS_NONFRAG_IPV4_TCP:
-			case ETH_RSS_NONFRAG_IPV6_TCP:
-			case ETH_RSS_NONFRAG_IPV4_UDP:
-			case ETH_RSS_NONFRAG_IPV6_UDP:
-			case ETH_RSS_IPV6_TCP_EX:
-			case ETH_RSS_IPV6_UDP_EX:
+			case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+			case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+			case RTE_ETH_RSS_IPV6_TCP_EX:
+			case RTE_ETH_RSS_IPV6_UDP_EX:
 
 				if (l4_configured)
 					break;
@@ -414,8 +413,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_SCTP:
-			case ETH_RSS_NONFRAG_IPV6_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
 
 				if (sctp_configured)
 					break;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index a0270e78520e..59e728577f53 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -38,33 +38,33 @@
 
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
-		DEV_RX_OFFLOAD_CHECKSUM |
-		DEV_RX_OFFLOAD_SCTP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_CHECKSUM |
+		RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 /* Rx offloads which cannot be disabled */
 static uint64_t dev_rx_offloads_nodis =
-		DEV_RX_OFFLOAD_RSS_HASH |
-		DEV_RX_OFFLOAD_SCATTER;
+		RTE_ETH_RX_OFFLOAD_RSS_HASH |
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 /* Supported Tx offloads */
 static uint64_t dev_tx_offloads_sup =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_MT_LOCKFREE |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 /* Tx offloads which cannot be disabled */
 static uint64_t dev_tx_offloads_nodis =
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 /* enable timestamp in mbuf */
 bool dpaa2_enable_ts[RTE_MAX_ETHPORTS];
@@ -142,7 +142,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* VLAN Filter not avaialble */
 		if (!priv->max_vlan_filters) {
 			DPAA2_PMD_INFO("VLAN filter not available");
@@ -150,7 +150,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		}
 
 		if (dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_VLAN_FILTER)
+			RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ret = dpni_enable_vlan_filter(dpni, CMD_PRI_LOW,
 						      priv->token, true);
 		else
@@ -251,13 +251,13 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 					dev_rx_offloads_nodis;
 	dev_info->tx_offload_capa = dev_tx_offloads_sup |
 					dev_tx_offloads_nodis;
-	dev_info->speed_capa = ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_2_5G |
-			ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_2_5G |
+			RTE_ETH_LINK_SPEED_10G;
 
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
-	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	dev_info->flow_type_rss_offloads = DPAA2_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxportconf.burst_size = dpaa2_dqrr_size;
@@ -270,10 +270,10 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->default_rxportconf.ring_size = DPAA2_RX_DEFAULT_NBDESC;
 
 	if (dpaa2_svr_family == SVR_LX2160A) {
-		dev_info->speed_capa |= ETH_LINK_SPEED_25G |
-				ETH_LINK_SPEED_40G |
-				ETH_LINK_SPEED_50G |
-				ETH_LINK_SPEED_100G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G |
+				RTE_ETH_LINK_SPEED_40G |
+				RTE_ETH_LINK_SPEED_50G |
+				RTE_ETH_LINK_SPEED_100G;
 	}
 
 	return 0;
@@ -291,15 +291,15 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} rx_offload_map[] = {
-			{DEV_RX_OFFLOAD_CHECKSUM, " Checksum,"},
-			{DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
-			{DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
-			{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
-			{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
-			{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
-			{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
-			{DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
+			{RTE_ETH_RX_OFFLOAD_CHECKSUM, " Checksum,"},
+			{RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+			{RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
+			{RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
+			{RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
+			{RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+			{RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"},
+			{RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"}
 	};
 
 	/* Update Rx offload info */
@@ -326,15 +326,15 @@ dpaa2_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} tx_offload_map[] = {
-			{DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
-			{DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
-			{DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
-			{DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
-			{DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
-			{DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
-			{DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
-			{DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+			{RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+			{RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+			{RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+			{RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+			{RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+			{RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+			{RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
 	};
 
 	/* Update Tx offload info */
@@ -573,7 +573,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		return -1;
 	}
 
-	if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (eth_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) {
 			ret = dpaa2_setup_flow_dist(dev,
 					eth_conf->rx_adv_conf.rss_conf.rss_hf,
@@ -587,12 +587,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		rx_l3_csum_offload = true;
 
-	if ((rx_offloads & DEV_RX_OFFLOAD_UDP_CKSUM) ||
-		(rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) ||
-		(rx_offloads & DEV_RX_OFFLOAD_SCTP_CKSUM))
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) ||
+		(rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) ||
+		(rx_offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM))
 		rx_l4_csum_offload = true;
 
 	ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -610,7 +610,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 	}
 
 #if !defined(RTE_LIBRTE_IEEE1588)
-	if (rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 #endif
 	{
 		ret = rte_mbuf_dyn_rx_timestamp_register(
@@ -623,12 +623,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		dpaa2_enable_ts[dev->data->port_id] = true;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		tx_l3_csum_offload = true;
 
-	if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ||
-		(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
-		(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ||
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM))
 		tx_l4_csum_offload = true;
 
 	ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -660,8 +660,8 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
-		dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+		dpaa2_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
 
 	dpaa2_tm_init(dev);
 
@@ -1856,7 +1856,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 			DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
 			return -1;
 		}
-		if (state.up == ETH_LINK_DOWN &&
+		if (state.up == RTE_ETH_LINK_DOWN &&
 		    wait_to_complete)
 			rte_delay_ms(CHECK_INTERVAL);
 		else
@@ -1868,9 +1868,9 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 	link.link_speed = state.rate;
 
 	if (state.options & DPNI_LINK_OPT_HALF_DUPLEX)
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	else
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	ret = rte_eth_linkstatus_set(dev, &link);
 	if (ret == -1)
@@ -2031,9 +2031,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *	No TX side flow control (send Pause frame disabled)
 		 */
 		if (!(state.options & DPNI_LINK_OPT_ASYM_PAUSE))
-			fc_conf->mode = RTE_FC_FULL;
+			fc_conf->mode = RTE_ETH_FC_FULL;
 		else
-			fc_conf->mode = RTE_FC_RX_PAUSE;
+			fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	} else {
 		/* DPNI_LINK_OPT_PAUSE not set
 		 *  if ASYM_PAUSE set,
@@ -2043,9 +2043,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *	Flow control disabled
 		 */
 		if (state.options & DPNI_LINK_OPT_ASYM_PAUSE)
-			fc_conf->mode = RTE_FC_TX_PAUSE;
+			fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		else
-			fc_conf->mode = RTE_FC_NONE;
+			fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return ret;
@@ -2089,14 +2089,14 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	/* update cfg with fc_conf */
 	switch (fc_conf->mode) {
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		/* Full flow control;
 		 * OPT_PAUSE set, ASYM_PAUSE not set
 		 */
 		cfg.options |= DPNI_LINK_OPT_PAUSE;
 		cfg.options &= ~DPNI_LINK_OPT_ASYM_PAUSE;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		/* Enable RX flow control
 		 * OPT_PAUSE not set;
 		 * ASYM_PAUSE set;
@@ -2104,7 +2104,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
 		cfg.options &= ~DPNI_LINK_OPT_PAUSE;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		/* Enable TX Flow control
 		 * OPT_PAUSE set
 		 * ASYM_PAUSE set
@@ -2112,7 +2112,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		cfg.options |= DPNI_LINK_OPT_PAUSE;
 		cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
 		break;
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		/* Disable Flow control
 		 * OPT_PAUSE not set
 		 * ASYM_PAUSE not set
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index fdc62ec30d22..c5e9267bf04d 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -65,17 +65,17 @@
 #define DPAA2_TX_CONF_ENABLE	0x08
 
 #define DPAA2_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_L2_PAYLOAD | \
-	ETH_RSS_IP | \
-	ETH_RSS_UDP | \
-	ETH_RSS_TCP | \
-	ETH_RSS_SCTP | \
-	ETH_RSS_MPLS | \
-	ETH_RSS_C_VLAN | \
-	ETH_RSS_S_VLAN | \
-	ETH_RSS_ESP | \
-	ETH_RSS_AH | \
-	ETH_RSS_PPPOE)
+	RTE_ETH_RSS_L2_PAYLOAD | \
+	RTE_ETH_RSS_IP | \
+	RTE_ETH_RSS_UDP | \
+	RTE_ETH_RSS_TCP | \
+	RTE_ETH_RSS_SCTP | \
+	RTE_ETH_RSS_MPLS | \
+	RTE_ETH_RSS_C_VLAN | \
+	RTE_ETH_RSS_S_VLAN | \
+	RTE_ETH_RSS_ESP | \
+	RTE_ETH_RSS_AH | \
+	RTE_ETH_RSS_PPPOE)
 
 /* LX2 FRC Parsed values (Little Endian) */
 #define DPAA2_PKT_TYPE_ETHER		0x0060
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f40369e2c3f9..7c77243b5d1a 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -773,7 +773,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 #endif
 
 		if (eth_data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_VLAN_STRIP)
+				RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			rte_vlan_strip(bufs[num_rx]);
 
 		dq_storage++;
@@ -987,7 +987,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 							eth_data->port_id);
 
 		if (eth_data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_VLAN_STRIP) {
+				RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 			rte_vlan_strip(bufs[num_rx]);
 		}
 
@@ -1230,7 +1230,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 					if (unlikely(((*bufs)->ol_flags
 						& PKT_TX_VLAN_PKT) ||
 						(eth_data->dev_conf.txmode.offloads
-						& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+						& RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
 						ret = rte_vlan_insert(bufs);
 						if (ret)
 							goto send_n_return;
@@ -1273,7 +1273,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 			if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN_PKT) ||
 				(eth_data->dev_conf.txmode.offloads
-				& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+				& RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
 				int ret = rte_vlan_insert(bufs);
 				if (ret)
 					goto send_n_return;
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 93bee734ae5d..031c92a66fa0 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -81,15 +81,15 @@
 #define E1000_FTQF_QUEUE_ENABLE          0x00000100
 
 #define IGB_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 /*
  * The overhead from MTU to max frame size.
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 73152dec6ed1..9da477e59def 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -597,8 +597,8 @@ eth_em_start(struct rte_eth_dev *dev)
 
 	e1000_clear_hw_cntrs_base_generic(hw);
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
-			ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+			RTE_ETH_VLAN_EXTEND_MASK;
 	ret = eth_em_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Unable to update vlan offload");
@@ -611,39 +611,39 @@ eth_em_start(struct rte_eth_dev *dev)
 
 	/* Setup link speed and duplex */
 	speeds = &dev->data->dev_conf.link_speeds;
-	if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
 		hw->mac.autoneg = 1;
 	} else {
 		num_speeds = 0;
-		autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+		autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 		/* Reset */
 		hw->phy.autoneg_advertised = 0;
 
-		if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+		if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
 			num_speeds = -1;
 			goto error_invalid_config;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_1G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_1G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
 			num_speeds++;
 		}
@@ -1102,9 +1102,9 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.nb_mtu_seg_max = EM_TX_MAX_MTU_SEG,
 	};
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G;
 
 	/* Preferred queue parameters */
 	dev_info->default_rxportconf.nb_queues = 1;
@@ -1162,17 +1162,17 @@ eth_em_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		uint16_t duplex, speed;
 		hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
 		link.link_duplex = (duplex == FULL_DUPLEX) ?
-				ETH_LINK_FULL_DUPLEX :
-				ETH_LINK_HALF_DUPLEX;
+				RTE_ETH_LINK_FULL_DUPLEX :
+				RTE_ETH_LINK_HALF_DUPLEX;
 		link.link_speed = speed;
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 		link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 	} else {
-		link.link_speed = ETH_SPEED_NUM_NONE;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_status = ETH_LINK_DOWN;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -1424,15 +1424,15 @@ eth_em_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if(mask & ETH_VLAN_STRIP_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			em_vlan_hw_strip_enable(dev);
 		else
 			em_vlan_hw_strip_disable(dev);
 	}
 
-	if(mask & ETH_VLAN_FILTER_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			em_vlan_hw_filter_enable(dev);
 		else
 			em_vlan_hw_filter_disable(dev);
@@ -1601,7 +1601,7 @@ eth_em_interrupt_action(struct rte_eth_dev *dev,
 	if (link.link_status) {
 		PMD_INIT_LOG(INFO, " Port %d: Link Up - speed %u Mbps - %s",
 			     dev->data->port_id, link.link_speed,
-			     link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			     link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 			     "full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down", dev->data->port_id);
@@ -1683,13 +1683,13 @@ eth_em_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		rx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 344149c19147..648b04154c5b 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -93,7 +93,7 @@ struct em_rx_queue {
 	struct em_rx_entry *sw_ring;   /**< address of RX software ring. */
 	struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
 	struct rte_mbuf *pkt_last_seg;  /**< Last segment of current packet. */
-	uint64_t	    offloads;   /**< Offloads of DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads;   /**< Offloads of RTE_ETH_RX_OFFLOAD_* */
 	uint16_t            nb_rx_desc; /**< number of RX descriptors. */
 	uint16_t            rx_tail;    /**< current value of RDT register. */
 	uint16_t            nb_rx_hold; /**< number of held free RX desc. */
@@ -173,7 +173,7 @@ struct em_tx_queue {
 	uint8_t                wthresh;  /**< Write-back threshold register. */
 	struct em_ctx_info ctx_cache;
 	/**< Hardware context history.*/
-	uint64_t	       offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+	uint64_t	       offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
@@ -1171,11 +1171,11 @@ em_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 
 	RTE_SET_USED(dev);
 	tx_offload_capa =
-		DEV_TX_OFFLOAD_MULTI_SEGS  |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	return tx_offload_capa;
 }
@@ -1369,13 +1369,13 @@ em_get_rx_port_offloads_capa(void)
 	uint64_t rx_offload_capa;
 
 	rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP  |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_IPV4_CKSUM  |
-		DEV_RX_OFFLOAD_UDP_CKSUM   |
-		DEV_RX_OFFLOAD_TCP_CKSUM   |
-		DEV_RX_OFFLOAD_KEEP_CRC    |
-		DEV_RX_OFFLOAD_SCATTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	return rx_offload_capa;
 }
@@ -1469,7 +1469,7 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
 	rxq->queue_id = queue_idx;
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1788,7 +1788,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 *  call to configure
 		 */
-		if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -1831,7 +1831,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (!dev->data->scattered_rx)
 			PMD_INIT_LOG(DEBUG, "forcing scatter mode");
 		dev->rx_pkt_burst = eth_em_recv_scattered_pkts;
@@ -1844,7 +1844,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 	 */
 	rxcsum = E1000_READ_REG(hw, E1000_RXCSUM);
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= E1000_RXCSUM_IPOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_IPOFL;
@@ -1870,7 +1870,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 	}
 
 	/* Setup the Receive Control Register. */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
 	else
 		rctl |= E1000_RCTL_SECRC; /* Strip Ethernet CRC. */
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index dbe811a1ad2f..ae3bc4a9c201 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -1073,21 +1073,21 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
 	uint16_t nb_rx_q = dev->data->nb_rx_queues;
 	uint16_t nb_tx_q = dev->data->nb_tx_queues;
 
-	if ((rx_mq_mode & ETH_MQ_RX_DCB_FLAG) ||
-	    tx_mq_mode == ETH_MQ_TX_DCB ||
-	    tx_mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	if ((rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) ||
+	    tx_mq_mode == RTE_ETH_MQ_TX_DCB ||
+	    tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		PMD_INIT_LOG(ERR, "DCB mode is not supported.");
 		return -EINVAL;
 	}
 	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
 		/* Check multi-queue mode.
-		 * To no break software we accept ETH_MQ_RX_NONE as this might
+		 * To not break software we accept RTE_ETH_MQ_RX_NONE as this might
 		 * be used to turn off VLAN filter.
 		 */
 
-		if (rx_mq_mode == ETH_MQ_RX_NONE ||
-		    rx_mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+		if (rx_mq_mode == RTE_ETH_MQ_RX_NONE ||
+		    rx_mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
 			RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
 		} else {
 			/* Only support one queue on VFs.
@@ -1099,12 +1099,12 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
 			return -EINVAL;
 		}
 		/* TX mode is not used here, so mode might be ignored.*/
-		if (tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+		if (tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
 			/* SRIOV only works in VMDq enable mode */
 			PMD_INIT_LOG(WARNING, "SRIOV is active,"
 					" TX mode %d is not supported. "
 					" Driver will behave as %d mode.",
-					tx_mq_mode, ETH_MQ_TX_VMDQ_ONLY);
+					tx_mq_mode, RTE_ETH_MQ_TX_VMDQ_ONLY);
 		}
 
 		/* check valid queue number */
@@ -1117,17 +1117,17 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
 		/* To no break software that set invalid mode, only display
 		 * warning if invalid mode is used.
 		 */
-		if (rx_mq_mode != ETH_MQ_RX_NONE &&
-		    rx_mq_mode != ETH_MQ_RX_VMDQ_ONLY &&
-		    rx_mq_mode != ETH_MQ_RX_RSS) {
+		if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+		    rx_mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY &&
+		    rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
 			/* RSS together with VMDq not supported*/
 			PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
 				     rx_mq_mode);
 			return -EINVAL;
 		}
 
-		if (tx_mq_mode != ETH_MQ_TX_NONE &&
-		    tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+		if (tx_mq_mode != RTE_ETH_MQ_TX_NONE &&
+		    tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
 			PMD_INIT_LOG(WARNING, "TX mode %d is not supported."
 					" Due to txmode is meaningless in this"
 					" driver, just ignore.",
@@ -1146,8 +1146,8 @@ eth_igb_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multipe queue mode checking */
 	ret  = igb_check_mq_mode(dev);
@@ -1287,8 +1287,8 @@ eth_igb_start(struct rte_eth_dev *dev)
 	/*
 	 * VLAN Offload Settings
 	 */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
-			ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+			RTE_ETH_VLAN_EXTEND_MASK;
 	ret = eth_igb_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Unable to set vlan offload");
@@ -1296,7 +1296,7 @@ eth_igb_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
 		/* Enable VLAN filter since VMDq always use VLAN filter */
 		igb_vmdq_vlan_hw_filter_enable(dev);
 	}
@@ -1310,39 +1310,39 @@ eth_igb_start(struct rte_eth_dev *dev)
 
 	/* Setup link speed and duplex */
 	speeds = &dev->data->dev_conf.link_speeds;
-	if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
 		hw->mac.autoneg = 1;
 	} else {
 		num_speeds = 0;
-		autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+		autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 		/* Reset */
 		hw->phy.autoneg_advertised = 0;
 
-		if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+		if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
 			num_speeds = -1;
 			goto error_invalid_config;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_1G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_1G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
 			num_speeds++;
 		}
@@ -2185,21 +2185,21 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	case e1000_82576:
 		dev_info->max_rx_queues = 16;
 		dev_info->max_tx_queues = 16;
-		dev_info->max_vmdq_pools = ETH_8_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
 		dev_info->vmdq_queue_num = 16;
 		break;
 
 	case e1000_82580:
 		dev_info->max_rx_queues = 8;
 		dev_info->max_tx_queues = 8;
-		dev_info->max_vmdq_pools = ETH_8_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
 		dev_info->vmdq_queue_num = 8;
 		break;
 
 	case e1000_i350:
 		dev_info->max_rx_queues = 8;
 		dev_info->max_tx_queues = 8;
-		dev_info->max_vmdq_pools = ETH_8_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
 		dev_info->vmdq_queue_num = 8;
 		break;
 
@@ -2225,7 +2225,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		return -EINVAL;
 	}
 	dev_info->hash_key_size = IGB_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = IGB_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -2251,9 +2251,9 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->rx_desc_lim = rx_desc_lim;
 	dev_info->tx_desc_lim = tx_desc_lim;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G;
 
 	dev_info->max_mtu = dev_info->max_rx_pktlen - E1000_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -2296,12 +2296,12 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_rx_bufsize = 256; /* See BSIZE field of RCTL register. */
 	dev_info->max_rx_pktlen  = 0x3FFF; /* See RLPML register. */
 	dev_info->max_mac_addrs = hw->mac.rar_entry_count;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-				DEV_TX_OFFLOAD_IPV4_CKSUM  |
-				DEV_TX_OFFLOAD_UDP_CKSUM   |
-				DEV_TX_OFFLOAD_TCP_CKSUM   |
-				DEV_TX_OFFLOAD_SCTP_CKSUM  |
-				DEV_TX_OFFLOAD_TCP_TSO;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	switch (hw->mac.type) {
 	case e1000_vfadapt:
 		dev_info->max_rx_queues = 2;
@@ -2402,17 +2402,17 @@ eth_igb_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		uint16_t duplex, speed;
 		hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
 		link.link_duplex = (duplex == FULL_DUPLEX) ?
-				ETH_LINK_FULL_DUPLEX :
-				ETH_LINK_HALF_DUPLEX;
+				RTE_ETH_LINK_FULL_DUPLEX :
+				RTE_ETH_LINK_HALF_DUPLEX;
 		link.link_speed = speed;
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 		link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 	} else if (!link_check) {
 		link.link_speed = 0;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_status = ETH_LINK_DOWN;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -2588,7 +2588,7 @@ eth_igb_vlan_tpid_set(struct rte_eth_dev *dev,
 	qinq &= E1000_CTRL_EXT_EXT_VLAN;
 
 	/* only outer TPID of double VLAN can be configured*/
-	if (qinq && vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (qinq && vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		reg = E1000_READ_REG(hw, E1000_VET);
 		reg = (reg & (~E1000_VET_VET_EXT)) |
 			((uint32_t)tpid << E1000_VET_VET_EXT_SHIFT);
@@ -2703,22 +2703,22 @@ eth_igb_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if(mask & ETH_VLAN_STRIP_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			igb_vlan_hw_strip_enable(dev);
 		else
 			igb_vlan_hw_strip_disable(dev);
 	}
 
-	if(mask & ETH_VLAN_FILTER_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			igb_vlan_hw_filter_enable(dev);
 		else
 			igb_vlan_hw_filter_disable(dev);
 	}
 
-	if(mask & ETH_VLAN_EXTEND_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			igb_vlan_hw_extend_enable(dev);
 		else
 			igb_vlan_hw_extend_disable(dev);
@@ -2870,7 +2870,7 @@ eth_igb_interrupt_action(struct rte_eth_dev *dev,
 				     " Port %d: Link Up - speed %u Mbps - %s",
 				     dev->data->port_id,
 				     (unsigned)link.link_speed,
-				     link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+				     link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 				     "full-duplex" : "half-duplex");
 		} else {
 			PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3024,13 +3024,13 @@ eth_igb_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		rx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -3099,18 +3099,18 @@ eth_igb_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 * on configuration
 		 */
 		switch (fc_conf->mode) {
-		case RTE_FC_NONE:
+		case RTE_ETH_FC_NONE:
 			ctrl &= ~E1000_CTRL_RFCE & ~E1000_CTRL_TFCE;
 			break;
-		case RTE_FC_RX_PAUSE:
+		case RTE_ETH_FC_RX_PAUSE:
 			ctrl |= E1000_CTRL_RFCE;
 			ctrl &= ~E1000_CTRL_TFCE;
 			break;
-		case RTE_FC_TX_PAUSE:
+		case RTE_ETH_FC_TX_PAUSE:
 			ctrl |= E1000_CTRL_TFCE;
 			ctrl &= ~E1000_CTRL_RFCE;
 			break;
-		case RTE_FC_FULL:
+		case RTE_ETH_FC_FULL:
 			ctrl |= E1000_CTRL_RFCE | E1000_CTRL_TFCE;
 			break;
 		default:
@@ -3258,22 +3258,22 @@ igbvf_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
 		     dev->data->port_id);
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/*
 	 * VF has no ability to enable/disable HW CRC
 	 * Keep the persistent behavior the same as Host PF
 	 */
 #ifndef RTE_LIBRTE_E1000_PF_DISABLE_STRIP_CRC
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
-		conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #else
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
 		PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #endif
 
@@ -3571,16 +3571,16 @@ eth_igb_rss_reta_update(struct rte_eth_dev *dev,
 	uint16_t idx, shift;
 	struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += IGB_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IGB_4_BIT_MASK);
 		if (!mask)
@@ -3612,16 +3612,16 @@ eth_igb_rss_reta_query(struct rte_eth_dev *dev,
 	uint16_t idx, shift;
 	struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += IGB_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IGB_4_BIT_MASK);
 		if (!mask)
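
The reta_update/reta_query loops just above index the redirection table in
64-entry groups, which is what RTE_ETH_RETA_GROUP_SIZE encodes. A minimal
sketch of the matching application-side call (illustrative only, not part
of this patch; the helper name is hypothetical and the port is assumed to
report a 128-entry table, as igb does via RTE_ETH_RSS_RETA_SIZE_128):

#include <string.h>
#include <rte_ethdev.h>

static int
setup_reta(uint16_t port_id, uint16_t nb_queues)
{
	struct rte_eth_rss_reta_entry64 reta_conf[2];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++) {
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

		/* Mark the entry valid and spread queues round-robin. */
		reta_conf[idx].mask |= 1ULL << shift;
		reta_conf[idx].reta[shift] = i % nb_queues;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta_conf,
			RTE_ETH_RSS_RETA_SIZE_128);
}
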
diff --git a/drivers/net/e1000/igb_pf.c b/drivers/net/e1000/igb_pf.c
index 2ce74dd5a9a5..fe355ef6b3b5 100644
--- a/drivers/net/e1000/igb_pf.c
+++ b/drivers/net/e1000/igb_pf.c
@@ -88,7 +88,7 @@ void igb_pf_host_init(struct rte_eth_dev *eth_dev)
 	if (*vfinfo == NULL)
 		rte_panic("Cannot allocate memory for private VF data\n");
 
-	RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_8_POOLS;
+	RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_8_POOLS;
 	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
 	RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
 	RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index a1d5eecc14a1..bcce2fc726d8 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -111,7 +111,7 @@ struct igb_rx_queue {
 	uint8_t             crc_len;    /**< 0 if CRC stripped, 4 otherwise. */
 	uint8_t             drop_en;  /**< If not 0, set SRRCTL.Drop_En. */
 	uint32_t            flags;      /**< RX flags. */
-	uint64_t	    offloads;   /**< offloads of DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads;   /**< offloads of RTE_ETH_RX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
@@ -186,7 +186,7 @@ struct igb_tx_queue {
 	/**< Start context position for transmit queue. */
 	struct igb_advctx_info ctx_cache[IGB_CTX_NUM];
 	/**< Hardware context history.*/
-	uint64_t	       offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+	uint64_t	       offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
@@ -1459,13 +1459,13 @@ igb_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 	uint64_t tx_offload_capa;
 
 	RTE_SET_USED(dev);
-	tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-			  DEV_TX_OFFLOAD_IPV4_CKSUM  |
-			  DEV_TX_OFFLOAD_UDP_CKSUM   |
-			  DEV_TX_OFFLOAD_TCP_CKSUM   |
-			  DEV_TX_OFFLOAD_SCTP_CKSUM  |
-			  DEV_TX_OFFLOAD_TCP_TSO     |
-			  DEV_TX_OFFLOAD_MULTI_SEGS;
+	tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+			  RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+			  RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+			  RTE_ETH_TX_OFFLOAD_TCP_TSO     |
+			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return tx_offload_capa;
 }
@@ -1640,19 +1640,19 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
 
 	hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP  |
-			  DEV_RX_OFFLOAD_VLAN_FILTER |
-			  DEV_RX_OFFLOAD_IPV4_CKSUM  |
-			  DEV_RX_OFFLOAD_UDP_CKSUM   |
-			  DEV_RX_OFFLOAD_TCP_CKSUM   |
-			  DEV_RX_OFFLOAD_KEEP_CRC    |
-			  DEV_RX_OFFLOAD_SCATTER     |
-			  DEV_RX_OFFLOAD_RSS_HASH;
+	rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+			  RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+			  RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+			  RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+			  RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+			  RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+			  RTE_ETH_RX_OFFLOAD_SCATTER     |
+			  RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (hw->mac.type == e1000_i350 ||
 	    hw->mac.type == e1000_i210 ||
 	    hw->mac.type == e1000_i211)
-		rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+		rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
 	return rx_offload_capa;
 }
@@ -1733,7 +1733,7 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
 		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1950,23 +1950,23 @@ igb_hw_rss_hash_set(struct e1000_hw *hw, struct rte_eth_rss_conf *rss_conf)
 	/* Set configured hashing protocols in MRQC register */
 	rss_hf = rss_conf->rss_hf;
 	mrqc = E1000_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV4_TCP;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6;
-	if (rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_EX)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP;
-	if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV4_UDP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP;
-	if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP_EX;
 	E1000_WRITE_REG(hw, E1000_MRQC, mrqc);
 }
@@ -2032,23 +2032,23 @@ int eth_igb_rss_hash_conf_get(struct rte_eth_dev *dev,
 	}
 	rss_hf = 0;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_EX)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP_EX)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP_EX)
-		rss_hf |= ETH_RSS_IPV6_UDP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
 	rss_conf->rss_hf = rss_hf;
 	return 0;
 }
@@ -2170,15 +2170,15 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 			E1000_VMOLR_ROPE | E1000_VMOLR_BAM |
 			E1000_VMOLR_MPME);
 
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_UNTAG)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_UNTAG)
 			vmolr |= E1000_VMOLR_AUPE;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_MC)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
 			vmolr |= E1000_VMOLR_ROMPE;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_UC)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 			vmolr |= E1000_VMOLR_ROPE;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_BROADCAST)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 			vmolr |= E1000_VMOLR_BAM;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_MULTICAST)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 			vmolr |= E1000_VMOLR_MPME;
 
 		E1000_WRITE_REG(hw, E1000_VMOLR(i), vmolr);
@@ -2214,9 +2214,9 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 	/* VLVF: set up filters for vlan tags as configured */
 	for (i = 0; i < cfg->nb_pool_maps; i++) {
 		/* set vlan id in VF register and set the valid bit */
-		E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE | \
-                        (cfg->pool_map[i].vlan_id & ETH_VLAN_ID_MAX) | \
-			((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT ) & \
+		E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE |
+			(cfg->pool_map[i].vlan_id & RTE_ETH_VLAN_ID_MAX) |
+			((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT) &
 			E1000_VLVF_POOLSEL_MASK)));
 	}
 
@@ -2268,7 +2268,7 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	uint32_t mrqc;
 
-	if (RTE_ETH_DEV_SRIOV(dev).active == ETH_8_POOLS) {
+	if (RTE_ETH_DEV_SRIOV(dev).active == RTE_ETH_8_POOLS) {
 		/*
 		 * SRIOV active scheme
 		 * FIXME if support RSS together with VMDq & SRIOV
@@ -2282,14 +2282,14 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * SRIOV inactive scheme
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-			case ETH_MQ_RX_RSS:
+			case RTE_ETH_MQ_RX_RSS:
 				igb_rss_configure(dev);
 				break;
-			case ETH_MQ_RX_VMDQ_ONLY:
+			case RTE_ETH_MQ_RX_VMDQ_ONLY:
 				/*Configure general VMDQ only RX parameters*/
 				igb_vmdq_rx_hw_configure(dev);
 				break;
-			case ETH_MQ_RX_NONE:
+			case RTE_ETH_MQ_RX_NONE:
 				/* if mq_mode is none, disable rss mode.*/
 			default:
 				igb_rss_disable(dev);
@@ -2338,7 +2338,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		 * Set maximum packet length by default, and might be updated
 		 * together with enabling/disabling dual VLAN.
 		 */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			max_len += VLAN_TAG_SIZE;
 
 		E1000_WRITE_REG(hw, E1000_RLPML, max_len);
@@ -2374,7 +2374,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 *  call to configure
 		 */
-		if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -2444,7 +2444,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		E1000_WRITE_REG(hw, E1000_RXDCTL(rxq->reg_idx), rxdctl);
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (!dev->data->scattered_rx)
 			PMD_INIT_LOG(DEBUG, "forcing scatter mode");
 		dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
@@ -2488,16 +2488,16 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 	rxcsum |= E1000_RXCSUM_PCSD;
 
 	/* Enable both L3/L4 rx checksum offload */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		rxcsum |= E1000_RXCSUM_IPOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_IPOFL;
 	if (rxmode->offloads &
-		(DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+		(RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		rxcsum |= E1000_RXCSUM_TUOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_TUOFL;
-	if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= E1000_RXCSUM_CRCOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_CRCOFL;
@@ -2505,7 +2505,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 	E1000_WRITE_REG(hw, E1000_RXCSUM, rxcsum);
 
 	/* Setup the Receive Control Register. */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
 
 		/* clear STRCRC bit in all queues */
@@ -2545,7 +2545,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		(hw->mac.mc_filter_type << E1000_RCTL_MO_SHIFT);
 
 	/* Make sure VLAN Filters are off. */
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_VMDQ_ONLY)
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY)
 		rctl &= ~E1000_RCTL_VFE;
 	/* Don't store bad packets. */
 	rctl &= ~E1000_RCTL_SBP;
@@ -2743,7 +2743,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
 		E1000_WRITE_REG(hw, E1000_RXDCTL(i), rxdctl);
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (!dev->data->scattered_rx)
 			PMD_INIT_LOG(DEBUG, "forcing scatter mode");
 		dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
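
igb_hw_rss_hash_set() and eth_igb_rss_hash_conf_get() above translate
between the RTE_ETH_RSS_* bits and the MRQC hardware fields. On the
application side the same bits go into rss_hf; a minimal sketch
(illustrative only, not part of this patch; the helper name is
hypothetical):

#include <rte_ethdev.h>

static int
request_tcp_udp_rss(uint16_t port_id)
{
	struct rte_eth_rss_conf rss_conf = {
		.rss_key = NULL,	/* keep the driver's default key */
		.rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
			  RTE_ETH_RSS_NONFRAG_IPV4_UDP |
			  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
			  RTE_ETH_RSS_NONFRAG_IPV6_UDP,
	};

	/* Fails if the port cannot hash on every requested flow type;
	 * real code would intersect rss_hf with
	 * dev_info.flow_type_rss_offloads first. */
	return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
}
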
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 3fde099ab42c..57b53bfd6c48 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -116,10 +116,10 @@ static const struct ena_stats ena_stats_rx_strings[] = {
 #define ENA_STATS_ARRAY_TX	ARRAY_SIZE(ena_stats_tx_strings)
 #define ENA_STATS_ARRAY_RX	ARRAY_SIZE(ena_stats_rx_strings)
 
-#define QUEUE_OFFLOADS (DEV_TX_OFFLOAD_TCP_CKSUM |\
-			DEV_TX_OFFLOAD_UDP_CKSUM |\
-			DEV_TX_OFFLOAD_IPV4_CKSUM |\
-			DEV_TX_OFFLOAD_TCP_TSO)
+#define QUEUE_OFFLOADS (RTE_ETH_TX_OFFLOAD_TCP_CKSUM |\
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |\
+			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |\
+			RTE_ETH_TX_OFFLOAD_TCP_TSO)
 #define MBUF_OFFLOADS (PKT_TX_L4_MASK |\
 		       PKT_TX_IP_CKSUM |\
 		       PKT_TX_TCP_SEG)
@@ -310,7 +310,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 	    (queue_offloads & QUEUE_OFFLOADS)) {
 		/* check if TSO is required */
 		if ((mbuf->ol_flags & PKT_TX_TCP_SEG) &&
-		    (queue_offloads & DEV_TX_OFFLOAD_TCP_TSO)) {
+		    (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
 			ena_tx_ctx->tso_enable = true;
 
 			ena_meta->l4_hdr_len = GET_L4_HDR_LEN(mbuf);
@@ -318,7 +318,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 
 		/* check if L3 checksum is needed */
 		if ((mbuf->ol_flags & PKT_TX_IP_CKSUM) &&
-		    (queue_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM))
+		    (queue_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM))
 			ena_tx_ctx->l3_csum_enable = true;
 
 		if (mbuf->ol_flags & PKT_TX_IPV6) {
@@ -335,12 +335,12 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 
 		/* check if L4 checksum is needed */
 		if (((mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) &&
-		    (queue_offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) {
+		    (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_TCP;
 			ena_tx_ctx->l4_csum_enable = true;
 		} else if (((mbuf->ol_flags & PKT_TX_L4_MASK) ==
 				PKT_TX_UDP_CKSUM) &&
-				(queue_offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+				(queue_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UDP;
 			ena_tx_ctx->l4_csum_enable = true;
 		} else {
@@ -621,9 +621,9 @@ static int ena_link_update(struct rte_eth_dev *dev,
 	struct rte_eth_link *link = &dev->data->dev_link;
 	struct ena_adapter *adapter = dev->data->dev_private;
 
-	link->link_status = adapter->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
-	link->link_speed = ETH_SPEED_NUM_NONE;
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_status = adapter->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
+	link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	return 0;
 }
@@ -901,7 +901,7 @@ static int ena_start(struct rte_eth_dev *dev)
 	if (rc)
 		goto err_start_tx;
 
-	if (adapter->edev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (adapter->edev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		rc = ena_rss_configure(adapter);
 		if (rc)
 			goto err_rss_init;
@@ -1840,9 +1840,9 @@ static int ena_dev_configure(struct rte_eth_dev *dev)
 
 	adapter->state = ENA_ADAPTER_STATE_CONFIG;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
-	dev->data->dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+	dev->data->dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	adapter->tx_selected_offloads = dev->data->dev_conf.txmode.offloads;
 	adapter->rx_selected_offloads = dev->data->dev_conf.rxmode.offloads;
@@ -1893,35 +1893,35 @@ static int ena_infos_get(struct rte_eth_dev *dev,
 	ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
 
 	dev_info->speed_capa =
-			ETH_LINK_SPEED_1G   |
-			ETH_LINK_SPEED_2_5G |
-			ETH_LINK_SPEED_5G   |
-			ETH_LINK_SPEED_10G  |
-			ETH_LINK_SPEED_25G  |
-			ETH_LINK_SPEED_40G  |
-			ETH_LINK_SPEED_50G  |
-			ETH_LINK_SPEED_100G;
+			RTE_ETH_LINK_SPEED_1G   |
+			RTE_ETH_LINK_SPEED_2_5G |
+			RTE_ETH_LINK_SPEED_5G   |
+			RTE_ETH_LINK_SPEED_10G  |
+			RTE_ETH_LINK_SPEED_25G  |
+			RTE_ETH_LINK_SPEED_40G  |
+			RTE_ETH_LINK_SPEED_50G  |
+			RTE_ETH_LINK_SPEED_100G;
 
 	/* Set Tx & Rx features available for device */
 	if (adapter->offloads.tso4_supported)
-		tx_feat	|= DEV_TX_OFFLOAD_TCP_TSO;
+		tx_feat	|= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (adapter->offloads.tx_csum_supported)
-		tx_feat |= DEV_TX_OFFLOAD_IPV4_CKSUM |
-			DEV_TX_OFFLOAD_UDP_CKSUM |
-			DEV_TX_OFFLOAD_TCP_CKSUM;
+		tx_feat |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if (adapter->offloads.rx_csum_supported)
-		rx_feat |= DEV_RX_OFFLOAD_IPV4_CKSUM |
-			DEV_RX_OFFLOAD_UDP_CKSUM  |
-			DEV_RX_OFFLOAD_TCP_CKSUM;
+		rx_feat |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM  |
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
-	tx_feat |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	tx_feat |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	/* Inform framework about available features */
 	dev_info->rx_offload_capa = rx_feat;
 	if (adapter->offloads.rss_hash_supported)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	dev_info->rx_queue_offload_capa = rx_feat;
 	dev_info->tx_offload_capa = tx_feat;
 	dev_info->tx_queue_offload_capa = tx_feat;
@@ -2088,7 +2088,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	}
 #endif
 
-	fill_hash = rx_ring->offloads & DEV_RX_OFFLOAD_RSS_HASH;
+	fill_hash = rx_ring->offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	descs_in_use = rx_ring->ring_size -
 		ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1;
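
ena_link_update() above fills struct rte_eth_link with the renamed
RTE_ETH_LINK_* values; the application-side read is symmetric. A minimal
sketch (illustrative only, not part of this patch; the helper name is
hypothetical):

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;

	if (link.link_status == RTE_ETH_LINK_UP)
		printf("port %u up, %u Mbps, %s\n",
		       port_id, link.link_speed,
		       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
		       "full-duplex" : "half-duplex");
	else
		printf("port %u down\n", port_id);
}
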
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 06ac8b06b5cb..3b1844e50982 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -54,8 +54,8 @@
 
 #define ENA_HASH_KEY_SIZE		40
 
-#define ENA_ALL_RSS_HF (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
-			ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_UDP)
+#define ENA_ALL_RSS_HF (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define ENA_IO_TXQ_IDX(q)		(2 * (q))
 #define ENA_IO_RXQ_IDX(q)		(2 * (q) + 1)
diff --git a/drivers/net/ena/ena_rss.c b/drivers/net/ena/ena_rss.c
index 152098410fa2..be4007e3f3fe 100644
--- a/drivers/net/ena/ena_rss.c
+++ b/drivers/net/ena/ena_rss.c
@@ -76,7 +76,7 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
 	if (reta_size == 0 || reta_conf == NULL)
 		return -EINVAL;
 
-	if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		PMD_DRV_LOG(ERR,
 			"RSS was not configured for the PMD\n");
 		return -ENOTSUP;
@@ -93,8 +93,8 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
 		/* Each reta_conf is for 64 entries.
 		 * To support 128 we use 2 conf of 64.
 		 */
-		conf_idx = i / RTE_RETA_GROUP_SIZE;
-		idx = i % RTE_RETA_GROUP_SIZE;
+		conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		idx = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (TEST_BIT(reta_conf[conf_idx].mask, idx)) {
 			entry_value =
 				ENA_IO_RXQ_IDX(reta_conf[conf_idx].reta[idx]);
@@ -139,7 +139,7 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
 	if (reta_size == 0 || reta_conf == NULL)
 		return -EINVAL;
 
-	if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		PMD_DRV_LOG(ERR,
 			"RSS was not configured for the PMD\n");
 		return -ENOTSUP;
@@ -154,8 +154,8 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0 ; i < reta_size ; i++) {
-		reta_conf_idx = i / RTE_RETA_GROUP_SIZE;
-		reta_idx = i % RTE_RETA_GROUP_SIZE;
+		reta_conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (TEST_BIT(reta_conf[reta_conf_idx].mask, reta_idx))
 			reta_conf[reta_conf_idx].reta[reta_idx] =
 				ENA_IO_RXQ_IDX_REV(indirect_table[i]);
@@ -199,34 +199,34 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
 	/* Convert proto to ETH flag */
 	switch (proto) {
 	case ENA_ADMIN_RSS_TCP4:
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		break;
 	case ENA_ADMIN_RSS_UDP4:
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 		break;
 	case ENA_ADMIN_RSS_TCP6:
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 		break;
 	case ENA_ADMIN_RSS_UDP6:
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 		break;
 	case ENA_ADMIN_RSS_IP4:
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 		break;
 	case ENA_ADMIN_RSS_IP6:
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 		break;
 	case ENA_ADMIN_RSS_IP4_FRAG:
-		rss_hf |= ETH_RSS_FRAG_IPV4;
+		rss_hf |= RTE_ETH_RSS_FRAG_IPV4;
 		break;
 	case ENA_ADMIN_RSS_NOT_IP:
-		rss_hf |= ETH_RSS_L2_PAYLOAD;
+		rss_hf |= RTE_ETH_RSS_L2_PAYLOAD;
 		break;
 	case ENA_ADMIN_RSS_TCP6_EX:
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 		break;
 	case ENA_ADMIN_RSS_IP6_EX:
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 		break;
 	default:
 		break;
@@ -235,10 +235,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
 	/* Check if only DA or SA is being used for L3. */
 	switch (fields & ENA_HF_RSS_ALL_L3) {
 	case ENA_ADMIN_RSS_L3_SA:
-		rss_hf |= ETH_RSS_L3_SRC_ONLY;
+		rss_hf |= RTE_ETH_RSS_L3_SRC_ONLY;
 		break;
 	case ENA_ADMIN_RSS_L3_DA:
-		rss_hf |= ETH_RSS_L3_DST_ONLY;
+		rss_hf |= RTE_ETH_RSS_L3_DST_ONLY;
 		break;
 	default:
 		break;
@@ -247,10 +247,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
 	/* Check if only DA or SA is being used for L4. */
 	switch (fields & ENA_HF_RSS_ALL_L4) {
 	case ENA_ADMIN_RSS_L4_SP:
-		rss_hf |= ETH_RSS_L4_SRC_ONLY;
+		rss_hf |= RTE_ETH_RSS_L4_SRC_ONLY;
 		break;
 	case ENA_ADMIN_RSS_L4_DP:
-		rss_hf |= ETH_RSS_L4_DST_ONLY;
+		rss_hf |= RTE_ETH_RSS_L4_DST_ONLY;
 		break;
 	default:
 		break;
@@ -268,11 +268,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
 	fields_mask = ENA_ADMIN_RSS_L2_DA | ENA_ADMIN_RSS_L2_SA;
 
 	/* Determine which fields of L3 should be used. */
-	switch (rss_hf & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) {
-	case ETH_RSS_L3_DST_ONLY:
+	switch (rss_hf & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) {
+	case RTE_ETH_RSS_L3_DST_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L3_DA;
 		break;
-	case ETH_RSS_L3_SRC_ONLY:
+	case RTE_ETH_RSS_L3_SRC_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L3_SA;
 		break;
 	default:
@@ -284,11 +284,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
 	}
 
 	/* Determine which fields of L4 should be used. */
-	switch (rss_hf & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) {
-	case ETH_RSS_L4_DST_ONLY:
+	switch (rss_hf & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) {
+	case RTE_ETH_RSS_L4_DST_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L4_DP;
 		break;
-	case ETH_RSS_L4_SRC_ONLY:
+	case RTE_ETH_RSS_L4_SRC_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L4_SP;
 		break;
 	default:
@@ -334,43 +334,43 @@ static int ena_set_hash_fields(struct ena_com_dev *ena_dev, uint64_t rss_hf)
 	int rc, i;
 
 	/* Turn on appropriate fields for each requested packet type */
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) != 0)
 		selected_fields[ENA_ADMIN_RSS_TCP4].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP4, rss_hf);
 
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) != 0)
 		selected_fields[ENA_ADMIN_RSS_UDP4].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP4, rss_hf);
 
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) != 0)
 		selected_fields[ENA_ADMIN_RSS_TCP6].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6, rss_hf);
 
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) != 0)
 		selected_fields[ENA_ADMIN_RSS_UDP6].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP6, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV4) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV4) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP4].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV6) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV6) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP6].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6, rss_hf);
 
-	if ((rss_hf & ETH_RSS_FRAG_IPV4) != 0)
+	if ((rss_hf & RTE_ETH_RSS_FRAG_IPV4) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP4_FRAG].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4_FRAG, rss_hf);
 
-	if ((rss_hf & ETH_RSS_L2_PAYLOAD) != 0)
+	if ((rss_hf & RTE_ETH_RSS_L2_PAYLOAD) != 0)
 		selected_fields[ENA_ADMIN_RSS_NOT_IP].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_NOT_IP, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV6_TCP_EX) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) != 0)
 		selected_fields[ENA_ADMIN_RSS_TCP6_EX].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6_EX, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV6_EX) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV6_EX) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP6_EX].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6_EX, rss_hf);
 
@@ -541,7 +541,7 @@ int ena_rss_hash_conf_get(struct rte_eth_dev *dev,
 	uint16_t admin_hf;
 	static bool warn_once;
 
-	if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		PMD_DRV_LOG(ERR, "RSS was not configured for the PMD\n");
 		return -ENOTSUP;
 	}
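
ena_eth_hf_to_admin_hf() above maps the RTE_ETH_RSS_L3_*_ONLY and
RTE_ETH_RSS_L4_*_ONLY selectors onto ENA's admin fields. As a sketch of
how those bits compose on the application side (illustrative only, not
part of this patch; the variable name is hypothetical):

#include <rte_ethdev.h>

/* Hash TCP/IPv4 flows on source IP and source port only; ENA turns
 * these bits into ENA_ADMIN_RSS_L3_SA and ENA_ADMIN_RSS_L4_SP. */
static const uint64_t rss_hf_src_only =
	RTE_ETH_RSS_NONFRAG_IPV4_TCP |
	RTE_ETH_RSS_L3_SRC_ONLY |
	RTE_ETH_RSS_L4_SRC_ONLY;
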
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index 1b567f01eae0..7cdb8ce463ed 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -100,27 +100,27 @@ enetc_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 	status = enetc_port_rd(enetc_hw, ENETC_PM0_STATUS);
 
 	if (status & ENETC_LINK_MODE)
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	else
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 
 	if (status & ENETC_LINK_STATUS)
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 	else
-		link.link_status = ETH_LINK_DOWN;
+		link.link_status = RTE_ETH_LINK_DOWN;
 
 	switch (status & ENETC_LINK_SPEED_MASK) {
 	case ENETC_LINK_SPEED_1G:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case ENETC_LINK_SPEED_100M:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	default:
 	case ENETC_LINK_SPEED_10M:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -207,10 +207,10 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
 	dev_info->max_tx_queues = MAX_TX_RINGS;
 	dev_info->max_rx_pktlen = ENETC_MAC_MAXFRM_SIZE;
 	dev_info->rx_offload_capa =
-		(DEV_RX_OFFLOAD_IPV4_CKSUM |
-		 DEV_RX_OFFLOAD_UDP_CKSUM |
-		 DEV_RX_OFFLOAD_TCP_CKSUM |
-		 DEV_RX_OFFLOAD_KEEP_CRC);
+		(RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		 RTE_ETH_RX_OFFLOAD_KEEP_CRC);
 
 	return 0;
 }
@@ -463,7 +463,7 @@ enetc_rx_queue_setup(struct rte_eth_dev *dev,
 			       RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 
-	rx_ring->crc_len = (uint8_t)((rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+	rx_ring->crc_len = (uint8_t)((rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
 				     RTE_ETHER_CRC_LEN : 0);
 
 	return 0;
@@ -705,7 +705,7 @@ enetc_dev_configure(struct rte_eth_dev *dev)
 	enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
 	enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		int config;
 
 		config = enetc_port_rd(enetc_hw, ENETC_PM0_CMD_CFG);
@@ -713,10 +713,10 @@ enetc_dev_configure(struct rte_eth_dev *dev)
 		enetc_port_wr(enetc_hw, ENETC_PM0_CMD_CFG, config);
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		checksum &= ~L3_CKSUM;
 
-	if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM))
+	if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
 		checksum &= ~L4_CKSUM;
 
 	enetc_port_wr(enetc_hw, ENETC_PAR_PORT_CFG, checksum);
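
enetc applies the same KEEP_CRC rule as the other PMDs in this patch: with
the offload set, the 4-byte Ethernet FCS stays on the mbuf and has to be
counted in the queue's crc_len. A sketch of the recurring computation
(illustrative only, not part of this patch; the helper name is
hypothetical):

#include <rte_ether.h>
#include <rte_ethdev.h>

static inline uint8_t
queue_crc_len(uint64_t rx_offloads)
{
	return (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
		RTE_ETHER_CRC_LEN : 0;
}
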
diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index 47bfdac2cfdd..d5493c98345d 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -178,7 +178,7 @@ struct enic {
 	 */
 	uint8_t rss_hash_type; /* NIC_CFG_RSS_HASH_TYPE flags */
 	uint8_t rss_enable;
-	uint64_t rss_hf; /* ETH_RSS flags */
+	uint64_t rss_hf; /* RTE_ETH_RSS flags */
 	union vnic_rss_key rss_key;
 	union vnic_rss_cpu rss_cpu;
 
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8df7332bc5e0..c8bdaf1a8e79 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -38,30 +38,30 @@ static const struct vic_speed_capa {
 	uint16_t sub_devid;
 	uint32_t capa;
 } vic_speed_capa_map[] = {
-	{ 0x0043, ETH_LINK_SPEED_10G }, /* VIC */
-	{ 0x0047, ETH_LINK_SPEED_10G }, /* P81E PCIe */
-	{ 0x0048, ETH_LINK_SPEED_10G }, /* M81KR Mezz */
-	{ 0x004f, ETH_LINK_SPEED_10G }, /* 1280 Mezz */
-	{ 0x0084, ETH_LINK_SPEED_10G }, /* 1240 MLOM */
-	{ 0x0085, ETH_LINK_SPEED_10G }, /* 1225 PCIe */
-	{ 0x00cd, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1285 PCIe */
-	{ 0x00ce, ETH_LINK_SPEED_10G }, /* 1225T PCIe */
-	{ 0x012a, ETH_LINK_SPEED_40G }, /* M4308 */
-	{ 0x012c, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1340 MLOM */
-	{ 0x012e, ETH_LINK_SPEED_10G }, /* 1227 PCIe */
-	{ 0x0137, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1380 Mezz */
-	{ 0x014d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1385 PCIe */
-	{ 0x015d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1387 MLOM */
-	{ 0x0215, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
-		  ETH_LINK_SPEED_40G }, /* 1440 Mezz */
-	{ 0x0216, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
-		  ETH_LINK_SPEED_40G }, /* 1480 MLOM */
-	{ 0x0217, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1455 PCIe */
-	{ 0x0218, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1457 MLOM */
-	{ 0x0219, ETH_LINK_SPEED_40G }, /* 1485 PCIe */
-	{ 0x021a, ETH_LINK_SPEED_40G }, /* 1487 MLOM */
-	{ 0x024a, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1495 PCIe */
-	{ 0x024b, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1497 MLOM */
+	{ 0x0043, RTE_ETH_LINK_SPEED_10G }, /* VIC */
+	{ 0x0047, RTE_ETH_LINK_SPEED_10G }, /* P81E PCIe */
+	{ 0x0048, RTE_ETH_LINK_SPEED_10G }, /* M81KR Mezz */
+	{ 0x004f, RTE_ETH_LINK_SPEED_10G }, /* 1280 Mezz */
+	{ 0x0084, RTE_ETH_LINK_SPEED_10G }, /* 1240 MLOM */
+	{ 0x0085, RTE_ETH_LINK_SPEED_10G }, /* 1225 PCIe */
+	{ 0x00cd, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1285 PCIe */
+	{ 0x00ce, RTE_ETH_LINK_SPEED_10G }, /* 1225T PCIe */
+	{ 0x012a, RTE_ETH_LINK_SPEED_40G }, /* M4308 */
+	{ 0x012c, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1340 MLOM */
+	{ 0x012e, RTE_ETH_LINK_SPEED_10G }, /* 1227 PCIe */
+	{ 0x0137, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1380 Mezz */
+	{ 0x014d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1385 PCIe */
+	{ 0x015d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1387 MLOM */
+	{ 0x0215, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+		  RTE_ETH_LINK_SPEED_40G }, /* 1440 Mezz */
+	{ 0x0216, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+		  RTE_ETH_LINK_SPEED_40G }, /* 1480 MLOM */
+	{ 0x0217, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1455 PCIe */
+	{ 0x0218, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1457 MLOM */
+	{ 0x0219, RTE_ETH_LINK_SPEED_40G }, /* 1485 PCIe */
+	{ 0x021a, RTE_ETH_LINK_SPEED_40G }, /* 1487 MLOM */
+	{ 0x024a, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1495 PCIe */
+	{ 0x024b, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1497 MLOM */
 	{ 0, 0 }, /* End marker */
 };
 
@@ -297,8 +297,8 @@ static int enicpmd_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 	ENICPMD_FUNC_TRACE();
 
 	offloads = eth_dev->data->dev_conf.rxmode.offloads;
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			enic->ig_vlan_strip_en = 1;
 		else
 			enic->ig_vlan_strip_en = 0;
@@ -323,17 +323,17 @@ static int enicpmd_dev_configure(struct rte_eth_dev *eth_dev)
 		return ret;
 	}
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	enic->mc_count = 0;
 	enic->hw_ip_checksum = !!(eth_dev->data->dev_conf.rxmode.offloads &
-				  DEV_RX_OFFLOAD_CHECKSUM);
+				  RTE_ETH_RX_OFFLOAD_CHECKSUM);
 	/* All vlan offload masks to apply the current settings */
-	mask = ETH_VLAN_STRIP_MASK |
-		ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK |
+		RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	ret = enicpmd_vlan_offload_set(eth_dev, mask);
 	if (ret) {
 		dev_err(enic, "Failed to configure VLAN offloads\n");
@@ -435,14 +435,14 @@ static uint32_t speed_capa_from_pci_id(struct rte_eth_dev *eth_dev)
 	}
 	/* 1300 and later models are at least 40G */
 	if (id >= 0x0100)
-		return ETH_LINK_SPEED_40G;
+		return RTE_ETH_LINK_SPEED_40G;
 	/* VFs have subsystem id 0, check device id */
 	if (id == 0) {
 		/* Newer VF implies at least 40G model */
 		if (pdev->id.device_id == PCI_DEVICE_ID_CISCO_VIC_ENET_SN)
-			return ETH_LINK_SPEED_40G;
+			return RTE_ETH_LINK_SPEED_40G;
 	}
-	return ETH_LINK_SPEED_10G;
+	return RTE_ETH_LINK_SPEED_10G;
 }
 
 static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
@@ -774,8 +774,8 @@ static int enicpmd_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = enic_sop_rq_idx_to_rte_idx(
 				enic->rss_cpu.cpu[i / 4].b[i % 4]);
@@ -806,8 +806,8 @@ static int enicpmd_dev_rss_reta_update(struct rte_eth_dev *dev,
 	 */
 	rss_cpu = enic->rss_cpu;
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			rss_cpu.cpu[i / 4].b[i % 4] =
 				enic_rte_rq_idx_to_sop_idx(
@@ -883,7 +883,7 @@ static void enicpmd_dev_rxq_info_get(struct rte_eth_dev *dev,
 	 */
 	conf->offloads = enic->rx_offload_capa;
 	if (!enic->ig_vlan_strip_en)
-		conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	/* rx_thresh and other fields are not applicable for enic */
 }
 
@@ -969,8 +969,8 @@ static int enicpmd_dev_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
 static int udp_tunnel_common_check(struct enic *enic,
 				   struct rte_eth_udp_tunnel *tnl)
 {
-	if (tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN &&
-	    tnl->prot_type != RTE_TUNNEL_TYPE_GENEVE)
+	if (tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN &&
+	    tnl->prot_type != RTE_ETH_TUNNEL_TYPE_GENEVE)
 		return -ENOTSUP;
 	if (!enic->overlay_offload) {
 		ENICPMD_LOG(DEBUG, " overlay offload is not supported\n");
@@ -1010,7 +1010,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
 	ret = udp_tunnel_common_check(enic, tnl);
 	if (ret)
 		return ret;
-	vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+	vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
 	if (vxlan)
 		port = enic->vxlan_port;
 	else
@@ -1039,7 +1039,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
 	ret = udp_tunnel_common_check(enic, tnl);
 	if (ret)
 		return ret;
-	vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+	vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
 	if (vxlan)
 		port = enic->vxlan_port;
 	else
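
enicpmd_vlan_offload_set() above receives a mask of the
RTE_ETH_VLAN_*_MASK bits that changed. The application-side entry point is
rte_eth_dev_set_vlan_offload(), which takes the full desired state; a
minimal sketch (illustrative only, not part of this patch; the helper name
is hypothetical and flag names follow the renaming in this series):

#include <rte_ethdev.h>

static int
enable_vlan_strip(uint16_t port_id)
{
	/* Read the current state, then OR in the stripping bit. */
	int mask = rte_eth_dev_get_vlan_offload(port_id);

	if (mask < 0)
		return mask;
	return rte_eth_dev_set_vlan_offload(port_id,
			mask | RTE_ETH_VLAN_STRIP_OFFLOAD);
}
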
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index dfc7f5d1f94f..21b1fffb14f0 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -430,7 +430,7 @@ int enic_link_update(struct rte_eth_dev *eth_dev)
 
 	memset(&link, 0, sizeof(link));
 	link.link_status = enic_get_link_status(enic);
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_speed = vnic_dev_port_speed(enic->vdev);
 
 	return rte_eth_linkstatus_set(eth_dev, &link);
@@ -597,7 +597,7 @@ int enic_enable(struct enic *enic)
 	}
 
 	eth_dev->data->dev_link.link_speed = vnic_dev_port_speed(enic->vdev);
-	eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	/* vnic notification of link status has already been turned on in
 	 * enic_dev_init() which is called during probe time.  Here we are
@@ -638,11 +638,11 @@ int enic_enable(struct enic *enic)
 	 * and vlan insertion are supported.
 	 */
 	simple_tx_offloads = enic->tx_offload_capa &
-		(DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		 DEV_TX_OFFLOAD_VLAN_INSERT |
-		 DEV_TX_OFFLOAD_IPV4_CKSUM |
-		 DEV_TX_OFFLOAD_UDP_CKSUM |
-		 DEV_TX_OFFLOAD_TCP_CKSUM);
+		(RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		 RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		 RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
 	if ((eth_dev->data->dev_conf.txmode.offloads &
 	     ~simple_tx_offloads) == 0) {
 		ENICPMD_LOG(DEBUG, " use the simple tx handler");
@@ -858,7 +858,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
 	max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
 
 	if (enic->rte_dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_SCATTER) {
+	    RTE_ETH_RX_OFFLOAD_SCATTER) {
 		dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
 		/* ceil((max pkt len)/mbuf_size) */
 		mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
@@ -1385,15 +1385,15 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
 	rss_hash_type = 0;
 	rss_hf = rss_conf->rss_hf & enic->flow_type_rss_offloads;
 	if (enic->rq_count > 1 &&
-	    (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) &&
+	    (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) &&
 	    rss_hf != 0) {
 		rss_enable = 1;
-		if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			      ETH_RSS_NONFRAG_IPV4_OTHER))
+		if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			      RTE_ETH_RSS_NONFRAG_IPV4_OTHER))
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV4;
 			if (enic->udp_rss_weak) {
 				/*
@@ -1404,12 +1404,12 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
 				rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
 			}
 		}
-		if (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_IPV6_EX |
-			      ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER))
+		if (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_IPV6_EX |
+			      RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER))
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV6;
-		if (rss_hf & (ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX))
+		if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX))
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
-		if (rss_hf & (ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX)) {
+		if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX)) {
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV6;
 			if (enic->udp_rss_weak)
 				rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
@@ -1745,9 +1745,9 @@ enic_enable_overlay_offload(struct enic *enic)
 		return -EINVAL;
 	}
 	enic->tx_offload_capa |=
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		(enic->geneve ? DEV_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
-		(enic->vxlan ? DEV_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		(enic->geneve ? RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
+		(enic->vxlan ? RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
 	enic->tx_offload_mask |=
 		PKT_TX_OUTER_IPV6 |
 		PKT_TX_OUTER_IPV4 |
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index c5777772a09e..918a9e170ff6 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -147,31 +147,31 @@ int enic_get_vnic_config(struct enic *enic)
 		 * IPV4 hash type handles both non-frag and frag packet types.
 		 * TCP/UDP is controlled via a separate flag below.
 		 */
-		enic->flow_type_rss_offloads |= ETH_RSS_IPV4 |
-			ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV4 |
+			RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER;
 	if (ENIC_SETTING(enic, RSSHASH_TCPIPV4))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_TCP;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (ENIC_SETTING(enic, RSSHASH_IPV6))
 		/*
 		 * The VIC adapter can perform RSS on IPv6 packets with and
 		 * without extension headers. An IPv6 "fragment" is an IPv6
 		 * packet with the fragment extension header.
 		 */
-		enic->flow_type_rss_offloads |= ETH_RSS_IPV6 |
-			ETH_RSS_IPV6_EX | ETH_RSS_FRAG_IPV6 |
-			ETH_RSS_NONFRAG_IPV6_OTHER;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV6 |
+			RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_FRAG_IPV6 |
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER;
 	if (ENIC_SETTING(enic, RSSHASH_TCPIPV6))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_TCP |
-			ETH_RSS_IPV6_TCP_EX;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			RTE_ETH_RSS_IPV6_TCP_EX;
 	if (enic->udp_rss_weak)
 		enic->flow_type_rss_offloads |=
-			ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
-			ETH_RSS_IPV6_UDP_EX;
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			RTE_ETH_RSS_IPV6_UDP_EX;
 	if (ENIC_SETTING(enic, RSSHASH_UDPIPV4))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_UDP;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (ENIC_SETTING(enic, RSSHASH_UDPIPV6))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_UDP |
-			ETH_RSS_IPV6_UDP_EX;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			RTE_ETH_RSS_IPV6_UDP_EX;
 
 	/* Zero offloads if RSS is not enabled */
 	if (!ENIC_SETTING(enic, RSS))
@@ -201,19 +201,19 @@ int enic_get_vnic_config(struct enic *enic)
 	enic->tx_queue_offload_capa = 0;
 	enic->tx_offload_capa =
 		enic->tx_queue_offload_capa |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	enic->rx_offload_capa =
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	enic->tx_offload_mask =
 		PKT_TX_IPV6 |
 		PKT_TX_IPV4 |
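
The flow_type_rss_offloads mask built above is what applications compare a
requested hash profile against. A minimal sketch of that check, assuming a
valid port_id (error handling trimmed):

#include <errno.h>
#include <rte_ethdev.h>

static int
check_rss_request(uint16_t port_id, uint64_t requested_rss_hf)
{
	struct rte_eth_dev_info dev_info;
	int ret = rte_eth_dev_info_get(port_id, &dev_info);

	if (ret != 0)
		return ret;
	/* Any requested bit outside the advertised set is unsupported. */
	if (requested_rss_hf & ~dev_info.flow_type_rss_offloads)
		return -ENOTSUP;
	return 0;
}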
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index b87c036e6014..82d595b1d1a0 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -17,10 +17,10 @@
 
 const char pmd_failsafe_driver_name[] = FAILSAFE_DRIVER_NAME;
 static const struct rte_eth_link eth_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_UP,
-	.link_autoneg = ETH_LINK_AUTONEG,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_UP,
+	.link_autoneg = RTE_ETH_LINK_AUTONEG,
 };
 
 static int
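
For reference, a fixed link such as the one declared above is read back
through the usual rte_eth_link accessors. A short sketch, assuming a
started port:

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;
	char buf[RTE_ETH_LINK_MAX_STR_LEN];

	if (rte_eth_link_get_nowait(port_id, &link) == 0 &&
	    link.link_status == RTE_ETH_LINK_UP) {
		rte_eth_link_to_str(buf, sizeof(buf), &link);
		printf("port %u: %s\n", port_id, buf);
	}
}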
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 602c04033c18..5f4810051dac 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -326,7 +326,7 @@ int failsafe_rx_intr_install_subdevice(struct sub_device *sdev)
 	int qid;
 	struct rte_eth_dev *fsdev;
 	struct rxq **rxq;
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 				&ETH(sdev)->data->dev_conf.intr_conf;
 
 	fsdev = fs_dev(sdev);
@@ -519,7 +519,7 @@ int
 failsafe_rx_intr_install(struct rte_eth_dev *dev)
 {
 	struct fs_priv *priv = PRIV(dev);
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 			&priv->data->dev_conf.intr_conf;
 
 	if (intr_conf->rxq == 0 || dev->intr_handle != NULL)
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 29de39910c6e..a3a8a1c82e3a 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1172,51 +1172,51 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
 	 * configuring a sub-device.
 	 */
 	infos->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_LRO |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_MACSEC_STRIP |
-		DEV_RX_OFFLOAD_HEADER_SPLIT |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_TIMESTAMP |
-		DEV_RX_OFFLOAD_SECURITY |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_LRO |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+		RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+		RTE_ETH_RX_OFFLOAD_SECURITY |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	infos->rx_queue_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_LRO |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_MACSEC_STRIP |
-		DEV_RX_OFFLOAD_HEADER_SPLIT |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_TIMESTAMP |
-		DEV_RX_OFFLOAD_SECURITY |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_LRO |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+		RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+		RTE_ETH_RX_OFFLOAD_SECURITY |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	infos->tx_offload_capa =
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	infos->flow_type_rss_offloads =
-		ETH_RSS_IP |
-		ETH_RSS_UDP |
-		ETH_RSS_TCP;
+		RTE_ETH_RSS_IP |
+		RTE_ETH_RSS_UDP |
+		RTE_ETH_RSS_TCP;
 	infos->dev_capa =
 		RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 		RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
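
The dev_capa bits above tell applications whether queues may be set up
after rte_eth_dev_start(). A hedged check before attempting it:

#include <rte_ethdev.h>

static int
can_setup_rxq_at_runtime(uint16_t port_id)
{
	struct rte_eth_dev_info info;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return 0;
	return !!(info.dev_capa & RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP);
}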
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 17c73c4dc5ae..b7522a47a80b 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -177,7 +177,7 @@ struct fm10k_rx_queue {
 	uint8_t drop_en;
 	uint8_t rx_deferred_start; /* don't start this queue in dev start. */
 	uint16_t rx_ftag_en; /* indicates FTAG RX supported */
-	uint64_t offloads; /* offloads of DEV_RX_OFFLOAD_* */
+	uint64_t offloads; /* offloads of RTE_ETH_RX_OFFLOAD_* */
 };
 
 /*
@@ -209,7 +209,7 @@ struct fm10k_tx_queue {
 	uint16_t next_rs; /* Next pos to set RS flag */
 	uint16_t next_dd; /* Next pos to check DD flag */
 	volatile uint32_t *tail_ptr;
-	uint64_t offloads; /* Offloads of DEV_TX_OFFLOAD_* */
+	uint64_t offloads; /* Offloads of RTE_ETH_TX_OFFLOAD_* */
 	uint16_t nb_desc;
 	uint16_t port_id;
 	uint8_t tx_deferred_start; /** don't start this queue in dev start. */
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 66f4a5c6df2c..d256334bfde9 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -413,12 +413,12 @@ fm10k_check_mq_mode(struct rte_eth_dev *dev)
 
 	vmdq_conf = &dev->data->dev_conf.rx_adv_conf.vmdq_rx_conf;
 
-	if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		PMD_INIT_LOG(ERR, "DCB mode is not supported.");
 		return -EINVAL;
 	}
 
-	if (!(rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+	if (!(rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
 		return 0;
 
 	if (hw->mac.type == fm10k_mac_vf) {
@@ -449,8 +449,8 @@ fm10k_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multiple queue mode checking */
 	ret  = fm10k_check_mq_mode(dev);
@@ -510,7 +510,7 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
 		0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
 	};
 
-	if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_RSS ||
+	if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_RSS ||
 		dev_conf->rx_adv_conf.rss_conf.rss_hf == 0) {
 		FM10K_WRITE_REG(hw, FM10K_MRQC(0), 0);
 		return;
@@ -547,15 +547,15 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
 	 */
 	hf = dev_conf->rx_adv_conf.rss_conf.rss_hf;
 	mrqc = 0;
-	mrqc |= (hf & ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
 
 	if (mrqc == 0) {
 		PMD_INIT_LOG(ERR, "Specified RSS mode 0x%"PRIx64"is not"
@@ -602,7 +602,7 @@ fm10k_dev_mq_rx_configure(struct rte_eth_dev *dev)
 	if (hw->mac.type != fm10k_mac_pf)
 		return;
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
 		nb_queue_pools = vmdq_conf->nb_queue_pools;
 
 	/* no pool number change, no need to update logic port and VLAN/MAC */
@@ -759,7 +759,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
 		/* It adds dual VLAN length for supporting dual VLAN */
 		if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
 				2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
-			rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
+			rxq->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 			uint32_t reg;
 			dev->data->scattered_rx = 1;
 			reg = FM10K_READ_REG(hw, FM10K_SRRCTL(i));
@@ -1145,7 +1145,7 @@ fm10k_dev_start(struct rte_eth_dev *dev)
 	}
 
 	/* Update default vlan when not in VMDQ mode */
-	if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+	if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
 		fm10k_vlan_filter_set(dev, hw->mac.default_vid, true);
 
 	fm10k_link_update(dev, 0);
@@ -1222,11 +1222,11 @@ fm10k_link_update(struct rte_eth_dev *dev,
 		FM10K_DEV_PRIVATE_TO_INFO(dev->data->dev_private);
 	PMD_INIT_FUNC_TRACE();
 
-	dev->data->dev_link.link_speed  = ETH_SPEED_NUM_50G;
-	dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	dev->data->dev_link.link_speed  = RTE_ETH_SPEED_NUM_50G;
+	dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	dev->data->dev_link.link_status =
-		dev_info->sm_down ? ETH_LINK_DOWN : ETH_LINK_UP;
-	dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
+		dev_info->sm_down ? RTE_ETH_LINK_DOWN : RTE_ETH_LINK_UP;
+	dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
 
 	return 0;
 }
@@ -1378,7 +1378,7 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 	dev_info->max_vfs            = pdev->max_vfs;
 	dev_info->vmdq_pool_base     = 0;
 	dev_info->vmdq_queue_base    = 0;
-	dev_info->max_vmdq_pools     = ETH_32_POOLS;
+	dev_info->max_vmdq_pools     = RTE_ETH_32_POOLS;
 	dev_info->vmdq_queue_num     = FM10K_MAX_QUEUES_PF;
 	dev_info->rx_queue_offload_capa = fm10k_get_rx_queue_offloads_capa(dev);
 	dev_info->rx_offload_capa = fm10k_get_rx_port_offloads_capa(dev) |
@@ -1389,15 +1389,15 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 
 	dev_info->hash_key_size = FM10K_RSSRK_SIZE * sizeof(uint32_t);
 	dev_info->reta_size = FM10K_MAX_RSS_INDICES;
-	dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
-					ETH_RSS_IPV6 |
-					ETH_RSS_IPV6_EX |
-					ETH_RSS_NONFRAG_IPV4_TCP |
-					ETH_RSS_NONFRAG_IPV6_TCP |
-					ETH_RSS_IPV6_TCP_EX |
-					ETH_RSS_NONFRAG_IPV4_UDP |
-					ETH_RSS_NONFRAG_IPV6_UDP |
-					ETH_RSS_IPV6_UDP_EX;
+	dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+					RTE_ETH_RSS_IPV6 |
+					RTE_ETH_RSS_IPV6_EX |
+					RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+					RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+					RTE_ETH_RSS_IPV6_TCP_EX |
+					RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+					RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+					RTE_ETH_RSS_IPV6_UDP_EX;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -1435,9 +1435,9 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 		.nb_mtu_seg_max = FM10K_TX_MAX_MTU_SEG,
 	};
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G |
-			ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
-			ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G |
+			RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+			RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -1509,7 +1509,7 @@ fm10k_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 		return -EINVAL;
 	}
 
-	if (vlan_id > ETH_VLAN_ID_MAX) {
+	if (vlan_id > RTE_ETH_VLAN_ID_MAX) {
 		PMD_INIT_LOG(ERR, "Invalid vlan_id: must be < 4096");
 		return -EINVAL;
 	}
@@ -1767,20 +1767,20 @@ static uint64_t fm10k_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
 
-	return (uint64_t)(DEV_RX_OFFLOAD_SCATTER);
+	return (uint64_t)(RTE_ETH_RX_OFFLOAD_SCATTER);
 }
 
 static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
 
-	return  (uint64_t)(DEV_RX_OFFLOAD_VLAN_STRIP  |
-			   DEV_RX_OFFLOAD_VLAN_FILTER |
-			   DEV_RX_OFFLOAD_IPV4_CKSUM  |
-			   DEV_RX_OFFLOAD_UDP_CKSUM   |
-			   DEV_RX_OFFLOAD_TCP_CKSUM   |
-			   DEV_RX_OFFLOAD_HEADER_SPLIT |
-			   DEV_RX_OFFLOAD_RSS_HASH);
+	return  (uint64_t)(RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+			   RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+			   RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+			   RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+			   RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+			   RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+			   RTE_ETH_RX_OFFLOAD_RSS_HASH);
 }
 
 static int
@@ -1965,12 +1965,12 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
 
-	return (uint64_t)(DEV_TX_OFFLOAD_VLAN_INSERT |
-			  DEV_TX_OFFLOAD_MULTI_SEGS  |
-			  DEV_TX_OFFLOAD_IPV4_CKSUM  |
-			  DEV_TX_OFFLOAD_UDP_CKSUM   |
-			  DEV_TX_OFFLOAD_TCP_CKSUM   |
-			  DEV_TX_OFFLOAD_TCP_TSO);
+	return (uint64_t)(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+			  RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+			  RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_TCP_TSO);
 }
 
 static int
@@ -2111,8 +2111,8 @@ fm10k_reta_update(struct rte_eth_dev *dev,
 	 * 128-entries in 32 registers
 	 */
 	for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				BIT_MASK_PER_UINT32);
 		if (mask == 0)
@@ -2160,8 +2160,8 @@ fm10k_reta_query(struct rte_eth_dev *dev,
 	 * 128-entries in 32 registers
 	 */
 	for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				BIT_MASK_PER_UINT32);
 		if (mask == 0)
@@ -2198,15 +2198,15 @@ fm10k_rss_hash_update(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	mrqc = 0;
-	mrqc |= (hf & ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
 
 	/* If the mapping doesn't fit any supported, return */
 	if (mrqc == 0)
@@ -2243,15 +2243,15 @@ fm10k_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	mrqc = FM10K_READ_REG(hw, FM10K_MRQC(0));
 	hf = 0;
-	hf |= (mrqc & FM10K_MRQC_IPV4)     ? ETH_RSS_IPV4              : 0;
-	hf |= (mrqc & FM10K_MRQC_IPV6)     ? ETH_RSS_IPV6              : 0;
-	hf |= (mrqc & FM10K_MRQC_IPV6)     ? ETH_RSS_IPV6_EX           : 0;
-	hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? ETH_RSS_NONFRAG_IPV4_TCP  : 0;
-	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_NONFRAG_IPV6_TCP  : 0;
-	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP_EX       : 0;
-	hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? ETH_RSS_NONFRAG_IPV4_UDP  : 0;
-	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_NONFRAG_IPV6_UDP  : 0;
-	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP_EX       : 0;
+	hf |= (mrqc & FM10K_MRQC_IPV4)     ? RTE_ETH_RSS_IPV4              : 0;
+	hf |= (mrqc & FM10K_MRQC_IPV6)     ? RTE_ETH_RSS_IPV6              : 0;
+	hf |= (mrqc & FM10K_MRQC_IPV6)     ? RTE_ETH_RSS_IPV6_EX           : 0;
+	hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_TCP  : 0;
+	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_TCP  : 0;
+	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_IPV6_TCP_EX       : 0;
+	hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_UDP  : 0;
+	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_UDP  : 0;
+	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_IPV6_UDP_EX       : 0;
 
 	rss_conf->rss_hf = hf;
 
@@ -2606,7 +2606,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
 
 			/* first clear the internal SW recording structure */
 			if (!(dev->data->dev_conf.rxmode.mq_mode &
-						ETH_MQ_RX_VMDQ_FLAG))
+						RTE_ETH_MQ_RX_VMDQ_FLAG))
 				fm10k_vlan_filter_set(dev, hw->mac.default_vid,
 					false);
 
@@ -2622,7 +2622,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
 					MAIN_VSI_POOL_NUMBER);
 
 			if (!(dev->data->dev_conf.rxmode.mq_mode &
-						ETH_MQ_RX_VMDQ_FLAG))
+						RTE_ETH_MQ_RX_VMDQ_FLAG))
 				fm10k_vlan_filter_set(dev, hw->mac.default_vid,
 					true);
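
The idx/shift arithmetic in the reta hunks above is the standard group
walk: the table travels in 64-entry groups, so entry i sits in group
i / RTE_ETH_RETA_GROUP_SIZE at bit i % RTE_ETH_RETA_GROUP_SIZE of that
group's mask. An application-side sketch, assuming reta_size is a
multiple of RTE_ETH_RETA_GROUP_SIZE:

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

static int
fill_reta_round_robin(uint16_t port_id, uint16_t reta_size, uint16_t nb_queues)
{
	struct rte_eth_rss_reta_entry64 conf[reta_size / RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	memset(conf, 0, sizeof(conf));
	for (i = 0; i < reta_size; i++) {
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

		conf[idx].mask |= UINT64_C(1) << shift;
		conf[idx].reta[shift] = i % nb_queues;
	}
	return rte_eth_dev_rss_reta_update(port_id, conf, reta_size);
}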
 
diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c
index 83af01dc2da6..50973a662c67 100644
--- a/drivers/net/fm10k/fm10k_rxtx_vec.c
+++ b/drivers/net/fm10k/fm10k_rxtx_vec.c
@@ -208,11 +208,11 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
 {
 #ifndef RTE_LIBRTE_IEEE1588
 	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 
 #ifndef RTE_FM10K_RX_OLFLAGS_ENABLE
 	/* without rx ol_flags, no VP flag report */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 		return -1;
 #endif
 
@@ -221,7 +221,7 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
 		return -1;
 
 	/* no header split support */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
 		return -1;
 
 	return 0;
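
The condition check above follows a common PMD pattern: fall back from the
vector datapath when the configuration needs features it does not handle.
A stripped-down sketch of the same idea (illustrative offload set):

#include <rte_ethdev.h>

static inline int
rx_vec_allowed(const struct rte_eth_rxmode *rxmode)
{
	/* Vector RX here would not parse these; take the scalar path. */
	if (rxmode->offloads & (RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
				RTE_ETH_RX_OFFLOAD_HEADER_SPLIT))
		return 0;
	return 1;
}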
diff --git a/drivers/net/hinic/base/hinic_pmd_hwdev.c b/drivers/net/hinic/base/hinic_pmd_hwdev.c
index cb9cf6efa287..80f9eb5c3031 100644
--- a/drivers/net/hinic/base/hinic_pmd_hwdev.c
+++ b/drivers/net/hinic/base/hinic_pmd_hwdev.c
@@ -1320,28 +1320,28 @@ hinic_cable_status_event(u8 cmd, void *buf_in, __rte_unused u16 in_size,
 static int hinic_link_event_process(struct hinic_hwdev *hwdev,
 				    struct rte_eth_dev *eth_dev, u8 status)
 {
-	uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
-					ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
-					ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
-					ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+	uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+					RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+					RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+					RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
 	struct nic_port_info port_info;
 	struct rte_eth_link link;
 	int rc = HINIC_OK;
 
 	if (!status) {
-		link.link_status = ETH_LINK_DOWN;
+		link.link_status = RTE_ETH_LINK_DOWN;
 		link.link_speed = 0;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	} else {
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 
 		memset(&port_info, 0, sizeof(port_info));
 		rc = hinic_get_port_info(hwdev, &port_info);
 		if (rc) {
-			link.link_speed = ETH_SPEED_NUM_NONE;
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
-			link.link_autoneg = ETH_LINK_FIXED;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+			link.link_autoneg = RTE_ETH_LINK_FIXED;
 		} else {
 			link.link_speed = port_speed[port_info.speed %
 						LINK_SPEED_MAX];
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c2374ebb6759..4cd5a85d5f8d 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -311,8 +311,8 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* mtu size is 256~9600 */
 	if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
@@ -338,7 +338,7 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
 
 	/* init vlan offload */
 	err = hinic_vlan_offload_set(dev,
-				ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+				RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Initialize vlan filter and strip failed");
 		(void)hinic_config_mq_mode(dev, FALSE);
@@ -696,15 +696,15 @@ static void hinic_get_speed_capa(struct rte_eth_dev *dev, uint32_t *speed_capa)
 	} else {
 		*speed_capa = 0;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_1G))
-			*speed_capa |= ETH_LINK_SPEED_1G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_1G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_10G))
-			*speed_capa |= ETH_LINK_SPEED_10G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_10G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_25G))
-			*speed_capa |= ETH_LINK_SPEED_25G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_25G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_40G))
-			*speed_capa |= ETH_LINK_SPEED_40G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_40G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_100G))
-			*speed_capa |= ETH_LINK_SPEED_100G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	}
 }
 
@@ -732,24 +732,24 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 
 	hinic_get_speed_capa(dev, &info->speed_capa);
 	info->rx_queue_offload_capa = 0;
-	info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
-				DEV_RX_OFFLOAD_IPV4_CKSUM |
-				DEV_RX_OFFLOAD_UDP_CKSUM |
-				DEV_RX_OFFLOAD_TCP_CKSUM |
-				DEV_RX_OFFLOAD_VLAN_FILTER |
-				DEV_RX_OFFLOAD_SCATTER |
-				DEV_RX_OFFLOAD_TCP_LRO |
-				DEV_RX_OFFLOAD_RSS_HASH;
+	info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+				RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				RTE_ETH_RX_OFFLOAD_SCATTER |
+				RTE_ETH_RX_OFFLOAD_TCP_LRO |
+				RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	info->tx_queue_offload_capa = 0;
-	info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-				DEV_TX_OFFLOAD_IPV4_CKSUM |
-				DEV_TX_OFFLOAD_UDP_CKSUM |
-				DEV_TX_OFFLOAD_TCP_CKSUM |
-				DEV_TX_OFFLOAD_SCTP_CKSUM |
-				DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				DEV_TX_OFFLOAD_TCP_TSO |
-				DEV_TX_OFFLOAD_MULTI_SEGS;
+	info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	info->hash_key_size = HINIC_RSS_KEY_SIZE;
 	info->reta_size = HINIC_RSS_INDIR_SIZE;
@@ -846,20 +846,20 @@ static int hinic_priv_get_dev_link_status(struct hinic_nic_dev *nic_dev,
 	u8 port_link_status = 0;
 	struct nic_port_info port_link_info;
 	struct hinic_hwdev *nic_hwdev = nic_dev->hwdev;
-	uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
-					ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
-					ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
-					ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+	uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+					RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+					RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+					RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
 
 	rc = hinic_get_link_status(nic_hwdev, &port_link_status);
 	if (rc)
 		return rc;
 
 	if (!port_link_status) {
-		link->link_status = ETH_LINK_DOWN;
+		link->link_status = RTE_ETH_LINK_DOWN;
 		link->link_speed = 0;
-		link->link_duplex = ETH_LINK_HALF_DUPLEX;
-		link->link_autoneg = ETH_LINK_FIXED;
+		link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link->link_autoneg = RTE_ETH_LINK_FIXED;
 		return HINIC_OK;
 	}
 
@@ -901,8 +901,8 @@ static int hinic_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		/* Get link status information from hardware */
 		rc = hinic_priv_get_dev_link_status(nic_dev, &link);
 		if (rc != HINIC_OK) {
-			link.link_speed = ETH_SPEED_NUM_NONE;
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR, "Get link status failed");
 			goto out;
 		}
@@ -1650,8 +1650,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	int err;
 
 	/* Enable or disable VLAN filter */
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) ?
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) ?
 			TRUE : FALSE;
 		err = hinic_config_vlan_filter(nic_dev->hwdev, on);
 		if (err == HINIC_MGMT_CMD_UNSUPPORTED) {
@@ -1672,8 +1672,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	}
 
 	/* Enable or disable VLAN stripping */
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) ?
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) ?
 			TRUE : FALSE;
 		err = hinic_set_rx_vlan_offload(nic_dev->hwdev, on);
 		if (err) {
@@ -1859,13 +1859,13 @@ static int hinic_flow_ctrl_get(struct rte_eth_dev *dev,
 	fc_conf->autoneg = nic_pause.auto_neg;
 
 	if (nic_pause.tx_pause && nic_pause.rx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (nic_pause.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else if (nic_pause.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -1879,14 +1879,14 @@ static int hinic_flow_ctrl_set(struct rte_eth_dev *dev,
 
 	nic_pause.auto_neg = fc_conf->autoneg;
 
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-		(fc_conf->mode & RTE_FC_TX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+		(fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
 		nic_pause.tx_pause = true;
 	else
 		nic_pause.tx_pause = false;
 
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-		(fc_conf->mode & RTE_FC_RX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+		(fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
 		nic_pause.rx_pause = true;
 	else
 		nic_pause.rx_pause = false;
@@ -1930,7 +1930,7 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
 	struct nic_rss_type rss_type = {0};
 	int err = 0;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		PMD_DRV_LOG(WARNING, "RSS is not enabled");
 		return HINIC_OK;
 	}
@@ -1951,14 +1951,14 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
 		}
 	}
 
-	rss_type.ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
-	rss_type.tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
-	rss_type.ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
-	rss_type.ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
-	rss_type.tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
-	rss_type.tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
-	rss_type.udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
-	rss_type.udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+	rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+	rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+	rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+	rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+	rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+	rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+	rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+	rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
 
 	err = hinic_set_rss_type(nic_dev->hwdev, tmpl_idx, rss_type);
 	if (err) {
@@ -1994,7 +1994,7 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
 	struct nic_rss_type rss_type = {0};
 	int err;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		PMD_DRV_LOG(WARNING, "RSS is not enabled");
 		return HINIC_ERROR;
 	}
@@ -2015,15 +2015,15 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
 
 	rss_conf->rss_hf = 0;
 	rss_conf->rss_hf |=  rss_type.ipv4 ?
-		(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4) : 0;
-	rss_conf->rss_hf |=  rss_type.tcp_ipv4 ? ETH_RSS_NONFRAG_IPV4_TCP : 0;
+		(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4) : 0;
+	rss_conf->rss_hf |=  rss_type.tcp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_TCP : 0;
 	rss_conf->rss_hf |=  rss_type.ipv6 ?
-		(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6) : 0;
-	rss_conf->rss_hf |=  rss_type.ipv6_ext ? ETH_RSS_IPV6_EX : 0;
-	rss_conf->rss_hf |=  rss_type.tcp_ipv6 ? ETH_RSS_NONFRAG_IPV6_TCP : 0;
-	rss_conf->rss_hf |=  rss_type.tcp_ipv6_ext ? ETH_RSS_IPV6_TCP_EX : 0;
-	rss_conf->rss_hf |=  rss_type.udp_ipv4 ? ETH_RSS_NONFRAG_IPV4_UDP : 0;
-	rss_conf->rss_hf |=  rss_type.udp_ipv6 ? ETH_RSS_NONFRAG_IPV6_UDP : 0;
+		(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6) : 0;
+	rss_conf->rss_hf |=  rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0;
+	rss_conf->rss_hf |=  rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0;
+	rss_conf->rss_hf |=  rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0;
+	rss_conf->rss_hf |=  rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0;
+	rss_conf->rss_hf |=  rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0;
 
 	return HINIC_OK;
 }
@@ -2053,7 +2053,7 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
 	u16 i = 0;
 	u16 idx, shift;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG))
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG))
 		return HINIC_OK;
 
 	if (reta_size != NIC_RSS_INDIR_SIZE) {
@@ -2067,8 +2067,8 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
 
 	/* update rss indir_tbl */
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (reta_conf[idx].reta[shift] >= nic_dev->num_rq) {
 			PMD_DRV_LOG(ERR, "Invalid reta entry, indirtbl[%d]: %d "
@@ -2133,8 +2133,8 @@ static int hinic_rss_indirtbl_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = (uint16_t)indirtbl[i];
 	}
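
hinic_flow_ctrl_set() above maps the RTE_ETH_FC_* enum onto per-direction
pause bits. From the application side, a read-modify-write sketch (assumes
the port supports flow control):

#include <rte_ethdev.h>

static int
enable_full_pause(uint16_t port_id)
{
	struct rte_eth_fc_conf fc_conf;
	int ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);

	if (ret != 0)
		return ret;
	fc_conf.mode = RTE_ETH_FC_FULL;	/* pause in both directions */
	return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}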
diff --git a/drivers/net/hinic/hinic_pmd_rx.c b/drivers/net/hinic/hinic_pmd_rx.c
index 842399cc4cd8..d347afe9a6a9 100644
--- a/drivers/net/hinic/hinic_pmd_rx.c
+++ b/drivers/net/hinic/hinic_pmd_rx.c
@@ -504,14 +504,14 @@ static void hinic_fill_rss_type(struct nic_rss_type *rss_type,
 {
 	u64 rss_hf = rss_conf->rss_hf;
 
-	rss_type->ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
-	rss_type->tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
-	rss_type->ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
-	rss_type->ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
-	rss_type->tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
-	rss_type->tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
-	rss_type->udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
-	rss_type->udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+	rss_type->ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+	rss_type->tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+	rss_type->ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+	rss_type->ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+	rss_type->tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+	rss_type->tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+	rss_type->udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+	rss_type->udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
 }
 
 static void hinic_fillout_indir_tbl(struct hinic_nic_dev *nic_dev, u32 *indir)
@@ -588,8 +588,8 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
 {
 	int err, i;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
-		nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
+		nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
 		nic_dev->num_rss = 0;
 		if (nic_dev->num_rq > 1) {
 			/* get rss template id */
@@ -599,7 +599,7 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
 				PMD_DRV_LOG(WARNING, "Alloc rss template failed");
 				return err;
 			}
-			nic_dev->flags |= ETH_MQ_RX_RSS_FLAG;
+			nic_dev->flags |= RTE_ETH_MQ_RX_RSS_FLAG;
 			for (i = 0; i < nic_dev->num_rq; i++)
 				hinic_add_rq_to_rx_queue_list(nic_dev, i);
 		}
@@ -610,12 +610,12 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
 
 static void hinic_destroy_num_qps(struct hinic_nic_dev *nic_dev)
 {
-	if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+	if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (hinic_rss_template_free(nic_dev->hwdev,
 					    nic_dev->rss_tmpl_idx))
 			PMD_DRV_LOG(WARNING, "Free rss template failed");
 
-		nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+		nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
 	}
 }
 
@@ -641,7 +641,7 @@ int hinic_config_mq_mode(struct rte_eth_dev *dev, bool on)
 	int ret = 0;
 
 	switch (dev_conf->rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		ret = hinic_config_mq_rx_rss(nic_dev, on);
 		break;
 	default:
@@ -662,7 +662,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
 	int lro_wqe_num;
 	int buf_size;
 
-	if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+	if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (rss_conf.rss_hf == 0) {
 			rss_conf.rss_hf = HINIC_RSS_OFFLOAD_ALL;
 		} else if ((rss_conf.rss_hf & HINIC_RSS_OFFLOAD_ALL) == 0) {
@@ -678,7 +678,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
 	}
 
 	/* Enable both L3/L4 rx checksum offload */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		nic_dev->rx_csum_en = HINIC_RX_CSUM_OFFLOAD_EN;
 
 	err = hinic_set_rx_csum_offload(nic_dev->hwdev,
@@ -687,7 +687,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
 		goto rx_csum_ofl_err;
 
 	/* config lro */
-	lro_en = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ?
+	lro_en = dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ?
 			true : false;
 	max_lro_size = dev->data->dev_conf.rxmode.max_lro_pkt_size;
 	buf_size = nic_dev->hwdev->nic_io->rq_buf_size;
@@ -726,7 +726,7 @@ void hinic_rx_remove_configure(struct rte_eth_dev *dev)
 {
 	struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
 
-	if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+	if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
 		hinic_rss_deinit(nic_dev);
 		hinic_destroy_num_qps(nic_dev);
 	}
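
hinic_rx_configure() keys its checksum and LRO setup off rxmode.offloads.
The matching configure-time request, sketched with the renamed bits (queue
setup and start omitted):

#include <string.h>
#include <rte_ethdev.h>

static int
configure_rx_offloads(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
	conf.rxmode.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM |
			       RTE_ETH_RX_OFFLOAD_TCP_LRO;
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}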
diff --git a/drivers/net/hinic/hinic_pmd_rx.h b/drivers/net/hinic/hinic_pmd_rx.h
index 8a45f2d9fc50..5c303398b635 100644
--- a/drivers/net/hinic/hinic_pmd_rx.h
+++ b/drivers/net/hinic/hinic_pmd_rx.h
@@ -8,17 +8,17 @@
 #define HINIC_DEFAULT_RX_FREE_THRESH	32
 
 #define HINIC_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 |\
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 |\
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 enum rq_completion_fmt {
 	RQ_COMPLETE_SGE = 1
diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
index 8753c340e790..3d0159d78778 100644
--- a/drivers/net/hns3/hns3_dcb.c
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -1536,7 +1536,7 @@ hns3_dcb_hw_configure(struct hns3_adapter *hns)
 		return ret;
 	}
 
-	if (hw->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (hw->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		dcb_rx_conf = &hw->data->dev_conf.rx_adv_conf.dcb_rx_conf;
 		if (dcb_rx_conf->nb_tcs == 0)
 			hw->dcb_info.pfc_en = 1; /* tc0 only */
@@ -1693,7 +1693,7 @@ hns3_update_queue_map_configure(struct hns3_adapter *hns)
 	uint16_t nb_tx_q = hw->data->nb_tx_queues;
 	int ret;
 
-	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		return 0;
 
 	ret = hns3_dcb_update_tc_queue_mapping(hw, nb_rx_q, nb_tx_q);
@@ -1713,22 +1713,22 @@ static void
 hns3_get_fc_mode(struct hns3_hw *hw, enum rte_eth_fc_mode mode)
 {
 	switch (mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		hw->requested_fc_mode = HNS3_FC_NONE;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		hw->requested_fc_mode = HNS3_FC_RX_PAUSE;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		hw->requested_fc_mode = HNS3_FC_TX_PAUSE;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		hw->requested_fc_mode = HNS3_FC_FULL;
 		break;
 	default:
 		hw->requested_fc_mode = HNS3_FC_NONE;
 		hns3_warn(hw, "fc_mode(%u) exceeds member scope and is "
-			  "configured to RTE_FC_NONE", mode);
+			  "configured to RTE_ETH_FC_NONE", mode);
 		break;
 	}
 }
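
hns3_dcb_hw_configure() above reacts to RTE_ETH_DCB_PFC_SUPPORT in
dcb_capability_en. The application-side request it validates looks roughly
like this (field names per the renamed API; an illustration only):

#include <string.h>
#include <rte_ethdev.h>

static void
dcb_pfc_conf(struct rte_eth_conf *conf)
{
	int i;

	memset(conf, 0, sizeof(*conf));
	conf->rxmode.mq_mode = RTE_ETH_MQ_RX_DCB_RSS;
	conf->dcb_capability_en = RTE_ETH_DCB_PFC_SUPPORT;
	conf->rx_adv_conf.dcb_rx_conf.nb_tcs = RTE_ETH_4_TCS;
	/* spread the 8 user priorities over TCs 0..3 */
	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
		conf->rx_adv_conf.dcb_rx_conf.dcb_tc[i] = i % 4;
}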
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 693048f58704..8e0ccecb57a6 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -60,29 +60,29 @@ enum hns3_evt_cause {
 };
 
 static const struct rte_eth_fec_capa speed_fec_capa_tbl[] = {
-	{ ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
 
-	{ ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
 
-	{ ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
 
-	{ ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
 
-	{ ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
 
-	{ ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(RS) }
 };
@@ -500,8 +500,8 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
 	struct hns3_cmd_desc desc;
 	int ret;
 
-	if ((vlan_type != ETH_VLAN_TYPE_INNER &&
-	     vlan_type != ETH_VLAN_TYPE_OUTER)) {
+	if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+	     vlan_type != RTE_ETH_VLAN_TYPE_OUTER)) {
 		hns3_err(hw, "Unsupported vlan type, vlan_type =%d", vlan_type);
 		return -EINVAL;
 	}
@@ -514,10 +514,10 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
 	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MAC_VLAN_TYPE_ID, false);
 	rx_req = (struct hns3_rx_vlan_type_cfg_cmd *)desc.data;
 
-	if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
 		rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
-	} else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+	} else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
 		rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
 		rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
 		rx_req->in_fst_vlan_type = rte_cpu_to_le_16(tpid);
@@ -725,11 +725,11 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	rte_spinlock_lock(&hw->lock);
 	rxmode = &dev->data->dev_conf.rxmode;
 	tmp_mask = (unsigned int)mask;
-	if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* ignore vlan filter configuration during promiscuous mode */
 		if (!dev->data->promiscuous) {
 			/* Enable or disable VLAN filter */
-			enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER ?
+			enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ?
 				 true : false;
 
 			ret = hns3_enable_vlan_filter(hns, enable);
@@ -742,9 +742,9 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		}
 	}
 
-	if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP ?
+		enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ?
 		    true : false;
 
 		ret = hns3_en_hw_strip_rxvtag(hns, enable);
@@ -1118,7 +1118,7 @@ hns3_init_vlan_config(struct hns3_adapter *hns)
 		return ret;
 	}
 
-	ret = hns3_vlan_tpid_configure(hns, ETH_VLAN_TYPE_INNER,
+	ret = hns3_vlan_tpid_configure(hns, RTE_ETH_VLAN_TYPE_INNER,
 				       RTE_ETHER_TYPE_VLAN);
 	if (ret) {
 		hns3_err(hw, "tpid set fail in pf, ret =%d", ret);
@@ -1161,7 +1161,7 @@ hns3_restore_vlan_conf(struct hns3_adapter *hns)
 	if (!hw->data->promiscuous) {
 		/* restore vlan filter states */
 		offloads = hw->data->dev_conf.rxmode.offloads;
-		enable = offloads & DEV_RX_OFFLOAD_VLAN_FILTER ? true : false;
+		enable = offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ? true : false;
 		ret = hns3_enable_vlan_filter(hns, enable);
 		if (ret) {
 			hns3_err(hw, "failed to restore vlan rx filter conf, "
@@ -1204,7 +1204,7 @@ hns3_dev_configure_vlan(struct rte_eth_dev *dev)
 			  txmode->hw_vlan_reject_untagged);
 
 	/* Apply vlan offload setting */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
 	ret = hns3_vlan_offload_set(dev, mask);
 	if (ret) {
 		hns3_err(hw, "dev config rx vlan offload failed, ret = %d",
@@ -2213,9 +2213,9 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
 	int max_tc = 0;
 	int i;
 
-	if ((rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG) ||
-	    (tx_mq_mode == ETH_MQ_TX_VMDQ_DCB ||
-	     tx_mq_mode == ETH_MQ_TX_VMDQ_ONLY)) {
+	if ((rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) ||
+	    (tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB ||
+	     tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)) {
 		hns3_err(hw, "VMDQ is not supported, rx_mq_mode = %d, tx_mq_mode = %d.",
 			 rx_mq_mode, tx_mq_mode);
 		return -EOPNOTSUPP;
@@ -2223,7 +2223,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
 
 	dcb_rx_conf = &dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
 	dcb_tx_conf = &dev->data->dev_conf.tx_adv_conf.dcb_tx_conf;
-	if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		if (dcb_rx_conf->nb_tcs > pf->tc_max) {
 			hns3_err(hw, "nb_tcs(%u) > max_tc(%u) driver supported.",
 				 dcb_rx_conf->nb_tcs, pf->tc_max);
@@ -2232,7 +2232,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
 
 		if (!(dcb_rx_conf->nb_tcs == HNS3_4_TCS ||
 		      dcb_rx_conf->nb_tcs == HNS3_8_TCS)) {
-			hns3_err(hw, "on ETH_MQ_RX_DCB_RSS mode, "
+			hns3_err(hw, "on RTE_ETH_MQ_RX_DCB_RSS mode, "
 				 "nb_tcs(%d) != %d or %d in rx direction.",
 				 dcb_rx_conf->nb_tcs, HNS3_4_TCS, HNS3_8_TCS);
 			return -EINVAL;
@@ -2400,11 +2400,11 @@ hns3_check_link_speed(struct hns3_hw *hw, uint32_t link_speeds)
 	 * configure link_speeds (default 0), which means auto-negotiation.
 	 * In this case, it should return success.
 	 */
-	if (link_speeds == ETH_LINK_SPEED_AUTONEG &&
+	if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG &&
 	    hw->mac.support_autoneg == 0)
 		return 0;
 
-	if (link_speeds != ETH_LINK_SPEED_AUTONEG) {
+	if (link_speeds != RTE_ETH_LINK_SPEED_AUTONEG) {
 		ret = hns3_check_port_speed(hw, link_speeds);
 		if (ret)
 			return ret;
@@ -2464,15 +2464,15 @@ hns3_dev_configure(struct rte_eth_dev *dev)
 	if (ret)
 		goto cfg_err;
 
-	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		ret = hns3_setup_dcb(dev);
 		if (ret)
 			goto cfg_err;
 	}
 
 	/* When RSS is not configured, redirect the packet queue 0 */
-	if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 		rss_conf = conf->rx_adv_conf.rss_conf;
 		hw->rss_dis_flag = false;
 		ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -2493,7 +2493,7 @@ hns3_dev_configure(struct rte_eth_dev *dev)
 		goto cfg_err;
 
 	/* config hardware GRO */
-	gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+	gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
 	ret = hns3_config_gro(hw, gro_en);
 	if (ret)
 		goto cfg_err;
@@ -2600,15 +2600,15 @@ hns3_get_copper_port_speed_capa(uint32_t supported_speed)
 	uint32_t speed_capa = 0;
 
 	if (supported_speed & HNS3_PHY_LINK_SPEED_10M_HD_BIT)
-		speed_capa |= ETH_LINK_SPEED_10M_HD;
+		speed_capa |= RTE_ETH_LINK_SPEED_10M_HD;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_10M_BIT)
-		speed_capa |= ETH_LINK_SPEED_10M;
+		speed_capa |= RTE_ETH_LINK_SPEED_10M;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_100M_HD_BIT)
-		speed_capa |= ETH_LINK_SPEED_100M_HD;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_100M_BIT)
-		speed_capa |= ETH_LINK_SPEED_100M;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_1000M_BIT)
-		speed_capa |= ETH_LINK_SPEED_1G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G;
 
 	return speed_capa;
 }
@@ -2619,19 +2619,19 @@ hns3_get_firber_port_speed_capa(uint32_t supported_speed)
 	uint32_t speed_capa = 0;
 
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_1G_BIT)
-		speed_capa |= ETH_LINK_SPEED_1G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_10G_BIT)
-		speed_capa |= ETH_LINK_SPEED_10G;
+		speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_25G_BIT)
-		speed_capa |= ETH_LINK_SPEED_25G;
+		speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_40G_BIT)
-		speed_capa |= ETH_LINK_SPEED_40G;
+		speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_50G_BIT)
-		speed_capa |= ETH_LINK_SPEED_50G;
+		speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_100G_BIT)
-		speed_capa |= ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_200G_BIT)
-		speed_capa |= ETH_LINK_SPEED_200G;
+		speed_capa |= RTE_ETH_LINK_SPEED_200G;
 
 	return speed_capa;
 }
@@ -2650,7 +2650,7 @@ hns3_get_speed_capa(struct hns3_hw *hw)
 			hns3_get_firber_port_speed_capa(mac->supported_speed);
 
 	if (mac->support_autoneg == 0)
-		speed_capa |= ETH_LINK_SPEED_FIXED;
+		speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
 
 	return speed_capa;
 }
@@ -2676,40 +2676,40 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 	info->max_mac_addrs = HNS3_UC_MACADDR_NUM;
 	info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
 	info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
-	info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_TCP_CKSUM |
-				 DEV_RX_OFFLOAD_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_SCTP_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_KEEP_CRC |
-				 DEV_RX_OFFLOAD_SCATTER |
-				 DEV_RX_OFFLOAD_VLAN_STRIP |
-				 DEV_RX_OFFLOAD_VLAN_FILTER |
-				 DEV_RX_OFFLOAD_RSS_HASH |
-				 DEV_RX_OFFLOAD_TCP_LRO);
-	info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_TCP_CKSUM |
-				 DEV_TX_OFFLOAD_UDP_CKSUM |
-				 DEV_TX_OFFLOAD_SCTP_CKSUM |
-				 DEV_TX_OFFLOAD_MULTI_SEGS |
-				 DEV_TX_OFFLOAD_TCP_TSO |
-				 DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				 DEV_TX_OFFLOAD_GRE_TNL_TSO |
-				 DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-				 DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+	info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+				 RTE_ETH_RX_OFFLOAD_SCATTER |
+				 RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				 RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				 RTE_ETH_RX_OFFLOAD_RSS_HASH |
+				 RTE_ETH_RX_OFFLOAD_TCP_LRO);
+	info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				 RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				 RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
 				 hns3_txvlan_cap_get(hw));
 
 	if (hns3_dev_get_support(hw, OUTER_UDP_CKSUM))
-		info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+		info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 
 	if (hns3_dev_get_support(hw, INDEP_TXRX))
 		info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 				 RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
 
 	if (hns3_dev_get_support(hw, PTP))
-		info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+		info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	info->rx_desc_lim = (struct rte_eth_desc_lim) {
 		.nb_max = HNS3_MAX_RING_DESC,
@@ -2793,7 +2793,7 @@ hns3_update_port_link_info(struct rte_eth_dev *eth_dev)
 
 	ret = hns3_update_link_info(eth_dev);
 	if (ret)
-		hw->mac.link_status = ETH_LINK_DOWN;
+		hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	return ret;
 }
@@ -2806,29 +2806,29 @@ hns3_setup_linkstatus(struct rte_eth_dev *eth_dev,
 	struct hns3_mac *mac = &hw->mac;
 
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_10M:
-	case ETH_SPEED_NUM_100M:
-	case ETH_SPEED_NUM_1G:
-	case ETH_SPEED_NUM_10G:
-	case ETH_SPEED_NUM_25G:
-	case ETH_SPEED_NUM_40G:
-	case ETH_SPEED_NUM_50G:
-	case ETH_SPEED_NUM_100G:
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_200G:
 		if (mac->link_status)
 			new_link->link_speed = mac->link_speed;
 		break;
 	default:
 		if (mac->link_status)
-			new_link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+			new_link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 	}
 
 	if (!mac->link_status)
-		new_link->link_speed = ETH_SPEED_NUM_NONE;
+		new_link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	new_link->link_duplex = mac->link_duplex;
-	new_link->link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+	new_link->link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 	new_link->link_autoneg = mac->link_autoneg;
 }
 
@@ -2848,8 +2848,8 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
 	if (eth_dev->data->dev_started == 0) {
 		new_link.link_autoneg = mac->link_autoneg;
 		new_link.link_duplex = mac->link_duplex;
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
-		new_link.link_status = ETH_LINK_DOWN;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		new_link.link_status = RTE_ETH_LINK_DOWN;
 		goto out;
 	}
 
@@ -2861,7 +2861,7 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
 			break;
 		}
 
-		if (!wait_to_complete || mac->link_status == ETH_LINK_UP)
+		if (!wait_to_complete || mac->link_status == RTE_ETH_LINK_UP)
 			break;
 
 		rte_delay_ms(HNS3_LINK_CHECK_INTERVAL);
@@ -3207,31 +3207,31 @@ hns3_parse_speed(int speed_cmd, uint32_t *speed)
 {
 	switch (speed_cmd) {
 	case HNS3_CFG_SPEED_10M:
-		*speed = ETH_SPEED_NUM_10M;
+		*speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case HNS3_CFG_SPEED_100M:
-		*speed = ETH_SPEED_NUM_100M;
+		*speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case HNS3_CFG_SPEED_1G:
-		*speed = ETH_SPEED_NUM_1G;
+		*speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case HNS3_CFG_SPEED_10G:
-		*speed = ETH_SPEED_NUM_10G;
+		*speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case HNS3_CFG_SPEED_25G:
-		*speed = ETH_SPEED_NUM_25G;
+		*speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case HNS3_CFG_SPEED_40G:
-		*speed = ETH_SPEED_NUM_40G;
+		*speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case HNS3_CFG_SPEED_50G:
-		*speed = ETH_SPEED_NUM_50G;
+		*speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case HNS3_CFG_SPEED_100G:
-		*speed = ETH_SPEED_NUM_100G;
+		*speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	case HNS3_CFG_SPEED_200G:
-		*speed = ETH_SPEED_NUM_200G;
+		*speed = RTE_ETH_SPEED_NUM_200G;
 		break;
 	default:
 		return -EINVAL;
@@ -3559,39 +3559,39 @@ hns3_cfg_mac_speed_dup_hw(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
 	hns3_set_bit(req->speed_dup, HNS3_CFG_DUPLEX_B, !!duplex ? 1 : 0);
 
 	switch (speed) {
-	case ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_10M:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10M);
 		break;
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100M);
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_1G);
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10G);
 		break;
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_25G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_25G);
 		break;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_40G);
 		break;
-	case ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_50G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_50G);
 		break;
-	case ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_100G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100G);
 		break;
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_200G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_200G);
 		break;
@@ -4254,14 +4254,14 @@ hns3_mac_init(struct hns3_hw *hw)
 	int ret;
 
 	pf->support_sfp_query = true;
-	mac->link_duplex = ETH_LINK_FULL_DUPLEX;
+	mac->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	ret = hns3_cfg_mac_speed_dup_hw(hw, mac->link_speed, mac->link_duplex);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Config mac speed dup fail ret = %d", ret);
 		return ret;
 	}
 
-	mac->link_status = ETH_LINK_DOWN;
+	mac->link_status = RTE_ETH_LINK_DOWN;
 
 	return hns3_config_mtu(hw, pf->mps);
 }
@@ -4511,7 +4511,7 @@ hns3_dev_promiscuous_enable(struct rte_eth_dev *dev)
 	 * all packets coming in the receiving direction.
 	 */
 	offloads = dev->data->dev_conf.rxmode.offloads;
-	if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		ret = hns3_enable_vlan_filter(hns, false);
 		if (ret) {
 			hns3_err(hw, "failed to enable promiscuous mode due to "
@@ -4552,7 +4552,7 @@ hns3_dev_promiscuous_disable(struct rte_eth_dev *dev)
 	}
 	/* when promiscuous mode is disabled, restore the VLAN filter status */
 	offloads = dev->data->dev_conf.rxmode.offloads;
-	if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		ret = hns3_enable_vlan_filter(hns, true);
 		if (ret) {
 			hns3_err(hw, "failed to disable promiscuous mode due to"
@@ -4672,8 +4672,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
 		mac_info->supported_speed =
 					rte_le_to_cpu_32(resp->supported_speed);
 		mac_info->support_autoneg = resp->autoneg_ability;
-		mac_info->link_autoneg = (resp->autoneg == 0) ? ETH_LINK_FIXED
-					: ETH_LINK_AUTONEG;
+		mac_info->link_autoneg = (resp->autoneg == 0) ? RTE_ETH_LINK_FIXED
+					: RTE_ETH_LINK_AUTONEG;
 	} else {
 		mac_info->query_type = HNS3_DEFAULT_QUERY;
 	}
@@ -4684,8 +4684,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
 static uint8_t
 hns3_check_speed_dup(uint8_t duplex, uint32_t speed)
 {
-	if (!(speed == ETH_SPEED_NUM_10M || speed == ETH_SPEED_NUM_100M))
-		duplex = ETH_LINK_FULL_DUPLEX;
+	if (!(speed == RTE_ETH_SPEED_NUM_10M || speed == RTE_ETH_SPEED_NUM_100M))
+		duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	return duplex;
 }
@@ -4735,7 +4735,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
 		return ret;
 
 	/* Do nothing if no SFP */
-	if (mac_info.link_speed == ETH_SPEED_NUM_NONE)
+	if (mac_info.link_speed == RTE_ETH_SPEED_NUM_NONE)
 		return 0;
 
 	/*
@@ -4762,7 +4762,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
 
 	/* Config full duplex for SFP */
 	return hns3_cfg_mac_speed_dup(hw, mac_info.link_speed,
-				      ETH_LINK_FULL_DUPLEX);
+				      RTE_ETH_LINK_FULL_DUPLEX);
 }
 
 static void
@@ -4881,10 +4881,10 @@ hns3_cfg_mac_mode(struct hns3_hw *hw, bool enable)
 	hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_B, val);
 
 	/*
-	 * If DEV_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
+	 * If RTE_ETH_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
 	 * when receiving frames. Otherwise, CRC will be stripped.
 	 */
-	if (hw->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (hw->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, 0);
 	else
 		hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, val);
@@ -4912,7 +4912,7 @@ hns3_get_mac_link_status(struct hns3_hw *hw)
 	ret = hns3_cmd_send(hw, &desc, 1);
 	if (ret) {
 		hns3_err(hw, "get link status cmd failed %d", ret);
-		return ETH_LINK_DOWN;
+		return RTE_ETH_LINK_DOWN;
 	}
 
 	req = (struct hns3_link_status_cmd *)desc.data;
@@ -5094,19 +5094,19 @@ hns3_set_firber_default_support_speed(struct hns3_hw *hw)
 	struct hns3_mac *mac = &hw->mac;
 
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		return HNS3_FIBER_LINK_SPEED_1G_BIT;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		return HNS3_FIBER_LINK_SPEED_10G_BIT;
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_25G:
 		return HNS3_FIBER_LINK_SPEED_25G_BIT;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		return HNS3_FIBER_LINK_SPEED_40G_BIT;
-	case ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_50G:
 		return HNS3_FIBER_LINK_SPEED_50G_BIT;
-	case ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_100G:
 		return HNS3_FIBER_LINK_SPEED_100G_BIT;
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_200G:
 		return HNS3_FIBER_LINK_SPEED_200G_BIT;
 	default:
 		hns3_warn(hw, "invalid speed %u Mbps.", mac->link_speed);
@@ -5344,20 +5344,20 @@ hns3_convert_link_speeds2bitmap_copper(uint32_t link_speeds)
 {
 	uint32_t speed_bit;
 
-	switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
-	case ETH_LINK_SPEED_10M:
+	switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+	case RTE_ETH_LINK_SPEED_10M:
 		speed_bit = HNS3_PHY_LINK_SPEED_10M_BIT;
 		break;
-	case ETH_LINK_SPEED_10M_HD:
+	case RTE_ETH_LINK_SPEED_10M_HD:
 		speed_bit = HNS3_PHY_LINK_SPEED_10M_HD_BIT;
 		break;
-	case ETH_LINK_SPEED_100M:
+	case RTE_ETH_LINK_SPEED_100M:
 		speed_bit = HNS3_PHY_LINK_SPEED_100M_BIT;
 		break;
-	case ETH_LINK_SPEED_100M_HD:
+	case RTE_ETH_LINK_SPEED_100M_HD:
 		speed_bit = HNS3_PHY_LINK_SPEED_100M_HD_BIT;
 		break;
-	case ETH_LINK_SPEED_1G:
+	case RTE_ETH_LINK_SPEED_1G:
 		speed_bit = HNS3_PHY_LINK_SPEED_1000M_BIT;
 		break;
 	default:
@@ -5373,26 +5373,26 @@ hns3_convert_link_speeds2bitmap_fiber(uint32_t link_speeds)
 {
 	uint32_t speed_bit;
 
-	switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
-	case ETH_LINK_SPEED_1G:
+	switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+	case RTE_ETH_LINK_SPEED_1G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_1G_BIT;
 		break;
-	case ETH_LINK_SPEED_10G:
+	case RTE_ETH_LINK_SPEED_10G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_10G_BIT;
 		break;
-	case ETH_LINK_SPEED_25G:
+	case RTE_ETH_LINK_SPEED_25G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_25G_BIT;
 		break;
-	case ETH_LINK_SPEED_40G:
+	case RTE_ETH_LINK_SPEED_40G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_40G_BIT;
 		break;
-	case ETH_LINK_SPEED_50G:
+	case RTE_ETH_LINK_SPEED_50G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_50G_BIT;
 		break;
-	case ETH_LINK_SPEED_100G:
+	case RTE_ETH_LINK_SPEED_100G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_100G_BIT;
 		break;
-	case ETH_LINK_SPEED_200G:
+	case RTE_ETH_LINK_SPEED_200G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_200G_BIT;
 		break;
 	default:
@@ -5427,28 +5427,28 @@ hns3_check_port_speed(struct hns3_hw *hw, uint32_t link_speeds)
 static inline uint32_t
 hns3_get_link_speed(uint32_t link_speeds)
 {
-	uint32_t speed = ETH_SPEED_NUM_NONE;
-
-	if (link_speeds & ETH_LINK_SPEED_10M ||
-	    link_speeds & ETH_LINK_SPEED_10M_HD)
-		speed = ETH_SPEED_NUM_10M;
-	if (link_speeds & ETH_LINK_SPEED_100M ||
-	    link_speeds & ETH_LINK_SPEED_100M_HD)
-		speed = ETH_SPEED_NUM_100M;
-	if (link_speeds & ETH_LINK_SPEED_1G)
-		speed = ETH_SPEED_NUM_1G;
-	if (link_speeds & ETH_LINK_SPEED_10G)
-		speed = ETH_SPEED_NUM_10G;
-	if (link_speeds & ETH_LINK_SPEED_25G)
-		speed = ETH_SPEED_NUM_25G;
-	if (link_speeds & ETH_LINK_SPEED_40G)
-		speed = ETH_SPEED_NUM_40G;
-	if (link_speeds & ETH_LINK_SPEED_50G)
-		speed = ETH_SPEED_NUM_50G;
-	if (link_speeds & ETH_LINK_SPEED_100G)
-		speed = ETH_SPEED_NUM_100G;
-	if (link_speeds & ETH_LINK_SPEED_200G)
-		speed = ETH_SPEED_NUM_200G;
+	uint32_t speed = RTE_ETH_SPEED_NUM_NONE;
+
+	if (link_speeds & RTE_ETH_LINK_SPEED_10M ||
+	    link_speeds & RTE_ETH_LINK_SPEED_10M_HD)
+		speed = RTE_ETH_SPEED_NUM_10M;
+	if (link_speeds & RTE_ETH_LINK_SPEED_100M ||
+	    link_speeds & RTE_ETH_LINK_SPEED_100M_HD)
+		speed = RTE_ETH_SPEED_NUM_100M;
+	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
+		speed = RTE_ETH_SPEED_NUM_1G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
+		speed = RTE_ETH_SPEED_NUM_10G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
+		speed = RTE_ETH_SPEED_NUM_25G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
+		speed = RTE_ETH_SPEED_NUM_40G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_50G)
+		speed = RTE_ETH_SPEED_NUM_50G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_100G)
+		speed = RTE_ETH_SPEED_NUM_100G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_200G)
+		speed = RTE_ETH_SPEED_NUM_200G;
 
 	return speed;
 }
@@ -5456,11 +5456,11 @@ hns3_get_link_speed(uint32_t link_speeds)
 static uint8_t
 hns3_get_link_duplex(uint32_t link_speeds)
 {
-	if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
-	    (link_speeds & ETH_LINK_SPEED_100M_HD))
-		return ETH_LINK_HALF_DUPLEX;
+	if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+	    (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+		return RTE_ETH_LINK_HALF_DUPLEX;
 	else
-		return ETH_LINK_FULL_DUPLEX;
+		return RTE_ETH_LINK_FULL_DUPLEX;
 }
 
 static int
@@ -5594,9 +5594,9 @@ hns3_apply_link_speed(struct hns3_hw *hw)
 	struct hns3_set_link_speed_cfg cfg;
 
 	memset(&cfg, 0, sizeof(struct hns3_set_link_speed_cfg));
-	cfg.autoneg = (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) ?
-			ETH_LINK_AUTONEG : ETH_LINK_FIXED;
-	if (cfg.autoneg != ETH_LINK_AUTONEG) {
+	cfg.autoneg = (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) ?
+			RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
+	if (cfg.autoneg != RTE_ETH_LINK_AUTONEG) {
 		cfg.speed = hns3_get_link_speed(conf->link_speeds);
 		cfg.duplex = hns3_get_link_duplex(conf->link_speeds);
 	}
@@ -5869,7 +5869,7 @@ hns3_do_stop(struct hns3_adapter *hns)
 	ret = hns3_cfg_mac_mode(hw, false);
 	if (ret)
 		return ret;
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED) == 0) {
 		hns3_configure_all_mac_addr(hns, true);
@@ -6080,17 +6080,17 @@ hns3_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	current_mode = hns3_get_current_fc_mode(dev);
 	switch (current_mode) {
 	case HNS3_FC_FULL:
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	case HNS3_FC_TX_PAUSE:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case HNS3_FC_RX_PAUSE:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case HNS3_FC_NONE:
 	default:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		break;
 	}
 
@@ -6236,7 +6236,7 @@ hns3_get_dcb_info(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info)
 	int i;
 
 	rte_spinlock_lock(&hw->lock);
-	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = pf->local_max_tc;
 	else
 		dcb_info->nb_tcs = 1;
@@ -6536,7 +6536,7 @@ hns3_stop_service(struct hns3_adapter *hns)
 	struct rte_eth_dev *eth_dev;
 
 	eth_dev = &rte_eth_devices[hw->data->port_id];
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 	if (hw->adapter_state == HNS3_NIC_STARTED) {
 		rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
 		hns3_update_linkstatus_and_event(hw, false);
@@ -6826,7 +6826,7 @@ get_current_fec_auto_state(struct hns3_hw *hw, uint8_t *state)
 	 * in device of link speed
 	 * below 10 Gbps.
 	 */
-	if (hw->mac.link_speed < ETH_SPEED_NUM_10G) {
+	if (hw->mac.link_speed < RTE_ETH_SPEED_NUM_10G) {
 		*state = 0;
 		return 0;
 	}
@@ -6858,7 +6858,7 @@ hns3_fec_get_internal(struct hns3_hw *hw, uint32_t *fec_capa)
 	 * configured FEC mode is returned.
 	 * If link is up, current FEC mode is returned.
 	 */
-	if (hw->mac.link_status == ETH_LINK_DOWN) {
+	if (hw->mac.link_status == RTE_ETH_LINK_DOWN) {
 		ret = get_current_fec_auto_state(hw, &auto_state);
 		if (ret)
 			return ret;
@@ -6957,12 +6957,12 @@ get_current_speed_fec_cap(struct hns3_hw *hw, struct rte_eth_fec_capa *fec_capa)
 	uint32_t cur_capa;
 
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		cur_capa = fec_capa[1].capa;
 		break;
-	case ETH_SPEED_NUM_25G:
-	case ETH_SPEED_NUM_100G:
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_200G:
 		cur_capa = fec_capa[0].capa;
 		break;
 	default:
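
The hns3 PF hunks above are purely mechanical: every macro keeps its value and only gains the RTE_ prefix, so no behavioural change is expected. For an application that has to compile against both pre- and post-rename ethdev headers, a minimal compatibility shim (hypothetical, not part of this patch) could look like:

    /* Hypothetical shim: provide the new RTE_ETH_ names on top of an older
     * rte_ethdev.h that still exposes only the ETH_ spellings. */
    #ifndef RTE_ETH_LINK_UP
    #define RTE_ETH_LINK_UP          ETH_LINK_UP
    #define RTE_ETH_LINK_DOWN        ETH_LINK_DOWN
    #define RTE_ETH_LINK_FULL_DUPLEX ETH_LINK_FULL_DUPLEX
    #define RTE_ETH_SPEED_NUM_10G    ETH_SPEED_NUM_10G
    #endif

Only a handful of the renamed macros are shown; such a shim would need to be extended to cover whichever names the application uses.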
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index e28056b1bd60..0f55fd4c83ad 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -190,10 +190,10 @@ struct hns3_mac {
 	uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
 	uint8_t media_type;
 	uint8_t phy_addr;
-	uint8_t link_duplex  : 1; /* ETH_LINK_[HALF/FULL]_DUPLEX */
-	uint8_t link_autoneg : 1; /* ETH_LINK_[AUTONEG/FIXED] */
-	uint8_t link_status  : 1; /* ETH_LINK_[DOWN/UP] */
-	uint32_t link_speed;      /* ETH_SPEED_NUM_ */
+	uint8_t link_duplex  : 1; /* RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+	uint8_t link_autoneg : 1; /* RTE_ETH_LINK_[AUTONEG/FIXED] */
+	uint8_t link_status  : 1; /* RTE_ETH_LINK_[DOWN/UP] */
+	uint32_t link_speed;      /* RTE_ETH_SPEED_NUM_ */
 	/*
 	 * Some firmware versions support only the SFP speed query. In addition
 	 * to the SFP speed query, some firmware supports the query of the speed
@@ -1076,9 +1076,9 @@ static inline uint64_t
 hns3_txvlan_cap_get(struct hns3_hw *hw)
 {
 	if (hw->port_base_vlan_cfg.state)
-		return DEV_TX_OFFLOAD_VLAN_INSERT;
+		return RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	else
-		return DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT;
+		return RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
 }
 
 #endif /* _HNS3_ETHDEV_H_ */
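
The capability reporting above only renames the offload bits; their values are unchanged, so the usual application-side check still applies. A short sketch with the renamed flags (illustrative, not part of this patch):

    #include <rte_ethdev.h>

    /* Enable TCP LRO only when the port actually advertises it. */
    static int enable_lro_if_supported(uint16_t port_id, struct rte_eth_conf *conf)
    {
        struct rte_eth_dev_info info;
        int ret = rte_eth_dev_info_get(port_id, &info);

        if (ret != 0)
            return ret;
        if (info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
            conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
        return 0;
    }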
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 54dbd4b798f2..7b784048b518 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -807,15 +807,15 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
 	}
 
 	hw->adapter_state = HNS3_NIC_CONFIGURING;
-	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		hns3_err(hw, "setting link speed/duplex not supported");
 		ret = -EINVAL;
 		goto cfg_err;
 	}
 
 	/* When RSS is not configured, redirect packets to queue 0 */
-	if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 		hw->rss_dis_flag = false;
 		rss_conf = conf->rx_adv_conf.rss_conf;
 		ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -832,7 +832,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
 		goto cfg_err;
 
 	/* config hardware GRO */
-	gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+	gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
 	ret = hns3_config_gro(hw, gro_en);
 	if (ret)
 		goto cfg_err;
@@ -935,32 +935,32 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 	info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
 	info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
 
-	info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_TCP_CKSUM |
-				 DEV_RX_OFFLOAD_SCTP_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_SCATTER |
-				 DEV_RX_OFFLOAD_VLAN_STRIP |
-				 DEV_RX_OFFLOAD_VLAN_FILTER |
-				 DEV_RX_OFFLOAD_RSS_HASH |
-				 DEV_RX_OFFLOAD_TCP_LRO);
-	info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_TCP_CKSUM |
-				 DEV_TX_OFFLOAD_UDP_CKSUM |
-				 DEV_TX_OFFLOAD_SCTP_CKSUM |
-				 DEV_TX_OFFLOAD_MULTI_SEGS |
-				 DEV_TX_OFFLOAD_TCP_TSO |
-				 DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				 DEV_TX_OFFLOAD_GRE_TNL_TSO |
-				 DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-				 DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+	info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_SCATTER |
+				 RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				 RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				 RTE_ETH_RX_OFFLOAD_RSS_HASH |
+				 RTE_ETH_RX_OFFLOAD_TCP_LRO);
+	info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				 RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				 RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
 				 hns3_txvlan_cap_get(hw));
 
 	if (hns3_dev_get_support(hw, OUTER_UDP_CKSUM))
-		info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+		info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 
 	if (hns3_dev_get_support(hw, INDEP_TXRX))
 		info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -1640,10 +1640,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	tmp_mask = (unsigned int)mask;
 
-	if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
 		rte_spinlock_lock(&hw->lock);
 		/* Enable or disable VLAN filter */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ret = hns3vf_en_vlan_filter(hw, true);
 		else
 			ret = hns3vf_en_vlan_filter(hw, false);
@@ -1653,10 +1653,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	}
 
 	/* Vlan stripping setting */
-	if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
 		rte_spinlock_lock(&hw->lock);
 		/* Enable or disable VLAN stripping */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			ret = hns3vf_en_hw_strip_rxvtag(hw, true);
 		else
 			ret = hns3vf_en_hw_strip_rxvtag(hw, false);
@@ -1724,7 +1724,7 @@ hns3vf_restore_vlan_conf(struct hns3_adapter *hns)
 	int ret;
 
 	dev_conf = &hw->data->dev_conf;
-	en = dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP ? true
+	en = dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ? true
 								   : false;
 	ret = hns3vf_en_hw_strip_rxvtag(hw, en);
 	if (ret)
@@ -1749,8 +1749,8 @@ hns3vf_dev_configure_vlan(struct rte_eth_dev *dev)
 	}
 
 	/* Apply vlan offload setting */
-	ret = hns3vf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK |
-					ETH_VLAN_FILTER_MASK);
+	ret = hns3vf_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK |
+					RTE_ETH_VLAN_FILTER_MASK);
 	if (ret)
 		hns3_err(hw, "dev config vlan offload failed, ret = %d.", ret);
 
@@ -2059,7 +2059,7 @@ hns3vf_do_stop(struct hns3_adapter *hns)
 	struct hns3_hw *hw = &hns->hw;
 	int ret;
 
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	/*
 	 * The "hns3vf_do_stop" function will also be called by .stop_service to
@@ -2218,31 +2218,31 @@ hns3vf_dev_link_update(struct rte_eth_dev *eth_dev,
 
 	memset(&new_link, 0, sizeof(new_link));
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_10M:
-	case ETH_SPEED_NUM_100M:
-	case ETH_SPEED_NUM_1G:
-	case ETH_SPEED_NUM_10G:
-	case ETH_SPEED_NUM_25G:
-	case ETH_SPEED_NUM_40G:
-	case ETH_SPEED_NUM_50G:
-	case ETH_SPEED_NUM_100G:
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_200G:
 		if (mac->link_status)
 			new_link.link_speed = mac->link_speed;
 		break;
 	default:
 		if (mac->link_status)
-			new_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+			new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 	}
 
 	if (!mac->link_status)
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	new_link.link_duplex = mac->link_duplex;
-	new_link.link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+	new_link.link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg =
-	    !(eth_dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED);
+	    !(eth_dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(eth_dev, &new_link);
 }
@@ -2570,11 +2570,11 @@ hns3vf_stop_service(struct hns3_adapter *hns)
 		 * Make sure to update the link status before hns3vf_stop_poll_job,
 		 * because updating the link status depends on the polling job.
 		 */
-		hns3vf_update_link_status(hw, ETH_LINK_DOWN, hw->mac.link_speed,
+		hns3vf_update_link_status(hw, RTE_ETH_LINK_DOWN, hw->mac.link_speed,
 					  hw->mac.link_duplex);
 		hns3vf_stop_poll_job(eth_dev);
 	}
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	hns3_set_rxtx_function(eth_dev);
 	rte_wmb();
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 38a2ee58a651..da6918fddda3 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1298,10 +1298,10 @@ hns3_rss_input_tuple_supported(struct hns3_hw *hw,
 	 * Kunpeng930 and future Kunpeng series support using src/dst port
 	 * fields in the RSS hash for the IPv6 SCTP packet type.
 	 */
-	if (rss->types & (ETH_RSS_L4_DST_ONLY | ETH_RSS_L4_SRC_ONLY) &&
-	    (rss->types & ETH_RSS_IP ||
+	if (rss->types & (RTE_ETH_RSS_L4_DST_ONLY | RTE_ETH_RSS_L4_SRC_ONLY) &&
+	    (rss->types & RTE_ETH_RSS_IP ||
 	    (!hw->rss_info.ipv6_sctp_offload_supported &&
-	    rss->types & ETH_RSS_NONFRAG_IPV6_SCTP)))
+	    rss->types & RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
 		return false;
 
 	return true;
diff --git a/drivers/net/hns3/hns3_ptp.c b/drivers/net/hns3/hns3_ptp.c
index 5dfe68cc4dbd..9a829d7011ad 100644
--- a/drivers/net/hns3/hns3_ptp.c
+++ b/drivers/net/hns3/hns3_ptp.c
@@ -21,7 +21,7 @@ hns3_mbuf_dyn_rx_timestamp_register(struct rte_eth_dev *dev,
 	struct hns3_hw *hw = &hns->hw;
 	int ret;
 
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		return 0;
 
 	ret = rte_mbuf_dyn_rx_timestamp_register
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index 3a81e90e0911..85495bbe89d9 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -76,69 +76,69 @@ static const struct {
 	uint64_t rss_types;
 	uint64_t rss_field;
 } hns3_set_tuple_table[] = {
-	{ ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
-	{ ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
-	{ ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) },
-	{ ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) },
 };
 
@@ -146,44 +146,44 @@ static const struct {
 	uint64_t rss_types;
 	uint64_t rss_field;
 } hns3_set_rss_types[] = {
-	{ ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
+	{ RTE_ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_VER) },
-	{ ETH_RSS_NONFRAG_IPV4_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
-	{ ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
+	{ RTE_ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) |
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_VER) },
-	{ ETH_RSS_NONFRAG_IPV6_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }
 };
@@ -365,10 +365,10 @@ hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw,
 	 * When the user does not specify the following types or a combination
 	 * of them, all fields are enabled for the supported RSS types. The
 	 * types are:
-	 * - ETH_RSS_L3_SRC_ONLY
-	 * - ETH_RSS_L3_DST_ONLY
-	 * - ETH_RSS_L4_SRC_ONLY
-	 * - ETH_RSS_L4_DST_ONLY
+	 * - RTE_ETH_RSS_L3_SRC_ONLY
+	 * - RTE_ETH_RSS_L3_DST_ONLY
+	 * - RTE_ETH_RSS_L4_SRC_ONLY
+	 * - RTE_ETH_RSS_L4_DST_ONLY
 	 */
 	if (fields_count == 0) {
 		for (i = 0; i < RTE_DIM(hns3_set_rss_types); i++) {
@@ -520,8 +520,8 @@ hns3_dev_rss_reta_update(struct rte_eth_dev *dev,
 	memcpy(indirection_tbl, rss_cfg->rss_indirection_tbl,
 	       sizeof(rss_cfg->rss_indirection_tbl));
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].reta[shift] >= hw->alloc_rss_size) {
 			rte_spinlock_unlock(&hw->lock);
 			hns3_err(hw, "queue id(%u) set to redirection table "
@@ -572,8 +572,8 @@ hns3_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 	rte_spinlock_lock(&hw->lock);
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] =
 						rss_cfg->rss_indirection_tbl[i];
@@ -692,7 +692,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 	}
 
 	/* When RSS is off, redirect packets to queue 0 */
-	if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) == 0)
+	if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0)
 		hns3_rss_uninit(hns);
 
 	/* Configure RSS hash algorithm and hash key offset */
@@ -709,7 +709,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 	 * When RSS is off, there is no need to configure the RSS redirection
 	 * table in hardware.
 	 */
-	if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+	if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		ret = hns3_set_rss_indir_table(hw, rss_cfg->rss_indirection_tbl,
 					       hw->rss_ind_tbl_size);
 		if (ret)
@@ -723,7 +723,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 	return ret;
 
 rss_indir_table_uninit:
-	if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+	if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		ret1 = hns3_rss_reset_indir_table(hw);
 		if (ret1 != 0)
 			return ret;
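
For reference, the reta hunks above address the redirection table in groups of RTE_ETH_RETA_GROUP_SIZE (64) entries: entry i lives in reta_conf[i / 64].reta[i % 64] and is only valid when the matching bit of that group's mask is set. A minimal sketch of how an application fills such a table (illustrative only):

    /* Spread reta_size table entries round-robin across nb_queues queues. */
    static void fill_reta(struct rte_eth_rss_reta_entry64 *reta_conf,
                          uint16_t reta_size, uint16_t nb_queues)
    {
        uint16_t i, idx, shift;

        for (i = 0; i < reta_size; i++) {
            idx = i / RTE_ETH_RETA_GROUP_SIZE;
            shift = i % RTE_ETH_RETA_GROUP_SIZE;
            reta_conf[idx].mask |= 1ULL << shift;
            reta_conf[idx].reta[shift] = i % nb_queues;
        }
    }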
diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h
index 996083b88b25..6f153a1b7bfb 100644
--- a/drivers/net/hns3/hns3_rss.h
+++ b/drivers/net/hns3/hns3_rss.h
@@ -8,20 +8,20 @@
 #include <rte_flow.h>
 
 #define HNS3_ETH_RSS_SUPPORT ( \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L3_SRC_ONLY | \
-	ETH_RSS_L3_DST_ONLY | \
-	ETH_RSS_L4_SRC_ONLY | \
-	ETH_RSS_L4_DST_ONLY)
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L3_SRC_ONLY | \
+	RTE_ETH_RSS_L3_DST_ONLY | \
+	RTE_ETH_RSS_L4_SRC_ONLY | \
+	RTE_ETH_RSS_L4_DST_ONLY)
 
 #define HNS3_RSS_IND_TBL_SIZE	512 /* The size of hash lookup table */
 #define HNS3_RSS_IND_TBL_SIZE_MAX 2048
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 602548a4f25b..920ee8ceeab9 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1924,7 +1924,7 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	memset(&rxq->dfx_stats, 0, sizeof(struct hns3_rx_dfx_stats));
 
 	/* CRC len set here is used for amending packet length */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1969,7 +1969,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
 						 rxq->rx_buf_len);
 	}
 
-	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
+	if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
 	    dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
 		dev->data->scattered_rx = true;
 }
@@ -2845,7 +2845,7 @@ hns3_get_rx_function(struct rte_eth_dev *dev)
 	vec_allowed = vec_support && hns3_get_default_vec_support();
 	sve_allowed = vec_support && hns3_get_sve_support();
 	simple_allowed = !dev->data->scattered_rx &&
-			 (offloads & DEV_RX_OFFLOAD_TCP_LRO) == 0;
+			 (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) == 0;
 
 	if (hns->rx_func_hint == HNS3_IO_FUNC_HINT_VEC && vec_allowed)
 		return hns3_recv_pkts_vec;
@@ -3139,7 +3139,7 @@ hns3_restore_gro_conf(struct hns3_hw *hw)
 	int ret;
 
 	offloads = hw->data->dev_conf.rxmode.offloads;
-	gro_en = offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+	gro_en = offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
 	ret = hns3_config_gro(hw, gro_en);
 	if (ret)
 		hns3_err(hw, "restore hardware GRO to %s failed, ret = %d",
@@ -4291,7 +4291,7 @@ hns3_tx_check_simple_support(struct rte_eth_dev *dev)
 	if (hns3_dev_get_support(hw, PTP))
 		return false;
 
-	return (offloads == (offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE));
+	return (offloads == (offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE));
 }
 
 static bool
@@ -4303,16 +4303,16 @@ hns3_get_tx_prep_needed(struct rte_eth_dev *dev)
 	return true;
 #else
 #define HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK (\
-		DEV_TX_OFFLOAD_IPV4_CKSUM | \
-		DEV_TX_OFFLOAD_TCP_CKSUM | \
-		DEV_TX_OFFLOAD_UDP_CKSUM | \
-		DEV_TX_OFFLOAD_SCTP_CKSUM | \
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-		DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
-		DEV_TX_OFFLOAD_TCP_TSO | \
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-		DEV_TX_OFFLOAD_GRE_TNL_TSO | \
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO)
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)
 
 	uint64_t tx_offload = dev->data->dev_conf.txmode.offloads;
 	if (tx_offload & HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index c8229e9076b5..dfea5d5b4c2f 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -307,7 +307,7 @@ struct hns3_rx_queue {
 	uint16_t rx_rearm_start; /* index of BD that driver re-arming from */
 	uint16_t rx_rearm_nb;    /* number of remaining BDs to be re-armed */
 
-	/* 4 if DEV_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
+	/* 4 if RTE_ETH_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
 	uint8_t crc_len;
 
 	/*
diff --git a/drivers/net/hns3/hns3_rxtx_vec.c b/drivers/net/hns3/hns3_rxtx_vec.c
index ff434d2d33ed..455110361aac 100644
--- a/drivers/net/hns3/hns3_rxtx_vec.c
+++ b/drivers/net/hns3/hns3_rxtx_vec.c
@@ -22,8 +22,8 @@ hns3_tx_check_vec_support(struct rte_eth_dev *dev)
 	if (hns3_dev_get_support(hw, PTP))
 		return -ENOTSUP;
 
-	/* Only support DEV_TX_OFFLOAD_MBUF_FAST_FREE */
-	if (txmode->offloads != DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	/* Only support RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE */
+	if (txmode->offloads != RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		return -ENOTSUP;
 
 	return 0;
@@ -228,10 +228,10 @@ hns3_rxq_vec_check(struct hns3_rx_queue *rxq, void *arg)
 int
 hns3_rx_check_vec_support(struct rte_eth_dev *dev)
 {
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-	uint64_t offloads_mask = DEV_RX_OFFLOAD_TCP_LRO |
-				 DEV_RX_OFFLOAD_VLAN;
+	uint64_t offloads_mask = RTE_ETH_RX_OFFLOAD_TCP_LRO |
+				 RTE_ETH_RX_OFFLOAD_VLAN;
 
 	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if (hns3_dev_get_support(hw, PTP))
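
Two distinct namespaces are renamed throughout these hunks: RTE_ETH_LINK_SPEED_* are bit flags used in dev_conf.link_speeds to request or advertise capabilities, while RTE_ETH_SPEED_NUM_* are plain Mbps values reported in struct rte_eth_link. A brief sketch of both sides under the new names (illustrative; port_id assumed in scope):

    struct rte_eth_conf conf = {0};
    struct rte_eth_link link;

    /* Request a fixed 10 Gbps link: a bitmap of flags, not an Mbps value. */
    conf.link_speeds = RTE_ETH_LINK_SPEED_FIXED | RTE_ETH_LINK_SPEED_10G;

    /* Read back the negotiated speed: a plain Mbps number. */
    if (rte_eth_link_get_nowait(port_id, &link) == 0 &&
        link.link_speed == RTE_ETH_SPEED_NUM_10G)
        printf("port %u is at 10G\n", port_id);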
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 0a4db0891d4a..293df887bf7c 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1629,7 +1629,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
 
 	/* Set the global registers with default ether type value */
 	if (!pf->support_multi_driver) {
-		ret = i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+		ret = i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					 RTE_ETHER_TYPE_VLAN);
 		if (ret != I40E_SUCCESS) {
 			PMD_INIT_LOG(ERR,
@@ -1896,8 +1896,8 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 	ad->tx_simple_allowed = true;
 	ad->tx_vec_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Only legacy filter API needs the following fdir config. So when the
 	 * legacy filter API is deprecated, the following codes should also be
@@ -1931,13 +1931,13 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 	 *  number, which will be available after rx_queue_setup(). The
 	 *  dev_start() function is a good place for the RSS setup.
 	 */
-	if (mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+	if (mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) {
 		ret = i40e_vmdq_setup(dev);
 		if (ret)
 			goto err;
 	}
 
-	if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if (mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		ret = i40e_dcb_setup(dev);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "failed to configure DCB.");
@@ -2214,17 +2214,17 @@ i40e_parse_link_speeds(uint16_t link_speeds)
 {
 	uint8_t link_speed = I40E_LINK_SPEED_UNKNOWN;
 
-	if (link_speeds & ETH_LINK_SPEED_40G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
 		link_speed |= I40E_LINK_SPEED_40GB;
-	if (link_speeds & ETH_LINK_SPEED_25G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
 		link_speed |= I40E_LINK_SPEED_25GB;
-	if (link_speeds & ETH_LINK_SPEED_20G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_20G)
 		link_speed |= I40E_LINK_SPEED_20GB;
-	if (link_speeds & ETH_LINK_SPEED_10G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 		link_speed |= I40E_LINK_SPEED_10GB;
-	if (link_speeds & ETH_LINK_SPEED_1G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
 		link_speed |= I40E_LINK_SPEED_1GB;
-	if (link_speeds & ETH_LINK_SPEED_100M)
+	if (link_speeds & RTE_ETH_LINK_SPEED_100M)
 		link_speed |= I40E_LINK_SPEED_100MB;
 
 	return link_speed;
@@ -2332,13 +2332,13 @@ i40e_apply_link_speed(struct rte_eth_dev *dev)
 	abilities |= I40E_AQ_PHY_ENABLE_ATOMIC_LINK |
 		     I40E_AQ_PHY_LINK_ENABLED;
 
-	if (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
-		conf->link_speeds = ETH_LINK_SPEED_40G |
-				    ETH_LINK_SPEED_25G |
-				    ETH_LINK_SPEED_20G |
-				    ETH_LINK_SPEED_10G |
-				    ETH_LINK_SPEED_1G |
-				    ETH_LINK_SPEED_100M;
+	if (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
+		conf->link_speeds = RTE_ETH_LINK_SPEED_40G |
+				    RTE_ETH_LINK_SPEED_25G |
+				    RTE_ETH_LINK_SPEED_20G |
+				    RTE_ETH_LINK_SPEED_10G |
+				    RTE_ETH_LINK_SPEED_1G |
+				    RTE_ETH_LINK_SPEED_100M;
 
 		abilities |= I40E_AQ_PHY_AN_ENABLED;
 	} else {
@@ -2876,34 +2876,34 @@ update_link_reg(struct i40e_hw *hw, struct rte_eth_link *link)
 	/* Parse the link status */
 	switch (link_speed) {
 	case I40E_REG_SPEED_0:
-		link->link_speed = ETH_SPEED_NUM_100M;
+		link->link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case I40E_REG_SPEED_1:
-		link->link_speed = ETH_SPEED_NUM_1G;
+		link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case I40E_REG_SPEED_2:
 		if (hw->mac.type == I40E_MAC_X722)
-			link->link_speed = ETH_SPEED_NUM_2_5G;
+			link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		else
-			link->link_speed = ETH_SPEED_NUM_10G;
+			link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case I40E_REG_SPEED_3:
 		if (hw->mac.type == I40E_MAC_X722) {
-			link->link_speed = ETH_SPEED_NUM_5G;
+			link->link_speed = RTE_ETH_SPEED_NUM_5G;
 		} else {
 			reg_val = I40E_READ_REG(hw, I40E_PRTMAC_MACC);
 
 			if (reg_val & I40E_REG_MACC_25GB)
-				link->link_speed = ETH_SPEED_NUM_25G;
+				link->link_speed = RTE_ETH_SPEED_NUM_25G;
 			else
-				link->link_speed = ETH_SPEED_NUM_40G;
+				link->link_speed = RTE_ETH_SPEED_NUM_40G;
 		}
 		break;
 	case I40E_REG_SPEED_4:
 		if (hw->mac.type == I40E_MAC_X722)
-			link->link_speed = ETH_SPEED_NUM_10G;
+			link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		else
-			link->link_speed = ETH_SPEED_NUM_20G;
+			link->link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "Unknown link speed info %u", link_speed);
@@ -2930,8 +2930,8 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
 		status = i40e_aq_get_link_info(hw, enable_lse,
 						&link_status, NULL);
 		if (unlikely(status != I40E_SUCCESS)) {
-			link->link_speed = ETH_SPEED_NUM_NONE;
-			link->link_duplex = ETH_LINK_FULL_DUPLEX;
+			link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+			link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR, "Failed to get link info");
 			return;
 		}
@@ -2946,28 +2946,28 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
 	/* Parse the link status */
 	switch (link_status.link_speed) {
 	case I40E_LINK_SPEED_100MB:
-		link->link_speed = ETH_SPEED_NUM_100M;
+		link->link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case I40E_LINK_SPEED_1GB:
-		link->link_speed = ETH_SPEED_NUM_1G;
+		link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case I40E_LINK_SPEED_10GB:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case I40E_LINK_SPEED_20GB:
-		link->link_speed = ETH_SPEED_NUM_20G;
+		link->link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case I40E_LINK_SPEED_25GB:
-		link->link_speed = ETH_SPEED_NUM_25G;
+		link->link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case I40E_LINK_SPEED_40GB:
-		link->link_speed = ETH_SPEED_NUM_40G;
+		link->link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	default:
 		if (link->link_status)
-			link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+			link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		else
-			link->link_speed = ETH_SPEED_NUM_NONE;
+			link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 }
@@ -2984,9 +2984,9 @@ i40e_dev_link_update(struct rte_eth_dev *dev,
 	memset(&link, 0, sizeof(link));
 
 	/* i40e uses full duplex only */
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 
 	if (!wait_to_complete && !enable_lse)
 		update_link_reg(hw, &link);
@@ -3720,33 +3720,33 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_KEEP_CRC |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_RSS_HASH;
-
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
 		dev_info->tx_queue_offload_capa;
 	dev_info->dev_capa =
 		RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -3805,7 +3805,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	if (I40E_PHY_TYPE_SUPPORT_40G(hw->phy.phy_types)) {
 		/* For XL710 */
-		dev_info->speed_capa = ETH_LINK_SPEED_40G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_40G;
 		dev_info->default_rxportconf.nb_queues = 2;
 		dev_info->default_txportconf.nb_queues = 2;
 		if (dev->data->nb_rx_queues == 1)
@@ -3819,17 +3819,17 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	} else if (I40E_PHY_TYPE_SUPPORT_25G(hw->phy.phy_types)) {
 		/* For XXV710 */
-		dev_info->speed_capa = ETH_LINK_SPEED_25G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_25G;
 		dev_info->default_rxportconf.nb_queues = 1;
 		dev_info->default_txportconf.nb_queues = 1;
 		dev_info->default_rxportconf.ring_size = 256;
 		dev_info->default_txportconf.ring_size = 256;
 	} else {
 		/* For X710 */
-		dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
 		dev_info->default_rxportconf.nb_queues = 1;
 		dev_info->default_txportconf.nb_queues = 1;
-		if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_10G) {
+		if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_10G) {
 			dev_info->default_rxportconf.ring_size = 512;
 			dev_info->default_txportconf.ring_size = 256;
 		} else {
@@ -3868,7 +3868,7 @@ i40e_vlan_tpid_set_by_registers(struct rte_eth_dev *dev,
 	int ret;
 
 	if (qinq) {
-		if (vlan_type == ETH_VLAN_TYPE_OUTER)
+		if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
 			reg_id = 2;
 	}
 
@@ -3915,12 +3915,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
 	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	int qinq = dev->data->dev_conf.rxmode.offloads &
-		   DEV_RX_OFFLOAD_VLAN_EXTEND;
+		   RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	int ret = 0;
 
-	if ((vlan_type != ETH_VLAN_TYPE_INNER &&
-	     vlan_type != ETH_VLAN_TYPE_OUTER) ||
-	    (!qinq && vlan_type == ETH_VLAN_TYPE_INNER)) {
+	if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+	     vlan_type != RTE_ETH_VLAN_TYPE_OUTER) ||
+	    (!qinq && vlan_type == RTE_ETH_VLAN_TYPE_INNER)) {
 		PMD_DRV_LOG(ERR,
 			    "Unsupported vlan type.");
 		return -EINVAL;
@@ -3934,12 +3934,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
 	/* Support for 802.1ad frames was added in NVM API 1.7 */
 	if (hw->flags & I40E_HW_FLAG_802_1AD_CAPABLE) {
 		if (qinq) {
-			if (vlan_type == ETH_VLAN_TYPE_OUTER)
+			if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
 				hw->first_tag = rte_cpu_to_le_16(tpid);
-			else if (vlan_type == ETH_VLAN_TYPE_INNER)
+			else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER)
 				hw->second_tag = rte_cpu_to_le_16(tpid);
 		} else {
-			if (vlan_type == ETH_VLAN_TYPE_OUTER)
+			if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
 				hw->second_tag = rte_cpu_to_le_16(tpid);
 		}
 		ret = i40e_aq_set_switch_config(hw, 0, 0, 0, NULL);
@@ -3998,37 +3998,37 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			i40e_vsi_config_vlan_filter(vsi, TRUE);
 		else
 			i40e_vsi_config_vlan_filter(vsi, FALSE);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			i40e_vsi_config_vlan_stripping(vsi, TRUE);
 		else
 			i40e_vsi_config_vlan_stripping(vsi, FALSE);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
 			i40e_vsi_config_double_vlan(vsi, TRUE);
 			/* Set global registers with default ethertype. */
-			i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+			i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					   RTE_ETHER_TYPE_VLAN);
-			i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+			i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
 					   RTE_ETHER_TYPE_VLAN);
 		}
 		else
 			i40e_vsi_config_double_vlan(vsi, FALSE);
 	}
 
-	if (mask & ETH_QINQ_STRIP_MASK) {
+	if (mask & RTE_ETH_QINQ_STRIP_MASK) {
 		/* Enable or disable outer VLAN stripping */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
 			i40e_vsi_config_outer_vlan_stripping(vsi, TRUE);
 		else
 			i40e_vsi_config_outer_vlan_stripping(vsi, FALSE);
@@ -4111,17 +4111,17 @@ i40e_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	 /* Return current mode according to actual setting */
 	switch (hw->fc.current_mode) {
 	case I40E_FC_FULL:
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	case I40E_FC_TX_PAUSE:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case I40E_FC_RX_PAUSE:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case I40E_FC_NONE:
 	default:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	};
 
 	return 0;
@@ -4137,10 +4137,10 @@ i40e_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	struct i40e_hw *hw;
 	struct i40e_pf *pf;
 	enum i40e_fc_mode rte_fcmode_2_i40e_fcmode[] = {
-		[RTE_FC_NONE] = I40E_FC_NONE,
-		[RTE_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
-		[RTE_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
-		[RTE_FC_FULL] = I40E_FC_FULL
+		[RTE_ETH_FC_NONE] = I40E_FC_NONE,
+		[RTE_ETH_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
+		[RTE_ETH_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
+		[RTE_ETH_FC_FULL] = I40E_FC_FULL
 	};
 
 	/* The high_water field in rte_eth_fc_conf uses kilobyte units */
@@ -4287,7 +4287,7 @@ i40e_macaddr_add(struct rte_eth_dev *dev,
 	}
 
 	rte_memcpy(&mac_filter.mac_addr, mac_addr, RTE_ETHER_ADDR_LEN);
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 		mac_filter.filter_type = I40E_MACVLAN_PERFECT_MATCH;
 	else
 		mac_filter.filter_type = I40E_MAC_PERFECT_MATCH;
@@ -4440,7 +4440,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
 	int ret;
 
 	if (reta_size != lut_size ||
-		reta_size > ETH_RSS_RETA_SIZE_512) {
+		reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
 		PMD_DRV_LOG(ERR,
 			"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
 			reta_size, lut_size);
@@ -4456,8 +4456,8 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
 	if (ret)
 		goto out;
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			lut[i] = reta_conf[idx].reta[shift];
 	}
@@ -4483,7 +4483,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
 	int ret;
 
 	if (reta_size != lut_size ||
-		reta_size > ETH_RSS_RETA_SIZE_512) {
+		reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
 		PMD_DRV_LOG(ERR,
 			"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
 			reta_size, lut_size);
@@ -4500,8 +4500,8 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
 	if (ret)
 		goto out;
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = lut[i];
 	}
@@ -4818,7 +4818,7 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
 			pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
 				hw->func_caps.num_vsis - vsi_count);
 			pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
-				ETH_64_POOLS);
+				RTE_ETH_64_POOLS);
 			if (pf->max_nb_vmdq_vsi) {
 				pf->flags |= I40E_FLAG_VMDQ;
 				pf->vmdq_nb_qps = pf->vmdq_nb_qp_max;
@@ -6104,10 +6104,10 @@ i40e_dev_init_vlan(struct rte_eth_dev *dev)
 	int mask = 0;
 
 	/* Apply vlan offload setting */
-	mask = ETH_VLAN_STRIP_MASK |
-	       ETH_QINQ_STRIP_MASK |
-	       ETH_VLAN_FILTER_MASK |
-	       ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK |
+	       RTE_ETH_QINQ_STRIP_MASK |
+	       RTE_ETH_VLAN_FILTER_MASK |
+	       RTE_ETH_VLAN_EXTEND_MASK;
 	ret = i40e_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_DRV_LOG(INFO, "Failed to update vlan offload");
@@ -6236,9 +6236,9 @@ i40e_pf_setup(struct i40e_pf *pf)
 
 	/* Configure filter control */
 	memset(&settings, 0, sizeof(settings));
-	if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_128)
+	if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_128)
 		settings.hash_lut_size = I40E_HASH_LUT_SIZE_128;
-	else if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_512)
+	else if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_512)
 		settings.hash_lut_size = I40E_HASH_LUT_SIZE_512;
 	else {
 		PMD_DRV_LOG(ERR, "Hash lookup table size (%u) not supported",
@@ -7098,7 +7098,7 @@ i40e_find_vlan_filter(struct i40e_vsi *vsi,
 {
 	uint32_t vid_idx, vid_bit;
 
-	if (vlan_id > ETH_VLAN_ID_MAX)
+	if (vlan_id > RTE_ETH_VLAN_ID_MAX)
 		return 0;
 
 	vid_idx = I40E_VFTA_IDX(vlan_id);
@@ -7133,7 +7133,7 @@ i40e_set_vlan_filter(struct i40e_vsi *vsi,
 	struct i40e_aqc_add_remove_vlan_element_data vlan_data = {0};
 	int ret;
 
-	if (vlan_id > ETH_VLAN_ID_MAX)
+	if (vlan_id > RTE_ETH_VLAN_ID_MAX)
 		return;
 
 	i40e_store_vlan_filter(vsi, vlan_id, on);
@@ -7727,25 +7727,25 @@ static int
 i40e_dev_get_filter_type(uint16_t filter_type, uint16_t *flag)
 {
 	switch (filter_type) {
-	case RTE_TUNNEL_FILTER_IMAC_IVLAN:
+	case RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN;
 		break;
-	case RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID:
+	case RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID;
 		break;
-	case RTE_TUNNEL_FILTER_IMAC_TENID:
+	case RTE_ETH_TUNNEL_FILTER_IMAC_TENID:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID;
 		break;
-	case RTE_TUNNEL_FILTER_OMAC_TENID_IMAC:
+	case RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC;
 		break;
-	case ETH_TUNNEL_FILTER_IMAC:
+	case RTE_ETH_TUNNEL_FILTER_IMAC:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC;
 		break;
-	case ETH_TUNNEL_FILTER_OIP:
+	case RTE_ETH_TUNNEL_FILTER_OIP:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_OIP;
 		break;
-	case ETH_TUNNEL_FILTER_IIP:
+	case RTE_ETH_TUNNEL_FILTER_IIP:
 		*flag = I40E_AQC_ADD_CLOUD_FILTER_IIP;
 		break;
 	default:
@@ -8711,16 +8711,16 @@ i40e_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
 					  I40E_AQC_TUNNEL_TYPE_VXLAN);
 		break;
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
 					  I40E_AQC_TUNNEL_TYPE_VXLAN_GPE);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -1;
 		break;
@@ -8746,12 +8746,12 @@ i40e_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		ret = i40e_del_vxlan_port(pf, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -1;
 		break;
@@ -8843,7 +8843,7 @@ int
 i40e_pf_reset_rss_reta(struct i40e_pf *pf)
 {
 	struct i40e_hw *hw = &pf->adapter->hw;
-	uint8_t lut[ETH_RSS_RETA_SIZE_512];
+	uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
 	uint32_t i;
 	int num;
 
@@ -8851,7 +8851,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
 	 * configured. It's necessary to calculate the actual PF
 	 * queues that are configured.
 	 */
-	if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+	if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
 		num = i40e_pf_calc_configured_queues_num(pf);
 	else
 		num = pf->dev_data->nb_rx_queues;
@@ -8930,7 +8930,7 @@ i40e_pf_config_rss(struct i40e_pf *pf)
 	rss_hf = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
 	mq_mode = pf->dev_data->dev_conf.rxmode.mq_mode;
 	if (!(rss_hf & pf->adapter->flow_types_mask) ||
-	    !(mq_mode & ETH_MQ_RX_RSS_FLAG))
+	    !(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 		return 0;
 
 	hw = I40E_PF_TO_HW(pf);
@@ -10267,16 +10267,16 @@ i40e_start_timecounters(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 
 	switch (link.link_speed) {
-	case ETH_SPEED_NUM_40G:
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_25G:
 		tsync_inc_l = I40E_PTP_40GB_INCVAL & 0xFFFFFFFF;
 		tsync_inc_h = I40E_PTP_40GB_INCVAL >> 32;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		tsync_inc_l = I40E_PTP_10GB_INCVAL & 0xFFFFFFFF;
 		tsync_inc_h = I40E_PTP_10GB_INCVAL >> 32;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		tsync_inc_l = I40E_PTP_1GB_INCVAL & 0xFFFFFFFF;
 		tsync_inc_h = I40E_PTP_1GB_INCVAL >> 32;
 		break;
@@ -10504,7 +10504,7 @@ i40e_parse_dcb_configure(struct rte_eth_dev *dev,
 	else
 		*tc_map = RTE_LEN2MASK(dcb_rx_conf->nb_tcs, uint8_t);
 
-	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		dcb_cfg->pfc.willing = 0;
 		dcb_cfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS;
 		dcb_cfg->pfc.pfcenable = *tc_map;
@@ -11012,7 +11012,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
 	uint16_t bsf, tc_mapping;
 	int i, j = 0;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = rte_bsf32(vsi->enabled_tc + 1);
 	else
 		dcb_info->nb_tcs = 1;
@@ -11060,7 +11060,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
 				dcb_info->tc_queue.tc_rxq[j][i].nb_queue;
 		}
 		j++;
-	} while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, ETH_MAX_VMDQ_POOL));
+	} while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, RTE_ETH_MAX_VMDQ_POOL));
 	return 0;
 }
 
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 1d57b9617e66..d8042abbd9be 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -147,17 +147,17 @@ enum i40e_flxpld_layer_idx {
 		       I40E_FLAG_RSS_AQ_CAPABLE)
 
 #define I40E_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD)
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L2_PAYLOAD)
 
 /* All bits of RSS hash enable for X722 */
 #define I40E_RSS_HENA_ALL_X722 ( \
@@ -1063,7 +1063,7 @@ struct i40e_rte_flow_rss_conf {
 	uint8_t key[(I40E_VFQF_HKEY_MAX_INDEX > I40E_PFQF_HKEY_MAX_INDEX ?
 		     I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
 		    sizeof(uint32_t)];		/**< Hash key. */
-	uint16_t queue[ETH_RSS_RETA_SIZE_512];	/**< Queues indices to use. */
+	uint16_t queue[RTE_ETH_RSS_RETA_SIZE_512];	/**< Queues indices to use. */
 
 	bool symmetric_enable;		/**< true if symmetric hashing is enabled */
 	uint64_t config_pctypes;	/**< All PCTYPEs associated with the flow */
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index e41a84f1d737..9acaa1875105 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2015,7 +2015,7 @@ i40e_get_outer_vlan(struct rte_eth_dev *dev)
 {
 	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	int qinq = dev->data->dev_conf.rxmode.offloads &
-		DEV_RX_OFFLOAD_VLAN_EXTEND;
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	uint64_t reg_r = 0;
 	uint16_t reg_id;
 	uint16_t tpid;
@@ -3601,13 +3601,13 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
 }
 
 static uint16_t i40e_supported_tunnel_filter_types[] = {
-	ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_TENID |
-	ETH_TUNNEL_FILTER_IVLAN,
-	ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_IVLAN,
-	ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_TENID,
-	ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_TENID |
-	ETH_TUNNEL_FILTER_IMAC,
-	ETH_TUNNEL_FILTER_IMAC,
+	RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID |
+	RTE_ETH_TUNNEL_FILTER_IVLAN,
+	RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
+	RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID,
+	RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_TENID |
+	RTE_ETH_TUNNEL_FILTER_IMAC,
+	RTE_ETH_TUNNEL_FILTER_IMAC,
 };
 
 static int
@@ -3697,12 +3697,12 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 					rte_memcpy(&filter->outer_mac,
 						   &eth_spec->dst,
 						   RTE_ETHER_ADDR_LEN);
-					filter_type |= ETH_TUNNEL_FILTER_OMAC;
+					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
 						   &eth_spec->dst,
 						   RTE_ETHER_ADDR_LEN);
-					filter_type |= ETH_TUNNEL_FILTER_IMAC;
+					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
 			}
 			break;
@@ -3724,7 +3724,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 					filter->inner_vlan =
 					      rte_be_to_cpu_16(vlan_spec->tci) &
 					      I40E_VLAN_TCI_MASK;
-				filter_type |= ETH_TUNNEL_FILTER_IVLAN;
+				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
@@ -3798,7 +3798,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 					   vxlan_spec->vni, 3);
 				filter->tenant_id =
 					rte_be_to_cpu_32(tenant_id_be);
-				filter_type |= ETH_TUNNEL_FILTER_TENID;
+				filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
 			}
 
 			vxlan_flag = 1;
@@ -3927,12 +3927,12 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 					rte_memcpy(&filter->outer_mac,
 						   &eth_spec->dst,
 						   RTE_ETHER_ADDR_LEN);
-					filter_type |= ETH_TUNNEL_FILTER_OMAC;
+					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
 						   &eth_spec->dst,
 						   RTE_ETHER_ADDR_LEN);
-					filter_type |= ETH_TUNNEL_FILTER_IMAC;
+					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
 			}
 
@@ -3955,7 +3955,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 					filter->inner_vlan =
 					      rte_be_to_cpu_16(vlan_spec->tci) &
 					      I40E_VLAN_TCI_MASK;
-				filter_type |= ETH_TUNNEL_FILTER_IVLAN;
+				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
@@ -4050,7 +4050,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 					   nvgre_spec->tni, 3);
 				filter->tenant_id =
 					rte_be_to_cpu_32(tenant_id_be);
-				filter_type |= ETH_TUNNEL_FILTER_TENID;
+				filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
 			}
 
 			nvgre_flag = 1;
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 6579b1a00b16..1229f2f7a1c7 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -102,47 +102,47 @@ struct i40e_hash_map_rss_inset {
 
 const struct i40e_hash_map_rss_inset i40e_hash_rss_inset[] = {
 	/* IPv4 */
-	{ ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
-	{ ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+	{ RTE_ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+	{ RTE_ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
 
-	{ ETH_RSS_NONFRAG_IPV4_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	  I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
 
-	{ ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
 
 	/* IPv6 */
-	{ ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
-	{ ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+	{ RTE_ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+	{ RTE_ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
 
-	{ ETH_RSS_NONFRAG_IPV6_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	  I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
 
-	{ ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
 
 	/* Port */
-	{ ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
+	{ RTE_ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
 	/* Ether */
-	{ ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
-	{ ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
+	{ RTE_ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
+	{ RTE_ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
 
 	/* VLAN */
-	{ ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
-	{ ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
+	{ RTE_ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
+	{ RTE_ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
 };
 
 #define I40E_HASH_VOID_NEXT_ALLOW	BIT_ULL(RTE_FLOW_ITEM_TYPE_ETH)
@@ -201,30 +201,30 @@ struct i40e_hash_match_pattern {
 #define I40E_HASH_MAP_CUS_PATTERN(pattern, rss_mask, cus_pctype) { \
 	pattern, rss_mask, true, cus_pctype }
 
-#define I40E_HASH_L2_RSS_MASK		(ETH_RSS_VLAN | ETH_RSS_ETH | \
-					ETH_RSS_L2_SRC_ONLY | \
-					ETH_RSS_L2_DST_ONLY)
+#define I40E_HASH_L2_RSS_MASK		(RTE_ETH_RSS_VLAN | RTE_ETH_RSS_ETH | \
+					RTE_ETH_RSS_L2_SRC_ONLY | \
+					RTE_ETH_RSS_L2_DST_ONLY)
 
 #define I40E_HASH_L23_RSS_MASK		(I40E_HASH_L2_RSS_MASK | \
-					ETH_RSS_L3_SRC_ONLY | \
-					ETH_RSS_L3_DST_ONLY)
+					RTE_ETH_RSS_L3_SRC_ONLY | \
+					RTE_ETH_RSS_L3_DST_ONLY)
 
-#define I40E_HASH_IPV4_L23_RSS_MASK	(ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
-#define I40E_HASH_IPV6_L23_RSS_MASK	(ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV4_L23_RSS_MASK	(RTE_ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV6_L23_RSS_MASK	(RTE_ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
 
 #define I40E_HASH_L234_RSS_MASK		(I40E_HASH_L23_RSS_MASK | \
-					ETH_RSS_PORT | ETH_RSS_L4_SRC_ONLY | \
-					ETH_RSS_L4_DST_ONLY)
+					RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY | \
+					RTE_ETH_RSS_L4_DST_ONLY)
 
-#define I40E_HASH_IPV4_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV4)
-#define I40E_HASH_IPV6_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV6)
+#define I40E_HASH_IPV4_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV4)
+#define I40E_HASH_IPV6_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV6)
 
-#define I40E_HASH_L4_TYPES		(ETH_RSS_NONFRAG_IPV4_TCP | \
-					ETH_RSS_NONFRAG_IPV4_UDP | \
-					ETH_RSS_NONFRAG_IPV4_SCTP | \
-					ETH_RSS_NONFRAG_IPV6_TCP | \
-					ETH_RSS_NONFRAG_IPV6_UDP | \
-					ETH_RSS_NONFRAG_IPV6_SCTP)
+#define I40E_HASH_L4_TYPES		(RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+					RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+					RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+					RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+					RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+					RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 /* Currently supported patterns and RSS types.
  * All items that have the same pattern types are together.
@@ -232,68 +232,68 @@ struct i40e_hash_match_pattern {
 static const struct i40e_hash_match_pattern match_patterns[] = {
 	/* Ether */
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_ETH,
-			      ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
+			      RTE_ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
 			      I40E_FILTER_PCTYPE_L2_PAYLOAD),
 
 	/* IPv4 */
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
-			      ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
+			      RTE_ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_FRAG_IPV4),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
-			      ETH_RSS_NONFRAG_IPV4_OTHER |
+			      RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
 			      I40E_HASH_IPV4_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_OTHER),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_TCP,
-			      ETH_RSS_NONFRAG_IPV4_TCP |
+			      RTE_ETH_RSS_NONFRAG_IPV4_TCP |
 			      I40E_HASH_IPV4_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_TCP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_UDP,
-			      ETH_RSS_NONFRAG_IPV4_UDP |
+			      RTE_ETH_RSS_NONFRAG_IPV4_UDP |
 			      I40E_HASH_IPV4_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_UDP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_SCTP,
-			      ETH_RSS_NONFRAG_IPV4_SCTP |
+			      RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
 			      I40E_HASH_IPV4_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_SCTP),
 
 	/* IPv6 */
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
-			      ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
+			      RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_FRAG_IPV6),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
-			      ETH_RSS_NONFRAG_IPV6_OTHER |
+			      RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			      I40E_HASH_IPV6_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_OTHER),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_TCP,
-			      ETH_RSS_NONFRAG_IPV6_TCP |
+			      RTE_ETH_RSS_NONFRAG_IPV6_TCP |
 			      I40E_HASH_IPV6_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_TCP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_UDP,
-			      ETH_RSS_NONFRAG_IPV6_UDP |
+			      RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			      I40E_HASH_IPV6_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_UDP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_SCTP,
-			      ETH_RSS_NONFRAG_IPV6_SCTP |
+			      RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
 			      I40E_HASH_IPV6_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_SCTP),
 
 	/* ESP */
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_UDP_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_UDP_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
 
 	/* GTPC */
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPC,
@@ -308,27 +308,27 @@ static const struct i40e_hash_match_pattern match_patterns[] = {
 				  I40E_HASH_IPV4_L234_RSS_MASK,
 				  I40E_CUSTOMIZED_GTPU),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV4,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV6,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU,
 				  I40E_HASH_IPV6_L234_RSS_MASK,
 				  I40E_CUSTOMIZED_GTPU),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV4,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV6,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
 
 	/* L2TPV3 */
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_L2TPV3,
-				  ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
+				  RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_L2TPV3,
-				  ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
+				  RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
 
 	/* AH */
-	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, ETH_RSS_AH,
+	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, RTE_ETH_RSS_AH,
 				  I40E_CUSTOMIZED_AH_IPV4),
-	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, ETH_RSS_AH,
+	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, RTE_ETH_RSS_AH,
 				  I40E_CUSTOMIZED_AH_IPV6),
 };
 
@@ -564,29 +564,29 @@ i40e_hash_get_inset(uint64_t rss_types)
 	/* If SRC_ONLY and DST_ONLY of the same level are used simultaneously,
 	 * it is treated the same as if neither of them were added.
 	 */
-	mask = rss_types & (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY);
-	if (mask == ETH_RSS_L2_SRC_ONLY)
+	mask = rss_types & (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY);
+	if (mask == RTE_ETH_RSS_L2_SRC_ONLY)
 		inset &= ~I40E_INSET_DMAC;
-	else if (mask == ETH_RSS_L2_DST_ONLY)
+	else if (mask == RTE_ETH_RSS_L2_DST_ONLY)
 		inset &= ~I40E_INSET_SMAC;
 
-	mask = rss_types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
-	if (mask == ETH_RSS_L3_SRC_ONLY)
+	mask = rss_types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
+	if (mask == RTE_ETH_RSS_L3_SRC_ONLY)
 		inset &= ~(I40E_INSET_IPV4_DST | I40E_INSET_IPV6_DST);
-	else if (mask == ETH_RSS_L3_DST_ONLY)
+	else if (mask == RTE_ETH_RSS_L3_DST_ONLY)
 		inset &= ~(I40E_INSET_IPV4_SRC | I40E_INSET_IPV6_SRC);
 
-	mask = rss_types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
-	if (mask == ETH_RSS_L4_SRC_ONLY)
+	mask = rss_types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
+	if (mask == RTE_ETH_RSS_L4_SRC_ONLY)
 		inset &= ~I40E_INSET_DST_PORT;
-	else if (mask == ETH_RSS_L4_DST_ONLY)
+	else if (mask == RTE_ETH_RSS_L4_DST_ONLY)
 		inset &= ~I40E_INSET_SRC_PORT;
 
 	if (rss_types & I40E_HASH_L4_TYPES) {
 		uint64_t l3_mask = rss_types &
-				   (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+				   (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
 		uint64_t l4_mask = rss_types &
-				   (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+				   (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
 
 		if (l3_mask && !l4_mask)
 			inset &= ~(I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT);
@@ -825,7 +825,7 @@ i40e_hash_config(struct i40e_pf *pf,
 
 	/* Update lookup table */
 	if (rss_info->queue_num > 0) {
-		uint8_t lut[ETH_RSS_RETA_SIZE_512];
+		uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
 		uint32_t i, j = 0;
 
 		for (i = 0; i < hw->func_caps.rss_table_size; i++) {
@@ -932,7 +932,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
 			    "RSS key is ignored when queues specified");
 
 	pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+	if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
 		max_queue = i40e_pf_calc_configured_queues_num(pf);
 	else
 		max_queue = pf->dev_data->nb_rx_queues;
@@ -1070,22 +1070,22 @@ i40e_hash_validate_rss_types(uint64_t rss_types)
 	uint64_t type, mask;
 
 	/* Validate L2 */
-	type = ETH_RSS_ETH & rss_types;
-	mask = (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY) & rss_types;
+	type = RTE_ETH_RSS_ETH & rss_types;
+	mask = (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY) & rss_types;
 	if (!type && mask)
 		return false;
 
 	/* Validate L3 */
-	type = (I40E_HASH_L4_TYPES | ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-	       ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_IPV6 |
-	       ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
-	mask = (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY) & rss_types;
+	type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+	       RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_IPV6 |
+	       RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
+	mask = (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY) & rss_types;
 	if (!type && mask)
 		return false;
 
 	/* Validate L4 */
-	type = (I40E_HASH_L4_TYPES | ETH_RSS_PORT) & rss_types;
-	mask = (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY) & rss_types;
+	type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_PORT) & rss_types;
+	mask = (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY) & rss_types;
 	if (!type && mask)
 		return false;
 
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index e2d8b2b5f7f1..ccb3924a5f68 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -1207,24 +1207,24 @@ i40e_notify_vf_link_status(struct rte_eth_dev *dev, struct i40e_pf_vf *vf)
 	event.event_data.link_event.link_status =
 		dev->data->dev_link.link_status;
 
-	/* need to convert the ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
+	/* need to convert the RTE_ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
 	switch (dev->data->dev_link.link_speed) {
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_100MB;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_1GB;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_10GB;
 		break;
-	case ETH_SPEED_NUM_20G:
+	case RTE_ETH_SPEED_NUM_20G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_20GB;
 		break;
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_25G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_25GB;
 		break;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_40GB;
 		break;
 	default:
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 554b1142c136..a13bb81115f4 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1329,7 +1329,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 	for (i = 0; i < tx_rs_thresh; i++)
 		rte_prefetch0((txep + i)->mbuf);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
 		if (k) {
 			for (j = 0; j != k; j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
 				for (i = 0; i < RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
@@ -1995,7 +1995,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->queue_id = queue_idx;
 	rxq->reg_idx = reg_idx;
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -2243,7 +2243,7 @@ i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
 	}
 	/* check simple tx conflict */
 	if (ad->tx_simple_allowed) {
-		if ((txq->offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
+		if ((txq->offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
 				txq->tx_rs_thresh < RTE_PMD_I40E_TX_MAX_BURST) {
 			PMD_DRV_LOG(ERR, "No-simple tx is required.");
 			return -EINVAL;
@@ -3417,7 +3417,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
 	/* Use a simple Tx queue if possible (only fast free is allowed) */
 	ad->tx_simple_allowed =
 		(txq->offloads ==
-		 (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+		 (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
 		 txq->tx_rs_thresh >= RTE_PMD_I40E_TX_MAX_BURST);
 	ad->tx_vec_allowed = (ad->tx_simple_allowed &&
 			txq->tx_rs_thresh <= RTE_I40E_TX_MAX_FREE_BUF_SZ);
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 2301e6301d7d..5e6eecc50116 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -120,7 +120,7 @@ struct i40e_rx_queue {
 	bool rx_deferred_start; /**< don't start this queue in dev start */
 	uint16_t rx_using_sse; /**< flag indicating the usage of vPMD for Rx */
 	uint8_t dcb_tc;         /**< Traffic class of rx queue */
-	uint64_t offloads; /**< Rx offload flags of DEV_RX_OFFLOAD_* */
+	uint64_t offloads; /**< Rx offload flags of RTE_ETH_RX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
@@ -166,7 +166,7 @@ struct i40e_tx_queue {
 	bool q_set; /**< indicate if tx queue has been configured */
 	bool tx_deferred_start; /**< don't start this queue in dev start */
 	uint8_t dcb_tc;         /**< Traffic class of tx queue */
-	uint64_t offloads; /**< Tx offload flags of DEV_RX_OFFLOAD_* */
+	uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 	const struct rte_memzone *mz;
 };
 
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index bd21d6422394..5f00d43950aa 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -899,7 +899,7 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->tx_next_dd - (n - 1);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		void **cache_objs;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index f52ed98d62d0..0192164c35fa 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -100,7 +100,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 	  */
 	txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
 		for (i = 0; i < n; i++) {
 			free[i] = txep[i].mbuf;
 			txep[i].mbuf = NULL;
@@ -211,7 +211,7 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
 	struct i40e_adapter *ad =
 		I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 	struct i40e_rx_queue *rxq;
 	uint16_t desc, i;
 	bool first_queue;
@@ -221,11 +221,11 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
 		return -1;
 
 	 /* no header split support */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
 		return -1;
 
 	/* no QinQ support */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 		return -1;
 
 	/**
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 12d5a2e48a9b..663c46b91dc5 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -42,30 +42,30 @@ i40e_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX;
 	dev_info->hash_key_size = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
 		sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_64;
 	dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
 	dev_info->max_mac_addrs = I40E_NUM_MACADDR_MAX;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_FILTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_MULTI_SEGS  |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -385,19 +385,19 @@ i40e_vf_representor_vlan_offload_set(struct rte_eth_dev *ethdev, int mask)
 		return -EINVAL;
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* Enable or disable VLAN filtering offload */
 		if (ethdev->data->dev_conf.rxmode.offloads &
-		    DEV_RX_OFFLOAD_VLAN_FILTER)
+		    RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			return i40e_vsi_config_vlan_filter(vsi, TRUE);
 		else
 			return i40e_vsi_config_vlan_filter(vsi, FALSE);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping offload */
 		if (ethdev->data->dev_conf.rxmode.offloads &
-		    DEV_RX_OFFLOAD_VLAN_STRIP)
+		    RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			return i40e_vsi_config_vlan_stripping(vsi, TRUE);
 		else
 			return i40e_vsi_config_vlan_stripping(vsi, FALSE);
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 34bfa9af4734..12f541f53926 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -50,18 +50,18 @@
 	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
 
 #define IAVF_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 |         \
-	ETH_RSS_NONFRAG_IPV4_TCP |  \
-	ETH_RSS_NONFRAG_IPV4_UDP |  \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 |         \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP |  \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP |  \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
 
 #define IAVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
 #define IAVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 611f1f7722b0..df44df772e4e 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -266,53 +266,53 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
 	static const uint64_t map_hena_rss[] = {
 		/* IPv4 */
 		[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP] =
-				ETH_RSS_NONFRAG_IPV4_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP] =
-				ETH_RSS_NONFRAG_IPV4_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_UDP] =
-				ETH_RSS_NONFRAG_IPV4_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK] =
-				ETH_RSS_NONFRAG_IPV4_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP] =
-				ETH_RSS_NONFRAG_IPV4_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_SCTP] =
-				ETH_RSS_NONFRAG_IPV4_SCTP,
+				RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_OTHER] =
-				ETH_RSS_NONFRAG_IPV4_OTHER,
-		[IAVF_FILTER_PCTYPE_FRAG_IPV4] = ETH_RSS_FRAG_IPV4,
+				RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+		[IAVF_FILTER_PCTYPE_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
 
 		/* IPv6 */
 		[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP] =
-				ETH_RSS_NONFRAG_IPV6_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP] =
-				ETH_RSS_NONFRAG_IPV6_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_UDP] =
-				ETH_RSS_NONFRAG_IPV6_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK] =
-				ETH_RSS_NONFRAG_IPV6_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP] =
-				ETH_RSS_NONFRAG_IPV6_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_SCTP] =
-				ETH_RSS_NONFRAG_IPV6_SCTP,
+				RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_OTHER] =
-				ETH_RSS_NONFRAG_IPV6_OTHER,
-		[IAVF_FILTER_PCTYPE_FRAG_IPV6] = ETH_RSS_FRAG_IPV6,
+				RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+		[IAVF_FILTER_PCTYPE_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
 
 		/* L2 Payload */
-		[IAVF_FILTER_PCTYPE_L2_PAYLOAD] = ETH_RSS_L2_PAYLOAD
+		[IAVF_FILTER_PCTYPE_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
 	};
 
-	const uint64_t ipv4_rss = ETH_RSS_NONFRAG_IPV4_UDP |
-				  ETH_RSS_NONFRAG_IPV4_TCP |
-				  ETH_RSS_NONFRAG_IPV4_SCTP |
-				  ETH_RSS_NONFRAG_IPV4_OTHER |
-				  ETH_RSS_FRAG_IPV4;
+	const uint64_t ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+				  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+				  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+				  RTE_ETH_RSS_FRAG_IPV4;
 
-	const uint64_t ipv6_rss = ETH_RSS_NONFRAG_IPV6_UDP |
-				  ETH_RSS_NONFRAG_IPV6_TCP |
-				  ETH_RSS_NONFRAG_IPV6_SCTP |
-				  ETH_RSS_NONFRAG_IPV6_OTHER |
-				  ETH_RSS_FRAG_IPV6;
+	const uint64_t ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+				  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+				  RTE_ETH_RSS_FRAG_IPV6;
 
 	struct iavf_info *vf =  IAVF_DEV_PRIVATE_TO_VF(adapter);
 	uint64_t caps = 0, hena = 0, valid_rss_hf = 0;
@@ -331,13 +331,13 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
 	}
 
 	/**
-	 * ETH_RSS_IPV4 and ETH_RSS_IPV6 can be considered as 2
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
 	 * generalizations of all other IPv4 and IPv6 RSS types.
 	 */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		rss_hf |= ipv4_rss;
 
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		rss_hf |= ipv6_rss;
 
 	RTE_BUILD_BUG_ON(RTE_DIM(map_hena_rss) > sizeof(uint64_t) * CHAR_BIT);
@@ -363,10 +363,10 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
 	}
 
 	if (valid_rss_hf & ipv4_rss)
-		valid_rss_hf |= rss_hf & ETH_RSS_IPV4;
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
 
 	if (valid_rss_hf & ipv6_rss)
-		valid_rss_hf |= rss_hf & ETH_RSS_IPV6;
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
 
 	if (rss_hf & ~valid_rss_hf)
 		PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
@@ -467,7 +467,7 @@ iavf_dev_vlan_insert_set(struct rte_eth_dev *dev)
 		return 0;
 
 	enable = !!(dev->data->dev_conf.txmode.offloads &
-		    DEV_TX_OFFLOAD_VLAN_INSERT);
+		    RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
 	iavf_config_vlan_insert_v2(adapter, enable);
 
 	return 0;
@@ -479,10 +479,10 @@ iavf_dev_init_vlan(struct rte_eth_dev *dev)
 	int err;
 
 	err = iavf_dev_vlan_offload_set(dev,
-					ETH_VLAN_STRIP_MASK |
-					ETH_QINQ_STRIP_MASK |
-					ETH_VLAN_FILTER_MASK |
-					ETH_VLAN_EXTEND_MASK);
+					RTE_ETH_VLAN_STRIP_MASK |
+					RTE_ETH_QINQ_STRIP_MASK |
+					RTE_ETH_VLAN_FILTER_MASK |
+					RTE_ETH_VLAN_EXTEND_MASK);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Failed to update vlan offload");
 		return err;
@@ -512,8 +512,8 @@ iavf_dev_configure(struct rte_eth_dev *dev)
 	ad->rx_vec_allowed = true;
 	ad->tx_vec_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Large VF setting */
 	if (num_queue_pairs > IAVF_MAX_NUM_QUEUES_DFLT) {
@@ -611,7 +611,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
 	}
 
 	rxq->max_pkt_len = max_pkt_len;
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 	    rxq->max_pkt_len > buf_size) {
 		dev_data->scattered_rx = 1;
 	}
@@ -961,34 +961,34 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->flow_type_rss_offloads = IAVF_RSS_OFFLOAD_ALL;
 	dev_info->max_mac_addrs = IAVF_NUM_MACADDR_MAX;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_CRC)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_KEEP_CRC;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_free_thresh = IAVF_DEFAULT_RX_FREE_THRESH,
@@ -1048,42 +1048,42 @@ iavf_dev_link_update(struct rte_eth_dev *dev,
 	 */
 	switch (vf->link_speed) {
 	case 10:
-		new_link.link_speed = ETH_SPEED_NUM_10M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case 100:
-		new_link.link_speed = ETH_SPEED_NUM_100M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case 1000:
-		new_link.link_speed = ETH_SPEED_NUM_1G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case 10000:
-		new_link.link_speed = ETH_SPEED_NUM_10G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case 20000:
-		new_link.link_speed = ETH_SPEED_NUM_20G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case 25000:
-		new_link.link_speed = ETH_SPEED_NUM_25G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case 40000:
-		new_link.link_speed = ETH_SPEED_NUM_40G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case 50000:
-		new_link.link_speed = ETH_SPEED_NUM_50G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case 100000:
-		new_link.link_speed = ETH_SPEED_NUM_100G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	default:
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	new_link.link_status = vf->link_up ? ETH_LINK_UP :
-					     ETH_LINK_DOWN;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vf->link_up ? RTE_ETH_LINK_UP :
+					     RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(dev, &new_link);
 }
@@ -1231,14 +1231,14 @@ iavf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask)
 	bool enable;
 	int err;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
 
 		iavf_iterate_vlan_filters_v2(dev, enable);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 		err = iavf_config_vlan_strip_v2(adapter, enable);
 		/* If not support, the stripping is already disabled by PF */
@@ -1267,9 +1267,9 @@ iavf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		return -ENOTSUP;
 
 	/* Vlan stripping setting */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			err = iavf_enable_vlan_strip(adapter);
 		else
 			err = iavf_disable_vlan_strip(adapter);
@@ -1311,8 +1311,8 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
 	rte_memcpy(lut, vf->rss_lut, reta_size);
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			lut[i] = reta_conf[idx].reta[shift];
 	}
@@ -1348,8 +1348,8 @@ iavf_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = vf->rss_lut[i];
 	}
@@ -1556,7 +1556,7 @@ iavf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	ret = iavf_query_stats(adapter, &pstats);
 	if (ret == 0) {
 		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
-					 DEV_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
 					 RTE_ETHER_CRC_LEN;
 		iavf_update_stats(vsi, pstats);
 		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index 1f2d3772d105..248054f79efd 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -341,90 +341,90 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
 /* rss type super set */
 
 /* IPv4 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV4	(ETH_RSS_ETH | ETH_RSS_IPV4 | \
-					 ETH_RSS_FRAG_IPV4 | \
-					 ETH_RSS_IPV4_CHKSUM)
+#define IAVF_RSS_TYPE_OUTER_IPV4	(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_FRAG_IPV4 | \
+					 RTE_ETH_RSS_IPV4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV4_UDP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV4_TCP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV4_SCTP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 /* IPv6 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV6	(ETH_RSS_ETH | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_OUTER_IPV6	(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
 #define IAVF_RSS_TYPE_OUTER_IPV6_FRAG	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_FRAG_IPV6)
+					 RTE_ETH_RSS_FRAG_IPV6)
 #define IAVF_RSS_TYPE_OUTER_IPV6_UDP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV6_TCP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define IAVF_RSS_TYPE_OUTER_IPV6_SCTP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 /* VLAN IPV4 */
 #define IAVF_RSS_TYPE_VLAN_IPV4		(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV4_UDP	(IAVF_RSS_TYPE_OUTER_IPV4_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV4_TCP	(IAVF_RSS_TYPE_OUTER_IPV4_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV4_SCTP	(IAVF_RSS_TYPE_OUTER_IPV4_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 /* VLAN IPv6 */
 #define IAVF_RSS_TYPE_VLAN_IPV6		(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_FRAG	(IAVF_RSS_TYPE_OUTER_IPV6_FRAG | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_UDP	(IAVF_RSS_TYPE_OUTER_IPV6_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_TCP	(IAVF_RSS_TYPE_OUTER_IPV6_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_SCTP	(IAVF_RSS_TYPE_OUTER_IPV6_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 /* IPv4 inner */
-#define IAVF_RSS_TYPE_INNER_IPV4	ETH_RSS_IPV4
-#define IAVF_RSS_TYPE_INNER_IPV4_UDP	(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV4_TCP	(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV4_SCTP	(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV4	RTE_ETH_RSS_IPV4
+#define IAVF_RSS_TYPE_INNER_IPV4_UDP	(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV4_TCP	(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV4_SCTP	(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 /* IPv6 inner */
-#define IAVF_RSS_TYPE_INNER_IPV6	ETH_RSS_IPV6
-#define IAVF_RSS_TYPE_INNER_IPV6_UDP	(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV6_TCP	(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV6_SCTP	(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV6	RTE_ETH_RSS_IPV6
+#define IAVF_RSS_TYPE_INNER_IPV6_UDP	(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV6_TCP	(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV6_SCTP	(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 /* GTPU IPv4 */
 #define IAVF_RSS_TYPE_GTPU_IPV4		(IAVF_RSS_TYPE_INNER_IPV4 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV4_UDP	(IAVF_RSS_TYPE_INNER_IPV4_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV4_TCP	(IAVF_RSS_TYPE_INNER_IPV4_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 /* GTPU IPv6 */
 #define IAVF_RSS_TYPE_GTPU_IPV6		(IAVF_RSS_TYPE_INNER_IPV6 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV6_UDP	(IAVF_RSS_TYPE_INNER_IPV6_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV6_TCP	(IAVF_RSS_TYPE_INNER_IPV6_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 /* ESP, AH, L2TPV3 and PFCP */
-#define IAVF_RSS_TYPE_IPV4_ESP		(ETH_RSS_ESP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV4_AH		(ETH_RSS_AH | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_ESP		(ETH_RSS_ESP | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV6_AH		(ETH_RSS_AH | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV4_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV6_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
 
 /**
  * Supported patterns for hash.
@@ -442,7 +442,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_vlan_ipv4_udp,		IAVF_RSS_TYPE_VLAN_IPV4_UDP,	&outer_ipv4_udp_tmplt},
 	{iavf_pattern_eth_vlan_ipv4_tcp,		IAVF_RSS_TYPE_VLAN_IPV4_TCP,	&outer_ipv4_tcp_tmplt},
 	{iavf_pattern_eth_vlan_ipv4_sctp,		IAVF_RSS_TYPE_VLAN_IPV4_SCTP,	&outer_ipv4_sctp_tmplt},
-	{iavf_pattern_eth_ipv4_gtpu,			ETH_RSS_IPV4,			&outer_ipv4_udp_tmplt},
+	{iavf_pattern_eth_ipv4_gtpu,			RTE_ETH_RSS_IPV4,			&outer_ipv4_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv4,		IAVF_RSS_TYPE_GTPU_IPV4,	&inner_ipv4_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv4_udp,		IAVF_RSS_TYPE_GTPU_IPV4_UDP,	&inner_ipv4_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv4_tcp,		IAVF_RSS_TYPE_GTPU_IPV4_TCP,	&inner_ipv4_tcp_tmplt},
@@ -484,9 +484,9 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_ipv4_ah,			IAVF_RSS_TYPE_IPV4_AH,		&ipv4_ah_tmplt},
 	{iavf_pattern_eth_ipv4_l2tpv3,			IAVF_RSS_TYPE_IPV4_L2TPV3,	&ipv4_l2tpv3_tmplt},
 	{iavf_pattern_eth_ipv4_pfcp,			IAVF_RSS_TYPE_IPV4_PFCP,	&ipv4_pfcp_tmplt},
-	{iavf_pattern_eth_ipv4_gtpc,			ETH_RSS_IPV4,			&ipv4_udp_gtpc_tmplt},
-	{iavf_pattern_eth_ecpri,			ETH_RSS_ECPRI,			&eth_ecpri_tmplt},
-	{iavf_pattern_eth_ipv4_ecpri,			ETH_RSS_ECPRI,			&ipv4_ecpri_tmplt},
+	{iavf_pattern_eth_ipv4_gtpc,			RTE_ETH_RSS_IPV4,			&ipv4_udp_gtpc_tmplt},
+	{iavf_pattern_eth_ecpri,			RTE_ETH_RSS_ECPRI,			&eth_ecpri_tmplt},
+	{iavf_pattern_eth_ipv4_ecpri,			RTE_ETH_RSS_ECPRI,			&ipv4_ecpri_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv4,		IAVF_RSS_TYPE_INNER_IPV4,	&inner_ipv4_tmplt},
 	{iavf_pattern_eth_ipv6_gre_ipv4,		IAVF_RSS_TYPE_INNER_IPV4, &inner_ipv4_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv4_tcp,	IAVF_RSS_TYPE_INNER_IPV4_TCP, &inner_ipv4_tcp_tmplt},
@@ -504,7 +504,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_vlan_ipv6_udp,		IAVF_RSS_TYPE_VLAN_IPV6_UDP,	&outer_ipv6_udp_tmplt},
 	{iavf_pattern_eth_vlan_ipv6_tcp,		IAVF_RSS_TYPE_VLAN_IPV6_TCP,	&outer_ipv6_tcp_tmplt},
 	{iavf_pattern_eth_vlan_ipv6_sctp,		IAVF_RSS_TYPE_VLAN_IPV6_SCTP,	&outer_ipv6_sctp_tmplt},
-	{iavf_pattern_eth_ipv6_gtpu,			ETH_RSS_IPV6,			&outer_ipv6_udp_tmplt},
+	{iavf_pattern_eth_ipv6_gtpu,			RTE_ETH_RSS_IPV6,			&outer_ipv6_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv6,		IAVF_RSS_TYPE_GTPU_IPV6,	&inner_ipv6_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv6_udp,		IAVF_RSS_TYPE_GTPU_IPV6_UDP,	&inner_ipv6_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv6_tcp,		IAVF_RSS_TYPE_GTPU_IPV6_TCP,	&inner_ipv6_tcp_tmplt},
@@ -546,7 +546,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_ipv6_ah,			IAVF_RSS_TYPE_IPV6_AH,		&ipv6_ah_tmplt},
 	{iavf_pattern_eth_ipv6_l2tpv3,			IAVF_RSS_TYPE_IPV6_L2TPV3,	&ipv6_l2tpv3_tmplt},
 	{iavf_pattern_eth_ipv6_pfcp,			IAVF_RSS_TYPE_IPV6_PFCP,	&ipv6_pfcp_tmplt},
-	{iavf_pattern_eth_ipv6_gtpc,			ETH_RSS_IPV6,			&ipv6_udp_gtpc_tmplt},
+	{iavf_pattern_eth_ipv6_gtpc,			RTE_ETH_RSS_IPV6,			&ipv6_udp_gtpc_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv6,		IAVF_RSS_TYPE_INNER_IPV6,	&inner_ipv6_tmplt},
 	{iavf_pattern_eth_ipv6_gre_ipv6,		IAVF_RSS_TYPE_INNER_IPV6, &inner_ipv6_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv6_tcp,	IAVF_RSS_TYPE_INNER_IPV6_TCP, &inner_ipv6_tcp_tmplt},
@@ -580,52 +580,52 @@ iavf_rss_hash_set(struct iavf_adapter *ad, uint64_t rss_hf, bool add)
 	struct virtchnl_rss_cfg rss_cfg;
 
 #define IAVF_RSS_HF_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 	rss_cfg.rss_algorithm = VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC;
-	if (rss_hf & ETH_RSS_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_IPV4) {
 		rss_cfg.proto_hdrs = inner_ipv4_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		rss_cfg.proto_hdrs = inner_ipv4_udp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		rss_cfg.proto_hdrs = inner_ipv4_tcp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
 		rss_cfg.proto_hdrs = inner_ipv4_sctp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_IPV6) {
 		rss_cfg.proto_hdrs = inner_ipv6_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
 		rss_cfg.proto_hdrs = inner_ipv6_udp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 		rss_cfg.proto_hdrs = inner_ipv6_tcp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
 		rss_cfg.proto_hdrs = inner_ipv6_sctp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
@@ -779,28 +779,28 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 		hdr = &proto_hdrs->proto_hdr[i];
 		switch (hdr->type) {
 		case VIRTCHNL_PROTO_HDR_ETH:
-			if (!(rss_type & ETH_RSS_ETH))
+			if (!(rss_type & RTE_ETH_RSS_ETH))
 				hdr->field_selector = 0;
-			else if (rss_type & ETH_RSS_L2_SRC_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
 				REFINE_PROTO_FLD(DEL, ETH_DST);
-			else if (rss_type & ETH_RSS_L2_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
 				REFINE_PROTO_FLD(DEL, ETH_SRC);
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV4:
 			if (rss_type &
-			    (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			     ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV4_SCTP)) {
-				if (rss_type & ETH_RSS_FRAG_IPV4) {
+			    (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			     RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
 					iavf_hash_add_fragment_hdr(proto_hdrs, i + 1);
-				} else if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+				} else if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV4_DST);
-				} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+				} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV4_SRC);
 				} else if (rss_type &
-					   (ETH_RSS_L4_SRC_ONLY |
-					    ETH_RSS_L4_DST_ONLY)) {
+					   (RTE_ETH_RSS_L4_SRC_ONLY |
+					    RTE_ETH_RSS_L4_DST_ONLY)) {
 					REFINE_PROTO_FLD(DEL, IPV4_DST);
 					REFINE_PROTO_FLD(DEL, IPV4_SRC);
 				}
@@ -808,39 +808,39 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_IPV4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, IPV4_CHKSUM);
 
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV4_FRAG:
 			if (rss_type &
-			    (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			     ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV4_SCTP)) {
-				if (rss_type & ETH_RSS_FRAG_IPV4)
+			    (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			     RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_FRAG_IPV4)
 					REFINE_PROTO_FLD(ADD, IPV4_FRAG_PKID);
 			} else {
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_IPV4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, IPV4_CHKSUM);
 
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV6:
 			if (rss_type &
-			    (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-			     ETH_RSS_NONFRAG_IPV6_UDP |
-			     ETH_RSS_NONFRAG_IPV6_TCP |
-			     ETH_RSS_NONFRAG_IPV6_SCTP)) {
-				if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			    (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+			     RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV6_DST);
-				} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+				} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV6_SRC);
 				} else if (rss_type &
-					   (ETH_RSS_L4_SRC_ONLY |
-					    ETH_RSS_L4_DST_ONLY)) {
+					   (RTE_ETH_RSS_L4_SRC_ONLY |
+					    RTE_ETH_RSS_L4_DST_ONLY)) {
 					REFINE_PROTO_FLD(DEL, IPV6_DST);
 					REFINE_PROTO_FLD(DEL, IPV6_SRC);
 				}
@@ -857,7 +857,7 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			}
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG:
-			if (rss_type & ETH_RSS_FRAG_IPV6)
+			if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
 				REFINE_PROTO_FLD(ADD, IPV6_EH_FRAG_PKID);
 			else
 				hdr->field_selector = 0;
@@ -865,87 +865,87 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			break;
 		case VIRTCHNL_PROTO_HDR_UDP:
 			if (rss_type &
-			    (ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV6_UDP)) {
-				if (rss_type & ETH_RSS_L4_SRC_ONLY)
+			    (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+				if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 					REFINE_PROTO_FLD(DEL, UDP_DST_PORT);
-				else if (rss_type & ETH_RSS_L4_DST_ONLY)
+				else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 					REFINE_PROTO_FLD(DEL, UDP_SRC_PORT);
 				else if (rss_type &
-					 (ETH_RSS_L3_SRC_ONLY |
-					  ETH_RSS_L3_DST_ONLY))
+					 (RTE_ETH_RSS_L3_SRC_ONLY |
+					  RTE_ETH_RSS_L3_DST_ONLY))
 					hdr->field_selector = 0;
 			} else {
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_L4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, UDP_CHKSUM);
 			break;
 		case VIRTCHNL_PROTO_HDR_TCP:
 			if (rss_type &
-			    (ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV6_TCP)) {
-				if (rss_type & ETH_RSS_L4_SRC_ONLY)
+			    (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+				if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 					REFINE_PROTO_FLD(DEL, TCP_DST_PORT);
-				else if (rss_type & ETH_RSS_L4_DST_ONLY)
+				else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 					REFINE_PROTO_FLD(DEL, TCP_SRC_PORT);
 				else if (rss_type &
-					 (ETH_RSS_L3_SRC_ONLY |
-					  ETH_RSS_L3_DST_ONLY))
+					 (RTE_ETH_RSS_L3_SRC_ONLY |
+					  RTE_ETH_RSS_L3_DST_ONLY))
 					hdr->field_selector = 0;
 			} else {
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_L4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, TCP_CHKSUM);
 			break;
 		case VIRTCHNL_PROTO_HDR_SCTP:
 			if (rss_type &
-			    (ETH_RSS_NONFRAG_IPV4_SCTP |
-			     ETH_RSS_NONFRAG_IPV6_SCTP)) {
-				if (rss_type & ETH_RSS_L4_SRC_ONLY)
+			    (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 					REFINE_PROTO_FLD(DEL, SCTP_DST_PORT);
-				else if (rss_type & ETH_RSS_L4_DST_ONLY)
+				else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 					REFINE_PROTO_FLD(DEL, SCTP_SRC_PORT);
 				else if (rss_type &
-					 (ETH_RSS_L3_SRC_ONLY |
-					  ETH_RSS_L3_DST_ONLY))
+					 (RTE_ETH_RSS_L3_SRC_ONLY |
+					  RTE_ETH_RSS_L3_DST_ONLY))
 					hdr->field_selector = 0;
 			} else {
 				hdr->field_selector = 0;
 			}
 
-			if (rss_type & ETH_RSS_L4_CHKSUM)
+			if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 				REFINE_PROTO_FLD(ADD, SCTP_CHKSUM);
 			break;
 		case VIRTCHNL_PROTO_HDR_S_VLAN:
-			if (!(rss_type & ETH_RSS_S_VLAN))
+			if (!(rss_type & RTE_ETH_RSS_S_VLAN))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_C_VLAN:
-			if (!(rss_type & ETH_RSS_C_VLAN))
+			if (!(rss_type & RTE_ETH_RSS_C_VLAN))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_L2TPV3:
-			if (!(rss_type & ETH_RSS_L2TPV3))
+			if (!(rss_type & RTE_ETH_RSS_L2TPV3))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_ESP:
-			if (!(rss_type & ETH_RSS_ESP))
+			if (!(rss_type & RTE_ETH_RSS_ESP))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_AH:
-			if (!(rss_type & ETH_RSS_AH))
+			if (!(rss_type & RTE_ETH_RSS_AH))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_PFCP:
-			if (!(rss_type & ETH_RSS_PFCP))
+			if (!(rss_type & RTE_ETH_RSS_PFCP))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_ECPRI:
-			if (!(rss_type & ETH_RSS_ECPRI))
+			if (!(rss_type & RTE_ETH_RSS_ECPRI))
 				hdr->field_selector = 0;
 			break;
 		default:
@@ -962,7 +962,7 @@ iavf_refine_proto_hdrs_gtpu(struct virtchnl_proto_hdrs *proto_hdrs,
 	struct virtchnl_proto_hdr *hdr;
 	int i;
 
-	if (!(rss_type & ETH_RSS_GTPU))
+	if (!(rss_type & RTE_ETH_RSS_GTPU))
 		return;
 
 	for (i = 0; i < proto_hdrs->count; i++) {
@@ -1059,10 +1059,10 @@ static void iavf_refine_proto_hdrs(struct virtchnl_proto_hdrs *proto_hdrs,
 }
 
 static uint64_t invalid_rss_comb[] = {
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	RTE_ETH_RSS_L3_PRE32 | RTE_ETH_RSS_L3_PRE40 |
 	RTE_ETH_RSS_L3_PRE48 | RTE_ETH_RSS_L3_PRE56 |
 	RTE_ETH_RSS_L3_PRE96
@@ -1073,27 +1073,27 @@ struct rss_attr_type {
 	uint64_t type;
 };
 
-#define VALID_RSS_IPV4_L4	(ETH_RSS_NONFRAG_IPV4_UDP	| \
-				 ETH_RSS_NONFRAG_IPV4_TCP	| \
-				 ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4	(RTE_ETH_RSS_NONFRAG_IPV4_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
-#define VALID_RSS_IPV6_L4	(ETH_RSS_NONFRAG_IPV6_UDP	| \
-				 ETH_RSS_NONFRAG_IPV6_TCP	| \
-				 ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4	(RTE_ETH_RSS_NONFRAG_IPV6_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
-#define VALID_RSS_IPV4		(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4		(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
 				 VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6		(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6		(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
 				 VALID_RSS_IPV6_L4)
 #define VALID_RSS_L3		(VALID_RSS_IPV4 | VALID_RSS_IPV6)
 #define VALID_RSS_L4		(VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
 
-#define VALID_RSS_ATTR		(ETH_RSS_L3_SRC_ONLY	| \
-				 ETH_RSS_L3_DST_ONLY	| \
-				 ETH_RSS_L4_SRC_ONLY	| \
-				 ETH_RSS_L4_DST_ONLY	| \
-				 ETH_RSS_L2_SRC_ONLY	| \
-				 ETH_RSS_L2_DST_ONLY	| \
+#define VALID_RSS_ATTR		(RTE_ETH_RSS_L3_SRC_ONLY	| \
+				 RTE_ETH_RSS_L3_DST_ONLY	| \
+				 RTE_ETH_RSS_L4_SRC_ONLY	| \
+				 RTE_ETH_RSS_L4_DST_ONLY	| \
+				 RTE_ETH_RSS_L2_SRC_ONLY	| \
+				 RTE_ETH_RSS_L2_DST_ONLY	| \
 				 RTE_ETH_RSS_L3_PRE64)
 
 #define INVALID_RSS_ATTR	(RTE_ETH_RSS_L3_PRE32	| \
@@ -1103,9 +1103,9 @@ struct rss_attr_type {
 				 RTE_ETH_RSS_L3_PRE96)
 
 static struct rss_attr_type rss_attr_to_valid_type[] = {
-	{ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY,	ETH_RSS_ETH},
-	{ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
-	{ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
+	{RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY,	RTE_ETH_RSS_ETH},
+	{RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
+	{RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
 	/* current ipv6 prefix only supports prefix 64 bits*/
 	{RTE_ETH_RSS_L3_PRE64,				VALID_RSS_IPV6},
 	{INVALID_RSS_ATTR,				0}
@@ -1122,15 +1122,15 @@ iavf_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
 	 * hash function.
 	 */
 	if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
-		if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
-		    ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+		if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+		    RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
 			return true;
 
 		if (!(rss_type &
-		   (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
-		    ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
-		    ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
-		    ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+		   (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+		    RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
 			return true;
 	}
 
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 88bbd40c1027..ac4db117f5cd 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -617,7 +617,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rxq->vsi = vsi;
 	rxq->offloads = offloads;
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index f4ae2fd6e123..2d7f6b1b2dca 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -24,22 +24,22 @@
 #define IAVF_VPMD_TX_MAX_FREE_BUF 64
 
 #define IAVF_TX_NO_VECTOR_FLAGS (				 \
-		DEV_TX_OFFLOAD_MULTI_SEGS |		 \
-		DEV_TX_OFFLOAD_TCP_TSO)
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |		 \
+		RTE_ETH_TX_OFFLOAD_TCP_TSO)
 
 #define IAVF_TX_VECTOR_OFFLOAD (				 \
-		DEV_TX_OFFLOAD_VLAN_INSERT |		 \
-		DEV_TX_OFFLOAD_QINQ_INSERT |		 \
-		DEV_TX_OFFLOAD_IPV4_CKSUM |		 \
-		DEV_TX_OFFLOAD_SCTP_CKSUM |		 \
-		DEV_TX_OFFLOAD_UDP_CKSUM |		 \
-		DEV_TX_OFFLOAD_TCP_CKSUM)
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |		 \
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |		 \
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |		 \
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |		 \
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |		 \
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 
 #define IAVF_RX_VECTOR_OFFLOAD (				 \
-		DEV_RX_OFFLOAD_CHECKSUM |		 \
-		DEV_RX_OFFLOAD_SCTP_CKSUM |		 \
-		DEV_RX_OFFLOAD_VLAN |		 \
-		DEV_RX_OFFLOAD_RSS_HASH)
+		RTE_ETH_RX_OFFLOAD_CHECKSUM |		 \
+		RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |		 \
+		RTE_ETH_RX_OFFLOAD_VLAN |		 \
+		RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define IAVF_VECTOR_PATH 0
 #define IAVF_VECTOR_OFFLOAD_PATH 1
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 72a4fcab04a5..b47c51b8ebe4 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -906,7 +906,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 		 * needs to load 2nd 16B of each desc for RSS hash parsing,
 		 * will cause performance drop to get into this context.
 		 */
-		if (offloads & DEV_RX_OFFLOAD_RSS_HASH ||
+		if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH ||
 		    rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh7 =
@@ -958,7 +958,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 					(_mm256_castsi128_si256(raw_desc_bh0),
 					raw_desc_bh1, 1);
 
-			if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+			if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 				/**
 				 * to shift the 32b RSS hash value to the
 				 * highest 32b of each 128b before mask
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 12375d3d80bd..b8f2f69f12fc 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1141,7 +1141,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 			 * needs to load 2nd 16B of each desc for RSS hash parsing,
 			 * will cause performance drop to get into this context.
 			 */
-			if (offloads & DEV_RX_OFFLOAD_RSS_HASH ||
+			if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH ||
 			    rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
@@ -1193,7 +1193,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 						(_mm256_castsi128_si256(raw_desc_bh0),
 						 raw_desc_bh1, 1);
 
-				if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+				if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 					/**
 					 * to shift the 32b RSS hash value to the
 					 * highest 32b of each 128b before mask
@@ -1721,7 +1721,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->next_dd - (n - 1);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
 								rte_lcore_id());
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index edb54991e298..1de43b9b8ee2 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -819,7 +819,7 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
 		 * needs to load 2nd 16B of each desc for RSS hash parsing,
 		 * will cause performance drop to get into this context.
 		 */
-		if (offloads & DEV_RX_OFFLOAD_RSS_HASH) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh3 =
 				_mm_load_si128
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index c9c01a14e349..7b7df5eebb6d 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -835,7 +835,7 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
 		PMD_DRV_LOG(DEBUG, "RSS is not supported");
 		return -ENOTSUP;
 	}
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
 		/* set all lut items to default queue */
 		memset(hw->rss_lut, 0, hw->vf_res->rss_lut_size);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index b8a537cb8556..a90e40964ec5 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -95,7 +95,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
 	}
 
 	rxq->max_pkt_len = max_pkt_len;
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 	    (rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size) {
 		dev_data->scattered_rx = 1;
 	}
@@ -576,7 +576,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -637,7 +637,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
 	}
 
 	ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	ad->pf.adapter_stopped = 1;
 
 	return 0;
@@ -652,8 +652,8 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
 	ad->rx_bulk_alloc_allowed = true;
 	ad->tx_simple_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	return 0;
 }
@@ -675,27 +675,27 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -925,42 +925,42 @@ ice_dcf_link_update(struct rte_eth_dev *dev,
 	 */
 	switch (hw->link_speed) {
 	case 10:
-		new_link.link_speed = ETH_SPEED_NUM_10M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case 100:
-		new_link.link_speed = ETH_SPEED_NUM_100M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case 1000:
-		new_link.link_speed = ETH_SPEED_NUM_1G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case 10000:
-		new_link.link_speed = ETH_SPEED_NUM_10G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case 20000:
-		new_link.link_speed = ETH_SPEED_NUM_20G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case 25000:
-		new_link.link_speed = ETH_SPEED_NUM_25G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case 40000:
-		new_link.link_speed = ETH_SPEED_NUM_40G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case 50000:
-		new_link.link_speed = ETH_SPEED_NUM_50G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case 100000:
-		new_link.link_speed = ETH_SPEED_NUM_100G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	default:
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	new_link.link_status = hw->link_up ? ETH_LINK_UP :
-					     ETH_LINK_DOWN;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = hw->link_up ? RTE_ETH_LINK_UP :
+					     RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(dev, &new_link);
 }
@@ -979,11 +979,11 @@ ice_dcf_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ice_create_tunnel(parent_hw, TNL_VXLAN,
 					udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_ECPRI:
+	case RTE_ETH_TUNNEL_TYPE_ECPRI:
 		ret = ice_create_tunnel(parent_hw, TNL_ECPRI,
 					udp_tunnel->udp_port);
 		break;
@@ -1010,8 +1010,8 @@ ice_dcf_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
-	case RTE_TUNNEL_TYPE_ECPRI:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_ECPRI:
 		ret = ice_destroy_tunnel(parent_hw, udp_tunnel->udp_port, 0);
 		break;
 	default:
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 44fb38dbe7b1..b9fcfc80ad9b 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -37,7 +37,7 @@ ice_dcf_vf_repr_dev_configure(struct rte_eth_dev *dev)
 static int
 ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -45,7 +45,7 @@ ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
 static int
 ice_dcf_vf_repr_dev_stop(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -143,28 +143,28 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -246,9 +246,9 @@ ice_dcf_vf_repr_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		return -ENOTSUP;
 
 	/* Vlan stripping setting */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		bool enable = !!(dev_conf->rxmode.offloads &
-				 DEV_RX_OFFLOAD_VLAN_STRIP);
+				 RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 		if (enable && repr->outer_vlan_info.port_vlan_ena) {
 			PMD_DRV_LOG(ERR,
@@ -345,7 +345,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
 	if (!ice_dcf_vlan_offload_ena(repr))
 		return -ENOTSUP;
 
-	if (vlan_type != ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
 		PMD_DRV_LOG(ERR,
 			    "Can accelerate only outer VLAN in QinQ\n");
 		return -EINVAL;
@@ -375,7 +375,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
 
 	if (repr->outer_vlan_info.stripping_ena) {
 		err = ice_dcf_vf_repr_vlan_offload_set(dev,
-						       ETH_VLAN_STRIP_MASK);
+						       RTE_ETH_VLAN_STRIP_MASK);
 		if (err) {
 			PMD_DRV_LOG(ERR,
 				    "Failed to reset VLAN stripping : %d\n",
@@ -449,7 +449,7 @@ ice_dcf_vf_repr_init_vlan(struct rte_eth_dev *vf_rep_eth_dev)
 	int err;
 
 	err = ice_dcf_vf_repr_vlan_offload_set(vf_rep_eth_dev,
-					       ETH_VLAN_STRIP_MASK);
+					       RTE_ETH_VLAN_STRIP_MASK);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Failed to set VLAN offload");
 		return err;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 2e7273cd1e93..fe546cf5159d 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1480,9 +1480,9 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
 	TAILQ_INIT(&vsi->mac_list);
 	TAILQ_INIT(&vsi->vlan_list);
 
-	/* Be sync with ETH_RSS_RETA_SIZE_x maximum value definition */
+	/* Be sync with RTE_ETH_RSS_RETA_SIZE_x maximum value definition */
 	pf->hash_lut_size = hw->func_caps.common_cap.rss_table_size >
-			ETH_RSS_RETA_SIZE_512 ? ETH_RSS_RETA_SIZE_512 :
+			RTE_ETH_RSS_RETA_SIZE_512 ? RTE_ETH_RSS_RETA_SIZE_512 :
 			hw->func_caps.common_cap.rss_table_size;
 	pf->flags |= ICE_FLAG_RSS_AQ_CAPABLE;
 
@@ -2986,14 +2986,14 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	int ret;
 
 #define ICE_RSS_HF_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 	ret = ice_rem_vsi_rss_cfg(hw, vsi->idx);
 	if (ret)
@@ -3003,7 +3003,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	cfg.symm = 0;
 	cfg.hdr_type = ICE_RSS_OUTER_HEADERS;
 	/* Configure RSS for IPv4 with src/dst addr as input set */
-	if (rss_hf & ETH_RSS_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_IPV4) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV4;
 		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -3013,7 +3013,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for IPv6 with src/dst addr as input set */
-	if (rss_hf & ETH_RSS_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_IPV6) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV6;
 		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -3023,7 +3023,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for udp4 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -3034,7 +3034,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for udp6 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -3045,7 +3045,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for tcp4 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -3056,7 +3056,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for tcp6 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -3067,7 +3067,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for sctp4 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_SCTP_IPV4;
@@ -3078,7 +3078,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for sctp6 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_SCTP_IPV6;
@@ -3088,7 +3088,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_IPV4) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV4;
@@ -3098,7 +3098,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_IPV6) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV6;
@@ -3108,7 +3108,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
 				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -3118,7 +3118,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
 				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -3128,7 +3128,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
 				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -3138,7 +3138,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
 				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -3281,8 +3281,8 @@ ice_dev_configure(struct rte_eth_dev *dev)
 	ad->rx_bulk_alloc_allowed = true;
 	ad->tx_simple_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (dev->data->nb_rx_queues) {
 		ret = ice_init_rss(pf);
@@ -3562,8 +3562,8 @@ ice_dev_start(struct rte_eth_dev *dev)
 	ice_set_rx_function(dev);
 	ice_set_tx_function(dev);
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-			ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+			RTE_ETH_VLAN_EXTEND_MASK;
 	ret = ice_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
@@ -3675,40 +3675,40 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_KEEP_CRC |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_FILTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->flow_type_rss_offloads = 0;
 
 	if (!is_safe_mode) {
 		dev_info->rx_offload_capa |=
-			DEV_RX_OFFLOAD_IPV4_CKSUM |
-			DEV_RX_OFFLOAD_UDP_CKSUM |
-			DEV_RX_OFFLOAD_TCP_CKSUM |
-			DEV_RX_OFFLOAD_QINQ_STRIP |
-			DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-			DEV_RX_OFFLOAD_VLAN_EXTEND |
-			DEV_RX_OFFLOAD_RSS_HASH |
-			DEV_RX_OFFLOAD_TIMESTAMP;
+			RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+			RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+			RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+			RTE_ETH_RX_OFFLOAD_RSS_HASH |
+			RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 		dev_info->tx_offload_capa |=
-			DEV_TX_OFFLOAD_QINQ_INSERT |
-			DEV_TX_OFFLOAD_IPV4_CKSUM |
-			DEV_TX_OFFLOAD_UDP_CKSUM |
-			DEV_TX_OFFLOAD_TCP_CKSUM |
-			DEV_TX_OFFLOAD_SCTP_CKSUM |
-			DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-			DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+			RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+			RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 		dev_info->flow_type_rss_offloads |= ICE_RSS_OFFLOAD_ALL;
 	}
 
 	dev_info->rx_queue_offload_capa = 0;
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->reta_size = pf->hash_lut_size;
 	dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
@@ -3747,24 +3747,24 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.nb_align = ICE_ALIGN_RING_DESC,
 	};
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M |
-			       ETH_LINK_SPEED_100M |
-			       ETH_LINK_SPEED_1G |
-			       ETH_LINK_SPEED_2_5G |
-			       ETH_LINK_SPEED_5G |
-			       ETH_LINK_SPEED_10G |
-			       ETH_LINK_SPEED_20G |
-			       ETH_LINK_SPEED_25G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			       RTE_ETH_LINK_SPEED_100M |
+			       RTE_ETH_LINK_SPEED_1G |
+			       RTE_ETH_LINK_SPEED_2_5G |
+			       RTE_ETH_LINK_SPEED_5G |
+			       RTE_ETH_LINK_SPEED_10G |
+			       RTE_ETH_LINK_SPEED_20G |
+			       RTE_ETH_LINK_SPEED_25G;
 
 	phy_type_low = hw->port_info->phy.phy_type_low;
 	phy_type_high = hw->port_info->phy.phy_type_high;
 
 	if (ICE_PHY_TYPE_SUPPORT_50G(phy_type_low))
-		dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
 
 	if (ICE_PHY_TYPE_SUPPORT_100G_LOW(phy_type_low) ||
 			ICE_PHY_TYPE_SUPPORT_100G_HIGH(phy_type_high))
-		dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
 
 	dev_info->nb_rx_queues = dev->data->nb_rx_queues;
 	dev_info->nb_tx_queues = dev->data->nb_tx_queues;
@@ -3829,8 +3829,8 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		status = ice_aq_get_link_info(hw->port_info, enable_lse,
 					      &link_status, NULL);
 		if (status != ICE_SUCCESS) {
-			link.link_speed = ETH_SPEED_NUM_100M;
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			link.link_speed = RTE_ETH_SPEED_NUM_100M;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR, "Failed to get link info");
 			goto out;
 		}
@@ -3846,55 +3846,55 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		goto out;
 
 	/* Full-duplex operation at all supported speeds */
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	/* Parse the link status */
 	switch (link_status.link_speed) {
 	case ICE_AQ_LINK_SPEED_10MB:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case ICE_AQ_LINK_SPEED_100MB:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case ICE_AQ_LINK_SPEED_1000MB:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case ICE_AQ_LINK_SPEED_2500MB:
-		link.link_speed = ETH_SPEED_NUM_2_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	case ICE_AQ_LINK_SPEED_5GB:
-		link.link_speed = ETH_SPEED_NUM_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 	case ICE_AQ_LINK_SPEED_10GB:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case ICE_AQ_LINK_SPEED_20GB:
-		link.link_speed = ETH_SPEED_NUM_20G;
+		link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case ICE_AQ_LINK_SPEED_25GB:
-		link.link_speed = ETH_SPEED_NUM_25G;
+		link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case ICE_AQ_LINK_SPEED_40GB:
-		link.link_speed = ETH_SPEED_NUM_40G;
+		link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case ICE_AQ_LINK_SPEED_50GB:
-		link.link_speed = ETH_SPEED_NUM_50G;
+		link.link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case ICE_AQ_LINK_SPEED_100GB:
-		link.link_speed = ETH_SPEED_NUM_100G;
+		link.link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	case ICE_AQ_LINK_SPEED_UNKNOWN:
 		PMD_DRV_LOG(ERR, "Unknown link speed");
-		link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "None link speed");
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			      ETH_LINK_SPEED_FIXED);
+			      RTE_ETH_LINK_SPEED_FIXED);
 
 out:
 	ice_atomic_write_link_status(dev, &link);
@@ -4370,15 +4370,15 @@ ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ice_vsi_config_vlan_filter(vsi, true);
 		else
 			ice_vsi_config_vlan_filter(vsi, false);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			ice_vsi_config_vlan_stripping(vsi, true);
 		else
 			ice_vsi_config_vlan_stripping(vsi, false);
@@ -4493,8 +4493,8 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
 		goto out;
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			lut[i] = reta_conf[idx].reta[shift];
 	}
@@ -4543,8 +4543,8 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
 		goto out;
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = lut[i];
 	}
@@ -5453,7 +5453,7 @@ ice_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ice_create_tunnel(hw, TNL_VXLAN, udp_tunnel->udp_port);
 		break;
 	default:
@@ -5477,7 +5477,7 @@ ice_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ice_destroy_tunnel(hw, udp_tunnel->udp_port, 0);
 		break;
 	default:
@@ -5498,7 +5498,7 @@ ice_timesync_enable(struct rte_eth_dev *dev)
 	int ret;
 
 	if (dev->data->dev_started && !(dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_TIMESTAMP)) {
+	    RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
 		PMD_DRV_LOG(ERR, "Rx timestamp offload not configured");
 		return -1;
 	}
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 5845f44c860c..ff9bef17760b 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -116,19 +116,19 @@
 		       ICE_FLAG_VF_MAC_BY_PF)
 
 #define ICE_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L2_PAYLOAD)
 
 /**
  * The overhead from MTU to max frame size.
diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c
index 20a3204fab7e..35eff8b17d28 100644
--- a/drivers/net/ice/ice_hash.c
+++ b/drivers/net/ice/ice_hash.c
@@ -39,27 +39,27 @@
 #define ICE_IPV4_PROT		BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_PROT)
 #define ICE_IPV6_PROT		BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PROT)
 
-#define VALID_RSS_IPV4_L4	(ETH_RSS_NONFRAG_IPV4_UDP	| \
-				 ETH_RSS_NONFRAG_IPV4_TCP	| \
-				 ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4	(RTE_ETH_RSS_NONFRAG_IPV4_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
-#define VALID_RSS_IPV6_L4	(ETH_RSS_NONFRAG_IPV6_UDP	| \
-				 ETH_RSS_NONFRAG_IPV6_TCP	| \
-				 ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4	(RTE_ETH_RSS_NONFRAG_IPV6_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
-#define VALID_RSS_IPV4		(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4		(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
 				 VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6		(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6		(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
 				 VALID_RSS_IPV6_L4)
 #define VALID_RSS_L3		(VALID_RSS_IPV4 | VALID_RSS_IPV6)
 #define VALID_RSS_L4		(VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
 
-#define VALID_RSS_ATTR		(ETH_RSS_L3_SRC_ONLY	| \
-				 ETH_RSS_L3_DST_ONLY	| \
-				 ETH_RSS_L4_SRC_ONLY	| \
-				 ETH_RSS_L4_DST_ONLY	| \
-				 ETH_RSS_L2_SRC_ONLY	| \
-				 ETH_RSS_L2_DST_ONLY	| \
+#define VALID_RSS_ATTR		(RTE_ETH_RSS_L3_SRC_ONLY	| \
+				 RTE_ETH_RSS_L3_DST_ONLY	| \
+				 RTE_ETH_RSS_L4_SRC_ONLY	| \
+				 RTE_ETH_RSS_L4_DST_ONLY	| \
+				 RTE_ETH_RSS_L2_SRC_ONLY	| \
+				 RTE_ETH_RSS_L2_DST_ONLY	| \
 				 RTE_ETH_RSS_L3_PRE32	| \
 				 RTE_ETH_RSS_L3_PRE48	| \
 				 RTE_ETH_RSS_L3_PRE64)
@@ -373,87 +373,87 @@ struct ice_rss_hash_cfg eth_tmplt = {
 };
 
 /* IPv4 */
-#define ICE_RSS_TYPE_ETH_IPV4		(ETH_RSS_ETH | ETH_RSS_IPV4 | \
-					 ETH_RSS_FRAG_IPV4 | \
-					 ETH_RSS_IPV4_CHKSUM)
+#define ICE_RSS_TYPE_ETH_IPV4		(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_FRAG_IPV4 | \
+					 RTE_ETH_RSS_IPV4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV4_UDP	(ICE_RSS_TYPE_ETH_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV4_TCP	(ICE_RSS_TYPE_ETH_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV4_SCTP	(ICE_RSS_TYPE_ETH_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP | \
-					 ETH_RSS_L4_CHKSUM)
-#define ICE_RSS_TYPE_IPV4		ETH_RSS_IPV4
-#define ICE_RSS_TYPE_IPV4_UDP		(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP)
-#define ICE_RSS_TYPE_IPV4_TCP		(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP)
-#define ICE_RSS_TYPE_IPV4_SCTP		(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP)
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
+#define ICE_RSS_TYPE_IPV4		RTE_ETH_RSS_IPV4
+#define ICE_RSS_TYPE_IPV4_UDP		(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define ICE_RSS_TYPE_IPV4_TCP		(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define ICE_RSS_TYPE_IPV4_SCTP		(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
 /* IPv6 */
-#define ICE_RSS_TYPE_ETH_IPV6		(ETH_RSS_ETH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_ETH_IPV6_FRAG	(ETH_RSS_ETH | ETH_RSS_IPV6 | \
-					 ETH_RSS_FRAG_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6		(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6_FRAG	(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_FRAG_IPV6)
 #define ICE_RSS_TYPE_ETH_IPV6_UDP	(ICE_RSS_TYPE_ETH_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV6_TCP	(ICE_RSS_TYPE_ETH_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP | \
-					 ETH_RSS_L4_CHKSUM)
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
 #define ICE_RSS_TYPE_ETH_IPV6_SCTP	(ICE_RSS_TYPE_ETH_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP | \
-					 ETH_RSS_L4_CHKSUM)
-#define ICE_RSS_TYPE_IPV6		ETH_RSS_IPV6
-#define ICE_RSS_TYPE_IPV6_UDP		(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP)
-#define ICE_RSS_TYPE_IPV6_TCP		(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP)
-#define ICE_RSS_TYPE_IPV6_SCTP		(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP)
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+					 RTE_ETH_RSS_L4_CHKSUM)
+#define ICE_RSS_TYPE_IPV6		RTE_ETH_RSS_IPV6
+#define ICE_RSS_TYPE_IPV6_UDP		(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define ICE_RSS_TYPE_IPV6_TCP		(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define ICE_RSS_TYPE_IPV6_SCTP		(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 /* VLAN IPV4 */
 #define ICE_RSS_TYPE_VLAN_IPV4		(ICE_RSS_TYPE_IPV4 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
-					 ETH_RSS_FRAG_IPV4)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+					 RTE_ETH_RSS_FRAG_IPV4)
 #define ICE_RSS_TYPE_VLAN_IPV4_UDP	(ICE_RSS_TYPE_IPV4_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV4_TCP	(ICE_RSS_TYPE_IPV4_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV4_SCTP	(ICE_RSS_TYPE_IPV4_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 /* VLAN IPv6 */
 #define ICE_RSS_TYPE_VLAN_IPV6		(ICE_RSS_TYPE_IPV6 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV6_FRAG	(ICE_RSS_TYPE_IPV6 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
-					 ETH_RSS_FRAG_IPV6)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+					 RTE_ETH_RSS_FRAG_IPV6)
 #define ICE_RSS_TYPE_VLAN_IPV6_UDP	(ICE_RSS_TYPE_IPV6_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV6_TCP	(ICE_RSS_TYPE_IPV6_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV6_SCTP	(ICE_RSS_TYPE_IPV6_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 
 /* GTPU IPv4 */
 #define ICE_RSS_TYPE_GTPU_IPV4		(ICE_RSS_TYPE_IPV4 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV4_UDP	(ICE_RSS_TYPE_IPV4_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV4_TCP	(ICE_RSS_TYPE_IPV4_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 /* GTPU IPv6 */
 #define ICE_RSS_TYPE_GTPU_IPV6		(ICE_RSS_TYPE_IPV6 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV6_UDP	(ICE_RSS_TYPE_IPV6_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV6_TCP	(ICE_RSS_TYPE_IPV6_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 
 /* PPPOE */
-#define ICE_RSS_TYPE_PPPOE		(ETH_RSS_ETH | ETH_RSS_PPPOE)
+#define ICE_RSS_TYPE_PPPOE		(RTE_ETH_RSS_ETH | RTE_ETH_RSS_PPPOE)
 
 /* PPPOE IPv4 */
 #define ICE_RSS_TYPE_PPPOE_IPV4		(ICE_RSS_TYPE_IPV4 | \
@@ -472,17 +472,17 @@ struct ice_rss_hash_cfg eth_tmplt = {
 					 ICE_RSS_TYPE_PPPOE)
 
 /* ESP, AH, L2TPV3 and PFCP */
-#define ICE_RSS_TYPE_IPV4_ESP		(ETH_RSS_ESP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_ESP		(ETH_RSS_ESP | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_AH		(ETH_RSS_AH | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_AH		(ETH_RSS_AH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
 
 /* MAC */
-#define ICE_RSS_TYPE_ETH		ETH_RSS_ETH
+#define ICE_RSS_TYPE_ETH		RTE_ETH_RSS_ETH
 
 /**
  * Supported pattern for hash.
@@ -647,86 +647,86 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 	uint64_t *hash_flds = &hash_cfg->hash_flds;
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH) {
-		if (!(rss_type & ETH_RSS_ETH))
+		if (!(rss_type & RTE_ETH_RSS_ETH))
 			*hash_flds &= ~ICE_FLOW_HASH_ETH;
-		if (rss_type & ETH_RSS_L2_SRC_ONLY)
+		if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
 			*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_DA));
-		else if (rss_type & ETH_RSS_L2_DST_ONLY)
+		else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
 			*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_SA));
 		*addl_hdrs &= ~ICE_FLOW_SEG_HDR_ETH;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH_NON_IP) {
-		if (rss_type & ETH_RSS_ETH)
+		if (rss_type & RTE_ETH_RSS_ETH)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_TYPE);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_VLAN) {
-		if (rss_type & ETH_RSS_C_VLAN)
+		if (rss_type & RTE_ETH_RSS_C_VLAN)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_C_VLAN);
-		else if (rss_type & ETH_RSS_S_VLAN)
+		else if (rss_type & RTE_ETH_RSS_S_VLAN)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_S_VLAN);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_PPPOE) {
-		if (!(rss_type & ETH_RSS_PPPOE))
+		if (!(rss_type & RTE_ETH_RSS_PPPOE))
 			*hash_flds &= ~ICE_FLOW_HASH_PPPOE_SESS_ID;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV4) {
 		if (rss_type &
-		   (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-		    ETH_RSS_NONFRAG_IPV4_UDP |
-		    ETH_RSS_NONFRAG_IPV4_TCP |
-		    ETH_RSS_NONFRAG_IPV4_SCTP)) {
-			if (rss_type & ETH_RSS_FRAG_IPV4) {
+		   (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+		    RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+			if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
 				*addl_hdrs |= ICE_FLOW_SEG_HDR_IPV_FRAG;
 				*addl_hdrs &= ~(ICE_FLOW_SEG_HDR_IPV_OTHER);
 				*hash_flds |=
 					BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_ID);
 			}
-			if (rss_type & ETH_RSS_L3_SRC_ONLY)
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_DA));
-			else if (rss_type & ETH_RSS_L3_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_SA));
 			else if (rss_type &
-				(ETH_RSS_L4_SRC_ONLY |
-				ETH_RSS_L4_DST_ONLY))
+				(RTE_ETH_RSS_L4_SRC_ONLY |
+				RTE_ETH_RSS_L4_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_IPV4;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_IPV4;
 		}
 
-		if (rss_type & ETH_RSS_IPV4_CHKSUM)
+		if (rss_type & RTE_ETH_RSS_IPV4_CHKSUM)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_CHKSUM);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV6) {
 		if (rss_type &
-		   (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-		    ETH_RSS_NONFRAG_IPV6_UDP |
-		    ETH_RSS_NONFRAG_IPV6_TCP |
-		    ETH_RSS_NONFRAG_IPV6_SCTP)) {
-			if (rss_type & ETH_RSS_FRAG_IPV6)
+		   (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+		    RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+			if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
 				*hash_flds |=
 					BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_ID);
-			if (rss_type & ETH_RSS_L3_SRC_ONLY)
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
-			else if (rss_type & ETH_RSS_L3_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 			else if (rss_type &
-				(ETH_RSS_L4_SRC_ONLY |
-				ETH_RSS_L4_DST_ONLY))
+				(RTE_ETH_RSS_L4_SRC_ONLY |
+				RTE_ETH_RSS_L4_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_IPV6;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_IPV6;
 		}
 
 		if (rss_type & RTE_ETH_RSS_L3_PRE32) {
-			if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_SA));
-			} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+			} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_DA));
 			} else {
@@ -735,10 +735,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 			}
 		}
 		if (rss_type & RTE_ETH_RSS_L3_PRE48) {
-			if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_SA));
-			} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+			} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_DA));
 			} else {
@@ -747,10 +747,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 			}
 		}
 		if (rss_type & RTE_ETH_RSS_L3_PRE64) {
-			if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_SA));
-			} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+			} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_DA));
 			} else {
@@ -762,81 +762,81 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_UDP) {
 		if (rss_type &
-		   (ETH_RSS_NONFRAG_IPV4_UDP |
-		    ETH_RSS_NONFRAG_IPV6_UDP)) {
-			if (rss_type & ETH_RSS_L4_SRC_ONLY)
+		   (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+			if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_DST_PORT));
-			else if (rss_type & ETH_RSS_L4_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_SRC_PORT));
 			else if (rss_type &
-				(ETH_RSS_L3_SRC_ONLY |
-				  ETH_RSS_L3_DST_ONLY))
+				(RTE_ETH_RSS_L3_SRC_ONLY |
+				  RTE_ETH_RSS_L3_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
 		}
 
-		if (rss_type & ETH_RSS_L4_CHKSUM)
+		if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_CHKSUM);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_TCP) {
 		if (rss_type &
-		   (ETH_RSS_NONFRAG_IPV4_TCP |
-		    ETH_RSS_NONFRAG_IPV6_TCP)) {
-			if (rss_type & ETH_RSS_L4_SRC_ONLY)
+		   (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+			if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_DST_PORT));
-			else if (rss_type & ETH_RSS_L4_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_SRC_PORT));
 			else if (rss_type &
-				(ETH_RSS_L3_SRC_ONLY |
-				  ETH_RSS_L3_DST_ONLY))
+				(RTE_ETH_RSS_L3_SRC_ONLY |
+				  RTE_ETH_RSS_L3_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
 		}
 
-		if (rss_type & ETH_RSS_L4_CHKSUM)
+		if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_CHKSUM);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_SCTP) {
 		if (rss_type &
-		   (ETH_RSS_NONFRAG_IPV4_SCTP |
-		    ETH_RSS_NONFRAG_IPV6_SCTP)) {
-			if (rss_type & ETH_RSS_L4_SRC_ONLY)
+		   (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+			if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_DST_PORT));
-			else if (rss_type & ETH_RSS_L4_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_SRC_PORT));
 			else if (rss_type &
-				(ETH_RSS_L3_SRC_ONLY |
-				  ETH_RSS_L3_DST_ONLY))
+				(RTE_ETH_RSS_L3_SRC_ONLY |
+				  RTE_ETH_RSS_L3_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
 		}
 
-		if (rss_type & ETH_RSS_L4_CHKSUM)
+		if (rss_type & RTE_ETH_RSS_L4_CHKSUM)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_CHKSUM);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_L2TPV3) {
-		if (!(rss_type & ETH_RSS_L2TPV3))
+		if (!(rss_type & RTE_ETH_RSS_L2TPV3))
 			*hash_flds &= ~ICE_FLOW_HASH_L2TPV3_SESS_ID;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_ESP) {
-		if (!(rss_type & ETH_RSS_ESP))
+		if (!(rss_type & RTE_ETH_RSS_ESP))
 			*hash_flds &= ~ICE_FLOW_HASH_ESP_SPI;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_AH) {
-		if (!(rss_type & ETH_RSS_AH))
+		if (!(rss_type & RTE_ETH_RSS_AH))
 			*hash_flds &= ~ICE_FLOW_HASH_AH_SPI;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_PFCP_SESSION) {
-		if (!(rss_type & ETH_RSS_PFCP))
+		if (!(rss_type & RTE_ETH_RSS_PFCP))
 			*hash_flds &= ~ICE_FLOW_HASH_PFCP_SEID;
 	}
 }
@@ -870,7 +870,7 @@ ice_refine_hash_cfg_gtpu(struct ice_rss_hash_cfg *hash_cfg,
 	uint64_t *hash_flds = &hash_cfg->hash_flds;
 
 	/* update hash field for gtpu eh/gtpu dwn/gtpu up. */
-	if (!(rss_type & ETH_RSS_GTPU))
+	if (!(rss_type & RTE_ETH_RSS_GTPU))
 		return;
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_GTPU_DWN)
@@ -892,10 +892,10 @@ static void ice_refine_hash_cfg(struct ice_rss_hash_cfg *hash_cfg,
 }
 
 static uint64_t invalid_rss_comb[] = {
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	RTE_ETH_RSS_L3_PRE40 |
 	RTE_ETH_RSS_L3_PRE56 |
 	RTE_ETH_RSS_L3_PRE96
@@ -907,9 +907,9 @@ struct rss_attr_type {
 };
 
 static struct rss_attr_type rss_attr_to_valid_type[] = {
-	{ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY,	ETH_RSS_ETH},
-	{ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
-	{ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
+	{RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY,	RTE_ETH_RSS_ETH},
+	{RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
+	{RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
 	/* current ipv6 prefix only supports prefix 64 bits*/
 	{RTE_ETH_RSS_L3_PRE32,				VALID_RSS_IPV6},
 	{RTE_ETH_RSS_L3_PRE48,				VALID_RSS_IPV6},
@@ -928,16 +928,16 @@ ice_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
 	 * hash function.
 	 */
 	if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
-		if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
-		    ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+		if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+		    RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
 			return true;
 
 		if (!(rss_type &
-		   (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
-		    ETH_RSS_FRAG_IPV4 | ETH_RSS_FRAG_IPV6 |
-		    ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
-		    ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
-		    ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+		   (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+		    RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_FRAG_IPV6 |
+		    RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
 			return true;
 	}
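
A porting aside, since this file exercises the whole RSS flag family in one place: the application-side equivalent with the renamed flags looks roughly as below. This is a sketch, not part of the patch; port_id, nb_rxq, nb_txq and ret are the caller's.

	/* Request RSS over IPv4/TCP/UDP with the new names; rss_hf must
	 * be a subset of what dev_info.flow_type_rss_offloads reports. */
	struct rte_eth_conf port_conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
		.rx_adv_conf.rss_conf = {
			.rss_key = NULL, /* keep the PMD default key */
			.rss_hf = RTE_ETH_RSS_IPV4 |
				  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
				  RTE_ETH_RSS_NONFRAG_IPV4_UDP,
		},
	};
	ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);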
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index ff362c21d9f5..8406240d7209 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -303,7 +303,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
 		}
 	}
 
-	if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 		/* Register mbuf field and flag for Rx timestamp */
 		err = rte_mbuf_dyn_rx_timestamp_register(
 				&ice_timestamp_dynfield_offset,
@@ -367,7 +367,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
 	regval |= (0x03 << QRXFLXP_CNTXT_RXDID_PRIO_S) &
 		QRXFLXP_CNTXT_RXDID_PRIO_M;
 
-	if (ad->ptp_ena || rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (ad->ptp_ena || rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 		regval |= QRXFLXP_CNTXT_TS_M;
 
 	ICE_WRITE_REG(hw, QRXFLXP_CNTXT(rxq->reg_idx), regval);
@@ -1117,7 +1117,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
 
 	rxq->reg_idx = vsi->base_queue + queue_idx;
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1624,7 +1624,7 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq)
 			ice_rxd_to_vlan_tci(mb, &rxdp[j]);
 			rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
-			if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+			if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 				ts_ns = ice_tstamp_convert_32b_64b(hw,
 					rte_le_to_cpu_32(rxdp[j].wb.flex_ts.ts_high));
 				if (ice_timestamp_dynflag > 0) {
@@ -1942,7 +1942,7 @@ ice_recv_scattered_pkts(void *rx_queue,
 		rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
 		pkt_flags = ice_rxd_error_to_pkt_flags(rx_stat_err0);
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
-		if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 			ts_ns = ice_tstamp_convert_32b_64b(hw,
 				rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high));
 			if (ice_timestamp_dynflag > 0) {
@@ -2373,7 +2373,7 @@ ice_recv_pkts(void *rx_queue,
 		rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
 		pkt_flags = ice_rxd_error_to_pkt_flags(rx_stat_err0);
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
-		if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 			ts_ns = ice_tstamp_convert_32b_64b(hw,
 				rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high));
 			if (ice_timestamp_dynflag > 0) {
@@ -2889,7 +2889,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
 	for (i = 0; i < txq->tx_rs_thresh; i++)
 		rte_prefetch0((txep + i)->mbuf);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
 		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
 			rte_mempool_put(txep->mbuf->pool, txep->mbuf);
 			txep->mbuf = NULL;
@@ -3365,7 +3365,7 @@ ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq)
 	/* Use a simple Tx queue if possible (only fast free is allowed) */
 	ad->tx_simple_allowed =
 		(txq->offloads ==
-		(txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+		(txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
 		txq->tx_rs_thresh >= ICE_TX_MAX_BURST);
 
 	if (ad->tx_simple_allowed)
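
The timestamp hunks above pair the renamed RTE_ETH_RX_OFFLOAD_TIMESTAMP flag with dynamic mbuf field registration; the consuming side uses the same pair of calls. A sketch, assuming port_conf, mb and ret come from the caller:

	static int ts_off = -1;
	static uint64_t ts_flag;

	/* At setup: request the offload and register the dynamic
	 * field/flag (registration matches what the PMD does above). */
	port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
	ret = rte_mbuf_dyn_rx_timestamp_register(&ts_off, &ts_flag);

	/* In the Rx path: */
	if (mb->ol_flags & ts_flag) {
		rte_mbuf_timestamp_t ts =
			*RTE_MBUF_DYNFIELD(mb, ts_off, rte_mbuf_timestamp_t *);
		/* consume ts */
	}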
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 9725ac018043..8c870354619e 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -473,7 +473,7 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			 * will cause performance drop to get into this context.
 			 */
 			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-					DEV_RX_OFFLOAD_RSS_HASH) {
+					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
 					_mm_load_si128
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 5bba9887d296..6d2038975830 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -584,7 +584,7 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
 			 * will cause performance drop to get into this context.
 			 */
 			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-					DEV_RX_OFFLOAD_RSS_HASH) {
+					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
 					_mm_load_si128
@@ -994,7 +994,7 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->tx_next_dd - (n - 1);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		void **cache_objs;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 5b5250565e35..a04b6fee560a 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -248,23 +248,23 @@ ice_rxq_vec_setup_default(struct ice_rx_queue *rxq)
 }
 
 #define ICE_TX_NO_VECTOR_FLAGS (			\
-		DEV_TX_OFFLOAD_MULTI_SEGS |		\
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |	\
-		DEV_TX_OFFLOAD_TCP_TSO)
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |		\
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |	\
+		RTE_ETH_TX_OFFLOAD_TCP_TSO)
 
 #define ICE_TX_VECTOR_OFFLOAD (				\
-		DEV_TX_OFFLOAD_VLAN_INSERT |		\
-		DEV_TX_OFFLOAD_QINQ_INSERT |		\
-		DEV_TX_OFFLOAD_IPV4_CKSUM |		\
-		DEV_TX_OFFLOAD_SCTP_CKSUM |		\
-		DEV_TX_OFFLOAD_UDP_CKSUM |		\
-		DEV_TX_OFFLOAD_TCP_CKSUM)
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |		\
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |		\
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 
 #define ICE_RX_VECTOR_OFFLOAD (				\
-		DEV_RX_OFFLOAD_CHECKSUM |		\
-		DEV_RX_OFFLOAD_SCTP_CKSUM |		\
-		DEV_RX_OFFLOAD_VLAN |			\
-		DEV_RX_OFFLOAD_RSS_HASH)
+		RTE_ETH_RX_OFFLOAD_CHECKSUM |		\
+		RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |		\
+		RTE_ETH_RX_OFFLOAD_VLAN |			\
+		RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define ICE_VECTOR_PATH		0
 #define ICE_VECTOR_OFFLOAD_PATH	1
@@ -287,7 +287,7 @@ ice_rx_vec_queue_default(struct ice_rx_queue *rxq)
 	if (rxq->proto_xtr != PROTO_XTR_NONE)
 		return -1;
 
-	if (rxq->offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 		return -1;
 
 	if (rxq->offloads & ICE_RX_VECTOR_OFFLOAD)
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 653bd28b417c..117494131f32 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -479,7 +479,7 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 		 * will cause performance drop to get into this context.
 		 */
 		if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_RSS_HASH) {
+				RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh3 =
 				_mm_load_si128
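
Across these vector files the pattern is the same: path selection keys purely off RTE_ETH_*_OFFLOAD_* bits. The caller-side half, as a sketch (port_id, nb_txd, socket_id and ret assumed):

	struct rte_eth_dev_info dev_info;
	struct rte_eth_txconf txconf;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	txconf = dev_info.default_txconf;
	/* Request only advertised offloads; staying inside masks such as
	 * ICE_TX_VECTOR_OFFLOAD keeps the vector Tx paths selectable. */
	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
		txconf.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
	ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket_id, &txconf);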
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 2a1ed90b641b..7ce80a442b35 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -307,8 +307,8 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (rx_mq_mode != ETH_MQ_RX_NONE &&
-		rx_mq_mode != ETH_MQ_RX_RSS) {
+	if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+		rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
 		/* RSS together with VMDq not supported*/
 		PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
 				rx_mq_mode);
@@ -318,7 +318,7 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
 	/* To no break software that set invalid mode, only display
 	 * warning if invalid mode is used.
 	 */
-	if (tx_mq_mode != ETH_MQ_TX_NONE)
+	if (tx_mq_mode != RTE_ETH_MQ_TX_NONE)
 		PMD_INIT_LOG(WARNING,
 			"TX mode %d is not supported. Due to meaningless in this driver, just ignore",
 			tx_mq_mode);
@@ -334,8 +334,8 @@ eth_igc_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	ret  = igc_check_mq_mode(dev);
 	if (ret != 0)
@@ -473,12 +473,12 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		uint16_t duplex, speed;
 		hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
 		link.link_duplex = (duplex == FULL_DUPLEX) ?
-				ETH_LINK_FULL_DUPLEX :
-				ETH_LINK_HALF_DUPLEX;
+				RTE_ETH_LINK_FULL_DUPLEX :
+				RTE_ETH_LINK_HALF_DUPLEX;
 		link.link_speed = speed;
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 		link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 		if (speed == SPEED_2500) {
 			uint32_t tipg = IGC_READ_REG(hw, IGC_TIPG);
@@ -490,9 +490,9 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		}
 	} else {
 		link.link_speed = 0;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_status = ETH_LINK_DOWN;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -525,7 +525,7 @@ eth_igc_interrupt_action(struct rte_eth_dev *dev)
 				" Port %d: Link Up - speed %u Mbps - %s",
 				dev->data->port_id,
 				(unsigned int)link.link_speed,
-				link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+				link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 				"full-duplex" : "half-duplex");
 		else
 			PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -972,18 +972,18 @@ eth_igc_start(struct rte_eth_dev *dev)
 
 	/* VLAN Offload Settings */
 	eth_igc_vlan_offload_set(dev,
-		ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK);
+		RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK);
 
 	/* Setup link speed and duplex */
 	speeds = &dev->data->dev_conf.link_speeds;
-	if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		hw->phy.autoneg_advertised = IGC_ALL_SPEED_DUPLEX_2500;
 		hw->mac.autoneg = 1;
 	} else {
 		int num_speeds = 0;
 
-		if (*speeds & ETH_LINK_SPEED_FIXED) {
+		if (*speeds & RTE_ETH_LINK_SPEED_FIXED) {
 			PMD_DRV_LOG(ERR,
 				    "Force speed mode currently not supported");
 			igc_dev_clear_queues(dev);
@@ -993,33 +993,33 @@ eth_igc_start(struct rte_eth_dev *dev)
 		hw->phy.autoneg_advertised = 0;
 		hw->mac.autoneg = 1;
 
-		if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G)) {
+		if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G)) {
 			num_speeds = -1;
 			goto error_invalid_config;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_1G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_1G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_2_5G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_2_5G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_2500_FULL;
 			num_speeds++;
 		}
@@ -1482,14 +1482,14 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = hw->mac.rar_entry_count;
 	dev_info->rx_offload_capa = IGC_RX_OFFLOAD_ALL;
 	dev_info->tx_offload_capa = IGC_TX_OFFLOAD_ALL;
-	dev_info->rx_queue_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->rx_queue_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	dev_info->max_rx_queues = IGC_QUEUE_PAIRS_NUM;
 	dev_info->max_tx_queues = IGC_QUEUE_PAIRS_NUM;
 	dev_info->max_vmdq_pools = 0;
 
 	dev_info->hash_key_size = IGC_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = IGC_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1515,9 +1515,9 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->rx_desc_lim = rx_desc_lim;
 	dev_info->tx_desc_lim = tx_desc_lim;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G;
 
 	dev_info->max_mtu = dev_info->max_rx_pktlen - IGC_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -2141,13 +2141,13 @@ eth_igc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		rx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -2179,16 +2179,16 @@ eth_igc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		hw->fc.requested_mode = igc_fc_none;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		hw->fc.requested_mode = igc_fc_rx_pause;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		hw->fc.requested_mode = igc_fc_tx_pause;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		hw->fc.requested_mode = igc_fc_full;
 		break;
 	default:
@@ -2234,29 +2234,29 @@ eth_igc_rss_reta_update(struct rte_eth_dev *dev,
 	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
 	uint16_t i;
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR,
 			"The size of RSS redirection table configured(%d) doesn't match the number hardware can supported(%d)",
-			reta_size, ETH_RSS_RETA_SIZE_128);
+			reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
-	RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+	RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
 
 	/* set redirection table */
-	for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
 		union igc_rss_reta_reg reta, reg;
 		uint16_t idx, shift;
 		uint8_t j, mask;
 
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				IGC_RSS_RDT_REG_SIZE_MASK);
 
 		/* if no need to update the register */
 		if (!mask ||
-		    shift > (RTE_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
+		    shift > (RTE_ETH_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
 			continue;
 
 		/* check mask whether need to read the register value first */
@@ -2290,29 +2290,29 @@ eth_igc_rss_reta_query(struct rte_eth_dev *dev,
 	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
 	uint16_t i;
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR,
 			"The size of RSS redirection table configured(%d) doesn't match the number hardware can supported(%d)",
-			reta_size, ETH_RSS_RETA_SIZE_128);
+			reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
-	RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+	RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
 
 	/* read redirection table */
-	for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
 		union igc_rss_reta_reg reta;
 		uint16_t idx, shift;
 		uint8_t j, mask;
 
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				IGC_RSS_RDT_REG_SIZE_MASK);
 
 		/* if no need to read register */
 		if (!mask ||
-		    shift > (RTE_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
+		    shift > (RTE_ETH_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
 			continue;
 
 		/* read register and get the queue index */
@@ -2369,23 +2369,23 @@ eth_igc_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	rss_hf = 0;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_EX)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP_EX)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP_EX)
-		rss_hf |= ETH_RSS_IPV6_UDP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
 
 	rss_conf->rss_hf |= rss_hf;
 	return 0;
@@ -2514,22 +2514,22 @@ eth_igc_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			igc_vlan_hw_strip_enable(dev);
 		else
 			igc_vlan_hw_strip_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			igc_vlan_hw_filter_enable(dev);
 		else
 			igc_vlan_hw_filter_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			return igc_vlan_hw_extend_enable(dev);
 		else
 			return igc_vlan_hw_extend_disable(dev);
@@ -2547,7 +2547,7 @@ eth_igc_vlan_tpid_set(struct rte_eth_dev *dev,
 	uint32_t reg_val;
 
 	/* only outer TPID of double VLAN can be configured*/
-	if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		reg_val = IGC_READ_REG(hw, IGC_VET);
 		reg_val = (reg_val & (~IGC_VET_EXT)) |
 			((uint32_t)tpid << IGC_VET_EXT_SHIFT);
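
The RTE_ETH_LINK_* values written by eth_igc_link_update() read back through the standard ethdev call; a sketch for reference (port_id assumed, error handling elided):

	struct rte_eth_link link;

	rte_eth_link_get_nowait(port_id, &link);
	if (link.link_status == RTE_ETH_LINK_UP)
		printf("port %u: %u Mbps, %s-duplex, %s\n", port_id,
		       link.link_speed,
		       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
				"full" : "half",
		       link.link_autoneg == RTE_ETH_LINK_AUTONEG ?
				"autoneg" : "fixed");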
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index 5e6c2ff30157..f56cad79e939 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -66,37 +66,37 @@ extern "C" {
 #define IGC_TX_MAX_MTU_SEG	UINT8_MAX
 
 #define IGC_RX_OFFLOAD_ALL	(    \
-	DEV_RX_OFFLOAD_VLAN_STRIP  | \
-	DEV_RX_OFFLOAD_VLAN_FILTER | \
-	DEV_RX_OFFLOAD_VLAN_EXTEND | \
-	DEV_RX_OFFLOAD_IPV4_CKSUM  | \
-	DEV_RX_OFFLOAD_UDP_CKSUM   | \
-	DEV_RX_OFFLOAD_TCP_CKSUM   | \
-	DEV_RX_OFFLOAD_SCTP_CKSUM  | \
-	DEV_RX_OFFLOAD_KEEP_CRC    | \
-	DEV_RX_OFFLOAD_SCATTER     | \
-	DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RX_OFFLOAD_VLAN_STRIP  | \
+	RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+	RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+	RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  | \
+	RTE_ETH_RX_OFFLOAD_UDP_CKSUM   | \
+	RTE_ETH_RX_OFFLOAD_TCP_CKSUM   | \
+	RTE_ETH_RX_OFFLOAD_SCTP_CKSUM  | \
+	RTE_ETH_RX_OFFLOAD_KEEP_CRC    | \
+	RTE_ETH_RX_OFFLOAD_SCATTER     | \
+	RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define IGC_TX_OFFLOAD_ALL	(    \
-	DEV_TX_OFFLOAD_VLAN_INSERT | \
-	DEV_TX_OFFLOAD_IPV4_CKSUM  | \
-	DEV_TX_OFFLOAD_UDP_CKSUM   | \
-	DEV_TX_OFFLOAD_TCP_CKSUM   | \
-	DEV_TX_OFFLOAD_SCTP_CKSUM  | \
-	DEV_TX_OFFLOAD_TCP_TSO     | \
-	DEV_TX_OFFLOAD_UDP_TSO	   | \
-	DEV_TX_OFFLOAD_MULTI_SEGS)
+	RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  | \
+	RTE_ETH_TX_OFFLOAD_UDP_CKSUM   | \
+	RTE_ETH_TX_OFFLOAD_TCP_CKSUM   | \
+	RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  | \
+	RTE_ETH_TX_OFFLOAD_TCP_TSO     | \
+	RTE_ETH_TX_OFFLOAD_UDP_TSO	   | \
+	RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define IGC_RSS_OFFLOAD_ALL	(    \
-	ETH_RSS_IPV4               | \
-	ETH_RSS_NONFRAG_IPV4_TCP   | \
-	ETH_RSS_NONFRAG_IPV4_UDP   | \
-	ETH_RSS_IPV6               | \
-	ETH_RSS_NONFRAG_IPV6_TCP   | \
-	ETH_RSS_NONFRAG_IPV6_UDP   | \
-	ETH_RSS_IPV6_EX            | \
-	ETH_RSS_IPV6_TCP_EX        | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4               | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP   | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP   | \
+	RTE_ETH_RSS_IPV6               | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP   | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP   | \
+	RTE_ETH_RSS_IPV6_EX            | \
+	RTE_ETH_RSS_IPV6_TCP_EX        | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define IGC_MAX_ETQF_FILTERS		3	/* etqf(3) is used for 1588 */
 #define IGC_ETQF_FILTER_1588		3
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 56132e8c6cd6..1d34ae2e1b15 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -127,7 +127,7 @@ struct igc_rx_queue {
 	uint8_t             crc_len;    /**< 0 if CRC stripped, 4 otherwise. */
 	uint8_t             drop_en;	/**< If not 0, set SRRCTL.Drop_En. */
 	uint32_t            flags;      /**< RX flags. */
-	uint64_t	    offloads;   /**< offloads of DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads;   /**< offloads of RTE_ETH_RX_OFFLOAD_* */
 };
 
 /** Offload features */
@@ -209,7 +209,7 @@ struct igc_tx_queue {
 	/**< Start context position for transmit queue. */
 	struct igc_advctx_info ctx_cache[IGC_CTX_NUM];
 	/**< Hardware context history.*/
-	uint64_t	       offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+	uint64_t	       offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
 };
 
 static inline uint64_t
@@ -847,23 +847,23 @@ igc_hw_rss_hash_set(struct igc_hw *hw, struct rte_eth_rss_conf *rss_conf)
 	/* Set configured hashing protocols in MRQC register */
 	rss_hf = rss_conf->rss_hf;
 	mrqc = IGC_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV4_TCP;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6;
-	if (rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_EX)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP;
-	if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV4_UDP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP;
-	if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP_EX;
 	IGC_WRITE_REG(hw, IGC_MRQC, mrqc);
 }
@@ -1037,10 +1037,10 @@ igc_dev_mq_rx_configure(struct rte_eth_dev *dev)
 	}
 
 	switch (dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		igc_rss_configure(dev);
 		break;
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 		/*
 		 * configure RSS register for following,
 		 * then disable the RSS logic
@@ -1111,7 +1111,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 * call to configure
 		 */
-		rxq->crc_len = (offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+		rxq->crc_len = (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
 				RTE_ETHER_CRC_LEN : 0;
 
 		bus_addr = rxq->rx_ring_phys_addr;
@@ -1177,7 +1177,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 		IGC_WRITE_REG(hw, IGC_RXDCTL(rxq->reg_idx), rxdctl);
 	}
 
-	if (offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		dev->data->scattered_rx = 1;
 
 	if (dev->data->scattered_rx) {
@@ -1221,20 +1221,20 @@ igc_rx_init(struct rte_eth_dev *dev)
 	rxcsum |= IGC_RXCSUM_PCSD;
 
 	/* Enable both L3/L4 rx checksum offload */
-	if (offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		rxcsum |= IGC_RXCSUM_IPOFL;
 	else
 		rxcsum &= ~IGC_RXCSUM_IPOFL;
 
 	if (offloads &
-		(DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+		(RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
 		rxcsum |= IGC_RXCSUM_TUOFL;
-		offloads |= DEV_RX_OFFLOAD_SCTP_CKSUM;
+		offloads |= RTE_ETH_RX_OFFLOAD_SCTP_CKSUM;
 	} else {
 		rxcsum &= ~IGC_RXCSUM_TUOFL;
 	}
 
-	if (offloads & DEV_RX_OFFLOAD_SCTP_CKSUM)
+	if (offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM)
 		rxcsum |= IGC_RXCSUM_CRCOFL;
 	else
 		rxcsum &= ~IGC_RXCSUM_CRCOFL;
@@ -1242,7 +1242,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 	IGC_WRITE_REG(hw, IGC_RXCSUM, rxcsum);
 
 	/* Setup the Receive Control Register. */
-	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rctl &= ~IGC_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
 	else
 		rctl |= IGC_RCTL_SECRC; /* Strip Ethernet CRC. */
@@ -1279,12 +1279,12 @@ igc_rx_init(struct rte_eth_dev *dev)
 		IGC_WRITE_REG(hw, IGC_RDT(rxq->reg_idx), rxq->nb_rx_desc - 1);
 
 		dvmolr = IGC_READ_REG(hw, IGC_DVMOLR(rxq->reg_idx));
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			dvmolr |= IGC_DVMOLR_STRVLAN;
 		else
 			dvmolr &= ~IGC_DVMOLR_STRVLAN;
 
-		if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			dvmolr &= ~IGC_DVMOLR_STRCRC;
 		else
 			dvmolr |= IGC_DVMOLR_STRCRC;
@@ -2253,10 +2253,10 @@ eth_igc_vlan_strip_queue_set(struct rte_eth_dev *dev,
 	reg_val = IGC_READ_REG(hw, IGC_DVMOLR(rx_queue_id));
 	if (on) {
 		reg_val |= IGC_DVMOLR_STRVLAN;
-		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
 		reg_val &= ~(IGC_DVMOLR_STRVLAN | IGC_DVMOLR_HIDVLAN);
-		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	IGC_WRITE_REG(hw, IGC_DVMOLR(rx_queue_id), reg_val);
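
The RETA hunks iterate in RTE_ETH_RETA_GROUP_SIZE chunks, mirroring what callers build; a sketch of the caller side spreading the 128-entry igc table over nb_rxq queues (nb_rxq, port_id and ret assumed):

	struct rte_eth_rss_reta_entry64
		reta_conf[RTE_ETH_RSS_RETA_SIZE_128 / RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++) {
		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].reta
				[i % RTE_ETH_RETA_GROUP_SIZE] = i % nb_rxq;
	}
	ret = rte_eth_dev_rss_reta_update(port_id, reta_conf,
					  RTE_ETH_RSS_RETA_SIZE_128);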
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index f94a1fed0a38..c688c3735c06 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -280,37 +280,37 @@ ionic_dev_link_update(struct rte_eth_dev *eth_dev,
 	memset(&link, 0, sizeof(link));
 
 	if (adapter->idev.port_info->config.an_enable) {
-		link.link_autoneg = ETH_LINK_AUTONEG;
+		link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	}
 
 	if (!adapter->link_up ||
 	    !(lif->state & IONIC_LIF_F_UP)) {
 		/* Interface is down */
-		link.link_status = ETH_LINK_DOWN;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	} else {
 		/* Interface is up */
-		link.link_status = ETH_LINK_UP;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_status = RTE_ETH_LINK_UP;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		switch (adapter->link_speed) {
 		case  10000:
-			link.link_speed = ETH_SPEED_NUM_10G;
+			link.link_speed = RTE_ETH_SPEED_NUM_10G;
 			break;
 		case  25000:
-			link.link_speed = ETH_SPEED_NUM_25G;
+			link.link_speed = RTE_ETH_SPEED_NUM_25G;
 			break;
 		case  40000:
-			link.link_speed = ETH_SPEED_NUM_40G;
+			link.link_speed = RTE_ETH_SPEED_NUM_40G;
 			break;
 		case  50000:
-			link.link_speed = ETH_SPEED_NUM_50G;
+			link.link_speed = RTE_ETH_SPEED_NUM_50G;
 			break;
 		case 100000:
-			link.link_speed = ETH_SPEED_NUM_100G;
+			link.link_speed = RTE_ETH_SPEED_NUM_100G;
 			break;
 		default:
-			link.link_speed = ETH_SPEED_NUM_NONE;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 			break;
 		}
 	}
@@ -387,17 +387,17 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->flow_type_rss_offloads = IONIC_ETH_RSS_OFFLOAD_ALL;
 
 	dev_info->speed_capa =
-		ETH_LINK_SPEED_10G |
-		ETH_LINK_SPEED_25G |
-		ETH_LINK_SPEED_40G |
-		ETH_LINK_SPEED_50G |
-		ETH_LINK_SPEED_100G;
+		RTE_ETH_LINK_SPEED_10G |
+		RTE_ETH_LINK_SPEED_25G |
+		RTE_ETH_LINK_SPEED_40G |
+		RTE_ETH_LINK_SPEED_50G |
+		RTE_ETH_LINK_SPEED_100G;
 
 	/*
 	 * Per-queue capabilities
 	 * RTE does not support disabling a feature on a queue if it is
 	 * enabled globally on the device. Thus the driver does not advertise
-	 * capabilities like DEV_TX_OFFLOAD_IPV4_CKSUM as per-queue even
+	 * capabilities like RTE_ETH_TX_OFFLOAD_IPV4_CKSUM as per-queue even
 	 * though the driver would be otherwise capable of disabling it on
 	 * a per-queue basis.
 	 */
@@ -411,24 +411,24 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
 	 */
 
 	dev_info->rx_offload_capa = dev_info->rx_queue_offload_capa |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_RSS_HASH |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH |
 		0;
 
 	dev_info->tx_offload_capa = dev_info->tx_queue_offload_capa |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
 		0;
 
 	dev_info->rx_desc_lim = rx_desc_lim;
@@ -463,9 +463,9 @@ ionic_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 		fc_conf->autoneg = 0;
 
 		if (idev->port_info->config.pause_type)
-			fc_conf->mode = RTE_FC_FULL;
+			fc_conf->mode = RTE_ETH_FC_FULL;
 		else
-			fc_conf->mode = RTE_FC_NONE;
+			fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return 0;
@@ -487,14 +487,14 @@ ionic_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		pause_type = IONIC_PORT_PAUSE_TYPE_NONE;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		pause_type = IONIC_PORT_PAUSE_TYPE_LINK;
 		break;
-	case RTE_FC_RX_PAUSE:
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		return -ENOTSUP;
 	}
 
@@ -545,12 +545,12 @@ ionic_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	num = tbl_sz / RTE_RETA_GROUP_SIZE;
+	num = tbl_sz / RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < num; i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if (reta_conf[i].mask & ((uint64_t)1 << j)) {
-				index = (i * RTE_RETA_GROUP_SIZE) + j;
+				index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
 				lif->rss_ind_tbl[index] = reta_conf[i].reta[j];
 			}
 		}
@@ -585,12 +585,12 @@ ionic_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	num = reta_size / RTE_RETA_GROUP_SIZE;
+	num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < num; i++) {
 		memcpy(reta_conf->reta,
-			&lif->rss_ind_tbl[i * RTE_RETA_GROUP_SIZE],
-			RTE_RETA_GROUP_SIZE);
+			&lif->rss_ind_tbl[i * RTE_ETH_RETA_GROUP_SIZE],
+			RTE_ETH_RETA_GROUP_SIZE);
 		reta_conf++;
 	}
 
@@ -618,17 +618,17 @@ ionic_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
 			IONIC_RSS_HASH_KEY_SIZE);
 
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 	rss_conf->rss_hf = rss_hf;
 
@@ -660,17 +660,17 @@ ionic_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
 		if (!lif->rss_ind_tbl)
 			return -EINVAL;
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV4)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
 			rss_types |= IONIC_RSS_TYPE_IPV4;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			rss_types |= IONIC_RSS_TYPE_IPV4_TCP;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 			rss_types |= IONIC_RSS_TYPE_IPV4_UDP;
-		if (rss_conf->rss_hf & ETH_RSS_IPV6)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
 			rss_types |= IONIC_RSS_TYPE_IPV6;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 			rss_types |= IONIC_RSS_TYPE_IPV6_TCP;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 			rss_types |= IONIC_RSS_TYPE_IPV6_UDP;
 
 		ionic_lif_rss_config(lif, rss_types, key, NULL);
@@ -842,15 +842,15 @@ ionic_dev_configure(struct rte_eth_dev *eth_dev)
 static inline uint32_t
 ionic_parse_link_speeds(uint16_t link_speeds)
 {
-	if (link_speeds & ETH_LINK_SPEED_100G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_100G)
 		return 100000;
-	else if (link_speeds & ETH_LINK_SPEED_50G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_50G)
 		return 50000;
-	else if (link_speeds & ETH_LINK_SPEED_40G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_40G)
 		return 40000;
-	else if (link_speeds & ETH_LINK_SPEED_25G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_25G)
 		return 25000;
-	else if (link_speeds & ETH_LINK_SPEED_10G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 		return 10000;
 	else
 		return 0;
@@ -874,12 +874,12 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
 	IONIC_PRINT_CALL();
 
 	allowed_speeds =
-		ETH_LINK_SPEED_FIXED |
-		ETH_LINK_SPEED_10G |
-		ETH_LINK_SPEED_25G |
-		ETH_LINK_SPEED_40G |
-		ETH_LINK_SPEED_50G |
-		ETH_LINK_SPEED_100G;
+		RTE_ETH_LINK_SPEED_FIXED |
+		RTE_ETH_LINK_SPEED_10G |
+		RTE_ETH_LINK_SPEED_25G |
+		RTE_ETH_LINK_SPEED_40G |
+		RTE_ETH_LINK_SPEED_50G |
+		RTE_ETH_LINK_SPEED_100G;
 
 	if (dev_conf->link_speeds & ~allowed_speeds) {
 		IONIC_PRINT(ERR, "Invalid link setting");
@@ -896,7 +896,7 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
 	}
 
 	/* Configure link */
-	an_enable = (dev_conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+	an_enable = (dev_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 	ionic_dev_cmd_port_autoneg(idev, an_enable);
 	err = ionic_dev_cmd_wait_check(idev, IONIC_DEVCMD_TIMEOUT);
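
Flow control keeps the same four modes under the new RTE_ETH_FC_* names (ionic rejecting the Rx-only and Tx-only ones above); the get/modify/set round trip from the application, as a sketch (port_id and ret assumed):

	struct rte_eth_fc_conf fc_conf;

	memset(&fc_conf, 0, sizeof(fc_conf));
	ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
	if (ret == 0 && fc_conf.mode != RTE_ETH_FC_FULL) {
		fc_conf.mode = RTE_ETH_FC_FULL; /* pause frames both ways */
		ret = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
	}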
diff --git a/drivers/net/ionic/ionic_ethdev.h b/drivers/net/ionic/ionic_ethdev.h
index 6cbcd0f825a3..652f28c97d57 100644
--- a/drivers/net/ionic/ionic_ethdev.h
+++ b/drivers/net/ionic/ionic_ethdev.h
@@ -8,12 +8,12 @@
 #include <rte_ethdev.h>
 
 #define IONIC_ETH_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define IONIC_ETH_DEV_TO_LIF(eth_dev) ((struct ionic_lif *) \
 	(eth_dev)->data->dev_private)
diff --git a/drivers/net/ionic/ionic_lif.c b/drivers/net/ionic/ionic_lif.c
index a1f9ce2d81cb..5e8fdf3893ad 100644
--- a/drivers/net/ionic/ionic_lif.c
+++ b/drivers/net/ionic/ionic_lif.c
@@ -1688,12 +1688,12 @@ ionic_lif_configure_vlan_offload(struct ionic_lif *lif, int mask)
 
 	/*
 	 * IONIC_ETH_HW_VLAN_RX_FILTER cannot be turned off, so
-	 * set DEV_RX_OFFLOAD_VLAN_FILTER and ignore ETH_VLAN_FILTER_MASK
+	 * set RTE_ETH_RX_OFFLOAD_VLAN_FILTER and ignore RTE_ETH_VLAN_FILTER_MASK
 	 */
-	rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			lif->features |= IONIC_ETH_HW_VLAN_RX_STRIP;
 		else
 			lif->features &= ~IONIC_ETH_HW_VLAN_RX_STRIP;
@@ -1733,19 +1733,19 @@ ionic_lif_configure(struct ionic_lif *lif)
 	/*
 	 * NB: While it is true that RSS_HASH is always enabled on ionic,
 	 *     setting this flag unconditionally causes problems in DTS.
-	 * rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	 * rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	 */
 
 	/* RX per-port */
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM ||
-	    rxmode->offloads & DEV_RX_OFFLOAD_UDP_CKSUM ||
-	    rxmode->offloads & DEV_RX_OFFLOAD_TCP_CKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM ||
+	    rxmode->offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM ||
+	    rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
 		lif->features |= IONIC_ETH_HW_RX_CSUM;
 	else
 		lif->features &= ~IONIC_ETH_HW_RX_CSUM;
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		lif->features |= IONIC_ETH_HW_RX_SG;
 		lif->eth_dev->data->scattered_rx = 1;
 	} else {
@@ -1754,30 +1754,30 @@ ionic_lif_configure(struct ionic_lif *lif)
 	}
 
 	/* Covers VLAN_STRIP */
-	ionic_lif_configure_vlan_offload(lif, ETH_VLAN_STRIP_MASK);
+	ionic_lif_configure_vlan_offload(lif, RTE_ETH_VLAN_STRIP_MASK);
 
 	/* TX per-port */
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		lif->features |= IONIC_ETH_HW_TX_CSUM;
 	else
 		lif->features &= ~IONIC_ETH_HW_TX_CSUM;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		lif->features |= IONIC_ETH_HW_VLAN_TX_TAG;
 	else
 		lif->features &= ~IONIC_ETH_HW_VLAN_TX_TAG;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		lif->features |= IONIC_ETH_HW_TX_SG;
 	else
 		lif->features &= ~IONIC_ETH_HW_TX_SG;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		lif->features |= IONIC_ETH_HW_TSO;
 		lif->features |= IONIC_ETH_HW_TSO_IPV6;
 		lif->features |= IONIC_ETH_HW_TSO_ECN;
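
The RTE_ETH_VLAN_*_MASK bits consumed by ionic_lif_configure_vlan_offload() arrive from the ethdev VLAN offload API; a sketch of the producing side (port_id and ret assumed; to the best of my reading of rte_ethdev.h, the *_OFFLOAD bits are defined with the same values as the *_MASK bits):

	/* Turn VLAN stripping on at runtime, leaving other bits as-is. */
	int mask = rte_eth_dev_get_vlan_offload(port_id);

	mask |= RTE_ETH_VLAN_STRIP_OFFLOAD;
	ret = rte_eth_dev_set_vlan_offload(port_id, mask);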
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index 4d16a39c6b6d..e3df7c56debe 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -203,11 +203,11 @@ ionic_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id,
 		txq->flags |= IONIC_QCQ_F_DEFERRED;
 
 	/* Convert the offload flags into queue flags */
-	if (offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+	if (offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		txq->flags |= IONIC_QCQ_F_CSUM_L3;
-	if (offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+	if (offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 		txq->flags |= IONIC_QCQ_F_CSUM_TCP;
-	if (offloads & DEV_TX_OFFLOAD_UDP_CKSUM)
+	if (offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)
 		txq->flags |= IONIC_QCQ_F_CSUM_UDP;
 
 	eth_dev->data->tx_queues[tx_queue_id] = txq;
@@ -743,11 +743,11 @@ ionic_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 
 	/*
 	 * Note: the interface does not currently support
-	 * DEV_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
+	 * RTE_ETH_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
 	 * when the adapter will be able to keep the CRC and subtract
 	 * it to the length for all received packets:
 	 * if (eth_dev->data->dev_conf.rxmode.offloads &
-	 *     DEV_RX_OFFLOAD_KEEP_CRC)
+	 *     RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 	 *   rxq->crc_len = ETHER_CRC_LEN;
 	 */
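
Until that is implemented, applications should gate the request on the advertised capability anyway; a one-line sketch (dev_info and port_conf assumed):

	/* Keep the 4-byte Ethernet CRC in the mbuf data only where the
	 * port advertises support for it. */
	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;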
 
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 063a9c6a6f7f..17088585757f 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -50,11 +50,11 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->speed_capa =
 		(hw->retimer.mac_type ==
 			IFPGA_RAWDEV_RETIMER_MAC_TYPE_10GE_XFI) ?
-		ETH_LINK_SPEED_10G :
+		RTE_ETH_LINK_SPEED_10G :
 		((hw->retimer.mac_type ==
 			IFPGA_RAWDEV_RETIMER_MAC_TYPE_25GE_25GAUI) ?
-		ETH_LINK_SPEED_25G :
-		ETH_LINK_SPEED_AUTONEG);
+		RTE_ETH_LINK_SPEED_25G :
+		RTE_ETH_LINK_SPEED_AUTONEG);
 
 	dev_info->max_rx_queues  = 1;
 	dev_info->max_tx_queues  = 1;
@@ -67,30 +67,30 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
 	};
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_VLAN_FILTER;
-
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
 		dev_info->tx_queue_offload_capa;
 
 	dev_info->dev_capa =
@@ -2399,10 +2399,10 @@ ipn3ke_update_link(struct rte_rawdev *rawdev,
 				(uint64_t *)&link_speed);
 	switch (link_speed) {
 	case IFPGA_RAWDEV_LINK_SPEED_10GB:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case IFPGA_RAWDEV_LINK_SPEED_25GB:
-		link->link_speed = ETH_SPEED_NUM_25G;
+		link->link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	default:
 		IPN3KE_AFU_PMD_ERR("Unknown link speed info %u", link_speed);
@@ -2460,9 +2460,9 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,
 
 	memset(&link, 0, sizeof(link));
 
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_autoneg = !(ethdev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	rawdev = hw->rawdev;
 	ipn3ke_update_link(rawdev, rpst->port_id, &link);
@@ -2518,9 +2518,9 @@ ipn3ke_rpst_link_check(struct ipn3ke_rpst *rpst)
 
 	memset(&link, 0, sizeof(link));
 
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_autoneg = !(rpst->ethdev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	rawdev = hw->rawdev;
 	ipn3ke_update_link(rawdev, rpst->port_id, &link);
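
One thing these link hunks make easy to conflate: RTE_ETH_LINK_SPEED_* values are bitmap flags (dev_conf.link_speeds, dev_info.speed_capa) while RTE_ETH_SPEED_NUM_* are plain Mbps numbers (rte_eth_link.link_speed). A sketch (port_conf, port_id, link and ret assumed):

	/* Bitmap flags in: pin the port to 25G, no autonegotiation. */
	port_conf.link_speeds = RTE_ETH_LINK_SPEED_FIXED |
				RTE_ETH_LINK_SPEED_25G;
	ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);

	/* Numeric Mbps out: link_speed reads RTE_ETH_SPEED_NUM_25G
	 * (i.e. 25000) once the link is up. */
	ret = rte_eth_link_get_nowait(port_id, &link);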
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 46c95425adfb..7fd2c539e002 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1857,7 +1857,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 	qinq &= IXGBE_DMATXCTL_GDV;
 
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
+	case RTE_ETH_VLAN_TYPE_INNER:
 		if (qinq) {
 			reg = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
 			reg = (reg & (~IXGBE_VLNCTRL_VET)) | (uint32_t)tpid;
@@ -1872,7 +1872,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 				    " by single VLAN");
 		}
 		break;
-	case ETH_VLAN_TYPE_OUTER:
+	case RTE_ETH_VLAN_TYPE_OUTER:
 		if (qinq) {
 			/* Only the high 16-bits is valid */
 			IXGBE_WRITE_REG(hw, IXGBE_EXVET, (uint32_t)tpid <<
@@ -1959,10 +1959,10 @@ ixgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 
 	if (on) {
 		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
-		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
 		rxq->vlan_flags = PKT_RX_VLAN;
-		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
 
@@ -2083,7 +2083,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	if (hw->mac.type == ixgbe_mac_82598EB) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 			ctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
 			ctrl |= IXGBE_VLNCTRL_VME;
 			IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, ctrl);
@@ -2100,7 +2100,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
 			ctrl = IXGBE_READ_REG(hw, IXGBE_RXDCTL(rxq->reg_idx));
-			if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+			if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 				ctrl |= IXGBE_RXDCTL_VME;
 				on = TRUE;
 			} else {
@@ -2122,17 +2122,17 @@ ixgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	struct ixgbe_rx_queue *rxq;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		rxmode = &dev->data->dev_conf.rxmode;
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 		else
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 	}
 }
@@ -2143,19 +2143,18 @@ ixgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	rxmode = &dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK)
 		ixgbe_vlan_hw_strip_config(dev);
-	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ixgbe_vlan_hw_filter_enable(dev);
 		else
 			ixgbe_vlan_hw_filter_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			ixgbe_vlan_hw_extend_enable(dev);
 		else
 			ixgbe_vlan_hw_extend_disable(dev);
@@ -2194,10 +2193,10 @@ ixgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
 	switch (nb_rx_q) {
 	case 1:
 	case 2:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
 		break;
 	case 4:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
 		break;
 	default:
 		return -EINVAL;
@@ -2221,18 +2220,18 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
 		/* check multi-queue mode */
 		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
 			break;
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
 			/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
 			PMD_INIT_LOG(ERR, "SRIOV active,"
 					" unsupported mq_mode rx %d.",
 					dev_conf->rxmode.mq_mode);
 			return -EINVAL;
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
 			if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
 				if (ixgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
 					PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -2242,12 +2241,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 					return -EINVAL;
 				}
 			break;
-		case ETH_MQ_RX_VMDQ_ONLY:
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_NONE:
 			/* if nothing mq mode configure, use default scheme */
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
 			break;
-		default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+		default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB */
 			/* SRIOV only works in VMDq enable mode */
 			PMD_INIT_LOG(ERR, "SRIOV is active,"
 					" wrong mq_mode rx %d.",
@@ -2256,12 +2255,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 		}
 
 		switch (dev_conf->txmode.mq_mode) {
-		case ETH_MQ_TX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+		case RTE_ETH_MQ_TX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+			dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
 			break;
-		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
+		default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
+			dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_ONLY;
 			break;
 		}
 
@@ -2276,13 +2275,13 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 			return -EINVAL;
 		}
 	} else {
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
 			PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
 					  " not supported.");
 			return -EINVAL;
 		}
 		/* check configuration for vmdb+dcb mode */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_conf *conf;
 
 			if (nb_rx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2291,15 +2290,15 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools must be %d or %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_tx_conf *conf;
 
 			if (nb_tx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2308,39 +2307,39 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools != %d and"
 						" nb_queue_pools != %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
 
 		/* For DCB mode check our configuration before we go further */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
 			const struct rte_eth_dcb_rx_conf *conf;
 
 			conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
 
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 			const struct rte_eth_dcb_tx_conf *conf;
 
 			conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
@@ -2349,7 +2348,7 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 		 * When DCB/VT is off, maximum number of queues changes,
 		 * except for 82598EB, which remains constant.
 		 */
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
 				hw->mac.type != ixgbe_mac_82598EB) {
 			if (nb_tx_q > IXGBE_NONE_MODE_TX_NB_QUEUES) {
 				PMD_INIT_LOG(ERR,
@@ -2373,8 +2372,8 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multiple queue mode checking */
 	ret  = ixgbe_check_mq_mode(dev);
@@ -2619,15 +2618,15 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 		goto error;
 	}
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = ixgbe_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
 		goto error;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
 		/* Enable vlan filtering for VMDq */
 		ixgbe_vmdq_vlan_hw_filter_enable(dev);
 	}
@@ -2704,17 +2703,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 	case ixgbe_mac_X550:
 	case ixgbe_mac_X550EM_x:
 	case ixgbe_mac_X550EM_a:
-		allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_2_5G |  ETH_LINK_SPEED_5G |
-			ETH_LINK_SPEED_10G;
+		allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_2_5G |  RTE_ETH_LINK_SPEED_5G |
+			RTE_ETH_LINK_SPEED_10G;
 		if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
 				hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
-			allowed_speeds = ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+			allowed_speeds = RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
 		break;
 	default:
-		allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_10G;
+		allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G;
 	}
 
 	link_speeds = &dev->data->dev_conf.link_speeds;
@@ -2728,7 +2727,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 	}
 
 	speed = 0x0;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		switch (hw->mac.type) {
 		case ixgbe_mac_82598EB:
 			speed = IXGBE_LINK_SPEED_82598_AUTONEG;
@@ -2746,17 +2745,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 			speed = IXGBE_LINK_SPEED_82599_AUTONEG;
 		}
 	} else {
-		if (*link_speeds & ETH_LINK_SPEED_10G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
 			speed |= IXGBE_LINK_SPEED_10GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
 			speed |= IXGBE_LINK_SPEED_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_2_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
 			speed |= IXGBE_LINK_SPEED_2_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_1G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed |= IXGBE_LINK_SPEED_1GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_100M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed |= IXGBE_LINK_SPEED_100_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_10M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
 			speed |= IXGBE_LINK_SPEED_10_FULL;
 	}
 
@@ -3832,7 +3831,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		 * When DCB/VT is off, maximum number of queues changes,
 		 * except for 82598EB, which remains constant.
 		 */
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
 				hw->mac.type != ixgbe_mac_82598EB)
 			dev_info->max_tx_queues = IXGBE_NONE_MODE_TX_NB_QUEUES;
 	}
@@ -3842,9 +3841,9 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
 	if (hw->mac.type == ixgbe_mac_82598EB)
-		dev_info->max_vmdq_pools = ETH_16_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	else
-		dev_info->max_vmdq_pools = ETH_64_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->max_mtu =  dev_info->max_rx_pktlen - IXGBE_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 	dev_info->vmdq_queue_num = dev_info->max_rx_queues;
@@ -3883,21 +3882,21 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->reta_size = ixgbe_reta_size_get(hw->mac.type);
 	dev_info->flow_type_rss_offloads = IXGBE_RSS_OFFLOAD_ALL;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
 	if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
 			hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
-		dev_info->speed_capa = ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
 
 	if (hw->mac.type == ixgbe_mac_X540 ||
 	    hw->mac.type == ixgbe_mac_X540_vf ||
 	    hw->mac.type == ixgbe_mac_X550 ||
 	    hw->mac.type == ixgbe_mac_X550_vf) {
-		dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
 	}
 	if (hw->mac.type == ixgbe_mac_X550) {
-		dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
-		dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
 	}
 
 	/* Driver-preferred Rx/Tx parameters */
@@ -3966,9 +3965,9 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
 	if (hw->mac.type == ixgbe_mac_82598EB)
-		dev_info->max_vmdq_pools = ETH_16_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	else
-		dev_info->max_vmdq_pools = ETH_64_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->rx_queue_offload_capa = ixgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (ixgbe_get_rx_port_offloads(dev) |
 				     dev_info->rx_queue_offload_capa);
@@ -4211,11 +4210,11 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	u32 esdp_reg;
 
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 
 	hw->mac.get_link_status = true;
 
@@ -4237,8 +4236,8 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
 		diag = ixgbe_check_link(hw, &link_speed, &link_up, wait);
 
 	if (diag != 0) {
-		link.link_speed = ETH_SPEED_NUM_100M;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
@@ -4274,37 +4273,37 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (link_speed) {
 	default:
 	case IXGBE_LINK_SPEED_UNKNOWN:
-		link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 
 	case IXGBE_LINK_SPEED_10_FULL:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 
 	case IXGBE_LINK_SPEED_100_FULL:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	case IXGBE_LINK_SPEED_1GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case IXGBE_LINK_SPEED_2_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_2_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 
 	case IXGBE_LINK_SPEED_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 
 	case IXGBE_LINK_SPEED_10GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	}
 
@@ -4521,7 +4520,7 @@ ixgbe_dev_link_status_print(struct rte_eth_dev *dev)
 		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -4740,13 +4739,13 @@ ixgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		tx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -5044,8 +5043,8 @@ ixgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i += IXGBE_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IXGBE_4_BIT_MASK);
 		if (!mask)
@@ -5092,8 +5091,8 @@ ixgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i += IXGBE_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IXGBE_4_BIT_MASK);
 		if (!mask)
@@ -5255,22 +5254,22 @@ ixgbevf_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
 		     dev->data->port_id);
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/*
 	 * VF has no ability to enable/disable HW CRC
 	 * Keep the persistent behavior the same as Host PF
 	 */
 #ifndef RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
-		conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #else
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
 		PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #endif
 
@@ -5330,8 +5329,8 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
 	ixgbevf_set_vfta_all(dev, 1);
 
 	/* Set HW strip */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = ixgbevf_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -5568,10 +5567,10 @@ ixgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	int on = 0;
 
 	/* VF function only support hw strip feature, others are not support */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
-			on = !!(rxq->offloads &	DEV_RX_OFFLOAD_VLAN_STRIP);
+			on = !!(rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 			ixgbevf_vlan_strip_queue_set(dev, i, on);
 		}
 	}
@@ -5702,12 +5701,12 @@ ixgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
 		return -ENOTSUP;
 
 	if (on) {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = ~0;
 			IXGBE_WRITE_REG(hw, IXGBE_UTA(i), ~0);
 		}
 	} else {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = 0;
 			IXGBE_WRITE_REG(hw, IXGBE_UTA(i), 0);
 		}
@@ -5721,15 +5720,15 @@ ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
 {
 	uint32_t new_val = orig_val;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
 		new_val |= IXGBE_VMOLR_AUPE;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
 		new_val |= IXGBE_VMOLR_ROMPE;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 		new_val |= IXGBE_VMOLR_ROPE;
-	if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 		new_val |= IXGBE_VMOLR_BAM;
-	if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 		new_val |= IXGBE_VMOLR_MPE;
 
 	return new_val;
@@ -6724,15 +6723,15 @@ ixgbe_start_timecounters(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 
 	switch (link.link_speed) {
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		incval = IXGBE_INCVAL_100;
 		shift = IXGBE_INCVAL_SHIFT_100;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		incval = IXGBE_INCVAL_1GB;
 		shift = IXGBE_INCVAL_SHIFT_1GB;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 	default:
 		incval = IXGBE_INCVAL_10GB;
 		shift = IXGBE_INCVAL_SHIFT_10GB;
@@ -7143,16 +7142,16 @@ ixgbe_reta_size_get(enum ixgbe_mac_type mac_type) {
 	case ixgbe_mac_X550:
 	case ixgbe_mac_X550EM_x:
 	case ixgbe_mac_X550EM_a:
-		return ETH_RSS_RETA_SIZE_512;
+		return RTE_ETH_RSS_RETA_SIZE_512;
 	case ixgbe_mac_X550_vf:
 	case ixgbe_mac_X550EM_x_vf:
 	case ixgbe_mac_X550EM_a_vf:
-		return ETH_RSS_RETA_SIZE_64;
+		return RTE_ETH_RSS_RETA_SIZE_64;
 	case ixgbe_mac_X540_vf:
 	case ixgbe_mac_82599_vf:
 		return 0;
 	default:
-		return ETH_RSS_RETA_SIZE_128;
+		return RTE_ETH_RSS_RETA_SIZE_128;
 	}
 }
 
@@ -7162,10 +7161,10 @@ ixgbe_reta_reg_get(enum ixgbe_mac_type mac_type, uint16_t reta_idx) {
 	case ixgbe_mac_X550:
 	case ixgbe_mac_X550EM_x:
 	case ixgbe_mac_X550EM_a:
-		if (reta_idx < ETH_RSS_RETA_SIZE_128)
+		if (reta_idx < RTE_ETH_RSS_RETA_SIZE_128)
 			return IXGBE_RETA(reta_idx >> 2);
 		else
-			return IXGBE_ERETA((reta_idx - ETH_RSS_RETA_SIZE_128) >> 2);
+			return IXGBE_ERETA((reta_idx - RTE_ETH_RSS_RETA_SIZE_128) >> 2);
 	case ixgbe_mac_X550_vf:
 	case ixgbe_mac_X550EM_x_vf:
 	case ixgbe_mac_X550EM_a_vf:
@@ -7221,7 +7220,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	uint8_t nb_tcs;
 	uint8_t i, j;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
 	else
 		dcb_info->nb_tcs = 1;
@@ -7232,7 +7231,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	if (dcb_config->vt_mode) { /* vt is enabled*/
 		struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
 		if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
 			for (j = 0; j < nb_tcs; j++) {
@@ -7256,9 +7255,9 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	} else { /* vt is disabled*/
 		struct rte_eth_dcb_rx_conf *rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
-		if (dcb_info->nb_tcs == ETH_4_TCS) {
+		if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7271,7 +7270,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 			dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
 			dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
 			dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
-		} else if (dcb_info->nb_tcs == ETH_8_TCS) {
+		} else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7524,7 +7523,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = ixgbe_e_tag_filter_add(dev, l2_tunnel);
 		break;
 	default:
@@ -7556,7 +7555,7 @@ ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 		return ret;
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = ixgbe_e_tag_filter_del(dev, l2_tunnel);
 		break;
 	default:
@@ -7653,12 +7652,12 @@ ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ixgbe_add_vxlan_port(hw, udp_tunnel->udp_port);
 		break;
 
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -EINVAL;
 		break;
@@ -7690,11 +7689,11 @@ ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ixgbe_del_vxlan_port(hw, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -EINVAL;
 		break;
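
All of the ixgbe_ethdev.c changes above are pure renames, so out-of-tree code can stay buildable against both header generations with a thin shim. A minimal sketch, assuming only that the RTE_ETH_LINK_SPEED_* names are macros from 21.11 onwards (the header name is hypothetical):

    /* compat_ethdev.h -- hypothetical shim, not part of this patch */
    #include <rte_ethdev.h>

    #ifndef RTE_ETH_LINK_SPEED_10G  /* headers older than 21.11 */
    #define RTE_ETH_LINK_SPEED_10G         ETH_LINK_SPEED_10G
    #define RTE_ETH_LINK_FULL_DUPLEX       ETH_LINK_FULL_DUPLEX
    #define RTE_ETH_MQ_RX_RSS              ETH_MQ_RX_RSS
    #define RTE_ETH_RX_OFFLOAD_VLAN_STRIP  DEV_RX_OFFLOAD_VLAN_STRIP
    #endif
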
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 950fb2d2450c..876b670f2682 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -114,15 +114,15 @@
 #define IXGBE_FDIR_NVGRE_TUNNEL_TYPE    0x0
 
 #define IXGBE_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define IXGBE_VF_IRQ_ENABLE_MASK        3          /* vf irq enable mask */
 #define IXGBE_VF_MAXMSIVECTOR           1
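
IXGBE_RSS_OFFLOAD_ALL above is what the PMD reports as flow_type_rss_offloads; an application selects a subset of it through the normal configure path. A minimal sketch, with placeholder port id and queue counts:

    struct rte_eth_conf port_conf = {
        .rxmode = {
            .mq_mode = RTE_ETH_MQ_RX_RSS,
        },
        .rx_adv_conf = {
            .rss_conf = {
                .rss_key = NULL,  /* let the PMD choose a key */
                .rss_hf = RTE_ETH_RSS_IPV4 |
                          RTE_ETH_RSS_NONFRAG_IPV4_TCP |
                          RTE_ETH_RSS_NONFRAG_IPV4_UDP,
            },
        },
    };

    /* port_id and the 4 Rx / 4 Tx queue counts are placeholder values. */
    int ret = rte_eth_dev_configure(port_id, 4, 4, &port_conf);
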
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 27a49bbce5e7..7894047829a8 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -90,9 +90,9 @@ static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
 static uint32_t ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
 				 uint32_t key);
 static uint32_t atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc);
+		enum rte_eth_fdir_pballoc_type pballoc);
 static uint32_t atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc);
+		enum rte_eth_fdir_pballoc_type pballoc);
 static int fdir_write_perfect_filter_82599(struct ixgbe_hw *hw,
 			union ixgbe_atr_input *input, uint8_t queue,
 			uint32_t fdircmd, uint32_t fdirhash,
@@ -163,20 +163,20 @@ fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl)
  * flexbytes matching field, and drop queue (only for perfect matching mode).
  */
 static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf, uint32_t *fdirctrl)
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf, uint32_t *fdirctrl)
 {
 	*fdirctrl = 0;
 
 	switch (conf->pballoc) {
-	case RTE_FDIR_PBALLOC_64K:
+	case RTE_ETH_FDIR_PBALLOC_64K:
 		/* 8k - 1 signature filters */
 		*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_64K;
 		break;
-	case RTE_FDIR_PBALLOC_128K:
+	case RTE_ETH_FDIR_PBALLOC_128K:
 		/* 16k - 1 signature filters */
 		*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_128K;
 		break;
-	case RTE_FDIR_PBALLOC_256K:
+	case RTE_ETH_FDIR_PBALLOC_256K:
 		/* 32k - 1 signature filters */
 		*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_256K;
 		break;
@@ -807,13 +807,13 @@ ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
 
 static uint32_t
 atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		return ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				PERFECT_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		return ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				PERFECT_BUCKET_128KB_HASH_MASK;
@@ -850,15 +850,15 @@ ixgbe_fdir_check_cmd_complete(struct ixgbe_hw *hw, uint32_t *fdircmd)
  */
 static uint32_t
 atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
 	uint32_t bucket_hash, sig_hash;
 
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		bucket_hash = ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				SIG_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		bucket_hash = ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				SIG_BUCKET_128KB_HASH_MASK;
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 27322ab9038a..bdc9d4796c02 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -1259,7 +1259,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
 		return -rte_errno;
 	}
 
-	filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+	filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
 	/**
 	 * grp and e_cid_base are bit fields and only use 14 bits.
 	 * e-tag id is taken as little endian by HW.
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
index e45c5501e6bf..944c9f23809e 100644
--- a/drivers/net/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -392,7 +392,7 @@ ixgbe_crypto_create_session(void *device,
 	aead_xform = &conf->crypto_xform->aead;
 
 	if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 			ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -400,7 +400,7 @@ ixgbe_crypto_create_session(void *device,
 			return -ENOTSUP;
 		}
 	} else {
-		if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+		if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 			ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -633,11 +633,11 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	tx_offloads = dev->data->dev_conf.txmode.offloads;
 
 	/* sanity checks */
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
 		return -1;
 	}
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
 		return -1;
 	}
@@ -657,7 +657,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
 	IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
 		reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
 		if (reg != 0) {
@@ -665,7 +665,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 			return -1;
 		}
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 		IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL,
 				IXGBE_SECTXCTRL_STORE_FORWARD);
 		reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
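
ixgbe_crypto_enable_ipsec() above only runs when the application requested the renamed security offloads at configure time; the opt-in looks roughly like this (a sketch, with a capability check since not every ixgbe device exposes a security context):

    struct rte_eth_dev_info dev_info;

    rte_eth_dev_info_get(port_id, &dev_info);
    if ((dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_SECURITY) &&
        (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SECURITY)) {
        port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
        port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
    }
    /* followed by rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf) */
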
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 295e5a39b245..9f1bd0a62ba4 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -104,15 +104,15 @@ int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
 	memset(uta_info, 0, sizeof(struct ixgbe_uta_info));
 	hw->mac.mc_filter_type = 0;
 
-	if (vf_num >= ETH_32_POOLS) {
+	if (vf_num >= RTE_ETH_32_POOLS) {
 		nb_queue = 2;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
-	} else if (vf_num >= ETH_16_POOLS) {
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+	} else if (vf_num >= RTE_ETH_16_POOLS) {
 		nb_queue = 4;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
 	} else {
 		nb_queue = 8;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
 	}
 
 	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -263,15 +263,15 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
 	gpie |= IXGBE_GPIE_MSIX_MODE | IXGBE_GPIE_PBA_SUPPORT;
 
 	switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		gcr_ext |= IXGBE_GCR_EXT_VT_MODE_64;
 		gpie |= IXGBE_GPIE_VTMODE_64;
 		break;
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		gcr_ext |= IXGBE_GCR_EXT_VT_MODE_32;
 		gpie |= IXGBE_GPIE_VTMODE_32;
 		break;
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		gcr_ext |= IXGBE_GCR_EXT_VT_MODE_16;
 		gpie |= IXGBE_GPIE_VTMODE_16;
 		break;
@@ -674,29 +674,29 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
 	/* Notify VF of number of DCB traffic classes */
 	eth_conf = &dev->data->dev_conf;
 	switch (eth_conf->txmode.mq_mode) {
-	case ETH_MQ_TX_NONE:
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_NONE:
+	case RTE_ETH_MQ_TX_DCB:
 		PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
 			", but its tx mode = %d\n", vf,
 			eth_conf->txmode.mq_mode);
 		return -1;
 
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
 		switch (vmdq_dcb_tx_conf->nb_queue_pools) {
-		case ETH_16_POOLS:
-			num_tcs = ETH_8_TCS;
+		case RTE_ETH_16_POOLS:
+			num_tcs = RTE_ETH_8_TCS;
 			break;
-		case ETH_32_POOLS:
-			num_tcs = ETH_4_TCS;
+		case RTE_ETH_32_POOLS:
+			num_tcs = RTE_ETH_4_TCS;
 			break;
 		default:
 			return -1;
 		}
 		break;
 
-	/* ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
-	case ETH_MQ_TX_VMDQ_ONLY:
+	/* RTE_ETH_MQ_TX_VMDQ_ONLY, DCB not enabled */
+	case RTE_ETH_MQ_TX_VMDQ_ONLY:
 		hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 		vmvir = IXGBE_READ_REG(hw, IXGBE_VMVIR(vf));
 		vlana = vmvir & IXGBE_VMVIR_VLANA_MASK;
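
The thresholds in ixgbe_pf_host_init() above encode the fixed 82599 split of 128 queues: fewer than 16 VFs get 8 queues each (16 pools), 16-31 VFs get 4 each (32 pools), and 32 or more get 2 each (64 pools). Restated as a standalone helper (a sketch, not driver code):

    /* Rx/Tx queues available per pool for a given VF count (82599). */
    static unsigned int
    queues_per_pool(unsigned int vf_num)
    {
        if (vf_num >= RTE_ETH_32_POOLS)  /* 32..64 VFs -> 64 pools */
            return 2;
        if (vf_num >= RTE_ETH_16_POOLS)  /* 16..31 VFs -> 32 pools */
            return 4;
        return 8;                        /* < 16 VFs -> 16 pools */
    }
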
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index b263dfe1d574..9e5716f935a2 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2592,26 +2592,26 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM   |
-		DEV_TX_OFFLOAD_SCTP_CKSUM  |
-		DEV_TX_OFFLOAD_TCP_TSO     |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO     |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	if (hw->mac.type == ixgbe_mac_82599EB ||
 	    hw->mac.type == ixgbe_mac_X540)
-		tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 
 	if (hw->mac.type == ixgbe_mac_X550 ||
 	    hw->mac.type == ixgbe_mac_X550EM_x ||
 	    hw->mac.type == ixgbe_mac_X550EM_a)
-		tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
 #endif
 	return tx_offload_capa;
 }
@@ -2780,7 +2780,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_deferred_start = tx_conf->tx_deferred_start;
 #ifdef RTE_LIB_SECURITY
 	txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SECURITY);
+			RTE_ETH_TX_OFFLOAD_SECURITY);
 #endif
 
 	/*
@@ -3021,7 +3021,7 @@ ixgbe_get_rx_queue_offloads(struct rte_eth_dev *dev)
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (hw->mac.type != ixgbe_mac_82598EB)
-		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	return offloads;
 }
@@ -3032,19 +3032,19 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 	uint64_t offloads;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	offloads = DEV_RX_OFFLOAD_IPV4_CKSUM  |
-		   DEV_RX_OFFLOAD_UDP_CKSUM   |
-		   DEV_RX_OFFLOAD_TCP_CKSUM   |
-		   DEV_RX_OFFLOAD_KEEP_CRC    |
-		   DEV_RX_OFFLOAD_VLAN_FILTER |
-		   DEV_RX_OFFLOAD_SCATTER |
-		   DEV_RX_OFFLOAD_RSS_HASH;
+	offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+		   RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+		   RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		   RTE_ETH_RX_OFFLOAD_SCATTER |
+		   RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (hw->mac.type == ixgbe_mac_82598EB)
-		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	if (ixgbe_is_vf(dev) == 0)
-		offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
 	/*
 	 * RSC is only supported by 82599 and x540 PF devices in a non-SR-IOV
@@ -3054,20 +3054,20 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 	     hw->mac.type == ixgbe_mac_X540 ||
 	     hw->mac.type == ixgbe_mac_X550) &&
 	    !RTE_ETH_DEV_SRIOV(dev).active)
-		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+		offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 
 	if (hw->mac.type == ixgbe_mac_82599EB ||
 	    hw->mac.type == ixgbe_mac_X540)
-		offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
 
 	if (hw->mac.type == ixgbe_mac_X550 ||
 	    hw->mac.type == ixgbe_mac_X550EM_x ||
 	    hw->mac.type == ixgbe_mac_X550EM_a)
-		offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		offloads |= DEV_RX_OFFLOAD_SECURITY;
+		offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
 #endif
 
 	return offloads;
@@ -3122,7 +3122,7 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
 		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -3507,23 +3507,23 @@ ixgbe_hw_rss_hash_set(struct ixgbe_hw *hw, struct rte_eth_rss_conf *rss_conf)
 	/* Set configured hashing protocols in MRQC register */
 	rss_hf = rss_conf->rss_hf;
 	mrqc = IXGBE_MRQC_RSSEN; /* Enable RSS */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_TCP;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6;
-	if (rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_EX)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_TCP;
-	if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_UDP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_UDP;
-	if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP;
 	IXGBE_WRITE_REG(hw, mrqc_reg, mrqc);
 }
@@ -3605,23 +3605,23 @@ ixgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	}
 	rss_hf = 0;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP)
-		rss_hf |= ETH_RSS_IPV6_UDP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
 	rss_conf->rss_hf = rss_hf;
 	return 0;
 }
@@ -3697,12 +3697,12 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
 	num_pools = cfg->nb_queue_pools;
 	/* Check we have a valid number of pools */
-	if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+	if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
 		ixgbe_rss_disable(dev);
 		return;
 	}
 	/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
-	nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+	nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
 
 	/*
 	 * RXPBSIZE
@@ -3727,7 +3727,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
 	}
 	/* zero alloc all unused TCs */
-	for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		uint32_t rxpbsize = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(i));
 
 		rxpbsize &= (~(0x3FF << IXGBE_RXPBSIZE_SHIFT));
@@ -3736,7 +3736,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	}
 
 	/* MRQC: enable vmdq and dcb */
-	mrqc = (num_pools == ETH_16_POOLS) ?
+	mrqc = (num_pools == RTE_ETH_16_POOLS) ?
 		IXGBE_MRQC_VMDQRT8TCEN : IXGBE_MRQC_VMDQRT4TCEN;
 	IXGBE_WRITE_REG(hw, IXGBE_MRQC, mrqc);
 
@@ -3752,7 +3752,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 
 	/* RTRUP2TC: mapping user priorities to traffic classes (TCs) */
 	queue_mapping = 0;
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 		/*
 		 * mapping is done with 3 bits per priority,
 		 * so shift by i*3 each time
@@ -3776,7 +3776,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 
 	/* VFRE: pool enabling for receive - 16 or 32 */
 	IXGBE_WRITE_REG(hw, IXGBE_VFRE(0),
-			num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+			num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	/*
 	 * MPSAR - allow pools to read specific mac addresses
@@ -3858,7 +3858,7 @@ ixgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
 	if (hw->mac.type != ixgbe_mac_82598EB)
 		/*PF VF Transmit Enable*/
 		IXGBE_WRITE_REG(hw, IXGBE_VFTE(0),
-			vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+			vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	/*Configure general DCB TX parameters*/
 	ixgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3874,12 +3874,12 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
-	if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3889,7 +3889,7 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3907,12 +3907,12 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
-	if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3922,7 +3922,7 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3949,7 +3949,7 @@ ixgbe_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3976,7 +3976,7 @@ ixgbe_dcb_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -4145,7 +4145,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
 
 	switch (dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_VMDQ_DCB:
+	case RTE_ETH_MQ_RX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		if (hw->mac.type != ixgbe_mac_82598EB) {
 			config_dcb_rx = DCB_RX_CONFIG;
@@ -4158,8 +4158,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			ixgbe_vmdq_dcb_configure(dev);
 		}
 		break;
-	case ETH_MQ_RX_DCB:
-	case ETH_MQ_RX_DCB_RSS:
+	case RTE_ETH_MQ_RX_DCB:
+	case RTE_ETH_MQ_RX_DCB_RSS:
 		dcb_config->vt_mode = false;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -4172,7 +4172,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		break;
 	}
 	switch (dev->data->dev_conf.txmode.mq_mode) {
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/* get DCB and VT TX configuration parameters
@@ -4183,7 +4183,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		ixgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
 		break;
 
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_DCB:
 		dcb_config->vt_mode = false;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/*get DCB TX configuration parameters from rte_eth_conf*/
@@ -4199,15 +4199,15 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	nb_tcs = dcb_config->num_tcs.pfc_tcs;
 	/* Unpack map */
 	ixgbe_dcb_unpack_map_cee(dcb_config, IXGBE_DCB_RX_CONFIG, map);
-	if (nb_tcs == ETH_4_TCS) {
+	if (nb_tcs == RTE_ETH_4_TCS) {
 		/* Avoid un-configured priority mapping to TC0 */
 		uint8_t j = 4;
 		uint8_t mask = 0xFF;
 
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
 			mask = (uint8_t)(mask & (~(1 << map[i])));
 		for (i = 0; mask && (i < IXGBE_DCB_MAX_TRAFFIC_CLASS); i++) {
-			if ((mask & 0x1) && (j < ETH_DCB_NUM_USER_PRIORITIES))
+			if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
 				map[j++] = i;
 			mask >>= 1;
 		}
@@ -4257,9 +4257,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
 		}
 		/* zero alloc all unused TCs */
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
-		}
 	}
 	if (config_dcb_tx) {
 		/* Only support an equally distributed
@@ -4273,7 +4272,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), txpbthresh);
 		}
 		/* Clear unused TCs, if any, to zero buffer size*/
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), 0);
 			IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), 0);
 		}
@@ -4309,7 +4308,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	ixgbe_dcb_config_tc_stats_82599(hw, dcb_config);
 
 	/* Check if the PFC is supported */
-	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
 		for (i = 0; i < nb_tcs; i++) {
 			/*
@@ -4323,7 +4322,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			tc->pfc = ixgbe_dcb_pfc_enabled;
 		}
 		ixgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
-		if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+		if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
 			pfc_en &= 0x0F;
 		ret = ixgbe_dcb_config_pfc(hw, pfc_en, map);
 	}
@@ -4344,12 +4343,12 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	/* check support mq_mode for DCB */
-	if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
-	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
-	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS))
+	if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
 		return;
 
-	if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+	if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
 		return;
 
 	/** Configure DCB hardware **/
@@ -4405,7 +4404,7 @@ ixgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 
 	/* VFRE: pool enabling for receive - 64 */
 	IXGBE_WRITE_REG(hw, IXGBE_VFRE(0), UINT32_MAX);
-	if (num_pools == ETH_64_POOLS)
+	if (num_pools == RTE_ETH_64_POOLS)
 		IXGBE_WRITE_REG(hw, IXGBE_VFRE(1), UINT32_MAX);
 
 	/*
@@ -4526,11 +4525,11 @@ ixgbe_config_vf_rss(struct rte_eth_dev *dev)
 	mrqc = IXGBE_READ_REG(hw, IXGBE_MRQC);
 	mrqc &= ~IXGBE_MRQC_MRQE_MASK;
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		mrqc |= IXGBE_MRQC_VMDQRSS64EN;
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		mrqc |= IXGBE_MRQC_VMDQRSS32EN;
 		break;
 
@@ -4551,17 +4550,17 @@ ixgbe_config_vf_default(struct rte_eth_dev *dev)
 		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		IXGBE_WRITE_REG(hw, IXGBE_MRQC,
 			IXGBE_MRQC_VMDQEN);
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		IXGBE_WRITE_REG(hw, IXGBE_MRQC,
 			IXGBE_MRQC_VMDQRT4TCEN);
 		break;
 
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		IXGBE_WRITE_REG(hw, IXGBE_MRQC,
 			IXGBE_MRQC_VMDQRT8TCEN);
 		break;
@@ -4588,21 +4587,21 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * any DCB/RSS w/o VMDq multi-queue setting
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_DCB_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			ixgbe_rss_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
 			ixgbe_vmdq_dcb_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
 			ixgbe_vmdq_rx_hw_configure(dev);
 			break;
 
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_NONE:
 		default:
 			/* if mq_mode is none, disable rss mode.*/
 			ixgbe_rss_disable(dev);
@@ -4613,18 +4612,18 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * Support RSS together with SRIOV.
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			ixgbe_config_vf_rss(dev);
 			break;
-		case ETH_MQ_RX_VMDQ_DCB:
-		case ETH_MQ_RX_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_DCB:
 		/* In SRIOV, the configuration is the same as VMDq case */
 			ixgbe_vmdq_dcb_configure(dev);
 			break;
 		/* DCB/RSS together with SRIOV is not supported */
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
-		case ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
 			PMD_INIT_LOG(ERR,
 				"Could not support DCB/RSS with VMDq & SRIOV");
 			return -1;
@@ -4658,7 +4657,7 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV inactive scheme
 		 * any DCB w/o VMDq multi-queue setting
 		 */
-		if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+		if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
 			ixgbe_vmdq_tx_hw_configure(hw);
 		else {
 			mtqc = IXGBE_MTQC_64Q_1PB;
@@ -4671,13 +4670,13 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV active scheme
 		 * FIXME if support DCB together with VMDq & SRIOV
 		 */
-		case ETH_64_POOLS:
+		case RTE_ETH_64_POOLS:
 			mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_64VF;
 			break;
-		case ETH_32_POOLS:
+		case RTE_ETH_32_POOLS:
 			mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_32VF;
 			break;
-		case ETH_16_POOLS:
+		case RTE_ETH_16_POOLS:
 			mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_RT_ENA |
 				IXGBE_MTQC_8TC_8TQ;
 			break;
@@ -4885,7 +4884,7 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
 		rxq->rx_using_sse = rx_using_sse;
 #ifdef RTE_LIB_SECURITY
 		rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_SECURITY);
+				RTE_ETH_RX_OFFLOAD_SECURITY);
 #endif
 	}
 }
@@ -4913,10 +4912,10 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* Sanity check */
 	dev->dev_ops->dev_infos_get(dev, &dev_info);
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		rsc_capable = true;
 
-	if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
 				   "support it");
 		return -EINVAL;
@@ -4924,8 +4923,8 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* RSC global configuration (chapter 4.6.7.2.1 of 82599 Spec) */
 
-	if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
-	     (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+	     (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		/*
 		 * According to chapter of 4.6.7.2.1 of the Spec Rev.
 		 * 3.0 RSC configuration requires HW CRC stripping being
@@ -4939,7 +4938,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* RFCTL configuration  */
 	rfctl = IXGBE_READ_REG(hw, IXGBE_RFCTL);
-	if ((rsc_capable) && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if ((rsc_capable) && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		rfctl &= ~IXGBE_RFCTL_RSC_DIS;
 	else
 		rfctl |= IXGBE_RFCTL_RSC_DIS;
@@ -4948,7 +4947,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 	IXGBE_WRITE_REG(hw, IXGBE_RFCTL, rfctl);
 
 	/* If LRO hasn't been requested - we are done here. */
-	if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		return 0;
 
 	/* Set RDRXCTL.RSCACKC bit */
@@ -5070,7 +5069,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Configure CRC stripping, if any.
 	 */
 	hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		hlreg0 &= ~IXGBE_HLREG0_RXCRCSTRP;
 	else
 		hlreg0 |= IXGBE_HLREG0_RXCRCSTRP;
@@ -5107,7 +5106,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first .
 	 */
-	rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
@@ -5116,7 +5115,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 * call to configure.
 		 */
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -5158,11 +5157,11 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 		/* It adds dual VLAN length for supporting dual VLAN */
 		if (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
 			dev->data->scattered_rx = 1;
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		dev->data->scattered_rx = 1;
 
 	/*
@@ -5177,7 +5176,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 */
 	rxcsum = IXGBE_READ_REG(hw, IXGBE_RXCSUM);
 	rxcsum |= IXGBE_RXCSUM_PCSD;
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= IXGBE_RXCSUM_IPPCSE;
 	else
 		rxcsum &= ~IXGBE_RXCSUM_IPPCSE;
@@ -5187,7 +5186,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	if (hw->mac.type == ixgbe_mac_82599EB ||
 	    hw->mac.type == ixgbe_mac_X540) {
 		rdrxctl = IXGBE_READ_REG(hw, IXGBE_RDRXCTL);
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rdrxctl &= ~IXGBE_RDRXCTL_CRCSTRIP;
 		else
 			rdrxctl |= IXGBE_RDRXCTL_CRCSTRIP;
@@ -5393,9 +5392,9 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
 
 #ifdef RTE_LIB_SECURITY
 	if ((dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_SECURITY) ||
+			RTE_ETH_RX_OFFLOAD_SECURITY) ||
 		(dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SECURITY)) {
+			RTE_ETH_TX_OFFLOAD_SECURITY)) {
 		ret = ixgbe_crypto_enable_ipsec(dev);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR,
@@ -5683,7 +5682,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first .
 	 */
-	rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
@@ -5732,7 +5731,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 		buf_size = (uint16_t) ((srrctl & IXGBE_SRRCTL_BSIZEPKT_MASK) <<
 				       IXGBE_SRRCTL_BSIZEPKT_SHIFT);
 
-		if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
 		    /* It adds dual VLAN length for supporting dual VLAN */
 		    (frame_size + 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
 			if (!dev->data->scattered_rx)
@@ -5740,8 +5739,8 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 			dev->data->scattered_rx = 1;
 		}
 
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	/* Set RQPL for VF RSS according to max Rx queue */
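
Throughout these hunks the offload bits keep their values; only the spelling
changes (DEV_RX_OFFLOAD_* becomes RTE_ETH_RX_OFFLOAD_*). Applications read the
same bits back through rte_eth_dev_info_get(); a minimal sketch, assuming an
already-probed port (helper name and error handling are illustrative only):

#include <rte_ethdev.h>

/* Return 1 if the port advertises LRO under the renamed flag, 0 if not,
 * or the negative errno from rte_eth_dev_info_get(). */
static int
port_supports_lro(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;
	return !!(dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO);
}
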
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index a1764f2b08af..668a5b9814f6 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -133,7 +133,7 @@ struct ixgbe_rx_queue {
 	uint8_t             rx_udp_csum_zero_err;
 	/** flags to set in mbuf when a vlan is detected. */
 	uint64_t            vlan_flags;
-	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
 	/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
 	struct rte_mbuf fake_mbuf;
 	/** hold packets to return to application */
@@ -227,7 +227,7 @@ struct ixgbe_tx_queue {
 	uint8_t             pthresh;       /**< Prefetch threshold register. */
 	uint8_t             hthresh;       /**< Host threshold register. */
 	uint8_t             wthresh;       /**< Write-back threshold reg. */
-	uint64_t offloads; /**< Tx offload flags of DEV_TX_OFFLOAD_* */
+	uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 	uint32_t            ctx_curr;      /**< Hardware context states. */
 	/** Hardware context0 history. */
 	struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 005e60668a8b..cd34d4098785 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -277,7 +277,7 @@ static inline int
 ixgbe_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
 {
 #ifndef RTE_LIBRTE_IEEE1588
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 
 	/* no fdir support */
 	if (fconf->mode != RTE_FDIR_MODE_NONE)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index ae03ea6e9db3..ac8976062fa7 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -119,14 +119,14 @@ ixgbe_tc_nb_get(struct rte_eth_dev *dev)
 	uint8_t nb_tcs = 0;
 
 	eth_conf = &dev->data->dev_conf;
-	if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+	if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 		nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
-	} else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	} else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
-		    ETH_32_POOLS)
-			nb_tcs = ETH_4_TCS;
+		    RTE_ETH_32_POOLS)
+			nb_tcs = RTE_ETH_4_TCS;
 		else
-			nb_tcs = ETH_8_TCS;
+			nb_tcs = RTE_ETH_8_TCS;
 	} else {
 		nb_tcs = 1;
 	}
@@ -375,10 +375,10 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 	if (vf_num) {
 		/* no DCB */
 		if (nb_tcs == 1) {
-			if (vf_num >= ETH_32_POOLS) {
+			if (vf_num >= RTE_ETH_32_POOLS) {
 				*nb = 2;
 				*base = vf_num * 2;
-			} else if (vf_num >= ETH_16_POOLS) {
+			} else if (vf_num >= RTE_ETH_16_POOLS) {
 				*nb = 4;
 				*base = vf_num * 4;
 			} else {
@@ -392,7 +392,7 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 		}
 	} else {
 		/* VT off */
-		if (nb_tcs == ETH_8_TCS) {
+		if (nb_tcs == RTE_ETH_8_TCS) {
 			switch (tc_node_no) {
 			case 0:
 				*base = 0;
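
The ixgbe_tm.c hunks encode the usual ixgbe pool/TC trade-off: with VMDq+DCB,
32 pools leave room for 4 traffic classes, 16 pools for 8. A sketch of the same
derivation seen from the application's rte_eth_conf (the helper itself is
hypothetical; the field names are the real ones):

#include <rte_ethdev.h>

/* Hypothetical helper mirroring the driver's nb_tcs derivation. */
static uint8_t
expected_nb_tcs(const struct rte_eth_conf *conf)
{
	if (conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB)
		return conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
	if (conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB)
		return conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
				RTE_ETH_32_POOLS ?
				RTE_ETH_4_TCS : RTE_ETH_8_TCS;
	return 1;	/* no DCB configured */
}
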
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index 9fa75984fb31..bd528ff346c7 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -58,20 +58,20 @@ ixgbe_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	/**< Maximum number of MAC addresses. */
 
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |	DEV_RX_OFFLOAD_UDP_CKSUM  |
-		DEV_RX_OFFLOAD_TCP_CKSUM;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	RTE_ETH_RX_OFFLOAD_UDP_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 	/**< Device RX offload capabilities. */
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	/**< Device TX offload capabilities. */
 
 	dev_info->speed_capa =
 		representor->pf_ethdev->data->dev_link.link_speed;
-	/**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+	/**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
 
 	dev_info->switch_info.name =
 		representor->pf_ethdev->device->name;
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.c b/drivers/net/ixgbe/rte_pmd_ixgbe.c
index cf089cd9aee5..9729f8575f53 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.c
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.c
@@ -303,10 +303,10 @@ rte_pmd_ixgbe_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
 	 */
 	if (hw->mac.type == ixgbe_mac_82598EB)
 		queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
-				  ETH_16_POOLS;
+				  RTE_ETH_16_POOLS;
 	else
 		queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
-				  ETH_64_POOLS;
+				  RTE_ETH_64_POOLS;
 
 	for (q = 0; q < queues_per_pool; q++)
 		(*dev->dev_ops->vlan_strip_queue_set)(dev,
@@ -736,14 +736,14 @@ rte_pmd_ixgbe_set_tc_bw_alloc(uint16_t port,
 	bw_conf = IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
 	eth_conf = &dev->data->dev_conf;
 
-	if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+	if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 		nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
-	} else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	} else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
-		    ETH_32_POOLS)
-			nb_tcs = ETH_4_TCS;
+		    RTE_ETH_32_POOLS)
+			nb_tcs = RTE_ETH_4_TCS;
 		else
-			nb_tcs = ETH_8_TCS;
+			nb_tcs = RTE_ETH_8_TCS;
 	} else {
 		nb_tcs = 1;
 	}
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.h b/drivers/net/ixgbe/rte_pmd_ixgbe.h
index 90fc8160b1f8..eef6f6661c74 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.h
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.h
@@ -285,8 +285,8 @@ int rte_pmd_ixgbe_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an,
 * @param rx_mask
 *    The RX mode mask, which is one or more of accepting Untagged Packets,
 *    packets that match the PFUTA table, Broadcast and Multicast Promiscuous.
-*    ETH_VMDQ_ACCEPT_UNTAG,ETH_VMDQ_ACCEPT_HASH_UC,
-*    ETH_VMDQ_ACCEPT_BROADCAST and ETH_VMDQ_ACCEPT_MULTICAST will be used
+*    RTE_ETH_VMDQ_ACCEPT_UNTAG, RTE_ETH_VMDQ_ACCEPT_HASH_UC,
+*    RTE_ETH_VMDQ_ACCEPT_BROADCAST and RTE_ETH_VMDQ_ACCEPT_MULTICAST will be used
 *    in rx_mode.
 * @param on
 *    1 - Enable a VF RX mode.
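
The rx_mask documentation now names the RTE_ETH_VMDQ_ACCEPT_* flags; usage of
the PMD-specific API is otherwise unchanged. A sketch (port and VF ids are
placeholders):

#include <rte_pmd_ixgbe.h>

/* Accept untagged and broadcast frames on the given VF. */
static int
allow_untag_bcast(uint16_t port_id, uint16_t vf)
{
	return rte_pmd_ixgbe_set_vf_rxmode(port_id, vf,
			RTE_ETH_VMDQ_ACCEPT_UNTAG |
			RTE_ETH_VMDQ_ACCEPT_BROADCAST, 1);
}
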
diff --git a/drivers/net/kni/rte_eth_kni.c b/drivers/net/kni/rte_eth_kni.c
index cb9f7c8e8200..c428caf44189 100644
--- a/drivers/net/kni/rte_eth_kni.c
+++ b/drivers/net/kni/rte_eth_kni.c
@@ -61,10 +61,10 @@ struct pmd_internals {
 };
 
 static const struct rte_eth_link pmd_link = {
-		.link_speed = ETH_SPEED_NUM_10G,
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_status = ETH_LINK_DOWN,
-		.link_autoneg = ETH_LINK_FIXED,
+		.link_speed = RTE_ETH_SPEED_NUM_10G,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_status = RTE_ETH_LINK_DOWN,
+		.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 static int is_kni_initialized;
 
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 0fc3f0ab66a9..90ffe31b9fda 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -384,15 +384,15 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
 	case PCI_SUBSYS_DEV_ID_CN2360_210SVPN3:
 	case PCI_SUBSYS_DEV_ID_CN2350_210SVPT:
 	case PCI_SUBSYS_DEV_ID_CN2360_210SVPT:
-		devinfo->speed_capa = ETH_LINK_SPEED_10G;
+		devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
 		break;
 	/* CN23xx 25G cards */
 	case PCI_SUBSYS_DEV_ID_CN2350_225:
 	case PCI_SUBSYS_DEV_ID_CN2360_225:
-		devinfo->speed_capa = ETH_LINK_SPEED_25G;
+		devinfo->speed_capa = RTE_ETH_LINK_SPEED_25G;
 		break;
 	default:
-		devinfo->speed_capa = ETH_LINK_SPEED_10G;
+		devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
 		lio_dev_err(lio_dev,
 			    "Unknown CN23XX subsystem device id. Setting 10G as default link speed.\n");
 		return -EINVAL;
@@ -406,27 +406,27 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
 
 	devinfo->max_mac_addrs = 1;
 
-	devinfo->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM		|
-				    DEV_RX_OFFLOAD_UDP_CKSUM		|
-				    DEV_RX_OFFLOAD_TCP_CKSUM		|
-				    DEV_RX_OFFLOAD_VLAN_STRIP		|
-				    DEV_RX_OFFLOAD_RSS_HASH);
-	devinfo->tx_offload_capa = (DEV_TX_OFFLOAD_IPV4_CKSUM		|
-				    DEV_TX_OFFLOAD_UDP_CKSUM		|
-				    DEV_TX_OFFLOAD_TCP_CKSUM		|
-				    DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM);
+	devinfo->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM		|
+				    RTE_ETH_RX_OFFLOAD_UDP_CKSUM		|
+				    RTE_ETH_RX_OFFLOAD_TCP_CKSUM		|
+				    RTE_ETH_RX_OFFLOAD_VLAN_STRIP		|
+				    RTE_ETH_RX_OFFLOAD_RSS_HASH);
+	devinfo->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
+				    RTE_ETH_TX_OFFLOAD_UDP_CKSUM		|
+				    RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
+				    RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM);
 
 	devinfo->rx_desc_lim = lio_rx_desc_lim;
 	devinfo->tx_desc_lim = lio_tx_desc_lim;
 
 	devinfo->reta_size = LIO_RSS_MAX_TABLE_SZ;
 	devinfo->hash_key_size = LIO_RSS_MAX_KEY_SZ;
-	devinfo->flow_type_rss_offloads = (ETH_RSS_IPV4			|
-					   ETH_RSS_NONFRAG_IPV4_TCP	|
-					   ETH_RSS_IPV6			|
-					   ETH_RSS_NONFRAG_IPV6_TCP	|
-					   ETH_RSS_IPV6_EX		|
-					   ETH_RSS_IPV6_TCP_EX);
+	devinfo->flow_type_rss_offloads = (RTE_ETH_RSS_IPV4			|
+					   RTE_ETH_RSS_NONFRAG_IPV4_TCP	|
+					   RTE_ETH_RSS_IPV6			|
+					   RTE_ETH_RSS_NONFRAG_IPV6_TCP	|
+					   RTE_ETH_RSS_IPV6_EX		|
+					   RTE_ETH_RSS_IPV6_TCP_EX);
 	return 0;
 }
 
@@ -519,10 +519,10 @@ lio_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
 	rss_param->param.flags &= ~LIO_RSS_PARAM_ITABLE_UNCHANGED;
 	rss_param->param.itablesize = LIO_RSS_MAX_TABLE_SZ;
 
-	for (i = 0; i < (reta_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask) & ((uint64_t)1 << j)) {
-				index = (i * RTE_RETA_GROUP_SIZE) + j;
+				index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
 				rss_state->itable[index] = reta_conf[i].reta[j];
 			}
 		}
@@ -562,12 +562,12 @@ lio_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	num = reta_size / RTE_RETA_GROUP_SIZE;
+	num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < num; i++) {
 		memcpy(reta_conf->reta,
-		       &rss_state->itable[i * RTE_RETA_GROUP_SIZE],
-		       RTE_RETA_GROUP_SIZE);
+		       &rss_state->itable[i * RTE_ETH_RETA_GROUP_SIZE],
+		       RTE_ETH_RETA_GROUP_SIZE);
 		reta_conf++;
 	}
 
@@ -595,17 +595,17 @@ lio_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
 		memcpy(hash_key, rss_state->hash_key, rss_state->hash_key_size);
 
 	if (rss_state->ip)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (rss_state->tcp_hash)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (rss_state->ipv6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (rss_state->ipv6_tcp_hash)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (rss_state->ipv6_ex)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (rss_state->ipv6_tcp_ex_hash)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 
 	rss_conf->rss_hf = rss_hf;
 
@@ -673,42 +673,42 @@ lio_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
 		if (rss_state->hash_disable)
 			return -EINVAL;
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
 			hashinfo |= LIO_RSS_HASH_IPV4;
 			rss_state->ip = 1;
 		} else {
 			rss_state->ip = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 			hashinfo |= LIO_RSS_HASH_TCP_IPV4;
 			rss_state->tcp_hash = 1;
 		} else {
 			rss_state->tcp_hash = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV6) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6) {
 			hashinfo |= LIO_RSS_HASH_IPV6;
 			rss_state->ipv6 = 1;
 		} else {
 			rss_state->ipv6 = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 			hashinfo |= LIO_RSS_HASH_TCP_IPV6;
 			rss_state->ipv6_tcp_hash = 1;
 		} else {
 			rss_state->ipv6_tcp_hash = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV6_EX) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX) {
 			hashinfo |= LIO_RSS_HASH_IPV6_EX;
 			rss_state->ipv6_ex = 1;
 		} else {
 			rss_state->ipv6_ex = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) {
 			hashinfo |= LIO_RSS_HASH_TCP_IPV6_EX;
 			rss_state->ipv6_tcp_ex_hash = 1;
 		} else {
@@ -757,7 +757,7 @@ lio_dev_udp_tunnel_add(struct rte_eth_dev *eth_dev,
 	if (udp_tnl == NULL)
 		return -EINVAL;
 
-	if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+	if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
 		lio_dev_err(lio_dev, "Unsupported tunnel type\n");
 		return -1;
 	}
@@ -814,7 +814,7 @@ lio_dev_udp_tunnel_del(struct rte_eth_dev *eth_dev,
 	if (udp_tnl == NULL)
 		return -EINVAL;
 
-	if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+	if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
 		lio_dev_err(lio_dev, "Unsupported tunnel type\n");
 		return -1;
 	}
@@ -912,10 +912,10 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
 
 	/* Initialize */
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	/* Return what we found */
 	if (lio_dev->linfo.link.s.link_up == 0) {
@@ -923,18 +923,18 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
 		return rte_eth_linkstatus_set(eth_dev, &link);
 	}
 
-	link.link_status = ETH_LINK_UP; /* Interface is up */
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP; /* Interface is up */
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	switch (lio_dev->linfo.link.s.speed) {
 	case LIO_LINK_SPEED_10000:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case LIO_LINK_SPEED_25000:
-		link.link_speed = ETH_SPEED_NUM_25G;
+		link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	default:
-		link.link_speed = ETH_SPEED_NUM_NONE;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	}
 
 	return rte_eth_linkstatus_set(eth_dev, &link);
@@ -1086,8 +1086,8 @@ lio_dev_rss_configure(struct rte_eth_dev *eth_dev)
 
 		q_idx = (uint8_t)((eth_dev->data->nb_rx_queues > 1) ?
 				  i % eth_dev->data->nb_rx_queues : 0);
-		conf_idx = i / RTE_RETA_GROUP_SIZE;
-		reta_idx = i % RTE_RETA_GROUP_SIZE;
+		conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
 		reta_conf[conf_idx].reta[reta_idx] = q_idx;
 		reta_conf[conf_idx].mask |= ((uint64_t)1 << reta_idx);
 	}
@@ -1103,10 +1103,10 @@ lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rss_conf rss_conf;
 
 	switch (eth_dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		lio_dev_rss_configure(eth_dev);
 		break;
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 	/* if mq_mode is none, disable rss mode. */
 	default:
 		memset(&rss_conf, 0, sizeof(rss_conf));
@@ -1484,7 +1484,7 @@ lio_dev_set_link_up(struct rte_eth_dev *eth_dev)
 	}
 
 	lio_dev->linfo.link.s.link_up = 1;
-	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -1505,11 +1505,11 @@ lio_dev_set_link_down(struct rte_eth_dev *eth_dev)
 	}
 
 	lio_dev->linfo.link.s.link_up = 0;
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	if (lio_send_rx_ctrl_cmd(eth_dev, 0)) {
 		lio_dev->linfo.link.s.link_up = 1;
-		eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+		eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 		lio_dev_err(lio_dev, "Unable to set Link Down\n");
 		return -1;
 	}
@@ -1721,9 +1721,9 @@ lio_dev_configure(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Inform firmware about change in number of queues to use.
 	 * Disable IO queues and reset registers for re-configuration.
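
The liquidio RETA hunks above are the canonical consumer pattern for
RTE_ETH_RETA_GROUP_SIZE: entry i of the table lives in group
i / RTE_ETH_RETA_GROUP_SIZE, slot i % RTE_ETH_RETA_GROUP_SIZE. A sketch of
filling a redirection table round-robin with the renamed constant (assumes
reta_size is a multiple of the group size, as it is for these PMDs):

#include <string.h>
#include <rte_ethdev.h>

static int
spread_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_rxq)
{
	struct rte_eth_rss_reta_entry64 reta[reta_size / RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta, 0, sizeof(reta));
	for (i = 0; i < reta_size; i++) {
		uint16_t grp = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t slot = i % RTE_ETH_RETA_GROUP_SIZE;

		reta[grp].mask |= UINT64_C(1) << slot;
		reta[grp].reta[slot] = i % nb_rxq;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta, reta_size);
}
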
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index 364e818d65c1..8533e39f6957 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -525,7 +525,7 @@ memif_disconnect(struct rte_eth_dev *dev)
 	int i;
 	int ret;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
 	pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTED;
 
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 980150293e86..9deb7a5f1360 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -55,10 +55,10 @@ static const char * const valid_arguments[] = {
 };
 
 static const struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_AUTONEG
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_AUTONEG
 };
 
 #define MEMIF_MP_SEND_REGION		"memif_mp_send_region"
@@ -199,7 +199,7 @@ memif_dev_info(struct rte_eth_dev *dev __rte_unused, struct rte_eth_dev_info *de
 	dev_info->max_rx_queues = ETH_MEMIF_MAX_NUM_Q_PAIRS;
 	dev_info->max_tx_queues = ETH_MEMIF_MAX_NUM_Q_PAIRS;
 	dev_info->min_rx_bufsize = 0;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -1219,7 +1219,7 @@ memif_connect(struct rte_eth_dev *dev)
 
 		pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
 		pmd->flags |= ETH_MEMIF_FLAG_CONNECTED;
-		dev->data->dev_link.link_status = ETH_LINK_UP;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	}
 	MIF_LOG(INFO, "Connected.");
 	return 0;
@@ -1381,10 +1381,10 @@ memif_link_update(struct rte_eth_dev *dev,
 
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		proc_private = dev->process_private;
-		if (dev->data->dev_link.link_status == ETH_LINK_UP &&
+		if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP &&
 				proc_private->regions_num == 0) {
 			memif_mp_request_regions(dev);
-		} else if (dev->data->dev_link.link_status == ETH_LINK_DOWN &&
+		} else if (dev->data->dev_link.link_status == RTE_ETH_LINK_DOWN &&
 				proc_private->regions_num > 0) {
 			memif_free_regions(dev);
 		}
diff --git a/drivers/net/mlx4/mlx4_ethdev.c b/drivers/net/mlx4/mlx4_ethdev.c
index 783ff94dce8d..d606ec8ca76d 100644
--- a/drivers/net/mlx4/mlx4_ethdev.c
+++ b/drivers/net/mlx4/mlx4_ethdev.c
@@ -657,11 +657,11 @@ mlx4_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 	info->if_index = priv->if_index;
 	info->hash_key_size = MLX4_RSS_HASH_KEY_SIZE;
 	info->speed_capa =
-			ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_10G |
-			ETH_LINK_SPEED_20G |
-			ETH_LINK_SPEED_40G |
-			ETH_LINK_SPEED_56G;
+			RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_20G |
+			RTE_ETH_LINK_SPEED_40G |
+			RTE_ETH_LINK_SPEED_56G;
 	info->flow_type_rss_offloads = mlx4_conv_rss_types(priv, 0, 1);
 
 	return 0;
@@ -821,13 +821,13 @@ mlx4_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 	}
 	link_speed = ethtool_cmd_speed(&edata);
 	if (link_speed == -1)
-		dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	else
 		dev_link.link_speed = link_speed;
 	dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
-				ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+				RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
 	dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				  ETH_LINK_SPEED_FIXED);
+				  RTE_ETH_LINK_SPEED_FIXED);
 	dev->data->dev_link = dev_link;
 	return 0;
 }
@@ -863,13 +863,13 @@ mlx4_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 	fc_conf->autoneg = ethpause.autoneg;
 	if (ethpause.rx_pause && ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (ethpause.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	ret = 0;
 out:
 	MLX4_ASSERT(ret >= 0);
@@ -899,13 +899,13 @@ mlx4_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	ifr.ifr_data = (void *)&ethpause;
 	ethpause.autoneg = fc_conf->autoneg;
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_RX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
 		ethpause.rx_pause = 1;
 	else
 		ethpause.rx_pause = 0;
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_TX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
 		ethpause.tx_pause = 1;
 	else
 		ethpause.tx_pause = 0;
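
These flow-control hunks map the four RTE_ETH_FC_* modes onto ethtool's
rx_pause/tx_pause pair in both directions. From the API side only the enum
spelling changes; a sketch of requesting full pause (assuming the PMD supports
it):

#include <rte_ethdev.h>

static int
enable_full_pause(uint16_t port_id)
{
	struct rte_eth_fc_conf fc;
	int ret;

	ret = rte_eth_dev_flow_ctrl_get(port_id, &fc);
	if (ret != 0)
		return ret;
	fc.mode = RTE_ETH_FC_FULL;	/* was RTE_FC_FULL before the rename */
	return rte_eth_dev_flow_ctrl_set(port_id, &fc);
}
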
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 71ea91b3fb82..2e1b6c87e983 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -109,21 +109,21 @@ mlx4_conv_rss_types(struct mlx4_priv *priv, uint64_t types, int verbs_to_dpdk)
 	};
 	static const uint64_t dpdk[] = {
 		[INNER] = 0,
-		[IPV4] = ETH_RSS_IPV4,
-		[IPV4_1] = ETH_RSS_FRAG_IPV4,
-		[IPV4_2] = ETH_RSS_NONFRAG_IPV4_OTHER,
-		[IPV6] = ETH_RSS_IPV6,
-		[IPV6_1] = ETH_RSS_FRAG_IPV6,
-		[IPV6_2] = ETH_RSS_NONFRAG_IPV6_OTHER,
-		[IPV6_3] = ETH_RSS_IPV6_EX,
+		[IPV4] = RTE_ETH_RSS_IPV4,
+		[IPV4_1] = RTE_ETH_RSS_FRAG_IPV4,
+		[IPV4_2] = RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+		[IPV6] = RTE_ETH_RSS_IPV6,
+		[IPV6_1] = RTE_ETH_RSS_FRAG_IPV6,
+		[IPV6_2] = RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+		[IPV6_3] = RTE_ETH_RSS_IPV6_EX,
 		[TCP] = 0,
 		[UDP] = 0,
-		[IPV4_TCP] = ETH_RSS_NONFRAG_IPV4_TCP,
-		[IPV4_UDP] = ETH_RSS_NONFRAG_IPV4_UDP,
-		[IPV6_TCP] = ETH_RSS_NONFRAG_IPV6_TCP,
-		[IPV6_TCP_1] = ETH_RSS_IPV6_TCP_EX,
-		[IPV6_UDP] = ETH_RSS_NONFRAG_IPV6_UDP,
-		[IPV6_UDP_1] = ETH_RSS_IPV6_UDP_EX,
+		[IPV4_TCP] = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+		[IPV4_UDP] = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+		[IPV6_TCP] = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+		[IPV6_TCP_1] = RTE_ETH_RSS_IPV6_TCP_EX,
+		[IPV6_UDP] = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+		[IPV6_UDP_1] = RTE_ETH_RSS_IPV6_UDP_EX,
 	};
 	static const uint64_t verbs[RTE_DIM(dpdk)] = {
 		[INNER] = IBV_RX_HASH_INNER,
@@ -1283,7 +1283,7 @@ mlx4_flow_internal_next_vlan(struct mlx4_priv *priv, uint16_t vlan)
  * - MAC flow rules are generated from @p dev->data->mac_addrs
  *   (@p priv->mac array).
  * - An additional flow rule for Ethernet broadcasts is also generated.
- * - All these are per-VLAN if @p DEV_RX_OFFLOAD_VLAN_FILTER
+ * - All these are per-VLAN if @p RTE_ETH_RX_OFFLOAD_VLAN_FILTER
  *   is enabled and VLAN filters are configured.
  *
  * @param priv
@@ -1358,7 +1358,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 	struct rte_ether_addr *rule_mac = &eth_spec.dst;
 	rte_be16_t *rule_vlan =
 		(ETH_DEV(priv)->data->dev_conf.rxmode.offloads &
-		 DEV_RX_OFFLOAD_VLAN_FILTER) &&
+		 RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 		!ETH_DEV(priv)->data->promiscuous ?
 		&vlan_spec.tci :
 		NULL;
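
mlx4's dpdk[] table above is the DPDK-to-verbs translation of the hash types;
on the application side the same RTE_ETH_RSS_* bits are passed in rte_flow's
RSS action. A sketch (queue list and count are placeholders):

#include <rte_common.h>
#include <rte_flow.h>

static const uint16_t queues[] = { 0, 1, 2, 3 };

static const struct rte_flow_action_rss rss = {
	.func = RTE_ETH_HASH_FUNCTION_DEFAULT,
	.level = 0,				/* outer headers */
	.types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,	/* renamed from ETH_RSS_* */
	.queue_num = RTE_DIM(queues),
	.queue = queues,
};
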
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index d56009c41845..2aab0f60a7b5 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -118,7 +118,7 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
 static void
 mlx4_link_status_alarm(struct mlx4_priv *priv)
 {
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 
 	MLX4_ASSERT(priv->intr_alarm == 1);
@@ -183,7 +183,7 @@ mlx4_interrupt_handler(struct mlx4_priv *priv)
 	};
 	uint32_t caught[RTE_DIM(type)] = { 0 };
 	struct ibv_async_event event;
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 	unsigned int i;
 
@@ -280,7 +280,7 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
 int
 mlx4_intr_install(struct mlx4_priv *priv)
 {
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 	int rc;
 
@@ -386,7 +386,7 @@ mlx4_rx_intr_enable(struct rte_eth_dev *dev, uint16_t idx)
 int
 mlx4_rxq_intr_enable(struct mlx4_priv *priv)
 {
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 
 	if (intr_conf->rxq && mlx4_rx_intr_vec_enable(priv) < 0)
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index ee2d2b75e59a..781ee256df71 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -682,12 +682,12 @@ mlx4_rxq_detach(struct rxq *rxq)
 uint64_t
 mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
 {
-	uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
-			    DEV_RX_OFFLOAD_KEEP_CRC |
-			    DEV_RX_OFFLOAD_RSS_HASH;
+	uint64_t offloads = RTE_ETH_RX_OFFLOAD_SCATTER |
+			    RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+			    RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (priv->hw_csum)
-		offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 	return offloads;
 }
 
@@ -703,7 +703,7 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
 uint64_t
 mlx4_get_rx_port_offloads(struct mlx4_priv *priv)
 {
-	uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+	uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	(void)priv;
 	return offloads;
@@ -785,7 +785,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	}
 	/* By default, FCS (CRC) is stripped by hardware. */
 	crc_present = 0;
-	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		if (priv->hw_fcs_strip) {
 			crc_present = 1;
 		} else {
@@ -816,9 +816,9 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		.elts = elts,
 		/* Toggle Rx checksum offload if hardware supports it. */
 		.csum = priv->hw_csum &&
-			(offloads & DEV_RX_OFFLOAD_CHECKSUM),
+			(offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
 		.csum_l2tun = priv->hw_csum_l2tun &&
-			      (offloads & DEV_RX_OFFLOAD_CHECKSUM),
+			      (offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
 		.crc_present = crc_present,
 		.l2tun_offload = priv->hw_csum_l2tun,
 		.stats = {
@@ -832,7 +832,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
 	if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
 		;
-	} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+	} else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
 		uint32_t sges_n;
 
diff --git a/drivers/net/mlx4/mlx4_txq.c b/drivers/net/mlx4/mlx4_txq.c
index 7d8c4f2a2223..0db2e55befd3 100644
--- a/drivers/net/mlx4/mlx4_txq.c
+++ b/drivers/net/mlx4/mlx4_txq.c
@@ -273,20 +273,20 @@ mlx4_txq_fill_dv_obj_info(struct txq *txq, struct mlx4dv_obj *mlxdv)
 uint64_t
 mlx4_get_tx_port_offloads(struct mlx4_priv *priv)
 {
-	uint64_t offloads = DEV_TX_OFFLOAD_MULTI_SEGS;
+	uint64_t offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	if (priv->hw_csum) {
-		offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_UDP_CKSUM |
-			     DEV_TX_OFFLOAD_TCP_CKSUM);
+		offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
 	}
 	if (priv->tso)
-		offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+		offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	if (priv->hw_csum_l2tun) {
-		offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+		offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 		if (priv->tso)
-			offloads |= (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				     DEV_TX_OFFLOAD_GRE_TNL_TSO);
+			offloads |= (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
 	}
 	return offloads;
 }
@@ -394,12 +394,12 @@ mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		.elts_comp_cd_init =
 			RTE_MIN(MLX4_PMD_TX_PER_COMP_REQ, desc / 4),
 		.csum = priv->hw_csum &&
-			(offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-					   DEV_TX_OFFLOAD_UDP_CKSUM |
-					   DEV_TX_OFFLOAD_TCP_CKSUM)),
+			(offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+					   RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+					   RTE_ETH_TX_OFFLOAD_TCP_CKSUM)),
 		.csum_l2tun = priv->hw_csum_l2tun &&
 			      (offloads &
-			       DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM),
+			       RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM),
 		/* Enable Tx loopback for VF devices. */
 		.lb = !!priv->vf,
 		.bounce_buf = bounce_buf,
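
mlx4_get_tx_port_offloads() composes the port-level Tx capability mask from
the probed HW features; applications should request only offloads present in
that mask. A sketch of the usual intersection before rte_eth_dev_configure()
(helper name and desired set are illustrative):

#include <rte_ethdev.h>

static uint64_t
usable_tx_offloads(uint16_t port_id, uint64_t wanted)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return 0;
	/* e.g. wanted = RTE_ETH_TX_OFFLOAD_TCP_TSO |
	 *		RTE_ETH_TX_OFFLOAD_MULTI_SEGS */
	return wanted & dev_info.tx_offload_capa;
}
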
diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index f34133e2c641..79e27fe2d668 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -439,24 +439,24 @@ mlx5_link_update_unlocked_gset(struct rte_eth_dev *dev,
 	}
 	link_speed = ethtool_cmd_speed(&edata);
 	if (link_speed == -1)
-		dev_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		dev_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	else
 		dev_link.link_speed = link_speed;
 	priv->link_speed_capa = 0;
 	if (edata.supported & (SUPPORTED_1000baseT_Full |
 			       SUPPORTED_1000baseKX_Full))
-		priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (edata.supported & SUPPORTED_10000baseKR_Full)
-		priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (edata.supported & (SUPPORTED_40000baseKR4_Full |
 			       SUPPORTED_40000baseCR4_Full |
 			       SUPPORTED_40000baseSR4_Full |
 			       SUPPORTED_40000baseLR4_Full))
-		priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
-				ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+				RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
 	dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 	*link = dev_link;
 	return 0;
 }
@@ -545,45 +545,45 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
 		return ret;
 	}
 	dev_link.link_speed = (ecmd->speed == UINT32_MAX) ?
-				ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
+				RTE_ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
 	sc = ecmd->link_mode_masks[0] |
 		((uint64_t)ecmd->link_mode_masks[1] << 32);
 	priv->link_speed_capa = 0;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseT_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseKX_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKR_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseR_FEC_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseMLD2_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_20G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_20G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseCR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseSR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseLR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_56G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_56G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseCR_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseKR_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseSR_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_25G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_50G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_100G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
 
 	sc = ecmd->link_mode_masks[2] |
 		((uint64_t)ecmd->link_mode_masks[3] << 32);
@@ -591,11 +591,11 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
 		  MLX5_BITSHIFT
 		       (ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseDR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
 	dev_link.link_duplex = ((ecmd->duplex == DUPLEX_HALF) ?
-				ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+				RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
 	dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				  ETH_LINK_SPEED_FIXED);
+				  RTE_ETH_LINK_SPEED_FIXED);
 	*link = dev_link;
 	return 0;
 }
@@ -677,13 +677,13 @@ mlx5_dev_get_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 	fc_conf->autoneg = ethpause.autoneg;
 	if (ethpause.rx_pause && ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (ethpause.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	return 0;
 }
 
@@ -709,14 +709,14 @@ mlx5_dev_set_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	ifr.ifr_data = (void *)&ethpause;
 	ethpause.autoneg = fc_conf->autoneg;
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_RX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
 		ethpause.rx_pause = 1;
 	else
 		ethpause.rx_pause = 0;
 
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_TX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
 		ethpause.tx_pause = 1;
 	else
 		ethpause.tx_pause = 0;
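
The speed bits gathered here feed rte_eth_dev_info.speed_capa; an application
forcing a fixed speed uses the matching RTE_ETH_LINK_SPEED_* names in its
device configuration. A sketch (whether a fixed speed is honoured is
device-dependent):

#include <rte_ethdev.h>

static void
force_25g(struct rte_eth_conf *conf)
{
	/* Request 25G with autonegotiation disabled (renamed macros). */
	conf->link_speeds = RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_FIXED;
}
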
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index a823d26bebf9..d207ec053e07 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1350,8 +1350,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	 * Remove this check once DPDK supports larger/variable
 	 * indirection tables.
 	 */
-	if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
-		config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+	if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+		config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
 	DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
 		config->ind_table_max_size);
 	config->hw_vlan_strip = !!(sh->device_attr.raw_packet_caps &
@@ -1634,7 +1634,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	/*
 	 * If HW has bug working with tunnel packet decapsulation and
 	 * scatter FCS, and decapsulation is needed, clear the hw_fcs_strip
-	 * bit. Then DEV_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
+	 * bit. Then RTE_ETH_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
 	 */
 	if (config->hca_attr.scatter_fcs_w_decap_disable && config->decap_en)
 		config->hw_fcs_strip = 0;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index e28cc461b914..7727dfb4196c 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1488,10 +1488,10 @@ mlx5_udp_tunnel_port_add(struct rte_eth_dev *dev __rte_unused,
 			 struct rte_eth_udp_tunnel *udp_tunnel)
 {
 	MLX5_ASSERT(udp_tunnel != NULL);
-	if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN &&
+	if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN &&
 	    udp_tunnel->udp_port == 4789)
 		return 0;
-	if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN_GPE &&
+	if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN_GPE &&
 	    udp_tunnel->udp_port == 4790)
 		return 0;
 	return -ENOTSUP;
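
The mlx5 hook above only acknowledges the two fixed VXLAN ports; the generic
call an application makes is unchanged apart from the enum rename. A sketch:

#include <rte_ethdev.h>

static int
add_vxlan_port(uint16_t port_id, uint16_t udp_port)
{
	struct rte_eth_udp_tunnel tunnel = {
		.udp_port = udp_port,			/* e.g. 4789 */
		.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN,	/* was RTE_TUNNEL_TYPE_VXLAN */
	};

	return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
}
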
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index a15f86616d49..ea17a86f4955 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1217,7 +1217,7 @@ TAILQ_HEAD(mlx5_legacy_flow_meters, mlx5_legacy_flow_meter);
 struct mlx5_flow_rss_desc {
 	uint32_t level;
 	uint32_t queue_num; /**< Number of entries in @p queue. */
-	uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+	uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
 	uint64_t hash_fields; /* Verbs Hash fields. */
 	uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
 	uint32_t key_len; /**< RSS hash key len. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index fe86bb40d351..12ddf4c7ff28 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -90,11 +90,11 @@
 #define MLX5_VPMD_DESCS_PER_LOOP      4
 
 /* Mask of RSS on source only or destination only. */
-#define MLX5_RSS_SRC_DST_ONLY (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | \
-			       ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define MLX5_RSS_SRC_DST_ONLY (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY | \
+			       RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
 
 /* Supported RSS */
-#define MLX5_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP | \
+#define MLX5_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP | \
 			    MLX5_RSS_SRC_DST_ONLY))
 
 /* Timeout in seconds to get a valid link status. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 82e2284d9866..f2b78c3cc69e 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -91,7 +91,7 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 	}
 
 	if ((dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
+			RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
 			rte_mbuf_dyn_tx_timestamp_register(NULL, NULL) != 0) {
 		DRV_LOG(ERR, "port %u cannot register Tx timestamp field/flag",
 			dev->data->port_id);
@@ -225,8 +225,8 @@ mlx5_set_default_params(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 	info->default_txportconf.ring_size = 256;
 	info->default_rxportconf.burst_size = MLX5_RX_DEFAULT_BURST;
 	info->default_txportconf.burst_size = MLX5_TX_DEFAULT_BURST;
-	if ((priv->link_speed_capa & ETH_LINK_SPEED_200G) |
-		(priv->link_speed_capa & ETH_LINK_SPEED_100G)) {
+	if ((priv->link_speed_capa & RTE_ETH_LINK_SPEED_200G) |
+		(priv->link_speed_capa & RTE_ETH_LINK_SPEED_100G)) {
 		info->default_rxportconf.nb_queues = 16;
 		info->default_txportconf.nb_queues = 16;
 		if (dev->data->nb_rx_queues > 2 ||
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index c914a7120cca..5dc0400e8bdc 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -98,7 +98,7 @@ struct mlx5_flow_expand_node {
 	uint64_t rss_types;
 	/**<
 	 * RSS types bit-field associated with this node
-	 * (see ETH_RSS_* definitions).
+	 * (see RTE_ETH_RSS_* definitions).
 	 */
 	uint64_t node_flags;
 	/**<
@@ -292,7 +292,7 @@ mlx5_flow_expand_rss_skip_explicit(const struct mlx5_flow_expand_node graph[],
  * @param[in] pattern
  *   User flow pattern.
  * @param[in] types
- *   RSS types to expand (see ETH_RSS_* definitions).
+ *   RSS types to expand (see RTE_ETH_RSS_* definitions).
  * @param[in] graph
  *   Input graph to expand @p pattern according to @p types.
  * @param[in] graph_root_index
@@ -546,8 +546,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 			 MLX5_EXPANSION_IPV4,
 			 MLX5_EXPANSION_IPV6),
 		.type = RTE_FLOW_ITEM_TYPE_IPV4,
-		.rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			ETH_RSS_NONFRAG_IPV4_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	},
 	[MLX5_EXPANSION_OUTER_IPV4_UDP] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -555,11 +555,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 						  MLX5_EXPANSION_MPLS,
 						  MLX5_EXPANSION_GTP),
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 	},
 	[MLX5_EXPANSION_OUTER_IPV4_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 	},
 	[MLX5_EXPANSION_OUTER_IPV6] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT
@@ -570,8 +570,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 			 MLX5_EXPANSION_GRE,
 			 MLX5_EXPANSION_NVGRE),
 		.type = RTE_FLOW_ITEM_TYPE_IPV6,
-		.rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-			ETH_RSS_NONFRAG_IPV6_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	},
 	[MLX5_EXPANSION_OUTER_IPV6_UDP] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -579,11 +579,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 						  MLX5_EXPANSION_MPLS,
 						  MLX5_EXPANSION_GTP),
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 	},
 	[MLX5_EXPANSION_OUTER_IPV6_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	},
 	[MLX5_EXPANSION_VXLAN] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_ETH,
@@ -636,32 +636,32 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4_UDP,
 						  MLX5_EXPANSION_IPV4_TCP),
 		.type = RTE_FLOW_ITEM_TYPE_IPV4,
-		.rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			ETH_RSS_NONFRAG_IPV4_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	},
 	[MLX5_EXPANSION_IPV4_UDP] = {
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 	},
 	[MLX5_EXPANSION_IPV4_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 	},
 	[MLX5_EXPANSION_IPV6] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV6_UDP,
 						  MLX5_EXPANSION_IPV6_TCP,
 						  MLX5_EXPANSION_IPV6_FRAG_EXT),
 		.type = RTE_FLOW_ITEM_TYPE_IPV6,
-		.rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-			ETH_RSS_NONFRAG_IPV6_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	},
 	[MLX5_EXPANSION_IPV6_UDP] = {
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 	},
 	[MLX5_EXPANSION_IPV6_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	},
 	[MLX5_EXPANSION_IPV6_FRAG_EXT] = {
 		.type = RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
@@ -1072,7 +1072,7 @@ mlx5_flow_item_acceptable(const struct rte_flow_item *item,
  * @param[in] tunnel
  *   1 when the hash field is for a tunnel item.
  * @param[in] layer_types
- *   ETH_RSS_* types.
+ *   RTE_ETH_RSS_* types.
  * @param[in] hash_fields
  *   Item hash fields.
  *
@@ -1625,14 +1625,14 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev,
 					  &rss->types,
 					  "some RSS protocols are not"
 					  " supported");
-	if ((rss->types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) &&
-	    !(rss->types & ETH_RSS_IP))
+	if ((rss->types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) &&
+	    !(rss->types & RTE_ETH_RSS_IP))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
 					  "L3 partial RSS requested but L3 RSS"
 					  " type not specified");
-	if ((rss->types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) &&
-	    !(rss->types & (ETH_RSS_UDP | ETH_RSS_TCP)))
+	if ((rss->types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) &&
+	    !(rss->types & (RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP)))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
 					  "L4 partial RSS requested but L4 RSS"
@@ -6388,8 +6388,8 @@ flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 		 * mlx5_flow_hashfields_adjust() in advance.
 		 */
 		rss_desc->level = rss->level;
-		/* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
-		rss_desc->types = !rss->types ? ETH_RSS_IP : rss->types;
+		/* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+		rss_desc->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
 	}
 	flow->dev_handles = 0;
 	if (rss && rss->types) {
@@ -7013,7 +7013,7 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev,
 	if (!priv->reta_idx_n || !priv->rxqs_n) {
 		return 0;
 	}
-	if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+	if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 		action_rss.types = 0;
 	for (i = 0; i != priv->reta_idx_n; ++i)
 		queue[i] = (*priv->reta_idx)[i];
@@ -8681,7 +8681,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
 				(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 				NULL, "invalid port configuration");
-		if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+		if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 			ctx->action_rss.types = 0;
 		for (i = 0; i != priv->reta_idx_n; ++i)
 			ctx->queue[i] = (*priv->reta_idx)[i];
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 5c68d4f7d742..ff85c1c013a5 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -328,18 +328,18 @@ enum mlx5_feature_name {
 
 /* Valid layer type for IPV4 RSS. */
 #define MLX5_IPV4_LAYER_TYPES \
-	(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
-	 ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
-	 ETH_RSS_NONFRAG_IPV4_OTHER)
+	(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+	 RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	 RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
 
 /* IBV hash source bits  for IPV4. */
 #define MLX5_IPV4_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV4 | IBV_RX_HASH_DST_IPV4)
 
 /* Valid layer type for IPV6 RSS. */
 #define MLX5_IPV6_LAYER_TYPES \
-	(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP | \
-	 ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_EX  | ETH_RSS_IPV6_TCP_EX | \
-	 ETH_RSS_IPV6_UDP_EX | ETH_RSS_NONFRAG_IPV6_OTHER)
+	(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	 RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_EX  | RTE_ETH_RSS_IPV6_TCP_EX | \
+	 RTE_ETH_RSS_IPV6_UDP_EX | RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
 
 /* IBV hash source bits  for IPV6. */
 #define MLX5_IPV6_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV6 | IBV_RX_HASH_DST_IPV6)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index e31d4d846825..759fe57f19d6 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10837,9 +10837,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 	if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV4)) ||
 	    (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV4))) {
 		if (rss_types & MLX5_IPV4_LAYER_TYPES) {
-			if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV4;
-			else if (rss_types & ETH_RSS_L3_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV4;
 			else
 				dev_flow->hash_fields |= MLX5_IPV4_IBV_RX_HASH;
@@ -10847,9 +10847,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 	} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV6)) ||
 		   (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV6))) {
 		if (rss_types & MLX5_IPV6_LAYER_TYPES) {
-			if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV6;
-			else if (rss_types & ETH_RSS_L3_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV6;
 			else
 				dev_flow->hash_fields |= MLX5_IPV6_IBV_RX_HASH;
@@ -10863,11 +10863,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 		return;
 	if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_UDP)) ||
 	    (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_UDP))) {
-		if (rss_types & ETH_RSS_UDP) {
-			if (rss_types & ETH_RSS_L4_SRC_ONLY)
+		if (rss_types & RTE_ETH_RSS_UDP) {
+			if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_SRC_PORT_UDP;
-			else if (rss_types & ETH_RSS_L4_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_DST_PORT_UDP;
 			else
@@ -10875,11 +10875,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 		}
 	} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_TCP)) ||
 		   (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_TCP))) {
-		if (rss_types & ETH_RSS_TCP) {
-			if (rss_types & ETH_RSS_L4_SRC_ONLY)
+		if (rss_types & RTE_ETH_RSS_TCP) {
+			if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_SRC_PORT_TCP;
-			else if (rss_types & ETH_RSS_L4_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_DST_PORT_TCP;
 			else
@@ -14418,9 +14418,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV4:
 		if (rss_types & MLX5_IPV4_LAYER_TYPES) {
 			*hash_field &= ~MLX5_RSS_HASH_IPV4;
-			if (rss_types & ETH_RSS_L3_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_IPV4;
-			else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_IPV4;
 			else
 				*hash_field |= MLX5_RSS_HASH_IPV4;
@@ -14429,9 +14429,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV6:
 		if (rss_types & MLX5_IPV6_LAYER_TYPES) {
 			*hash_field &= ~MLX5_RSS_HASH_IPV6;
-			if (rss_types & ETH_RSS_L3_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_IPV6;
-			else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_IPV6;
 			else
 				*hash_field |= MLX5_RSS_HASH_IPV6;
@@ -14440,11 +14440,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV4_UDP:
 		/* fall-through. */
 	case MLX5_RSS_HASH_IPV6_UDP:
-		if (rss_types & ETH_RSS_UDP) {
+		if (rss_types & RTE_ETH_RSS_UDP) {
 			*hash_field &= ~MLX5_UDP_IBV_RX_HASH;
-			if (rss_types & ETH_RSS_L4_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_PORT_UDP;
-			else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_PORT_UDP;
 			else
 				*hash_field |= MLX5_UDP_IBV_RX_HASH;
@@ -14453,11 +14453,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV4_TCP:
 		/* fall-through. */
 	case MLX5_RSS_HASH_IPV6_TCP:
-		if (rss_types & ETH_RSS_TCP) {
+		if (rss_types & RTE_ETH_RSS_TCP) {
 			*hash_field &= ~MLX5_TCP_IBV_RX_HASH;
-			if (rss_types & ETH_RSS_L4_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_PORT_TCP;
-			else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_PORT_TCP;
 			else
 				*hash_field |= MLX5_TCP_IBV_RX_HASH;
@@ -14605,8 +14605,8 @@ __flow_dv_action_rss_create(struct rte_eth_dev *dev,
 	origin = &shared_rss->origin;
 	origin->func = rss->func;
 	origin->level = rss->level;
-	/* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
-	origin->types = !rss->types ? ETH_RSS_IP : rss->types;
+	/* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+	origin->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
 	/* NULL RSS key indicates default RSS key. */
 	rss_key = !rss->key ? rss_hash_default_key : rss->key;
 	memcpy(shared_rss->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 1627c3905fa4..8a455cbf22f4 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1816,7 +1816,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
 			if (dev_flow->hash_fields != 0)
 				dev_flow->hash_fields |=
 					mlx5_flow_hashfields_adjust
-					(rss_desc, tunnel, ETH_RSS_TCP,
+					(rss_desc, tunnel, RTE_ETH_RSS_TCP,
 					 (IBV_RX_HASH_SRC_PORT_TCP |
 					  IBV_RX_HASH_DST_PORT_TCP));
 			item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
@@ -1829,7 +1829,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
 			if (dev_flow->hash_fields != 0)
 				dev_flow->hash_fields |=
 					mlx5_flow_hashfields_adjust
-					(rss_desc, tunnel, ETH_RSS_UDP,
+					(rss_desc, tunnel, RTE_ETH_RSS_UDP,
 					 (IBV_RX_HASH_SRC_PORT_UDP |
 					  IBV_RX_HASH_DST_PORT_UDP));
 			item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
index c32129cdc2b8..a4f690039e24 100644
--- a/drivers/net/mlx5/mlx5_rss.c
+++ b/drivers/net/mlx5/mlx5_rss.c
@@ -68,7 +68,7 @@ mlx5_rss_hash_update(struct rte_eth_dev *dev,
 		if (!(*priv->rxqs)[i])
 			continue;
 		(*priv->rxqs)[i]->rss_hash = !!rss_conf->rss_hf &&
-			!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS);
+			!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS);
 		++idx;
 	}
 	return 0;
@@ -170,8 +170,8 @@ mlx5_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 	/* Fill each entry of the table even if its bit is not set. */
 	for (idx = 0, i = 0; (i != reta_size); ++i) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		reta_conf[idx].reta[i % RTE_RETA_GROUP_SIZE] =
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
 			(*priv->reta_idx)[i];
 	}
 	return 0;
@@ -209,8 +209,8 @@ mlx5_dev_rss_reta_update(struct rte_eth_dev *dev,
 	if (ret)
 		return ret;
 	for (idx = 0, i = 0; (i != reta_size); ++i) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		pos = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (((reta_conf[idx].mask >> i) & 0x1) == 0)
 			continue;
 		MLX5_ASSERT(reta_conf[idx].reta[pos] < priv->rxqs_n);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index d8d7e481dea0..eb4dc3375248 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -333,22 +333,22 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_dev_config *config = &priv->config;
-	uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
-			     DEV_RX_OFFLOAD_TIMESTAMP |
-			     DEV_RX_OFFLOAD_RSS_HASH);
+	uint64_t offloads = (RTE_ETH_RX_OFFLOAD_SCATTER |
+			     RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+			     RTE_ETH_RX_OFFLOAD_RSS_HASH);
 
 	if (!config->mprq.enabled)
 		offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
 	if (config->hw_fcs_strip)
-		offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	if (config->hw_csum)
-		offloads |= (DEV_RX_OFFLOAD_IPV4_CKSUM |
-			     DEV_RX_OFFLOAD_UDP_CKSUM |
-			     DEV_RX_OFFLOAD_TCP_CKSUM);
+		offloads |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			     RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
 	if (config->hw_vlan_strip)
-		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	if (MLX5_LRO_SUPPORTED(dev))
-		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+		offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 	return offloads;
 }
 
@@ -362,7 +362,7 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
 uint64_t
 mlx5_get_rx_port_offloads(void)
 {
-	uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+	uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	return offloads;
 }
@@ -694,7 +694,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 				    dev->data->dev_conf.rxmode.offloads;
 
 		/* The offloads should be checked on rte_eth_dev layer. */
-		MLX5_ASSERT(offloads & DEV_RX_OFFLOAD_SCATTER);
+		MLX5_ASSERT(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
 		if (!(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
 			DRV_LOG(ERR, "port %u queue index %u split "
 				     "offload not configured",
@@ -1325,7 +1325,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	struct mlx5_dev_config *config = &priv->config;
 	uint64_t offloads = conf->offloads |
 			   dev->data->dev_conf.rxmode.offloads;
-	unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
+	unsigned int lro_on_queue = !!(offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO);
 	unsigned int max_rx_pktlen = lro_on_queue ?
 			dev->data->dev_conf.rxmode.max_lro_pkt_size :
 			dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
@@ -1428,7 +1428,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	} while (tail_len || !rte_is_power_of_2(tmpl->rxq.rxseg_n));
 	MLX5_ASSERT(tmpl->rxq.rxseg_n &&
 		    tmpl->rxq.rxseg_n <= MLX5_MAX_RXQ_NSEG);
-	if (tmpl->rxq.rxseg_n > 1 && !(offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	if (tmpl->rxq.rxseg_n > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
 			" configured and no enough mbuf space(%u) to contain "
 			"the maximum RX packet length(%u) with head-room(%u)",
@@ -1472,7 +1472,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 			config->mprq.stride_size_n : mprq_stride_size;
 		tmpl->rxq.strd_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT;
 		tmpl->rxq.strd_scatter_en =
-				!!(offloads & DEV_RX_OFFLOAD_SCATTER);
+				!!(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
 		tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
 				config->mprq.max_memcpy_len);
 		max_lro_size = RTE_MIN(max_rx_pktlen,
@@ -1487,7 +1487,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
 		tmpl->rxq.sges_n = 0;
 		max_lro_size = max_rx_pktlen;
-	} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+	} else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		unsigned int sges_n;
 
 		if (lro_on_queue && first_mb_free_size <
@@ -1548,9 +1548,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	}
 	mlx5_max_lro_msg_size_adjust(dev, idx, max_lro_size);
 	/* Toggle RX checksum offload if hardware supports it. */
-	tmpl->rxq.csum = !!(offloads & DEV_RX_OFFLOAD_CHECKSUM);
+	tmpl->rxq.csum = !!(offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM);
 	/* Configure Rx timestamp. */
-	tmpl->rxq.hw_timestamp = !!(offloads & DEV_RX_OFFLOAD_TIMESTAMP);
+	tmpl->rxq.hw_timestamp = !!(offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP);
 	tmpl->rxq.timestamp_rx_flag = 0;
 	if (tmpl->rxq.hw_timestamp && rte_mbuf_dyn_rx_timestamp_register(
 			&tmpl->rxq.timestamp_offset,
@@ -1559,11 +1559,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		goto error;
 	}
 	/* Configure VLAN stripping. */
-	tmpl->rxq.vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	tmpl->rxq.vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 	/* By default, FCS (CRC) is stripped by hardware. */
 	tmpl->rxq.crc_present = 0;
 	tmpl->rxq.lro = lro_on_queue;
-	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		if (config->hw_fcs_strip) {
 			/*
 			 * RQs used for LRO-enabled TIRs should not be
@@ -1593,7 +1593,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		tmpl->rxq.crc_present << 2);
 	/* Save port ID. */
 	tmpl->rxq.rss_hash = !!priv->rss_conf.rss_hf &&
-		(!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS));
+		(!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS));
 	tmpl->rxq.port_id = dev->data->port_id;
 	tmpl->priv = priv;
 	tmpl->rxq.mp = rx_seg[0].mp;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.h b/drivers/net/mlx5/mlx5_rxtx_vec.h
index 93b4f517bb3e..65d91bdf67e2 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.h
@@ -16,10 +16,10 @@
 
 /* HW checksum offload capabilities of vectorized Tx. */
 #define MLX5_VEC_TX_CKSUM_OFFLOAD_CAP \
-	(DEV_TX_OFFLOAD_IPV4_CKSUM | \
-	 DEV_TX_OFFLOAD_UDP_CKSUM | \
-	 DEV_TX_OFFLOAD_TCP_CKSUM | \
-	 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+	(RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+	 RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+	 RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+	 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 
 /*
  * Compile time sanity check for vectorized functions.
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index df671379e46d..12aeba60348a 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -523,36 +523,36 @@ mlx5_select_tx_function(struct rte_eth_dev *dev)
 	unsigned int diff = 0, olx = 0, i, m;
 
 	MLX5_ASSERT(priv);
-	if (tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
 		/* We should support Multi-Segment Packets. */
 		olx |= MLX5_TXOFF_CONFIG_MULTI;
 	}
-	if (tx_offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-			   DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			   DEV_TX_OFFLOAD_GRE_TNL_TSO |
-			   DEV_TX_OFFLOAD_IP_TNL_TSO |
-			   DEV_TX_OFFLOAD_UDP_TNL_TSO)) {
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+			   RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO)) {
 		/* We should support TCP Send Offload. */
 		olx |= MLX5_TXOFF_CONFIG_TSO;
 	}
-	if (tx_offloads & (DEV_TX_OFFLOAD_IP_TNL_TSO |
-			   DEV_TX_OFFLOAD_UDP_TNL_TSO |
-			   DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 		/* We should support Software Parser for Tunnels. */
 		olx |= MLX5_TXOFF_CONFIG_SWP;
 	}
-	if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			   DEV_TX_OFFLOAD_UDP_CKSUM |
-			   DEV_TX_OFFLOAD_TCP_CKSUM |
-			   DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 		/* We should support IP/TCP/UDP Checksums. */
 		olx |= MLX5_TXOFF_CONFIG_CSUM;
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) {
 		/* We should support VLAN insertion. */
 		olx |= MLX5_TXOFF_CONFIG_VLAN;
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
 	    rte_mbuf_dynflag_lookup
 			(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL) >= 0 &&
 	    rte_mbuf_dynfield_lookup
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 1f92250f5edd..02bb9307ae61 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -98,42 +98,42 @@ uint64_t
 mlx5_get_tx_port_offloads(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	uint64_t offloads = (DEV_TX_OFFLOAD_MULTI_SEGS |
-			     DEV_TX_OFFLOAD_VLAN_INSERT);
+	uint64_t offloads = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+			     RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
 	struct mlx5_dev_config *config = &priv->config;
 
 	if (config->hw_csum)
-		offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_UDP_CKSUM |
-			     DEV_TX_OFFLOAD_TCP_CKSUM);
+		offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
 	if (config->tso)
-		offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+		offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	if (config->tx_pp)
-		offloads |= DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP;
+		offloads |= RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP;
 	if (config->swp) {
 		if (config->swp & MLX5_SW_PARSING_CSUM_CAP)
-			offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+			offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 		if (config->swp & MLX5_SW_PARSING_TSO_CAP)
-			offloads |= (DEV_TX_OFFLOAD_IP_TNL_TSO |
-				     DEV_TX_OFFLOAD_UDP_TNL_TSO);
+			offloads |= (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 	}
 	if (config->tunnel_en) {
 		if (config->hw_csum)
-			offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+			offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 		if (config->tso) {
 			if (config->tunnel_en &
 				MLX5_TUNNELED_OFFLOADS_VXLAN_CAP)
-				offloads |= DEV_TX_OFFLOAD_VXLAN_TNL_TSO;
+				offloads |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
 			if (config->tunnel_en &
 				MLX5_TUNNELED_OFFLOADS_GRE_CAP)
-				offloads |= DEV_TX_OFFLOAD_GRE_TNL_TSO;
+				offloads |= RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO;
 			if (config->tunnel_en &
 				MLX5_TUNNELED_OFFLOADS_GENEVE_CAP)
-				offloads |= DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+				offloads |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
 		}
 	}
 	if (!config->mprq.enabled)
-		offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	return offloads;
 }
 
@@ -801,17 +801,17 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 	unsigned int inlen_mode; /* Minimal required Inline data. */
 	unsigned int txqs_inline; /* Min Tx queues to enable inline. */
 	uint64_t dev_txoff = priv->dev_data->dev_conf.txmode.offloads;
-	bool tso = txq_ctrl->txq.offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-					    DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-					    DEV_TX_OFFLOAD_GRE_TNL_TSO |
-					    DEV_TX_OFFLOAD_IP_TNL_TSO |
-					    DEV_TX_OFFLOAD_UDP_TNL_TSO);
+	bool tso = txq_ctrl->txq.offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+					    RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+					    RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+					    RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+					    RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 	bool vlan_inline;
 	unsigned int temp;
 
 	txq_ctrl->txq.fast_free =
-		!!((txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
-		   !(txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MULTI_SEGS) &&
+		!!((txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
+		   !(txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) &&
 		   !config->mprq.enabled);
 	if (config->txqs_inline == MLX5_ARG_UNSET)
 		txqs_inline =
@@ -870,7 +870,7 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 	 * tx_burst routine.
 	 */
 	txq_ctrl->txq.vlan_en = config->hw_vlan_insert;
-	vlan_inline = (dev_txoff & DEV_TX_OFFLOAD_VLAN_INSERT) &&
+	vlan_inline = (dev_txoff & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) &&
 		      !config->hw_vlan_insert;
 	/*
 	 * If there are few Tx queues it is prioritized
@@ -978,19 +978,19 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 						    MLX5_MAX_TSO_HEADER);
 		txq_ctrl->txq.tso_en = 1;
 	}
-	if (((DEV_TX_OFFLOAD_VXLAN_TNL_TSO & txq_ctrl->txq.offloads) &&
+	if (((RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO & txq_ctrl->txq.offloads) &&
 	    (config->tunnel_en & MLX5_TUNNELED_OFFLOADS_VXLAN_CAP)) |
-	   ((DEV_TX_OFFLOAD_GRE_TNL_TSO & txq_ctrl->txq.offloads) &&
+	   ((RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO & txq_ctrl->txq.offloads) &&
 	    (config->tunnel_en & MLX5_TUNNELED_OFFLOADS_GRE_CAP)) |
-	   ((DEV_TX_OFFLOAD_GENEVE_TNL_TSO & txq_ctrl->txq.offloads) &&
+	   ((RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO & txq_ctrl->txq.offloads) &&
 	    (config->tunnel_en & MLX5_TUNNELED_OFFLOADS_GENEVE_CAP)) |
 	   (config->swp  & MLX5_SW_PARSING_TSO_CAP))
 		txq_ctrl->txq.tunnel_en = 1;
-	txq_ctrl->txq.swp_en = (((DEV_TX_OFFLOAD_IP_TNL_TSO |
-				  DEV_TX_OFFLOAD_UDP_TNL_TSO) &
+	txq_ctrl->txq.swp_en = (((RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO) &
 				  txq_ctrl->txq.offloads) && (config->swp &
 				  MLX5_SW_PARSING_TSO_CAP)) |
-				((DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM &
+				((RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM &
 				 txq_ctrl->txq.offloads) && (config->swp &
 				 MLX5_SW_PARSING_CSUM_CAP));
 }
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 60f97f2d2d1f..07792fc5d94f 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -142,9 +142,9 @@ mlx5_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct mlx5_priv *priv = dev->data->dev_private;
 	unsigned int i;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		int hw_vlan_strip = !!(dev->data->dev_conf.rxmode.offloads &
-				       DEV_RX_OFFLOAD_VLAN_STRIP);
+				       RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 		if (!priv->config.hw_vlan_strip) {
 			DRV_LOG(ERR, "port %u VLAN stripping is not supported",
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 8937ec0d3037..7f7b545ca63a 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -485,8 +485,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	 * Remove this check once DPDK supports larger/variable
 	 * indirection tables.
 	 */
-	if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
-		config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+	if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+		config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
 	DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
 		config->ind_table_max_size);
 	if (config->hw_padding) {
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index 2a0288087357..10fe6d828ccd 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -114,7 +114,7 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
 	struct mvneta_priv *priv = dev->data->dev_private;
 	struct neta_ppio_params *ppio_params;
 
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE) {
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) {
 		MVNETA_LOG(INFO, "Unsupported RSS and rx multi queue mode %d",
 			dev->data->dev_conf.rxmode.mq_mode);
 		if (dev->data->nb_rx_queues > 1)
@@ -126,7 +126,7 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		priv->multiseg = 1;
 
 	ppio_params = &priv->ppio_params;
@@ -151,10 +151,10 @@ static int
 mvneta_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
 		   struct rte_eth_dev_info *info)
 {
-	info->speed_capa = ETH_LINK_SPEED_10M |
-			   ETH_LINK_SPEED_100M |
-			   ETH_LINK_SPEED_1G |
-			   ETH_LINK_SPEED_2_5G;
+	info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			   RTE_ETH_LINK_SPEED_100M |
+			   RTE_ETH_LINK_SPEED_1G |
+			   RTE_ETH_LINK_SPEED_2_5G;
 
 	info->max_rx_queues = MRVL_NETA_RXQ_MAX;
 	info->max_tx_queues = MRVL_NETA_TXQ_MAX;
@@ -503,28 +503,28 @@ mvneta_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 
 	switch (ethtool_cmd_speed(&edata)) {
 	case SPEED_10:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case SPEED_100:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case SPEED_1000:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case SPEED_2500:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	default:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	}
 
-	dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
-							 ETH_LINK_HALF_DUPLEX;
-	dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
-							   ETH_LINK_FIXED;
+	dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+							 RTE_ETH_LINK_HALF_DUPLEX;
+	dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+							   RTE_ETH_LINK_FIXED;
 
 	neta_ppio_get_link_state(priv->ppio, &link_up);
-	dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index 6428f9ff7931..64aadcffd85a 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -54,14 +54,14 @@
 #define MRVL_NETA_MRU_TO_MTU(mru)	((mru) - MRVL_NETA_HDRS_LEN)
 
 /** Rx offloads capabilities */
-#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_CHECKSUM)
+#define MVNETA_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_CHECKSUM)
 
 /** Tx offloads capabilities */
-#define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				    DEV_TX_OFFLOAD_UDP_CKSUM  | \
-				    DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MVNETA_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+				    RTE_ETH_TX_OFFLOAD_UDP_CKSUM  | \
+				    RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 #define MVNETA_TX_OFFLOADS (MVNETA_TX_OFFLOAD_CHECKSUM | \
-			    DEV_TX_OFFLOAD_MULTI_SEGS)
+			    RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define MVNETA_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
 				PKT_TX_TCP_CKSUM | \
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index 9836bb071a82..62d8aa586dae 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -734,7 +734,7 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	rxq->priv = priv;
 	rxq->mp = mp;
 	rxq->cksum_enabled = dev->data->dev_conf.rxmode.offloads &
-			     DEV_RX_OFFLOAD_IPV4_CKSUM;
+			     RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 	rxq->queue_id = idx;
 	rxq->port_id = dev->data->port_id;
 	rxq->size = desc;
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index a6458d2ce9b5..d0746b0d1215 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -58,15 +58,15 @@
 #define MRVL_COOKIE_HIGH_ADDR_MASK 0xffffff0000000000
 
 /** Port Rx offload capabilities */
-#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
-			  DEV_RX_OFFLOAD_CHECKSUM)
+#define MRVL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+			  RTE_ETH_RX_OFFLOAD_CHECKSUM)
 
 /** Port Tx offloads capabilities */
-#define MRVL_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				  DEV_TX_OFFLOAD_UDP_CKSUM  | \
-				  DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MRVL_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM  | \
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 #define MRVL_TX_OFFLOADS (MRVL_TX_OFFLOAD_CHECKSUM | \
-			  DEV_TX_OFFLOAD_MULTI_SEGS)
+			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define MRVL_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
 			      PKT_TX_TCP_CKSUM | \
@@ -442,14 +442,14 @@ mrvl_configure_rss(struct mrvl_priv *priv, struct rte_eth_rss_conf *rss_conf)
 
 	if (rss_conf->rss_hf == 0) {
 		priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
-	} else if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+	} else if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
 		priv->ppio_params.inqs_params.hash_type =
 			PP2_PPIO_HASH_T_2_TUPLE;
-	} else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	} else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		priv->ppio_params.inqs_params.hash_type =
 			PP2_PPIO_HASH_T_5_TUPLE;
 		priv->rss_hf_tcp = 1;
-	} else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	} else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		priv->ppio_params.inqs_params.hash_type =
 			PP2_PPIO_HASH_T_5_TUPLE;
 		priv->rss_hf_tcp = 0;
@@ -483,8 +483,8 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE &&
-	    dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE &&
+	    dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		MRVL_LOG(INFO, "Unsupported rx multi queue mode %d",
 			dev->data->dev_conf.rxmode.mq_mode);
 		return -EINVAL;
@@ -502,7 +502,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		priv->multiseg = 1;
 
 	ret = mrvl_configure_rxqs(priv, dev->data->port_id,
@@ -524,7 +524,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		return ret;
 
 	if (dev->data->nb_rx_queues == 1 &&
-	    dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	    dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		MRVL_LOG(WARNING, "Disabling hash for 1 rx queue");
 		priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
 		priv->configured = 1;
@@ -623,7 +623,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
 	int ret;
 
 	if (!priv->ppio) {
-		dev->data->dev_link.link_status = ETH_LINK_UP;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 		return 0;
 	}
 
@@ -644,7 +644,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -664,14 +664,14 @@ mrvl_dev_set_link_down(struct rte_eth_dev *dev)
 	int ret;
 
 	if (!priv->ppio) {
-		dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 		return 0;
 	}
 	ret = pp2_ppio_disable(priv->ppio);
 	if (ret)
 		return ret;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
@@ -893,7 +893,7 @@ mrvl_dev_start(struct rte_eth_dev *dev)
 	if (dev->data->all_multicast == 1)
 		mrvl_allmulticast_enable(dev);
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		ret = mrvl_populate_vlan_table(dev, 1);
 		if (ret) {
 			MRVL_LOG(ERR, "Failed to populate VLAN table");
@@ -929,11 +929,11 @@ mrvl_dev_start(struct rte_eth_dev *dev)
 		priv->flow_ctrl = 0;
 	}
 
-	if (dev->data->dev_link.link_status == ETH_LINK_UP) {
+	if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
 		ret = mrvl_dev_set_link_up(dev);
 		if (ret) {
 			MRVL_LOG(ERR, "Failed to set link up");
-			dev->data->dev_link.link_status = ETH_LINK_DOWN;
+			dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 			goto out;
 		}
 	}
@@ -1202,30 +1202,30 @@ mrvl_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 
 	switch (ethtool_cmd_speed(&edata)) {
 	case SPEED_10:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case SPEED_100:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case SPEED_1000:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case SPEED_2500:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	case SPEED_10000:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	default:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	}
 
-	dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
-							 ETH_LINK_HALF_DUPLEX;
-	dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
-							   ETH_LINK_FIXED;
+	dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+							 RTE_ETH_LINK_HALF_DUPLEX;
+	dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+							   RTE_ETH_LINK_FIXED;
 	pp2_ppio_get_link_state(priv->ppio, &link_up);
-	dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -1709,11 +1709,11 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
 {
 	struct mrvl_priv *priv = dev->data->dev_private;
 
-	info->speed_capa = ETH_LINK_SPEED_10M |
-			   ETH_LINK_SPEED_100M |
-			   ETH_LINK_SPEED_1G |
-			   ETH_LINK_SPEED_2_5G |
-			   ETH_LINK_SPEED_10G;
+	info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			   RTE_ETH_LINK_SPEED_100M |
+			   RTE_ETH_LINK_SPEED_1G |
+			   RTE_ETH_LINK_SPEED_2_5G |
+			   RTE_ETH_LINK_SPEED_10G;
 
 	info->max_rx_queues = MRVL_PP2_RXQ_MAX;
 	info->max_tx_queues = MRVL_PP2_TXQ_MAX;
@@ -1733,9 +1733,9 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
 	info->tx_offload_capa = MRVL_TX_OFFLOADS;
 	info->tx_queue_offload_capa = MRVL_TX_OFFLOADS;
 
-	info->flow_type_rss_offloads = ETH_RSS_IPV4 |
-				       ETH_RSS_NONFRAG_IPV4_TCP |
-				       ETH_RSS_NONFRAG_IPV4_UDP;
+	info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+				       RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				       RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	/* By default packets are dropped if no descriptors are available */
 	info->default_rxconf.rx_drop_en = 1;
@@ -1864,13 +1864,13 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
 	int ret;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		MRVL_LOG(ERR, "VLAN stripping is not supported\n");
 		return -ENOTSUP;
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ret = mrvl_populate_vlan_table(dev, 1);
 		else
 			ret = mrvl_populate_vlan_table(dev, 0);
@@ -1879,7 +1879,7 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			return ret;
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
 		MRVL_LOG(ERR, "Extend VLAN not supported\n");
 		return -ENOTSUP;
 	}
@@ -2022,7 +2022,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 
 	rxq->priv = priv;
 	rxq->mp = mp;
-	rxq->cksum_enabled = offloads & DEV_RX_OFFLOAD_IPV4_CKSUM;
+	rxq->cksum_enabled = offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 	rxq->queue_id = idx;
 	rxq->port_id = dev->data->port_id;
 	mrvl_port_to_bpool_lookup[rxq->port_id] = priv->bpool;
@@ -2182,7 +2182,7 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		return ret;
 	}
 
-	fc_conf->mode = en ? RTE_FC_RX_PAUSE : RTE_FC_NONE;
+	fc_conf->mode = en ? RTE_ETH_FC_RX_PAUSE : RTE_ETH_FC_NONE;
 
 	ret = pp2_ppio_get_tx_pause(priv->ppio, &en);
 	if (ret) {
@@ -2191,10 +2191,10 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	if (en) {
-		if (fc_conf->mode == RTE_FC_NONE)
-			fc_conf->mode = RTE_FC_TX_PAUSE;
+		if (fc_conf->mode == RTE_ETH_FC_NONE)
+			fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		else
-			fc_conf->mode = RTE_FC_FULL;
+			fc_conf->mode = RTE_ETH_FC_FULL;
 	}
 
 	return 0;
@@ -2240,19 +2240,19 @@ mrvl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		rx_en = 1;
 		tx_en = 1;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		rx_en = 0;
 		tx_en = 1;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		rx_en = 1;
 		tx_en = 0;
 		break;
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		rx_en = 0;
 		tx_en = 0;
 		break;
@@ -2329,11 +2329,11 @@ mrvl_rss_hash_conf_get(struct rte_eth_dev *dev,
 	if (hash_type == PP2_PPIO_HASH_T_NONE)
 		rss_conf->rss_hf = 0;
 	else if (hash_type == PP2_PPIO_HASH_T_2_TUPLE)
-		rss_conf->rss_hf = ETH_RSS_IPV4;
+		rss_conf->rss_hf = RTE_ETH_RSS_IPV4;
 	else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && priv->rss_hf_tcp)
-		rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && !priv->rss_hf_tcp)
-		rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	return 0;
 }
@@ -3152,7 +3152,7 @@ mrvl_eth_dev_create(struct rte_vdev_device *vdev, const char *name)
 	eth_dev->dev_ops = &mrvl_ops;
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	rte_eth_dev_probing_finish(eth_dev);
 	return 0;
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9e2a40597349..9c4ae80e7e16 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -40,16 +40,16 @@
 #include "hn_nvs.h"
 #include "ndis.h"
 
-#define HN_TX_OFFLOAD_CAPS (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-			    DEV_TX_OFFLOAD_TCP_CKSUM  | \
-			    DEV_TX_OFFLOAD_UDP_CKSUM  | \
-			    DEV_TX_OFFLOAD_TCP_TSO    | \
-			    DEV_TX_OFFLOAD_MULTI_SEGS | \
-			    DEV_TX_OFFLOAD_VLAN_INSERT)
+#define HN_TX_OFFLOAD_CAPS (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+			    RTE_ETH_TX_OFFLOAD_TCP_CKSUM  | \
+			    RTE_ETH_TX_OFFLOAD_UDP_CKSUM  | \
+			    RTE_ETH_TX_OFFLOAD_TCP_TSO    | \
+			    RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+			    RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 
-#define HN_RX_OFFLOAD_CAPS (DEV_RX_OFFLOAD_CHECKSUM | \
-			    DEV_RX_OFFLOAD_VLAN_STRIP | \
-			    DEV_RX_OFFLOAD_RSS_HASH)
+#define HN_RX_OFFLOAD_CAPS (RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+			    RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+			    RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define NETVSC_ARG_LATENCY "latency"
 #define NETVSC_ARG_RXBREAK "rx_copybreak"
@@ -238,21 +238,21 @@ hn_dev_link_update(struct rte_eth_dev *dev,
 	hn_rndis_get_linkspeed(hv);
 
 	link = (struct rte_eth_link) {
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_autoneg = ETH_LINK_SPEED_FIXED,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_autoneg = RTE_ETH_LINK_SPEED_FIXED,
 		.link_speed = hv->link_speed / 10000,
 	};
 
 	if (hv->link_status == NDIS_MEDIA_STATE_CONNECTED)
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 	else
-		link.link_status = ETH_LINK_DOWN;
+		link.link_status = RTE_ETH_LINK_DOWN;
 
 	if (old.link_status == link.link_status)
 		return 0;
 
 	PMD_INIT_LOG(DEBUG, "Port %d is %s", dev->data->port_id,
-		     (link.link_status == ETH_LINK_UP) ? "up" : "down");
+		     (link.link_status == RTE_ETH_LINK_UP) ? "up" : "down");
 
 	return rte_eth_linkstatus_set(dev, &link);
 }
@@ -263,14 +263,14 @@ static int hn_dev_info_get(struct rte_eth_dev *dev,
 	struct hn_data *hv = dev->data->dev_private;
 	int rc;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
 	dev_info->min_rx_bufsize = HN_MIN_RX_BUF_SIZE;
 	dev_info->max_rx_pktlen  = HN_MAX_XFER_LEN;
 	dev_info->max_mac_addrs  = 1;
 
 	dev_info->hash_key_size = NDIS_HASH_KEYSIZE_TOEPLITZ;
 	dev_info->flow_type_rss_offloads = hv->rss_offloads;
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 
 	dev_info->max_rx_queues = hv->max_queues;
 	dev_info->max_tx_queues = hv->max_queues;
@@ -306,8 +306,8 @@ static int hn_rss_reta_update(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < NDIS_HASH_INDCNT; i++) {
-		uint16_t idx = i / RTE_RETA_GROUP_SIZE;
-		uint16_t shift = i % RTE_RETA_GROUP_SIZE;
+		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint64_t mask = (uint64_t)1 << shift;
 
 		if (reta_conf[idx].mask & mask)
@@ -346,8 +346,8 @@ static int hn_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < NDIS_HASH_INDCNT; i++) {
-		uint16_t idx = i / RTE_RETA_GROUP_SIZE;
-		uint16_t shift = i % RTE_RETA_GROUP_SIZE;
+		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint64_t mask = (uint64_t)1 << shift;
 
 		if (reta_conf[idx].mask & mask)
@@ -362,17 +362,17 @@ static void hn_rss_hash_init(struct hn_data *hv,
 	/* Convert from DPDK RSS hash flags to NDIS hash flags */
 	hv->rss_hash = NDIS_HASH_FUNCTION_TOEPLITZ;
 
-	if (rss_conf->rss_hf & ETH_RSS_IPV4)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
 		hv->rss_hash |= NDIS_HASH_IPV4;
-	if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		hv->rss_hash |= NDIS_HASH_TCP_IPV4;
-	if (rss_conf->rss_hf & ETH_RSS_IPV6)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
 		hv->rss_hash |=  NDIS_HASH_IPV6;
-	if (rss_conf->rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX)
 		hv->rss_hash |=  NDIS_HASH_IPV6_EX;
-	if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		hv->rss_hash |= NDIS_HASH_TCP_IPV6;
-	if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		hv->rss_hash |= NDIS_HASH_TCP_IPV6_EX;
 
 	memcpy(hv->rss_key, rss_conf->rss_key ? : rss_default_key,
@@ -427,22 +427,22 @@ static int hn_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	rss_conf->rss_hf = 0;
 	if (hv->rss_hash & NDIS_HASH_IPV4)
-		rss_conf->rss_hf |= ETH_RSS_IPV4;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
 
 	if (hv->rss_hash & NDIS_HASH_TCP_IPV4)
-		rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 
 	if (hv->rss_hash & NDIS_HASH_IPV6)
-		rss_conf->rss_hf |= ETH_RSS_IPV6;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
 
 	if (hv->rss_hash & NDIS_HASH_IPV6_EX)
-		rss_conf->rss_hf |= ETH_RSS_IPV6_EX;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_EX;
 
 	if (hv->rss_hash & NDIS_HASH_TCP_IPV6)
-		rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 
 	if (hv->rss_hash & NDIS_HASH_TCP_IPV6_EX)
-		rss_conf->rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 
 	return 0;
 }
@@ -686,8 +686,8 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev_conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev_conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	unsupported = txmode->offloads & ~HN_TX_OFFLOAD_CAPS;
 	if (unsupported) {
@@ -705,7 +705,7 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	hv->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	hv->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 	err = hn_rndis_conf_offload(hv, txmode->offloads,
 				    rxmode->offloads);
diff --git a/drivers/net/netvsc/hn_rndis.c b/drivers/net/netvsc/hn_rndis.c
index 62ba39636cd8..1b63b27e0c3e 100644
--- a/drivers/net/netvsc/hn_rndis.c
+++ b/drivers/net/netvsc/hn_rndis.c
@@ -710,15 +710,15 @@ hn_rndis_query_rsscaps(struct hn_data *hv,
 
 	hv->rss_offloads = 0;
 	if (caps.ndis_caps & NDIS_RSS_CAP_IPV4)
-		hv->rss_offloads |= ETH_RSS_IPV4
-			| ETH_RSS_NONFRAG_IPV4_TCP
-			| ETH_RSS_NONFRAG_IPV4_UDP;
+		hv->rss_offloads |= RTE_ETH_RSS_IPV4
+			| RTE_ETH_RSS_NONFRAG_IPV4_TCP
+			| RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (caps.ndis_caps & NDIS_RSS_CAP_IPV6)
-		hv->rss_offloads |= ETH_RSS_IPV6
-			| ETH_RSS_NONFRAG_IPV6_TCP;
+		hv->rss_offloads |= RTE_ETH_RSS_IPV6
+			| RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (caps.ndis_caps & NDIS_RSS_CAP_IPV6_EX)
-		hv->rss_offloads |= ETH_RSS_IPV6_EX
-			| ETH_RSS_IPV6_TCP_EX;
+		hv->rss_offloads |= RTE_ETH_RSS_IPV6_EX
+			| RTE_ETH_RSS_IPV6_TCP_EX;
 
 	/* Commit! */
 	*rxr_cnt0 = rxr_cnt;
@@ -800,7 +800,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 		params.ndis_hdr.ndis_size = NDIS_OFFLOAD_PARAMS_SIZE;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_TCP4)
 			params.ndis_tcp4csum = NDIS_OFFLOAD_PARAM_TX;
 		else
@@ -812,7 +812,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) {
 		if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4)
 		    == NDIS_RXCSUM_CAP_TCP4)
 			params.ndis_tcp4csum |= NDIS_OFFLOAD_PARAM_RX;
@@ -826,7 +826,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4)
 			params.ndis_udp4csum = NDIS_OFFLOAD_PARAM_TX;
 		else
@@ -839,7 +839,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (rx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+	if (rx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4)
 			params.ndis_udp4csum |= NDIS_OFFLOAD_PARAM_RX;
 		else
@@ -851,21 +851,21 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
 		if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_IP4)
 		    == NDIS_TXCSUM_CAP_IP4)
 			params.ndis_ip4csum = NDIS_OFFLOAD_PARAM_TX;
 		else
 			goto unsupported;
 	}
-	if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
 			params.ndis_ip4csum |= NDIS_OFFLOAD_PARAM_RX;
 		else
 			goto unsupported;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		if (hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023)
 			params.ndis_lsov2_ip4 = NDIS_OFFLOAD_LSOV2_ON;
 		else
@@ -907,41 +907,41 @@ int hn_rndis_get_offload(struct hn_data *hv,
 		return error;
 	}
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-				    DEV_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				    RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_IP4)
 	    == HN_NDIS_TXCSUM_CAP_IP4)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_TCP4)
 	    == HN_NDIS_TXCSUM_CAP_TCP4 &&
 	    (hwcaps.ndis_csum.ndis_ip6_txcsum & HN_NDIS_TXCSUM_CAP_TCP6)
 	    == HN_NDIS_TXCSUM_CAP_TCP6)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4) &&
 	    (hwcaps.ndis_csum.ndis_ip6_txcsum & NDIS_TXCSUM_CAP_UDP6))
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_UDP_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
 
 	if ((hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023) &&
 	    (hwcaps.ndis_lsov2.ndis_ip6_opts & HN_NDIS_LSOV2_CAP_IP6)
 	    == HN_NDIS_LSOV2_CAP_IP6)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
-				    DEV_RX_OFFLOAD_RSS_HASH;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				    RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4) &&
 	    (hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_TCP6))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4) &&
 	    (hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_UDP6))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_UDP_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
 
 	return 0;
 }
diff --git a/drivers/net/nfb/nfb_ethdev.c b/drivers/net/nfb/nfb_ethdev.c
index 99d93ebf4667..3c39937816a4 100644
--- a/drivers/net/nfb/nfb_ethdev.c
+++ b/drivers/net/nfb/nfb_ethdev.c
@@ -200,7 +200,7 @@ nfb_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_rx_pktlen = (uint32_t)-1;
 	dev_info->max_rx_queues = dev->data->nb_rx_queues;
 	dev_info->max_tx_queues = dev->data->nb_tx_queues;
-	dev_info->speed_capa = ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -268,26 +268,26 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
 
 	status.speed = MAC_SPEED_UNKNOWN;
 
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_status = ETH_LINK_DOWN;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_autoneg = ETH_LINK_SPEED_FIXED;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_SPEED_FIXED;
 
 	if (internals->rxmac[0] != NULL) {
 		nc_rxmac_read_status(internals->rxmac[0], &status);
 
 		switch (status.speed) {
 		case MAC_SPEED_10G:
-			link.link_speed = ETH_SPEED_NUM_10G;
+			link.link_speed = RTE_ETH_SPEED_NUM_10G;
 			break;
 		case MAC_SPEED_40G:
-			link.link_speed = ETH_SPEED_NUM_40G;
+			link.link_speed = RTE_ETH_SPEED_NUM_40G;
 			break;
 		case MAC_SPEED_100G:
-			link.link_speed = ETH_SPEED_NUM_100G;
+			link.link_speed = RTE_ETH_SPEED_NUM_100G;
 			break;
 		default:
-			link.link_speed = ETH_SPEED_NUM_NONE;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 			break;
 		}
 	}
@@ -296,7 +296,7 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
 		nc_rxmac_read_status(internals->rxmac[i], &status);
 
 		if (status.enabled && status.link_up) {
-			link.link_status = ETH_LINK_UP;
+			link.link_status = RTE_ETH_LINK_UP;
 			break;
 		}
 	}
diff --git a/drivers/net/nfb/nfb_rx.c b/drivers/net/nfb/nfb_rx.c
index 3ebb332ae46c..f76e2ba64621 100644
--- a/drivers/net/nfb/nfb_rx.c
+++ b/drivers/net/nfb/nfb_rx.c
@@ -42,7 +42,7 @@ nfb_check_timestamp(struct rte_devargs *devargs)
 	}
 	/* Timestamps are enabled when there is
 	 * key-value pair: enable_timestamp=1
-	 * TODO: timestamp should be enabled with DEV_RX_OFFLOAD_TIMESTAMP
+	 * TODO: timestamp should be enabled with RTE_ETH_RX_OFFLOAD_TIMESTAMP
 	 */
 	if (rte_kvargs_process(kvlist, TIMESTAMP_ARG,
 		timestamp_check_handler, NULL) < 0) {
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 0003fd54dde5..3ea697c54462 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -160,8 +160,8 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Checking TX mode */
 	if (txmode->mq_mode) {
@@ -170,7 +170,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	}
 
 	/* Checking RX mode */
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS &&
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS &&
 	    !(hw->cap & NFP_NET_CFG_CTRL_RSS)) {
 		PMD_INIT_LOG(INFO, "RSS not supported");
 		return -EINVAL;
@@ -359,19 +359,19 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
 		if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
 			ctrl |= NFP_NET_CFG_CTRL_RXCSUM;
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 		if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
 			ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
 	}
 
 	hw->mtu = dev->data->mtu;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
 
 	/* L2 broadcast */
@@ -383,13 +383,13 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 		ctrl |= NFP_NET_CFG_CTRL_L2MC;
 
 	/* TX checksum offload */
-	if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 		ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
 
 	/* LSO offload */
-	if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		if (hw->cap & NFP_NET_CFG_CTRL_LSO)
 			ctrl |= NFP_NET_CFG_CTRL_LSO;
 		else
@@ -397,7 +397,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	}
 
 	/* RX gather */
-	if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		ctrl |= NFP_NET_CFG_CTRL_GATHER;
 
 	return ctrl;
@@ -485,14 +485,14 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 	int ret;
 
 	static const uint32_t ls_to_ethtool[] = {
-		[NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = ETH_SPEED_NUM_NONE,
-		[NFP_NET_CFG_STS_LINK_RATE_UNKNOWN]     = ETH_SPEED_NUM_NONE,
-		[NFP_NET_CFG_STS_LINK_RATE_1G]          = ETH_SPEED_NUM_1G,
-		[NFP_NET_CFG_STS_LINK_RATE_10G]         = ETH_SPEED_NUM_10G,
-		[NFP_NET_CFG_STS_LINK_RATE_25G]         = ETH_SPEED_NUM_25G,
-		[NFP_NET_CFG_STS_LINK_RATE_40G]         = ETH_SPEED_NUM_40G,
-		[NFP_NET_CFG_STS_LINK_RATE_50G]         = ETH_SPEED_NUM_50G,
-		[NFP_NET_CFG_STS_LINK_RATE_100G]        = ETH_SPEED_NUM_100G,
+		[NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = RTE_ETH_SPEED_NUM_NONE,
+		[NFP_NET_CFG_STS_LINK_RATE_UNKNOWN]     = RTE_ETH_SPEED_NUM_NONE,
+		[NFP_NET_CFG_STS_LINK_RATE_1G]          = RTE_ETH_SPEED_NUM_1G,
+		[NFP_NET_CFG_STS_LINK_RATE_10G]         = RTE_ETH_SPEED_NUM_10G,
+		[NFP_NET_CFG_STS_LINK_RATE_25G]         = RTE_ETH_SPEED_NUM_25G,
+		[NFP_NET_CFG_STS_LINK_RATE_40G]         = RTE_ETH_SPEED_NUM_40G,
+		[NFP_NET_CFG_STS_LINK_RATE_50G]         = RTE_ETH_SPEED_NUM_50G,
+		[NFP_NET_CFG_STS_LINK_RATE_100G]        = RTE_ETH_SPEED_NUM_100G,
 	};
 
 	PMD_DRV_LOG(DEBUG, "Link update");
@@ -504,15 +504,15 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 	memset(&link, 0, sizeof(struct rte_eth_link));
 
 	if (nn_link_status & NFP_NET_CFG_STS_LINK)
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	nn_link_status = (nn_link_status >> NFP_NET_CFG_STS_LINK_RATE_SHIFT) &
 			 NFP_NET_CFG_STS_LINK_RATE_MASK;
 
 	if (nn_link_status >= RTE_DIM(ls_to_ethtool))
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	else
 		link.link_speed = ls_to_ethtool[nn_link_status];
 
@@ -701,26 +701,26 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = 1;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
-		dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+		dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM |
-					     DEV_RX_OFFLOAD_UDP_CKSUM |
-					     DEV_RX_OFFLOAD_TCP_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+					     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+					     RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)
-		dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_TXCSUM)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM |
-					     DEV_TX_OFFLOAD_UDP_CKSUM |
-					     DEV_TX_OFFLOAD_TCP_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+					     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+					     RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_LSO_ANY)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_GATHER)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -757,22 +757,22 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	};
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
-		dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
-						   ETH_RSS_NONFRAG_IPV4_TCP |
-						   ETH_RSS_NONFRAG_IPV4_UDP |
-						   ETH_RSS_IPV6 |
-						   ETH_RSS_NONFRAG_IPV6_TCP |
-						   ETH_RSS_NONFRAG_IPV6_UDP;
+		dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+						   RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+						   RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+						   RTE_ETH_RSS_IPV6 |
+						   RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+						   RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 		dev_info->reta_size = NFP_NET_CFG_RSS_ITBL_SZ;
 		dev_info->hash_key_size = NFP_NET_CFG_RSS_KEY_SZ;
 	}
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			       ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
-			       ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			       RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+			       RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -843,7 +843,7 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 	if (link.link_status)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 			    dev->data->port_id, link.link_speed,
-			    link.link_duplex == ETH_LINK_FULL_DUPLEX
+			    link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX
 			    ? "full-duplex" : "half-duplex");
 	else
 		PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -973,12 +973,12 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	new_ctrl = 0;
 
 	/* Enable vlan strip if it is not configured yet */
-	if ((mask & ETH_VLAN_STRIP_OFFLOAD) &&
+	if ((mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
 	    !(hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
 		new_ctrl = hw->ctrl | NFP_NET_CFG_CTRL_RXVLAN;
 
 	/* Disable vlan strip just if it is configured */
-	if (!(mask & ETH_VLAN_STRIP_OFFLOAD) &&
+	if (!(mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
 	    (hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
 		new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_RXVLAN;
 
@@ -1018,8 +1018,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 	 */
 	for (i = 0; i < reta_size; i += 4) {
 		/* Handling 4 RSS entries per loop */
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
 
 		if (!mask)
@@ -1099,8 +1099,8 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 	 */
 	for (i = 0; i < reta_size; i += 4) {
 		/* Handling 4 RSS entries per loop */
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
 
 		if (!mask)
@@ -1138,22 +1138,22 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 
 	rss_hf = rss_conf->rss_hf;
 
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_TCP;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_UDP;
 
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_TCP;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_UDP;
 
 	cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK;
@@ -1223,22 +1223,22 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 	cfg_rss_ctrl = nn_cfg_readl(hw, NFP_NET_CFG_RSS_CTRL);
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 	/* Propagate current RSS hash functions to caller */
 	rss_conf->rss_hf = rss_hf;
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 1169ea77a8c7..e08e594b04fe 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -141,7 +141,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 62cb3536e0c9..817fe64dbceb 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -103,7 +103,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3b5c6615adfa..fc76b84b5b66 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -409,7 +409,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 	dev->data->dev_link.link_status = link_up;
 
 	link_speeds = &dev->data->dev_conf.link_speeds;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG)
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG)
 		negotiate = true;
 
 	err = hw->mac.get_link_capabilities(hw, &speed, &negotiate);
@@ -418,11 +418,11 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 
 	allowed_speeds = 0;
 	if (hw->mac.default_speeds & NGBE_LINK_SPEED_1GB_FULL)
-		allowed_speeds |= ETH_LINK_SPEED_1G;
+		allowed_speeds |= RTE_ETH_LINK_SPEED_1G;
 	if (hw->mac.default_speeds & NGBE_LINK_SPEED_100M_FULL)
-		allowed_speeds |= ETH_LINK_SPEED_100M;
+		allowed_speeds |= RTE_ETH_LINK_SPEED_100M;
 	if (hw->mac.default_speeds & NGBE_LINK_SPEED_10M_FULL)
-		allowed_speeds |= ETH_LINK_SPEED_10M;
+		allowed_speeds |= RTE_ETH_LINK_SPEED_10M;
 
 	if (*link_speeds & ~allowed_speeds) {
 		PMD_INIT_LOG(ERR, "Invalid link setting");
@@ -430,14 +430,14 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 	}
 
 	speed = 0x0;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		speed = hw->mac.default_speeds;
 	} else {
-		if (*link_speeds & ETH_LINK_SPEED_1G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed |= NGBE_LINK_SPEED_1GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_100M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed |= NGBE_LINK_SPEED_100M_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_10M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
 			speed |= NGBE_LINK_SPEED_10M_FULL;
 	}
 
@@ -653,8 +653,8 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->rx_desc_lim = rx_desc_lim;
 	dev_info->tx_desc_lim = tx_desc_lim;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_10M;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_10M;
 
 	/* Driver-preferred Rx/Tx parameters */
 	dev_info->default_rxportconf.burst_size = 32;
@@ -682,11 +682,11 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 	int wait = 1;
 
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			~ETH_LINK_SPEED_AUTONEG);
+			~RTE_ETH_LINK_SPEED_AUTONEG);
 
 	hw->mac.get_link_status = true;
 
@@ -699,8 +699,8 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 
 	err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
 	if (err != 0) {
-		link.link_speed = ETH_SPEED_NUM_NONE;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
@@ -708,27 +708,27 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 		return rte_eth_linkstatus_set(dev, &link);
 
 	intr->flags &= ~NGBE_FLAG_NEED_LINK_CONFIG;
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (link_speed) {
 	default:
 	case NGBE_LINK_SPEED_UNKNOWN:
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 
 	case NGBE_LINK_SPEED_10M_FULL:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		lan_speed = 0;
 		break;
 
 	case NGBE_LINK_SPEED_100M_FULL:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		lan_speed = 1;
 		break;
 
 	case NGBE_LINK_SPEED_1GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		lan_speed = 2;
 		break;
 	}
@@ -912,11 +912,11 @@ ngbe_dev_link_status_print(struct rte_eth_dev *dev)
 
 	rte_eth_linkstatus_get(dev, &link);
 
-	if (link.link_status == ETH_LINK_UP) {
+	if (link.link_status == RTE_ETH_LINK_UP) {
 		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned int)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -956,7 +956,7 @@ ngbe_dev_interrupt_action(struct rte_eth_dev *dev)
 		ngbe_dev_link_update(dev, 0);
 
 		/* likely to up */
-		if (link.link_status != ETH_LINK_UP)
+		if (link.link_status != RTE_ETH_LINK_UP)
 			/* handle it 1 sec later, wait it being stable */
 			timeout = NGBE_LINK_UP_CHECK_TIMEOUT;
 		/* likely to down */
diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index 25b9e5b1ce1b..ca03469d0e6d 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -61,16 +61,16 @@ struct pmd_internals {
 	rte_spinlock_t rss_lock;
 
 	uint16_t reta_size;
-	struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_128 /
-			RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_128 /
+			RTE_ETH_RETA_GROUP_SIZE];
 
 	uint8_t rss_key[40];                /**< 40-byte hash key. */
 };
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(eth_null_logtype, NOTICE);
@@ -189,7 +189,7 @@ eth_dev_start(struct rte_eth_dev *dev)
 	if (dev == NULL)
 		return -EINVAL;
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -199,7 +199,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
 	if (dev == NULL)
 		return 0;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -391,9 +391,9 @@ eth_rss_reta_update(struct rte_eth_dev *dev,
 	rte_spinlock_lock(&internal->rss_lock);
 
 	/* Copy RETA table */
-	for (i = 0; i < (internal->reta_size / RTE_RETA_GROUP_SIZE); i++) {
+	for (i = 0; i < (internal->reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
 		internal->reta_conf[i].mask = reta_conf[i].mask;
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				internal->reta_conf[i].reta[j] = reta_conf[i].reta[j];
 	}
@@ -416,8 +416,8 @@ eth_rss_reta_query(struct rte_eth_dev *dev,
 	rte_spinlock_lock(&internal->rss_lock);
 
 	/* Copy RETA table */
-	for (i = 0; i < (internal->reta_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (internal->reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = internal->reta_conf[i].reta[j];
 	}
@@ -548,8 +548,8 @@ eth_dev_null_create(struct rte_vdev_device *dev, struct pmd_options *args)
 	internals->port_id = eth_dev->data->port_id;
 	rte_eth_random_addr(internals->eth_addr.addr_bytes);
 
-	internals->flow_type_rss_offloads =  ETH_RSS_PROTO_MASK;
-	internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_RETA_GROUP_SIZE;
+	internals->flow_type_rss_offloads =  RTE_ETH_RSS_PROTO_MASK;
+	internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_ETH_RETA_GROUP_SIZE;
 
 	rte_memcpy(internals->rss_key, default_rss_key, 40);
 
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index f578123ed00b..5b8cbec67b5d 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -158,7 +158,7 @@ octeontx_link_status_print(struct rte_eth_dev *eth_dev,
 		octeontx_log_info("Port %u: Link Up - speed %u Mbps - %s",
 			  (eth_dev->data->port_id),
 			  link->link_speed,
-			  link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+			  link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 			  "full-duplex" : "half-duplex");
 	else
 		octeontx_log_info("Port %d: Link Down",
@@ -171,38 +171,38 @@ octeontx_link_status_update(struct octeontx_nic *nic,
 {
 	memset(link, 0, sizeof(*link));
 
-	link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	switch (nic->speed) {
 	case OCTEONTX_LINK_SPEED_SGMII:
-		link->link_speed = ETH_SPEED_NUM_1G;
+		link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case OCTEONTX_LINK_SPEED_XAUI:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 
 	case OCTEONTX_LINK_SPEED_RXAUI:
 	case OCTEONTX_LINK_SPEED_10G_R:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case OCTEONTX_LINK_SPEED_QSGMII:
-		link->link_speed = ETH_SPEED_NUM_5G;
+		link->link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 	case OCTEONTX_LINK_SPEED_40G_R:
-		link->link_speed = ETH_SPEED_NUM_40G;
+		link->link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 
 	case OCTEONTX_LINK_SPEED_RESERVE1:
 	case OCTEONTX_LINK_SPEED_RESERVE2:
 	default:
-		link->link_speed = ETH_SPEED_NUM_NONE;
+		link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 		octeontx_log_err("incorrect link speed %d", nic->speed);
 		break;
 	}
 
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
-	link->link_autoneg = ETH_LINK_AUTONEG;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 }
 
 static void
@@ -355,20 +355,20 @@ octeontx_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
 	uint16_t flags = 0;
 
-	if (nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= OCCTX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (nic->tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= OCCTX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(nic->tx_offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= OCCTX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (nic->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= OCCTX_TX_MULTI_SEG_F;
 
 	return flags;
@@ -380,21 +380,21 @@ octeontx_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
 	uint16_t flags = 0;
 
-	if (nic->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
-			 DEV_RX_OFFLOAD_UDP_CKSUM))
+	if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			 RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= OCCTX_RX_OFFLOAD_CSUM_F;
 
-	if (nic->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= OCCTX_RX_OFFLOAD_CSUM_F;
 
-	if (nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		flags |= OCCTX_RX_MULTI_SEG_F;
 		eth_dev->data->scattered_rx = 1;
 		/* If scatter mode is enabled, TX should also be in multi
 		 * seg mode, else memory leak will occur
 		 */
-		nic->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		nic->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	}
 
 	return flags;
@@ -423,18 +423,18 @@ octeontx_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-		rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+		rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		octeontx_log_err("unsupported rx qmode %d", rxmode->mq_mode);
 		return -EINVAL;
 	}
 
-	if (!(txmode->offloads & DEV_TX_OFFLOAD_MT_LOCKFREE)) {
+	if (!(txmode->offloads & RTE_ETH_TX_OFFLOAD_MT_LOCKFREE)) {
 		PMD_INIT_LOG(NOTICE, "cant disable lockfree tx");
-		txmode->offloads |= DEV_TX_OFFLOAD_MT_LOCKFREE;
+		txmode->offloads |= RTE_ETH_TX_OFFLOAD_MT_LOCKFREE;
 	}
 
-	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		octeontx_log_err("setting link speed/duplex not supported");
 		return -EINVAL;
 	}
@@ -530,13 +530,13 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	 * when this feature has not been enabled before.
 	 */
 	if (data->dev_started && frame_size > buffsz &&
-	    !(nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	    !(nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		octeontx_log_err("Scatter mode is disabled");
 		return -EINVAL;
 	}
 
 	/* Check <seg size> * <max_seg>  >= max_frame */
-	if ((nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER)	&&
+	if ((nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	&&
 	    (frame_size > buffsz * OCCTX_RX_NB_SEG_MAX))
 		return -EINVAL;
 
@@ -571,7 +571,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
 
 	/* Setup scatter mode if needed by jumbo */
 	if (data->mtu > buffsz) {
-		nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+		nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 		nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
 		nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
 	}
@@ -843,10 +843,10 @@ octeontx_dev_info(struct rte_eth_dev *dev,
 	struct octeontx_nic *nic = octeontx_pmd_priv(dev);
 
 	/* Autonegotiation may be disabled */
-	dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
-	dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			ETH_LINK_SPEED_40G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_40G;
 
 	/* Min/Max MTU supported */
 	dev_info->min_rx_bufsize = OCCTX_MIN_FRS;
@@ -1356,7 +1356,7 @@ octeontx_create(struct rte_vdev_device *dev, int port, uint8_t evdev,
 	nic->ev_ports = 1;
 	nic->print_flag = -1;
 
-	data->dev_link.link_status = ETH_LINK_DOWN;
+	data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	data->dev_started = 0;
 	data->promiscuous = 0;
 	data->all_multicast = 0;
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index 3a02824e3948..c493fa7a03ed 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -55,23 +55,23 @@
 #define OCCTX_MAX_MTU		(OCCTX_MAX_FRS - OCCTX_L2_OVERHEAD)
 
 #define OCTEONTX_RX_OFFLOADS		(				   \
-					 DEV_RX_OFFLOAD_CHECKSUM	 | \
-					 DEV_RX_OFFLOAD_SCTP_CKSUM       | \
-					 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-					 DEV_RX_OFFLOAD_SCATTER	         | \
-					 DEV_RX_OFFLOAD_SCATTER		 | \
-					 DEV_RX_OFFLOAD_VLAN_FILTER)
+					 RTE_ETH_RX_OFFLOAD_CHECKSUM	 | \
+					 RTE_ETH_RX_OFFLOAD_SCTP_CKSUM       | \
+					 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+					 RTE_ETH_RX_OFFLOAD_SCATTER	         | \
+					 RTE_ETH_RX_OFFLOAD_SCATTER		 | \
+					 RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 
 #define OCTEONTX_TX_OFFLOADS		(				   \
-					 DEV_TX_OFFLOAD_MBUF_FAST_FREE	 | \
-					 DEV_TX_OFFLOAD_MT_LOCKFREE	 | \
-					 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-					 DEV_TX_OFFLOAD_OUTER_UDP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_IPV4_CKSUM	 | \
-					 DEV_TX_OFFLOAD_TCP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_UDP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_SCTP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_MULTI_SEGS)
+					 RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE	 | \
+					 RTE_ETH_TX_OFFLOAD_MT_LOCKFREE	 | \
+					 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+					 RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_TCP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_UDP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 static inline struct octeontx_nic *
 octeontx_pmd_priv(struct rte_eth_dev *dev)
diff --git a/drivers/net/octeontx/octeontx_ethdev_ops.c b/drivers/net/octeontx/octeontx_ethdev_ops.c
index dbe13ce3826b..6ec2b71b0672 100644
--- a/drivers/net/octeontx/octeontx_ethdev_ops.c
+++ b/drivers/net/octeontx/octeontx_ethdev_ops.c
@@ -43,20 +43,20 @@ octeontx_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	rxmode = &dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 			rc = octeontx_vlan_hw_filter(nic, true);
 			if (rc)
 				goto done;
 
-			nic->rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+			nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			nic->rx_offload_flags |= OCCTX_RX_VLAN_FLTR_F;
 		} else {
 			rc = octeontx_vlan_hw_filter(nic, false);
 			if (rc)
 				goto done;
 
-			nic->rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+			nic->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			nic->rx_offload_flags &= ~OCCTX_RX_VLAN_FLTR_F;
 		}
 	}
@@ -139,7 +139,7 @@ octeontx_dev_vlan_offload_init(struct rte_eth_dev *dev)
 
 	TAILQ_INIT(&nic->vlan_info.fltr_tbl);
 
-	rc = octeontx_dev_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+	rc = octeontx_dev_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
 	if (rc)
 		octeontx_log_err("Failed to set vlan offload rc=%d", rc);
 
@@ -219,13 +219,13 @@ octeontx_dev_flow_ctrl_get(struct rte_eth_dev *dev,
 		return rc;
 
 	if (conf.rx_pause && conf.tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (conf.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (conf.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	/* low_water & high_water values are in Bytes */
 	fc_conf->low_water = conf.low_water;
@@ -272,10 +272,10 @@ octeontx_dev_flow_ctrl_set(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-			(fc_conf->mode == RTE_FC_RX_PAUSE);
-	tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-			(fc_conf->mode == RTE_FC_TX_PAUSE);
+	rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+			(fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+	tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+			(fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
 
 	conf.high_water = fc_conf->high_water;
 	conf.low_water = fc_conf->low_water;
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 9c5d748e8575..72da8856bd86 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -21,7 +21,7 @@ nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
 
 	if (otx2_dev_is_vf(dev) ||
 	    dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG)
-		capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+		capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	return capa;
 }
@@ -33,10 +33,10 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
 
 	/* TSO not supported for earlier chip revisions */
 	if (otx2_dev_is_96xx_A0(dev) || otx2_dev_is_95xx_Ax(dev))
-		capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
-			  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			  DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-			  DEV_TX_OFFLOAD_GRE_TNL_TSO);
+		capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+			  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
 	return capa;
 }
 
@@ -66,8 +66,8 @@ nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
 	req->npa_func = otx2_npa_pf_func_get();
 	req->sso_func = otx2_sso_pf_func_get();
 	req->rx_cfg = BIT_ULL(35 /* DIS_APAD */);
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
-			 DEV_RX_OFFLOAD_UDP_CKSUM)) {
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			 RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
 		req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */);
 		req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */);
 	}
@@ -373,7 +373,7 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
 
 	aq->rq.sso_ena = 0;
 
-	if (rxq->offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		aq->rq.ipsech_ena = 1;
 
 	aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */
@@ -665,7 +665,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
 	 * These are needed in deriving raw clock value from tsc counter.
 	 * read_clock eth op returns raw clock value.
 	 */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
 	    otx2_ethdev_is_ptp_en(dev)) {
 		rc = otx2_nix_raw_clock_tsc_conv(dev);
 		if (rc) {
@@ -692,7 +692,7 @@ nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
 	 * Maximum three segments can be supported with W8, Choose
 	 * NIX_MAXSQESZ_W16 for multi segment offload.
 	 */
-	if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		return NIX_MAXSQESZ_W16;
 	else
 		return NIX_MAXSQESZ_W8;
@@ -707,29 +707,29 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rxmode *rxmode = &conf->rxmode;
 	uint16_t flags = 0;
 
-	if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
-			(dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+			(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		flags |= NIX_RX_OFFLOAD_RSS_F;
 
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
-			 DEV_RX_OFFLOAD_UDP_CKSUM))
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			 RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		flags |= NIX_RX_MULTI_SEG_F;
 
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
-				DEV_RX_OFFLOAD_QINQ_STRIP))
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				RTE_ETH_RX_OFFLOAD_QINQ_STRIP))
 		flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		flags |= NIX_RX_OFFLOAD_SECURITY_F;
 
 	if (!dev->ptype_disable)
@@ -768,43 +768,43 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
 			 offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
 
-	if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
-	    conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+	    conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
 
-	if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= NIX_TX_MULTI_SEG_F;
 
 	/* Enable Inner checksum for TSO */
-	if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+	if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		flags |= (NIX_TX_OFFLOAD_TSO_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
 	/* Enable Inner and Outer checksum for Tunnel TSO */
-	if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		    DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		    DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		flags |= (NIX_TX_OFFLOAD_TSO_F |
 			  NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
-	if (conf & DEV_TX_OFFLOAD_SECURITY)
+	if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
 		flags |= NIX_TX_OFFLOAD_SECURITY_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
 	return flags;
@@ -914,8 +914,8 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
 	buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
 
 	if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
-		dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
-		dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+		dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 		/* Setting up the rx[tx]_offload_flags due to change
 		 * in rx[tx]_offloads.
@@ -1848,21 +1848,21 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
 		goto fail_configure;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-	    rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+	    rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode);
 		goto fail_configure;
 	}
 
-	if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		otx2_err("Unsupported mq tx mode %d", txmode->mq_mode);
 		goto fail_configure;
 	}
 
 	if (otx2_dev_is_Ax(dev) &&
-	    (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
-	    ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
-	    (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+	    ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
 		otx2_err("Outer IP and SCTP checksum unsupported");
 		goto fail_configure;
 	}
@@ -2235,7 +2235,7 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
 	 * enabled in PF owning this VF
 	 */
 	memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info));
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
 	    otx2_ethdev_is_ptp_en(dev))
 		otx2_nix_timesync_enable(eth_dev);
 	else
@@ -2563,8 +2563,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
 	rc = otx2_eth_sec_ctx_create(eth_dev);
 	if (rc)
 		goto free_mac_addrs;
-	dev->tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
-	dev->rx_offload_capa |= DEV_RX_OFFLOAD_SECURITY;
+	dev->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
+	dev->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SECURITY;
 
 	/* Initialize rte-flow */
 	rc = otx2_flow_init(dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 4557a0ee1945..a5282c6c1231 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -117,43 +117,43 @@
 #define CQ_TIMER_THRESH_DEFAULT	0xAULL /* ~1usec i.e (0xA * 100nsec) */
 #define CQ_TIMER_THRESH_MAX     255
 
-#define NIX_RSS_L3_L4_SRC_DST  (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY \
-				| ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define NIX_RSS_L3_L4_SRC_DST  (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY \
+				| RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
 
-#define NIX_RSS_OFFLOAD		(ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
-				 ETH_RSS_TCP | ETH_RSS_SCTP | \
-				 ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD | \
-				 NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | \
-				 ETH_RSS_C_VLAN)
+#define NIX_RSS_OFFLOAD		(RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |\
+				 RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | \
+				 RTE_ETH_RSS_TUNNEL | RTE_ETH_RSS_L2_PAYLOAD | \
+				 NIX_RSS_L3_L4_SRC_DST | RTE_ETH_RSS_LEVEL_MASK | \
+				 RTE_ETH_RSS_C_VLAN)
 
 #define NIX_TX_OFFLOAD_CAPA ( \
-	DEV_TX_OFFLOAD_MBUF_FAST_FREE	| \
-	DEV_TX_OFFLOAD_MT_LOCKFREE	| \
-	DEV_TX_OFFLOAD_VLAN_INSERT	| \
-	DEV_TX_OFFLOAD_QINQ_INSERT	| \
-	DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM	| \
-	DEV_TX_OFFLOAD_OUTER_UDP_CKSUM	| \
-	DEV_TX_OFFLOAD_TCP_CKSUM	| \
-	DEV_TX_OFFLOAD_UDP_CKSUM	| \
-	DEV_TX_OFFLOAD_SCTP_CKSUM	| \
-	DEV_TX_OFFLOAD_TCP_TSO		| \
-	DEV_TX_OFFLOAD_VXLAN_TNL_TSO    | \
-	DEV_TX_OFFLOAD_GENEVE_TNL_TSO   | \
-	DEV_TX_OFFLOAD_GRE_TNL_TSO	| \
-	DEV_TX_OFFLOAD_MULTI_SEGS	| \
-	DEV_TX_OFFLOAD_IPV4_CKSUM)
+	RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE	| \
+	RTE_ETH_TX_OFFLOAD_MT_LOCKFREE	| \
+	RTE_ETH_TX_OFFLOAD_VLAN_INSERT	| \
+	RTE_ETH_TX_OFFLOAD_QINQ_INSERT	| \
+	RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_TCP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_UDP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_SCTP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_TCP_TSO		| \
+	RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO    | \
+	RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO   | \
+	RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO	| \
+	RTE_ETH_TX_OFFLOAD_MULTI_SEGS	| \
+	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 
 #define NIX_RX_OFFLOAD_CAPA ( \
-	DEV_RX_OFFLOAD_CHECKSUM		| \
-	DEV_RX_OFFLOAD_SCTP_CKSUM	| \
-	DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-	DEV_RX_OFFLOAD_SCATTER		| \
-	DEV_RX_OFFLOAD_OUTER_UDP_CKSUM	| \
-	DEV_RX_OFFLOAD_VLAN_STRIP	| \
-	DEV_RX_OFFLOAD_VLAN_FILTER	| \
-	DEV_RX_OFFLOAD_QINQ_STRIP	| \
-	DEV_RX_OFFLOAD_TIMESTAMP	| \
-	DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RX_OFFLOAD_CHECKSUM		| \
+	RTE_ETH_RX_OFFLOAD_SCTP_CKSUM	| \
+	RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+	RTE_ETH_RX_OFFLOAD_SCATTER		| \
+	RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM	| \
+	RTE_ETH_RX_OFFLOAD_VLAN_STRIP	| \
+	RTE_ETH_RX_OFFLOAD_VLAN_FILTER	| \
+	RTE_ETH_RX_OFFLOAD_QINQ_STRIP	| \
+	RTE_ETH_RX_OFFLOAD_TIMESTAMP	| \
+	RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define NIX_DEFAULT_RSS_CTX_GROUP  0
 #define NIX_DEFAULT_RSS_MCAM_IDX  -1
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index 83f905315b38..60bf6c3f5f05 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -49,12 +49,12 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
 
 	val = atoi(value);
 
-	if (val <= ETH_RSS_RETA_SIZE_64)
-		val = ETH_RSS_RETA_SIZE_64;
-	else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
-		val = ETH_RSS_RETA_SIZE_128;
-	else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
-		val = ETH_RSS_RETA_SIZE_256;
+	if (val <= RTE_ETH_RSS_RETA_SIZE_64)
+		val = RTE_ETH_RSS_RETA_SIZE_64;
+	else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
+		val = RTE_ETH_RSS_RETA_SIZE_128;
+	else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
+		val = RTE_ETH_RSS_RETA_SIZE_256;
 	else
 		val = NIX_RSS_RETA_SIZE;
 
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 22a8af5cba45..d5caaa326a5a 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -26,11 +26,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	 * when this feature has not been enabled before.
 	 */
 	if (data->dev_started && frame_size > buffsz &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER))
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER))
 		return -EINVAL;
 
 	/* Check <seg size> * <max_seg>  >= max_frame */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)	&&
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	&&
 	    (frame_size > buffsz * NIX_RX_NB_SEG_MAX))
 		return -EINVAL;
 
@@ -568,17 +568,17 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
 	};
 
 	/* Auto negotiation disabled */
-	devinfo->speed_capa = ETH_LINK_SPEED_FIXED;
+	devinfo->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
 	if (!otx2_dev_is_vf_or_sdp(dev) && !otx2_dev_is_lbk(dev)) {
-		devinfo->speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G;
+		devinfo->speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G;
 
 		/* 50G and 100G to be supported for board version C0
 		 * and above.
 		 */
 		if (!otx2_dev_is_Ax(dev))
-			devinfo->speed_capa |= ETH_LINK_SPEED_50G |
-					       ETH_LINK_SPEED_100G;
+			devinfo->speed_capa |= RTE_ETH_LINK_SPEED_50G |
+					       RTE_ETH_LINK_SPEED_100G;
 	}
 
 	devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.c b/drivers/net/octeontx2/otx2_ethdev_sec.c
index c2a36883cbf2..e1654ef5b284 100644
--- a/drivers/net/octeontx2/otx2_ethdev_sec.c
+++ b/drivers/net/octeontx2/otx2_ethdev_sec.c
@@ -890,8 +890,8 @@ otx2_eth_sec_init(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(sa_width < 32 || sa_width > 512 ||
 			 !RTE_IS_POWER_OF_2(sa_width));
 
-	if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+	if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
 		return 0;
 
 	if (rte_security_dynfield_register() < 0)
@@ -933,8 +933,8 @@ otx2_eth_sec_fini(struct rte_eth_dev *eth_dev)
 	uint16_t port = eth_dev->data->port_id;
 	char name[RTE_MEMZONE_NAMESIZE];
 
-	if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+	if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
 		return;
 
 	lookup_mem_sa_tbl_clear(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 6df0732189eb..1d0fe4e950d4 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -625,7 +625,7 @@ otx2_flow_create(struct rte_eth_dev *dev,
 		goto err_exit;
 	}
 
-	if (hw->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (hw->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		rc = flow_update_sec_tt(dev, actions);
 		if (rc != 0) {
 			rte_flow_error_set(error, EIO,
diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c
index 76bf48100183..071740de86a7 100644
--- a/drivers/net/octeontx2/otx2_flow_ctrl.c
+++ b/drivers/net/octeontx2/otx2_flow_ctrl.c
@@ -54,7 +54,7 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 	int rc;
 
 	if (otx2_dev_is_lbk(dev)) {
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		return 0;
 	}
 
@@ -66,13 +66,13 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 		goto done;
 
 	if (rsp->rx_pause && rsp->tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rsp->rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (rsp->tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 done:
 	return rc;
@@ -159,10 +159,10 @@ otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	if (fc_conf->mode == fc->mode)
 		return 0;
 
-	rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_RX_PAUSE);
-	tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_TX_PAUSE);
+	rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+	tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
 
 	/* Check if TX pause frame is already enabled or not */
 	if (fc->tx_pause ^ tx_pause) {
@@ -212,11 +212,11 @@ otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev)
 	/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
 	if (otx2_dev_is_Ax(dev) &&
 	    (dev->npc_flow.switch_header_type != OTX2_PRIV_FLAGS_HIGIG) &&
-	    (fc_conf.mode == RTE_FC_FULL || fc_conf.mode == RTE_FC_RX_PAUSE)) {
+	    (fc_conf.mode == RTE_ETH_FC_FULL || fc_conf.mode == RTE_ETH_FC_RX_PAUSE)) {
 		fc_conf.mode =
-				(fc_conf.mode == RTE_FC_FULL ||
-				fc_conf.mode == RTE_FC_TX_PAUSE) ?
-				RTE_FC_TX_PAUSE : RTE_FC_NONE;
+				(fc_conf.mode == RTE_ETH_FC_FULL ||
+				fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ?
+				RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
 	}
 
 	return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf);
@@ -234,7 +234,7 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
 		return 0;
 
 	memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
-	/* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+	/* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
 	 * by AF driver, update those info in PMD structure.
 	 */
 	rc = otx2_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -242,10 +242,10 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
 		goto exit;
 
 	fc->mode = fc_conf.mode;
-	fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_RX_PAUSE);
-	fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_TX_PAUSE);
+	fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+	fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
 
 exit:
 	return rc;
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index 79b92fda8a4a..91267bbb8182 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -852,7 +852,7 @@ parse_rss_action(struct rte_eth_dev *dev,
 					  attr, "No support of RSS in egress");
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS)
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ACTION,
 					  act, "multi-queue mode is disabled");
@@ -1186,7 +1186,7 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev,
 		 *FLOW_KEY_ALG index. So, till we update the action with
 		 *flow_key_alg index, set the action to drop.
 		 */
-		if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+		if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 			flow->npc_action = NIX_RX_ACTIONOP_DROP;
 		else
 			flow->npc_action = NIX_RX_ACTIONOP_UCAST;
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
index 81dd6243b977..8f5d0eed92b6 100644
--- a/drivers/net/octeontx2/otx2_link.c
+++ b/drivers/net/octeontx2/otx2_link.c
@@ -41,7 +41,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
 		otx2_info("Port %d: Link Up - speed %u Mbps - %s",
 			  (int)(eth_dev->data->port_id),
 			  (uint32_t)link->link_speed,
-			  link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+			  link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 			  "full-duplex" : "half-duplex");
 	else
 		otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id));
@@ -92,7 +92,7 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
 
 	eth_link.link_status = link->link_up;
 	eth_link.link_speed = link->speed;
-	eth_link.link_autoneg = ETH_LINK_AUTONEG;
+	eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	eth_link.link_duplex = link->full_duplex;
 
 	otx2_dev->speed = link->speed;
@@ -111,10 +111,10 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
 static int
 lbk_link_update(struct rte_eth_link *link)
 {
-	link->link_status = ETH_LINK_UP;
-	link->link_speed = ETH_SPEED_NUM_100G;
-	link->link_autoneg = ETH_LINK_FIXED;
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_status = RTE_ETH_LINK_UP;
+	link->link_speed = RTE_ETH_SPEED_NUM_100G;
+	link->link_autoneg = RTE_ETH_LINK_FIXED;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	return 0;
 }
 
@@ -131,7 +131,7 @@ cgx_link_update(struct otx2_eth_dev *dev, struct rte_eth_link *link)
 
 	link->link_status = rsp->link_info.link_up;
 	link->link_speed = rsp->link_info.speed;
-	link->link_autoneg = ETH_LINK_AUTONEG;
+	link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	if (rsp->link_info.full_duplex)
 		link->link_duplex = rsp->link_info.full_duplex;
@@ -233,22 +233,22 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
 
 	/* 50G and 100G to be supported for board version C0 and above */
 	if (!otx2_dev_is_Ax(dev)) {
-		if (link_speeds & ETH_LINK_SPEED_100G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_100G)
 			link_speed = 100000;
-		if (link_speeds & ETH_LINK_SPEED_50G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_50G)
 			link_speed = 50000;
 	}
-	if (link_speeds & ETH_LINK_SPEED_40G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
 		link_speed = 40000;
-	if (link_speeds & ETH_LINK_SPEED_25G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
 		link_speed = 25000;
-	if (link_speeds & ETH_LINK_SPEED_20G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_20G)
 		link_speed = 20000;
-	if (link_speeds & ETH_LINK_SPEED_10G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 		link_speed = 10000;
-	if (link_speeds & ETH_LINK_SPEED_5G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_5G)
 		link_speed = 5000;
-	if (link_speeds & ETH_LINK_SPEED_1G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
 		link_speed = 1000;
 
 	return link_speed;
@@ -257,11 +257,11 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
 static inline uint8_t
 nix_parse_eth_link_duplex(uint32_t link_speeds)
 {
-	if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
-			(link_speeds & ETH_LINK_SPEED_100M_HD))
-		return ETH_LINK_HALF_DUPLEX;
+	if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+			(link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+		return RTE_ETH_LINK_HALF_DUPLEX;
 	else
-		return ETH_LINK_FULL_DUPLEX;
+		return RTE_ETH_LINK_FULL_DUPLEX;
 }
 
 int
@@ -279,7 +279,7 @@ otx2_apply_link_speed(struct rte_eth_dev *eth_dev)
 	cfg.speed = nix_parse_link_speeds(dev, conf->link_speeds);
 	if (cfg.speed != SPEED_NONE && cfg.speed != dev->speed) {
 		cfg.duplex = nix_parse_eth_link_duplex(conf->link_speeds);
-		cfg.an = (conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+		cfg.an = (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 		return cgx_change_mode(dev, &cfg);
 	}
diff --git a/drivers/net/octeontx2/otx2_mcast.c b/drivers/net/octeontx2/otx2_mcast.c
index f84aa1bf570c..b9c63ad3bc21 100644
--- a/drivers/net/octeontx2/otx2_mcast.c
+++ b/drivers/net/octeontx2/otx2_mcast.c
@@ -100,7 +100,7 @@ nix_hw_update_mc_addr_list(struct rte_eth_dev *eth_dev)
 
 		action = NIX_RX_ACTIONOP_UCAST;
 
-		if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+		if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 			action = NIX_RX_ACTIONOP_RSS;
 			action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
 		}
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
index 91e5c0f6bd11..abb213058792 100644
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -250,7 +250,7 @@ otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
 	/* System time should be already on by default */
 	nix_start_timecounters(eth_dev);
 
-	dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+	dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 	dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
@@ -287,7 +287,7 @@ otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
 	if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
 		return -EINVAL;
 
-	dev->rx_offloads &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+	dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F;
 	dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F;
 
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
index 7dbe5f69ae65..68cef1caa394 100644
--- a/drivers/net/octeontx2/otx2_rss.c
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -85,8 +85,8 @@ otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Copy RETA table */
-	for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask >> j) & 0x01)
 				rss->ind_tbl[idx] = reta_conf[i].reta[j];
 			idx++;
@@ -118,8 +118,8 @@ otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Copy RETA table */
-	for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = rss->ind_tbl[j];
 	}
@@ -178,23 +178,23 @@ rss_get_key(struct otx2_eth_dev *dev, uint8_t *key)
 }
 
 #define RSS_IPV4_ENABLE ( \
-			  ETH_RSS_IPV4 | \
-			  ETH_RSS_FRAG_IPV4 | \
-			  ETH_RSS_NONFRAG_IPV4_UDP | \
-			  ETH_RSS_NONFRAG_IPV4_TCP | \
-			  ETH_RSS_NONFRAG_IPV4_SCTP)
+			  RTE_ETH_RSS_IPV4 | \
+			  RTE_ETH_RSS_FRAG_IPV4 | \
+			  RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
 #define RSS_IPV6_ENABLE ( \
-			  ETH_RSS_IPV6 | \
-			  ETH_RSS_FRAG_IPV6 | \
-			  ETH_RSS_NONFRAG_IPV6_UDP | \
-			  ETH_RSS_NONFRAG_IPV6_TCP | \
-			  ETH_RSS_NONFRAG_IPV6_SCTP)
+			  RTE_ETH_RSS_IPV6 | \
+			  RTE_ETH_RSS_FRAG_IPV6 | \
+			  RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 #define RSS_IPV6_EX_ENABLE ( \
-			     ETH_RSS_IPV6_EX | \
-			     ETH_RSS_IPV6_TCP_EX | \
-			     ETH_RSS_IPV6_UDP_EX)
+			     RTE_ETH_RSS_IPV6_EX | \
+			     RTE_ETH_RSS_IPV6_TCP_EX | \
+			     RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define RSS_MAX_LEVELS   3
 
@@ -233,24 +233,24 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
 
 	dev->rss_info.nix_rss = ethdev_rss;
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
 	    dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) {
 		flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
 	}
 
-	if (ethdev_rss & ETH_RSS_C_VLAN)
+	if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
 
-	if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
 
-	if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
 
-	if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
 
-	if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
 
 	if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -259,34 +259,34 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
 	if (ethdev_rss & RSS_IPV6_ENABLE)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
 
-	if (ethdev_rss & ETH_RSS_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_TCP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_UDP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_SCTP)
+	if (ethdev_rss & RTE_ETH_RSS_SCTP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
 
 	if (ethdev_rss & RSS_IPV6_EX_ENABLE)
 		flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
 
-	if (ethdev_rss & ETH_RSS_PORT)
+	if (ethdev_rss & RTE_ETH_RSS_PORT)
 		flowkey_cfg |= FLOW_KEY_TYPE_PORT;
 
-	if (ethdev_rss & ETH_RSS_NVGRE)
+	if (ethdev_rss & RTE_ETH_RSS_NVGRE)
 		flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
 
-	if (ethdev_rss & ETH_RSS_VXLAN)
+	if (ethdev_rss & RTE_ETH_RSS_VXLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
 
-	if (ethdev_rss & ETH_RSS_GENEVE)
+	if (ethdev_rss & RTE_ETH_RSS_GENEVE)
 		flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
 
-	if (ethdev_rss & ETH_RSS_GTPU)
+	if (ethdev_rss & RTE_ETH_RSS_GTPU)
 		flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
 
 	return flowkey_cfg;
@@ -343,7 +343,7 @@ otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
 		otx2_nix_rss_set_key(dev, rss_conf->rss_key,
 				     (uint32_t)rss_conf->rss_key_len);
 
-	rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 	flowkey_cfg =
@@ -390,7 +390,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
 	int rc;
 
 	/* Skip further configuration if selected mode is not RSS */
-	if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS || !qcnt)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS || !qcnt)
 		return 0;
 
 	/* Update default RSS key and cfg */
@@ -408,7 +408,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
 	}
 
 	rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
-	rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 	flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, rss_hash_level);
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index ffeade5952dc..986902287b67 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -414,12 +414,12 @@ NIX_RX_FASTPATH_MODES
 	/* For PTP enabled, scalar rx function should be chosen as most of the
 	 * PTP apps are implemented to rx burst 1 pkt.
 	 */
-	if (dev->scalar_ena || dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (dev->scalar_ena || dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 		pick_rx_func(eth_dev, nix_eth_rx_burst);
 	else
 		pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
 
 	/* Copy multi seg version with no offload for tear down sequence */
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index ff299f00b913..c60190074926 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -1070,7 +1070,7 @@ NIX_TX_FASTPATH_MODES
 	else
 		pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
 
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
 
 	rte_mb();
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index f5161e17a16d..cce643b7b51d 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -50,7 +50,7 @@ nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
 
 	action = NIX_RX_ACTIONOP_UCAST;
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		action = NIX_RX_ACTIONOP_RSS;
 		action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
 	}
@@ -99,7 +99,7 @@ nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type,
 	 * Take offset from LA since in case of untagged packet,
 	 * lbptr is zero.
 	 */
-	if (type == ETH_VLAN_TYPE_OUTER) {
+	if (type == RTE_ETH_VLAN_TYPE_OUTER) {
 		vtag_action.act.vtag0_def = vtag_index;
 		vtag_action.act.vtag0_lid = NPC_LID_LA;
 		vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
@@ -413,7 +413,7 @@ nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
 		if (vlan->strip_on ||
 		    (vlan->qinq_on && !vlan->qinq_before_def)) {
 			if (eth_dev->data->dev_conf.rxmode.mq_mode ==
-								ETH_MQ_RX_RSS)
+								RTE_ETH_MQ_RX_RSS)
 				vlan->def_rx_mcam_ent.action |=
 							NIX_RX_ACTIONOP_RSS;
 			else
@@ -717,48 +717,48 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 
 	rxmode = &eth_dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
-			offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
+			offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			rc = nix_vlan_hw_strip(eth_dev, true);
 		} else {
-			offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+			offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			rc = nix_vlan_hw_strip(eth_dev, false);
 		}
 		if (rc)
 			goto done;
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
-			offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
+			offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			rc = nix_vlan_hw_filter(eth_dev, true, 0);
 		} else {
-			offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+			offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			rc = nix_vlan_hw_filter(eth_dev, false, 0);
 		}
 		if (rc)
 			goto done;
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) {
 		if (!dev->vlan_info.qinq_on) {
-			offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+			offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 			rc = otx2_nix_config_double_vlan(eth_dev, true);
 			if (rc)
 				goto done;
 		}
 	} else {
 		if (dev->vlan_info.qinq_on) {
-			offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+			offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 			rc = otx2_nix_config_double_vlan(eth_dev, false);
 			if (rc)
 				goto done;
 		}
 	}
 
-	if (offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
-			DEV_RX_OFFLOAD_QINQ_STRIP)) {
+	if (offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+			RTE_ETH_RX_OFFLOAD_QINQ_STRIP)) {
 		dev->rx_offloads |= offloads;
 		dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
 		otx2_eth_set_rx_function(eth_dev);
@@ -780,7 +780,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
 	tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox);
 
 	tpid_cfg->tpid = tpid;
-	if (type == ETH_VLAN_TYPE_OUTER)
+	if (type == RTE_ETH_VLAN_TYPE_OUTER)
 		tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER;
 	else
 		tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER;
@@ -789,7 +789,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
 	if (rc)
 		return rc;
 
-	if (type == ETH_VLAN_TYPE_OUTER)
+	if (type == RTE_ETH_VLAN_TYPE_OUTER)
 		dev->vlan_info.outer_vlan_tpid = tpid;
 	else
 		dev->vlan_info.inner_vlan_tpid = tpid;
@@ -864,7 +864,7 @@ otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev,       uint16_t vlan_id, int on)
 		vlan->outer_vlan_idx = 0;
 	}
 
-	rc = nix_vlan_handle_default_tx_entry(dev, ETH_VLAN_TYPE_OUTER,
+	rc = nix_vlan_handle_default_tx_entry(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					      vtag_index, on);
 	if (rc < 0) {
 		printf("Default tx entry failed with rc %d\n", rc);
@@ -986,12 +986,12 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
 	} else {
 		/* Reinstall all mcam entries now if filter offload is set */
 		if (eth_dev->data->dev_conf.rxmode.offloads &
-		    DEV_RX_OFFLOAD_VLAN_FILTER)
+		    RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			nix_vlan_reinstall_vlan_filters(eth_dev);
 	}
 
 	mask =
-	    ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+	    RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
 	rc = otx2_nix_vlan_offload_set(eth_dev, mask);
 	if (rc) {
 		otx2_err("Failed to set vlan offload rc=%d", rc);
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index 698d22e22685..74dc36a17648 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -33,14 +33,14 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
 
 	otx_epvf = OTX_EP_DEV(eth_dev);
 
-	devinfo->speed_capa = ETH_LINK_SPEED_10G;
+	devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
 	devinfo->max_rx_queues = otx_epvf->max_rx_queues;
 	devinfo->max_tx_queues = otx_epvf->max_tx_queues;
 
 	devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
 	devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
-	devinfo->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
-	devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+	devinfo->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
+	devinfo->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
 
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index aa4dcd33cc79..9338b30672ec 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -563,7 +563,7 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
 			struct otx_ep_buf_free_info *finfo;
 			int j, frags, num_sg;
 
-			if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+			if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
 				goto xmit_fail;
 
 			finfo = (struct otx_ep_buf_free_info *)rte_malloc(NULL,
@@ -697,7 +697,7 @@ otx2_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
 			struct otx_ep_buf_free_info *finfo;
 			int j, frags, num_sg;
 
-			if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+			if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
 				goto xmit_fail;
 
 			finfo = (struct otx_ep_buf_free_info *)
@@ -954,7 +954,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
 	droq_pkt->l4_len = hdr_lens.l4_len;
 
 	if (droq_pkt->nb_segs > 1 &&
-	    !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	    !(otx_ep->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		rte_pktmbuf_free(droq_pkt);
 		goto oq_read_fail;
 	}
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index d695c5eef7b0..ec29fd6bc53c 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -136,10 +136,10 @@ static const char *valid_arguments[] = {
 };
 
 static struct rte_eth_link pmd_link = {
-		.link_speed = ETH_SPEED_NUM_10G,
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_status = ETH_LINK_DOWN,
-		.link_autoneg = ETH_LINK_FIXED,
+		.link_speed = RTE_ETH_SPEED_NUM_10G,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_status = RTE_ETH_LINK_DOWN,
+		.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(eth_pcap_logtype, NOTICE);
@@ -659,7 +659,7 @@ eth_dev_start(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_tx_queues; i++)
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -714,7 +714,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_tx_queues; i++)
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index 4cc002ee8fab..047010e15ed0 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -22,15 +22,15 @@ struct pfe_vdev_init_params {
 static struct pfe *g_pfe;
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 /* Supported Tx offloads */
 static uint64_t dev_tx_offloads_sup =
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 /* TODO: make pfe_svr a runtime option.
  * Driver should be able to get the SVR
@@ -601,9 +601,9 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 	}
 
 	link.link_status = lstatus;
-	link.link_speed = ETH_LINK_SPEED_1G;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_speed = RTE_ETH_LINK_SPEED_1G;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	pfe_eth_atomic_write_link_status(dev, &link);
 
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 6667c2d7ab6d..511742c6a1b3 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -65,8 +65,8 @@ typedef u32 offsize_t;      /* In DWORDS !!! */
 struct eth_phy_cfg {
 /* 0 = autoneg, 1000/10000/20000/25000/40000/50000/100000 */
 	u32 speed;
-#define ETH_SPEED_AUTONEG   0
-#define ETH_SPEED_SMARTLINQ  0x8 /* deprecated - use link_modes field instead */
+#define RTE_ETH_SPEED_AUTONEG   0
+#define RTE_ETH_SPEED_SMARTLINQ  0x8 /* deprecated - use link_modes field instead */
 
 	u32 pause;      /* bitmask */
 #define ETH_PAUSE_NONE		0x0
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 27f6932dc74e..c907d7fd8312 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -342,9 +342,9 @@ qede_assign_rxtx_handlers(struct rte_eth_dev *dev, bool is_dummy)
 	}
 
 	use_tx_offload = !!(tx_offloads &
-			    (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
-			     DEV_TX_OFFLOAD_TCP_TSO | /* tso */
-			     DEV_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
+			    (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
+			     RTE_ETH_TX_OFFLOAD_TCP_TSO | /* tso */
+			     RTE_ETH_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
 
 	if (use_tx_offload) {
 		DP_INFO(edev, "Assigning qede_xmit_pkts\n");
@@ -1002,16 +1002,16 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
 	uint64_t rx_offloads = eth_dev->data->dev_conf.rxmode.offloads;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			(void)qede_vlan_stripping(eth_dev, 1);
 		else
 			(void)qede_vlan_stripping(eth_dev, 0);
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* VLAN filtering kicks in when a VLAN is added */
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 			qede_vlan_filter_set(eth_dev, 0, 1);
 		} else {
 			if (qdev->configured_vlans > 1) { /* Excluding VLAN0 */
@@ -1022,7 +1022,7 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 				 * enabled
 				 */
 				eth_dev->data->dev_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_VLAN_FILTER;
+						RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			} else {
 				qede_vlan_filter_set(eth_dev, 0, 0);
 			}
@@ -1069,11 +1069,11 @@ int qede_config_rss(struct rte_eth_dev *eth_dev)
 	/* Configure default RETA */
 	memset(reta_conf, 0, sizeof(reta_conf));
 	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++)
-		reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
 
 	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
-		id = i / RTE_RETA_GROUP_SIZE;
-		pos = i % RTE_RETA_GROUP_SIZE;
+		id = i / RTE_ETH_RETA_GROUP_SIZE;
+		pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		q = i % QEDE_RSS_COUNT(eth_dev);
 		reta_conf[id].reta[pos] = q;
 	}
@@ -1112,12 +1112,12 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
 	}
 
 	/* Configure TPA parameters */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		if (qede_enable_tpa(eth_dev, true))
 			return -EINVAL;
 		/* Enable scatter mode for LRO */
 		if (!eth_dev->data->scattered_rx)
-			rxmode->offloads |= DEV_RX_OFFLOAD_SCATTER;
+			rxmode->offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 	}
 
 	/* Start queues */
@@ -1132,7 +1132,7 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
 	 * Also, we would like to retain similar behavior in PF case, so we
 	 * don't do PF/VF specific check here.
 	 */
-	if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 		if (qede_config_rss(eth_dev))
 			goto err;
 
@@ -1272,8 +1272,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE(edev);
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* We need to have min 1 RX queue.There is no min check in
 	 * rte_eth_dev_configure(), so we are checking it here.
@@ -1291,8 +1291,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 		DP_NOTICE(edev, false,
 			  "Invalid devargs supplied, requested change will not take effect\n");
 
-	if (!(rxmode->mq_mode == ETH_MQ_RX_NONE ||
-	      rxmode->mq_mode == ETH_MQ_RX_RSS)) {
+	if (!(rxmode->mq_mode == RTE_ETH_MQ_RX_NONE ||
+	      rxmode->mq_mode == RTE_ETH_MQ_RX_RSS)) {
 		DP_ERR(edev, "Unsupported multi-queue mode\n");
 		return -ENOTSUP;
 	}
@@ -1312,7 +1312,7 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 			return -ENOMEM;
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		eth_dev->data->scattered_rx = 1;
 
 	if (qede_start_vport(qdev, eth_dev->data->mtu))
@@ -1321,8 +1321,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 	qdev->mtu = eth_dev->data->mtu;
 
 	/* Enable VLAN offloads by default */
-	ret = qede_vlan_offload_set(eth_dev, ETH_VLAN_STRIP_MASK  |
-					     ETH_VLAN_FILTER_MASK);
+	ret = qede_vlan_offload_set(eth_dev, RTE_ETH_VLAN_STRIP_MASK  |
+					     RTE_ETH_VLAN_FILTER_MASK);
 	if (ret)
 		return ret;
 
@@ -1385,34 +1385,34 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->reta_size = ECORE_RSS_IND_TABLE_SIZE;
 	dev_info->hash_key_size = ECORE_RSS_KEY_SIZE * sizeof(uint32_t);
 	dev_info->flow_type_rss_offloads = (uint64_t)QEDE_RSS_OFFLOAD_ALL;
-	dev_info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM	|
-				     DEV_RX_OFFLOAD_UDP_CKSUM	|
-				     DEV_RX_OFFLOAD_TCP_CKSUM	|
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				     DEV_RX_OFFLOAD_TCP_LRO	|
-				     DEV_RX_OFFLOAD_KEEP_CRC    |
-				     DEV_RX_OFFLOAD_SCATTER	|
-				     DEV_RX_OFFLOAD_VLAN_FILTER |
-				     DEV_RX_OFFLOAD_VLAN_STRIP  |
-				     DEV_RX_OFFLOAD_RSS_HASH);
+	dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM	|
+				     RTE_ETH_RX_OFFLOAD_UDP_CKSUM	|
+				     RTE_ETH_RX_OFFLOAD_TCP_CKSUM	|
+				     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     RTE_ETH_RX_OFFLOAD_TCP_LRO	|
+				     RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+				     RTE_ETH_RX_OFFLOAD_SCATTER	|
+				     RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				     RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+				     RTE_ETH_RX_OFFLOAD_RSS_HASH);
 	dev_info->rx_queue_offload_capa = 0;
 
 	/* TX offloads are on a per-packet basis, so it is applicable
 	 * to both at port and queue levels.
 	 */
-	dev_info->tx_offload_capa = (DEV_TX_OFFLOAD_VLAN_INSERT	|
-				     DEV_TX_OFFLOAD_IPV4_CKSUM	|
-				     DEV_TX_OFFLOAD_UDP_CKSUM	|
-				     DEV_TX_OFFLOAD_TCP_CKSUM	|
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				     DEV_TX_OFFLOAD_MULTI_SEGS  |
-				     DEV_TX_OFFLOAD_TCP_TSO	|
-				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO);
+	dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_VLAN_INSERT	|
+				     RTE_ETH_TX_OFFLOAD_IPV4_CKSUM	|
+				     RTE_ETH_TX_OFFLOAD_UDP_CKSUM	|
+				     RTE_ETH_TX_OFFLOAD_TCP_CKSUM	|
+				     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+				     RTE_ETH_TX_OFFLOAD_TCP_TSO	|
+				     RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO);
 	dev_info->tx_queue_offload_capa = dev_info->tx_offload_capa;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
-		.offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+		.offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
 	};
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1424,17 +1424,17 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
 	memset(&link, 0, sizeof(struct qed_link_output));
 	qdev->ops->common->get_link(edev, &link);
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G)
-		speed_cap |= ETH_LINK_SPEED_1G;
+		speed_cap |= RTE_ETH_LINK_SPEED_1G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G)
-		speed_cap |= ETH_LINK_SPEED_10G;
+		speed_cap |= RTE_ETH_LINK_SPEED_10G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G)
-		speed_cap |= ETH_LINK_SPEED_25G;
+		speed_cap |= RTE_ETH_LINK_SPEED_25G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G)
-		speed_cap |= ETH_LINK_SPEED_40G;
+		speed_cap |= RTE_ETH_LINK_SPEED_40G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G)
-		speed_cap |= ETH_LINK_SPEED_50G;
+		speed_cap |= RTE_ETH_LINK_SPEED_50G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G)
-		speed_cap |= ETH_LINK_SPEED_100G;
+		speed_cap |= RTE_ETH_LINK_SPEED_100G;
 	dev_info->speed_capa = speed_cap;
 
 	return 0;
@@ -1461,10 +1461,10 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
 	/* Link Mode */
 	switch (q_link.duplex) {
 	case QEDE_DUPLEX_HALF:
-		link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case QEDE_DUPLEX_FULL:
-		link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case QEDE_DUPLEX_UNKNOWN:
 	default:
@@ -1473,11 +1473,11 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
 	link.link_duplex = link_duplex;
 
 	/* Link Status */
-	link.link_status = q_link.link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	link.link_status = q_link.link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	/* AN */
 	link.link_autoneg = (q_link.supported_caps & QEDE_SUPPORTED_AUTONEG) ?
-			     ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+			     RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
 
 	DP_INFO(edev, "Link - Speed %u Mode %u AN %u Status %u\n",
 		link.link_speed, link.link_duplex,
@@ -2012,12 +2012,12 @@ static int qede_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Pause is assumed to be supported (SUPPORTED_Pause) */
-	if (fc_conf->mode == RTE_FC_FULL)
+	if (fc_conf->mode == RTE_ETH_FC_FULL)
 		params.pause_config |= (QED_LINK_PAUSE_TX_ENABLE |
 					QED_LINK_PAUSE_RX_ENABLE);
-	if (fc_conf->mode == RTE_FC_TX_PAUSE)
+	if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
 		params.pause_config |= QED_LINK_PAUSE_TX_ENABLE;
-	if (fc_conf->mode == RTE_FC_RX_PAUSE)
+	if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
 		params.pause_config |= QED_LINK_PAUSE_RX_ENABLE;
 
 	params.link_up = true;
@@ -2041,13 +2041,13 @@ static int qede_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 
 	if (current_link.pause_config & (QED_LINK_PAUSE_RX_ENABLE |
 					 QED_LINK_PAUSE_TX_ENABLE))
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (current_link.pause_config & QED_LINK_PAUSE_RX_ENABLE)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (current_link.pause_config & QED_LINK_PAUSE_TX_ENABLE)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -2088,14 +2088,14 @@ qede_dev_supported_ptypes_get(struct rte_eth_dev *eth_dev)
 static void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf)
 {
 	*rss_caps = 0;
-	*rss_caps |= (hf & ETH_RSS_IPV4)              ? ECORE_RSS_IPV4 : 0;
-	*rss_caps |= (hf & ETH_RSS_IPV6)              ? ECORE_RSS_IPV6 : 0;
-	*rss_caps |= (hf & ETH_RSS_IPV6_EX)           ? ECORE_RSS_IPV6 : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_TCP)  ? ECORE_RSS_IPV4_TCP : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_TCP)  ? ECORE_RSS_IPV6_TCP : 0;
-	*rss_caps |= (hf & ETH_RSS_IPV6_TCP_EX)       ? ECORE_RSS_IPV6_TCP : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_UDP)  ? ECORE_RSS_IPV4_UDP : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_UDP)  ? ECORE_RSS_IPV6_UDP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV4)              ? ECORE_RSS_IPV4 : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV6)              ? ECORE_RSS_IPV6 : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV6_EX)           ? ECORE_RSS_IPV6 : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)  ? ECORE_RSS_IPV4_TCP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)  ? ECORE_RSS_IPV6_TCP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV6_TCP_EX)       ? ECORE_RSS_IPV6_TCP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)  ? ECORE_RSS_IPV4_UDP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)  ? ECORE_RSS_IPV6_UDP : 0;
 }
 
 int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
@@ -2221,7 +2221,7 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 	uint8_t entry;
 	int rc = 0;
 
-	if (reta_size > ETH_RSS_RETA_SIZE_128) {
+	if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
 		DP_ERR(edev, "reta_size %d is not supported by hardware\n",
 		       reta_size);
 		return -EINVAL;
@@ -2245,8 +2245,8 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 
 	for_each_hwfn(edev, i) {
 		for (j = 0; j < reta_size; j++) {
-			idx = j / RTE_RETA_GROUP_SIZE;
-			shift = j % RTE_RETA_GROUP_SIZE;
+			idx = j / RTE_ETH_RETA_GROUP_SIZE;
+			shift = j % RTE_ETH_RETA_GROUP_SIZE;
 			if (reta_conf[idx].mask & (1ULL << shift)) {
 				entry = reta_conf[idx].reta[shift];
 				fid = entry * edev->num_hwfns + i;
@@ -2282,15 +2282,15 @@ static int qede_rss_reta_query(struct rte_eth_dev *eth_dev,
 	uint16_t i, idx, shift;
 	uint8_t entry;
 
-	if (reta_size > ETH_RSS_RETA_SIZE_128) {
+	if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
 		DP_ERR(edev, "reta_size %d is not supported\n",
 		       reta_size);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift)) {
 			entry = qdev->rss_ind_table[i];
 			reta_conf[idx].reta[shift] = entry;
@@ -2718,16 +2718,16 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 	adapter->ipgre.num_filters = 0;
 	if (is_vf) {
 		adapter->vxlan.enable = true;
-		adapter->vxlan.filter_type = ETH_TUNNEL_FILTER_IMAC |
-					     ETH_TUNNEL_FILTER_IVLAN;
+		adapter->vxlan.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+					     RTE_ETH_TUNNEL_FILTER_IVLAN;
 		adapter->vxlan.udp_port = QEDE_VXLAN_DEF_PORT;
 		adapter->geneve.enable = true;
-		adapter->geneve.filter_type = ETH_TUNNEL_FILTER_IMAC |
-					      ETH_TUNNEL_FILTER_IVLAN;
+		adapter->geneve.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+					      RTE_ETH_TUNNEL_FILTER_IVLAN;
 		adapter->geneve.udp_port = QEDE_GENEVE_DEF_PORT;
 		adapter->ipgre.enable = true;
-		adapter->ipgre.filter_type = ETH_TUNNEL_FILTER_IMAC |
-					     ETH_TUNNEL_FILTER_IVLAN;
+		adapter->ipgre.filter_type = RTE_ETH_TUNNEL_FILTER_IMAC |
+					     RTE_ETH_TUNNEL_FILTER_IVLAN;
 	} else {
 		adapter->vxlan.enable = false;
 		adapter->geneve.enable = false;
diff --git a/drivers/net/qede/qede_filter.c b/drivers/net/qede/qede_filter.c
index c756594bfc4b..440440423a32 100644
--- a/drivers/net/qede/qede_filter.c
+++ b/drivers/net/qede/qede_filter.c
@@ -20,97 +20,97 @@ const struct _qede_udp_tunn_types {
 	const char *string;
 } qede_tunn_types[] = {
 	{
-		ETH_TUNNEL_FILTER_OMAC,
+		RTE_ETH_TUNNEL_FILTER_OMAC,
 		ECORE_FILTER_MAC,
 		ECORE_TUNN_CLSS_MAC_VLAN,
 		"outer-mac"
 	},
 	{
-		ETH_TUNNEL_FILTER_TENID,
+		RTE_ETH_TUNNEL_FILTER_TENID,
 		ECORE_FILTER_VNI,
 		ECORE_TUNN_CLSS_MAC_VNI,
 		"vni"
 	},
 	{
-		ETH_TUNNEL_FILTER_IMAC,
+		RTE_ETH_TUNNEL_FILTER_IMAC,
 		ECORE_FILTER_INNER_MAC,
 		ECORE_TUNN_CLSS_INNER_MAC_VLAN,
 		"inner-mac"
 	},
 	{
-		ETH_TUNNEL_FILTER_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_IVLAN,
 		ECORE_FILTER_INNER_VLAN,
 		ECORE_TUNN_CLSS_INNER_MAC_VLAN,
 		"inner-vlan"
 	},
 	{
-		ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_TENID,
+		RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_TENID,
 		ECORE_FILTER_MAC_VNI_PAIR,
 		ECORE_TUNN_CLSS_MAC_VNI,
 		"outer-mac and vni"
 	},
 	{
-		ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_IMAC,
+		RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_IMAC,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"outer-mac and inner-mac"
 	},
 	{
-		ETH_TUNNEL_FILTER_OMAC | ETH_TUNNEL_FILTER_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"outer-mac and inner-vlan"
 	},
 	{
-		ETH_TUNNEL_FILTER_TENID | ETH_TUNNEL_FILTER_IMAC,
+		RTE_ETH_TUNNEL_FILTER_TENID | RTE_ETH_TUNNEL_FILTER_IMAC,
 		ECORE_FILTER_INNER_MAC_VNI_PAIR,
 		ECORE_TUNN_CLSS_INNER_MAC_VNI,
 		"vni and inner-mac",
 	},
 	{
-		ETH_TUNNEL_FILTER_TENID | ETH_TUNNEL_FILTER_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_TENID | RTE_ETH_TUNNEL_FILTER_IVLAN,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"vni and inner-vlan",
 	},
 	{
-		ETH_TUNNEL_FILTER_IMAC | ETH_TUNNEL_FILTER_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
 		ECORE_FILTER_INNER_PAIR,
 		ECORE_TUNN_CLSS_INNER_MAC_VLAN,
 		"inner-mac and inner-vlan",
 	},
 	{
-		ETH_TUNNEL_FILTER_OIP,
+		RTE_ETH_TUNNEL_FILTER_OIP,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"outer-IP"
 	},
 	{
-		ETH_TUNNEL_FILTER_IIP,
+		RTE_ETH_TUNNEL_FILTER_IIP,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"inner-IP"
 	},
 	{
-		RTE_TUNNEL_FILTER_IMAC_IVLAN,
+		RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"IMAC_IVLAN"
 	},
 	{
-		RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID,
+		RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"IMAC_IVLAN_TENID"
 	},
 	{
-		RTE_TUNNEL_FILTER_IMAC_TENID,
+		RTE_ETH_TUNNEL_FILTER_IMAC_TENID,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"IMAC_TENID"
 	},
 	{
-		RTE_TUNNEL_FILTER_OMAC_TENID_IMAC,
+		RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC,
 		ECORE_FILTER_UNUSED,
 		MAX_ECORE_TUNN_CLSS,
 		"OMAC_TENID_IMAC"
@@ -144,7 +144,7 @@ int qede_check_fdir_support(struct rte_eth_dev *eth_dev)
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct rte_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
 
 	/* check FDIR modes */
 	switch (fdir->mode) {
@@ -542,7 +542,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
 	memset(&tunn, 0, sizeof(tunn));
 
 	switch (tunnel_udp->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (qdev->vxlan.udp_port != tunnel_udp->udp_port) {
 			DP_ERR(edev, "UDP port %u doesn't exist\n",
 				tunnel_udp->udp_port);
@@ -570,7 +570,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
 					ECORE_TUNN_CLSS_MAC_VLAN, false);
 
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (qdev->geneve.udp_port != tunnel_udp->udp_port) {
 			DP_ERR(edev, "UDP port %u doesn't exist\n",
 				tunnel_udp->udp_port);
@@ -622,7 +622,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
 	memset(&tunn, 0, sizeof(tunn));
 
 	switch (tunnel_udp->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (qdev->vxlan.udp_port == tunnel_udp->udp_port) {
 			DP_INFO(edev,
 				"UDP port %u for VXLAN was already configured\n",
@@ -659,7 +659,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
 
 		qdev->vxlan.udp_port = udp_port;
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (qdev->geneve.udp_port == tunnel_udp->udp_port) {
 			DP_INFO(edev,
 				"UDP port %u for GENEVE was already configured\n",
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index c2263787b4ec..d585db8b61e8 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -249,7 +249,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
 	bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
 	/* cache align the mbuf size to simplfy rx_buf_size calculation */
 	bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
-	if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)	||
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	||
 	    (max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) {
 		if (!dev->data->scattered_rx) {
 			DP_INFO(edev, "Forcing scatter-gather mode\n");
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index c9334448c887..15112b83f4f7 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -73,14 +73,14 @@
 #define QEDE_MAX_ETHER_HDR_LEN	(RTE_ETHER_HDR_LEN + QEDE_ETH_OVERHEAD)
 #define QEDE_ETH_MAX_LEN	(RTE_ETHER_MTU + QEDE_MAX_ETHER_HDR_LEN)
 
-#define QEDE_RSS_OFFLOAD_ALL    (ETH_RSS_IPV4			|\
-				 ETH_RSS_NONFRAG_IPV4_TCP	|\
-				 ETH_RSS_NONFRAG_IPV4_UDP	|\
-				 ETH_RSS_IPV6			|\
-				 ETH_RSS_NONFRAG_IPV6_TCP	|\
-				 ETH_RSS_NONFRAG_IPV6_UDP	|\
-				 ETH_RSS_VXLAN			|\
-				 ETH_RSS_GENEVE)
+#define QEDE_RSS_OFFLOAD_ALL    (RTE_ETH_RSS_IPV4			|\
+				 RTE_ETH_RSS_NONFRAG_IPV4_TCP	|\
+				 RTE_ETH_RSS_NONFRAG_IPV4_UDP	|\
+				 RTE_ETH_RSS_IPV6			|\
+				 RTE_ETH_RSS_NONFRAG_IPV6_TCP	|\
+				 RTE_ETH_RSS_NONFRAG_IPV6_UDP	|\
+				 RTE_ETH_RSS_VXLAN			|\
+				 RTE_ETH_RSS_GENEVE)
 
 #define QEDE_RXTX_MAX(qdev) \
 	(RTE_MAX(qdev->num_rx_queues, qdev->num_tx_queues))
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 0440019e07e1..db10f035dfcb 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -56,10 +56,10 @@ struct pmd_internals {
 };
 
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(eth_ring_logtype, NOTICE);
@@ -102,7 +102,7 @@ eth_dev_configure(struct rte_eth_dev *dev __rte_unused) { return 0; }
 static int
 eth_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -110,21 +110,21 @@ static int
 eth_dev_stop(struct rte_eth_dev *dev)
 {
 	dev->data->dev_started = 0;
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
 static int
 eth_dev_set_link_down(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
 static int
 eth_dev_set_link_up(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -163,8 +163,8 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_mac_addrs = 1;
 	dev_info->max_rx_pktlen = (uint32_t)-1;
 	dev_info->max_rx_queues = (uint16_t)internals->max_rx_queues;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	dev_info->max_tx_queues = (uint16_t)internals->max_tx_queues;
 	dev_info->min_rx_bufsize = 0;
 
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index f79f4d5ffc94..79a27c7703a8 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -105,13 +105,13 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
 {
 	uint32_t phy_caps = 0;
 
-	if (~speeds & ETH_LINK_SPEED_FIXED) {
+	if (~speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		phy_caps |= (1 << EFX_PHY_CAP_AN);
 		/*
 		 * If no speeds are specified in the mask, any supported
 		 * may be negotiated
 		 */
-		if (speeds == ETH_LINK_SPEED_AUTONEG)
+		if (speeds == RTE_ETH_LINK_SPEED_AUTONEG)
 			phy_caps |=
 				(1 << EFX_PHY_CAP_1000FDX) |
 				(1 << EFX_PHY_CAP_10000FDX) |
@@ -120,17 +120,17 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
 				(1 << EFX_PHY_CAP_50000FDX) |
 				(1 << EFX_PHY_CAP_100000FDX);
 	}
-	if (speeds & ETH_LINK_SPEED_1G)
+	if (speeds & RTE_ETH_LINK_SPEED_1G)
 		phy_caps |= (1 << EFX_PHY_CAP_1000FDX);
-	if (speeds & ETH_LINK_SPEED_10G)
+	if (speeds & RTE_ETH_LINK_SPEED_10G)
 		phy_caps |= (1 << EFX_PHY_CAP_10000FDX);
-	if (speeds & ETH_LINK_SPEED_25G)
+	if (speeds & RTE_ETH_LINK_SPEED_25G)
 		phy_caps |= (1 << EFX_PHY_CAP_25000FDX);
-	if (speeds & ETH_LINK_SPEED_40G)
+	if (speeds & RTE_ETH_LINK_SPEED_40G)
 		phy_caps |= (1 << EFX_PHY_CAP_40000FDX);
-	if (speeds & ETH_LINK_SPEED_50G)
+	if (speeds & RTE_ETH_LINK_SPEED_50G)
 		phy_caps |= (1 << EFX_PHY_CAP_50000FDX);
-	if (speeds & ETH_LINK_SPEED_100G)
+	if (speeds & RTE_ETH_LINK_SPEED_100G)
 		phy_caps |= (1 << EFX_PHY_CAP_100000FDX);
 
 	return phy_caps;
@@ -400,10 +400,10 @@ sfc_set_fw_subvariant(struct sfc_adapter *sa)
 			tx_offloads |= txq_info->offloads;
 	}
 
-	if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			   DEV_TX_OFFLOAD_TCP_CKSUM |
-			   DEV_TX_OFFLOAD_UDP_CKSUM |
-			   DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
 		req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_DEFAULT;
 	else
 		req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_NO_TX_CSUM;
@@ -898,7 +898,7 @@ sfc_attach(struct sfc_adapter *sa)
 	sa->priv.shared->tunnel_encaps =
 		encp->enc_tunnel_encapsulations_supported;
 
-	if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		sa->tso = encp->enc_fw_assisted_tso_v2_enabled ||
 			  encp->enc_tso_v3_enabled;
 		if (!sa->tso)
@@ -907,8 +907,8 @@ sfc_attach(struct sfc_adapter *sa)
 
 	if (sa->tso &&
 	    (sfc_dp_tx_offload_capa(sa->priv.dp_tx) &
-	     (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-	      DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
+	     (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+	      RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
 		sa->tso_encap = encp->enc_fw_assisted_tso_v2_encap_enabled ||
 				encp->enc_tso_v3_enabled;
 		if (!sa->tso_encap)
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index d958fd642fb1..eeb73a7530ef 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -979,11 +979,11 @@ struct sfc_dp_rx sfc_ef100_rx = {
 				  SFC_DP_RX_FEAT_INTR |
 				  SFC_DP_RX_FEAT_STATS,
 	.dev_offload_capa	= 0,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-				  DEV_RX_OFFLOAD_SCATTER |
-				  DEV_RX_OFFLOAD_RSS_HASH,
+	.queue_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				  RTE_ETH_RX_OFFLOAD_SCATTER |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
 	.get_dev_info		= sfc_ef100_rx_get_dev_info,
 	.qsize_up_rings		= sfc_ef100_rx_qsize_up_rings,
 	.qcreate		= sfc_ef100_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index e166fda888b1..67980a587fe4 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -971,16 +971,16 @@ struct sfc_dp_tx sfc_ef100_tx = {
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS |
 				  SFC_DP_TX_FEAT_STATS,
 	.dev_offload_capa	= 0,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_VLAN_INSERT |
-				  DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_MULTI_SEGS |
-				  DEV_TX_OFFLOAD_TCP_TSO |
-				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				  RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				  RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
 	.get_dev_info		= sfc_ef100_get_dev_info,
 	.qsize_up_rings		= sfc_ef100_tx_qsize_up_rings,
 	.qcreate		= sfc_ef100_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 991329e86f01..9ea207cca163 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -746,8 +746,8 @@ struct sfc_dp_rx sfc_ef10_essb_rx = {
 	},
 	.features		= SFC_DP_RX_FEAT_FLOW_FLAG |
 				  SFC_DP_RX_FEAT_FLOW_MARK,
-	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_RSS_HASH,
+	.dev_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
 	.queue_offload_capa	= 0,
 	.get_dev_info		= sfc_ef10_essb_rx_get_dev_info,
 	.pool_ops_supported	= sfc_ef10_essb_rx_pool_ops_supported,
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 49a7d4fb42fd..9aaabd30eee6 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -819,10 +819,10 @@ struct sfc_dp_rx sfc_ef10_rx = {
 	},
 	.features		= SFC_DP_RX_FEAT_MULTI_PROCESS |
 				  SFC_DP_RX_FEAT_INTR,
-	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_RX_OFFLOAD_RSS_HASH,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_SCATTER,
+	.dev_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
+	.queue_offload_capa	= RTE_ETH_RX_OFFLOAD_SCATTER,
 	.get_dev_info		= sfc_ef10_rx_get_dev_info,
 	.qsize_up_rings		= sfc_ef10_rx_qsize_up_rings,
 	.qcreate		= sfc_ef10_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index ed43adb4ca5c..e7da4608bcb0 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -958,9 +958,9 @@ sfc_ef10_tx_qcreate(uint16_t port_id, uint16_t queue_id,
 	if (txq->sw_ring == NULL)
 		goto fail_sw_ring_alloc;
 
-	if (info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-			      DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			      DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) {
+	if (info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+			      RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			      RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) {
 		txq->tsoh = rte_calloc_socket("sfc-ef10-txq-tsoh",
 					      info->txq_entries,
 					      SFC_TSOH_STD_LEN,
@@ -1125,14 +1125,14 @@ struct sfc_dp_tx sfc_ef10_tx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_EF10,
 	},
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
-	.dev_offload_capa	= DEV_TX_OFFLOAD_MULTI_SEGS,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_TSO |
-				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+	.dev_offload_capa	= RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
 	.get_dev_info		= sfc_ef10_get_dev_info,
 	.qsize_up_rings		= sfc_ef10_tx_qsize_up_rings,
 	.qcreate		= sfc_ef10_tx_qcreate,
@@ -1152,11 +1152,11 @@ struct sfc_dp_tx sfc_ef10_simple_tx = {
 		.type		= SFC_DP_TX,
 	},
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
-	.dev_offload_capa	= DEV_TX_OFFLOAD_MBUF_FAST_FREE,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM,
+	.dev_offload_capa	= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM,
 	.get_dev_info		= sfc_ef10_get_dev_info,
 	.qsize_up_rings		= sfc_ef10_tx_qsize_up_rings,
 	.qcreate		= sfc_ef10_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index de0fac899f77..26973075ef4d 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -105,19 +105,19 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_vfs = sa->sriov.num_vfs;
 
 	/* Autonegotiation may be disabled */
-	dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_1000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_1G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_10000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_10G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_25000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_25G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_40000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_50000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_100000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
 
 	dev_info->max_rx_queues = sa->rxq_max;
 	dev_info->max_tx_queues = sa->txq_max;
@@ -145,8 +145,8 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->tx_offload_capa = sfc_tx_get_dev_offload_caps(sa) |
 				    dev_info->tx_queue_offload_capa;
 
-	if (dev_info->tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
-		txq_offloads_def |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	if (dev_info->tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+		txq_offloads_def |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->default_txconf.offloads |= txq_offloads_def;
 
@@ -988,16 +988,16 @@ sfc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	switch (link_fc) {
 	case 0:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		break;
 	case EFX_FCNTL_RESPOND:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case EFX_FCNTL_GENERATE:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case (EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE):
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	default:
 		sfc_err(sa, "%s: unexpected flow control value %#x",
@@ -1028,16 +1028,16 @@ sfc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		fcntl = 0;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		fcntl = EFX_FCNTL_RESPOND;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		fcntl = EFX_FCNTL_GENERATE;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		fcntl = EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE;
 		break;
 	default:
@@ -1312,7 +1312,7 @@ sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 	qinfo->conf.rx_deferred_start = rxq_info->deferred_start;
 	qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads;
 	if (rxq_info->type_flags & EFX_RXQ_FLAG_SCATTER) {
-		qinfo->conf.offloads |= DEV_RX_OFFLOAD_SCATTER;
+		qinfo->conf.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 		qinfo->scattered_rx = 1;
 	}
 	qinfo->nb_desc = rxq_info->entries;
@@ -1522,9 +1522,9 @@ static efx_tunnel_protocol_t
 sfc_tunnel_rte_type_to_efx_udp_proto(enum rte_eth_tunnel_type rte_type)
 {
 	switch (rte_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		return EFX_TUNNEL_PROTOCOL_VXLAN;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		return EFX_TUNNEL_PROTOCOL_GENEVE;
 	default:
 		return EFX_TUNNEL_NPROTOS;
@@ -1651,7 +1651,7 @@ sfc_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	/*
 	 * Mapping of hash configuration between RTE and EFX is not one-to-one,
-	 * hence, conversion is done here to derive a correct set of ETH_RSS
+	 * hence, conversion is done here to derive a correct set of RTE_ETH_RSS
 	 * flags which corresponds to the active EFX configuration stored
 	 * locally in 'sfc_adapter' and kept up-to-date
 	 */
@@ -1777,8 +1777,8 @@ sfc_dev_rss_reta_query(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	for (entry = 0; entry < reta_size; entry++) {
-		int grp = entry / RTE_RETA_GROUP_SIZE;
-		int grp_idx = entry % RTE_RETA_GROUP_SIZE;
+		int grp = entry / RTE_ETH_RETA_GROUP_SIZE;
+		int grp_idx = entry % RTE_ETH_RETA_GROUP_SIZE;
 
 		if ((reta_conf[grp].mask >> grp_idx) & 1)
 			reta_conf[grp].reta[grp_idx] = rss->tbl[entry];
@@ -1827,10 +1827,10 @@ sfc_dev_rss_reta_update(struct rte_eth_dev *dev,
 	rte_memcpy(rss_tbl_new, rss->tbl, sizeof(rss->tbl));
 
 	for (entry = 0; entry < reta_size; entry++) {
-		int grp_idx = entry % RTE_RETA_GROUP_SIZE;
+		int grp_idx = entry % RTE_ETH_RETA_GROUP_SIZE;
 		struct rte_eth_rss_reta_entry64 *grp;
 
-		grp = &reta_conf[entry / RTE_RETA_GROUP_SIZE];
+		grp = &reta_conf[entry / RTE_ETH_RETA_GROUP_SIZE];
 
 		if (grp->mask & (1ull << grp_idx)) {
 			if (grp->reta[grp_idx] >= rss->channels) {
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 81b9923644aa..23399fcab252 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -391,7 +391,7 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = NULL;
 	const struct rte_flow_item_vlan *mask = NULL;
 	const struct rte_flow_item_vlan supp_mask = {
-		.tci = rte_cpu_to_be_16(ETH_VLAN_ID_MAX),
+		.tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
 		.inner_type = RTE_BE16(0xffff),
 	};
 
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index 5320d8903dac..27b02b1119fb 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -573,66 +573,66 @@ sfc_port_link_mode_to_info(efx_link_mode_t link_mode,
 
 	memset(link_info, 0, sizeof(*link_info));
 	if ((link_mode == EFX_LINK_DOWN) || (link_mode == EFX_LINK_UNKNOWN))
-		link_info->link_status = ETH_LINK_DOWN;
+		link_info->link_status = RTE_ETH_LINK_DOWN;
 	else
-		link_info->link_status = ETH_LINK_UP;
+		link_info->link_status = RTE_ETH_LINK_UP;
 
 	switch (link_mode) {
 	case EFX_LINK_10HDX:
-		link_info->link_speed  = ETH_SPEED_NUM_10M;
-		link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_10M;
+		link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case EFX_LINK_10FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_10M;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_10M;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_100HDX:
-		link_info->link_speed  = ETH_SPEED_NUM_100M;
-		link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_100M;
+		link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case EFX_LINK_100FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_100M;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_100M;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_1000HDX:
-		link_info->link_speed  = ETH_SPEED_NUM_1G;
-		link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_1G;
+		link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case EFX_LINK_1000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_1G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_1G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_10000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_10G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_10G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_25000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_25G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_25G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_40000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_40G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_40G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_50000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_50G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_50G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_100000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_100G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_100G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	default:
 		SFC_ASSERT(B_FALSE);
 		/* FALLTHROUGH */
 	case EFX_LINK_UNKNOWN:
 	case EFX_LINK_DOWN:
-		link_info->link_speed  = ETH_SPEED_NUM_NONE;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_NONE;
 		link_info->link_duplex = 0;
 		break;
 	}
 
-	link_info->link_autoneg = ETH_LINK_AUTONEG;
+	link_info->link_autoneg = RTE_ETH_LINK_AUTONEG;
 }
 
 int
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index 2500b14cb006..9d88d554c1ba 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -405,7 +405,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
 	}
 
 	switch (conf->rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		if (nb_rx_queues != 1) {
 			sfcr_err(sr, "Rx RSS is not supported with %u queues",
 				 nb_rx_queues);
@@ -420,7 +420,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
 			ret = -EINVAL;
 		}
 		break;
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 		break;
 	default:
 		sfcr_err(sr, "Rx mode MQ modes other than RSS not supported");
@@ -428,7 +428,7 @@ sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
 		break;
 	}
 
-	if (conf->txmode.mq_mode != ETH_MQ_TX_NONE) {
+	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
 		sfcr_err(sr, "Tx mode MQ modes not supported");
 		ret = -EINVAL;
 	}
@@ -553,8 +553,8 @@ sfc_repr_dev_link_update(struct rte_eth_dev *dev,
 		sfc_port_link_mode_to_info(EFX_LINK_UNKNOWN, &link);
 	} else {
 		memset(&link, 0, sizeof(link));
-		link.link_status = ETH_LINK_UP;
-		link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		link.link_status = RTE_ETH_LINK_UP;
+		link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index c60ef17a922a..23df27c8f45a 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -648,9 +648,9 @@ struct sfc_dp_rx sfc_efx_rx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_RX_EFX,
 	},
 	.features		= SFC_DP_RX_FEAT_INTR,
-	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_RSS_HASH,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_SCATTER,
+	.dev_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
+	.queue_offload_capa	= RTE_ETH_RX_OFFLOAD_SCATTER,
 	.qsize_up_rings		= sfc_efx_rx_qsize_up_rings,
 	.qcreate		= sfc_efx_rx_qcreate,
 	.qdestroy		= sfc_efx_rx_qdestroy,
@@ -931,7 +931,7 @@ sfc_rx_get_offload_mask(struct sfc_adapter *sa)
 	uint64_t no_caps = 0;
 
 	if (encp->enc_tunnel_encapsulations_supported == 0)
-		no_caps |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		no_caps |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 	return ~no_caps;
 }
@@ -1140,7 +1140,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 
 	if (!sfc_rx_check_scatter(sa->port.pdu, buf_size,
 				  encp->enc_rx_prefix_size,
-				  (offloads & DEV_RX_OFFLOAD_SCATTER),
+				  (offloads & RTE_ETH_RX_OFFLOAD_SCATTER),
 				  encp->enc_rx_scatter_max,
 				  &error)) {
 		sfc_err(sa, "RxQ %d (internal %u) MTU check failed: %s",
@@ -1166,15 +1166,15 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 		rxq_info->type = EFX_RXQ_TYPE_DEFAULT;
 
 	rxq_info->type_flags |=
-		(offloads & DEV_RX_OFFLOAD_SCATTER) ?
+		(offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ?
 		EFX_RXQ_FLAG_SCATTER : EFX_RXQ_FLAG_NONE;
 
 	if ((encp->enc_tunnel_encapsulations_supported != 0) &&
 	    (sfc_dp_rx_offload_capa(sa->priv.dp_rx) &
-	     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+	     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
 		rxq_info->type_flags |= EFX_RXQ_FLAG_INNER_CLASSES;
 
-	if (offloads & DEV_RX_OFFLOAD_RSS_HASH)
+	if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)
 		rxq_info->type_flags |= EFX_RXQ_FLAG_RSS_HASH;
 
 	if ((sa->negotiated_rx_metadata & RTE_ETH_RX_METADATA_USER_FLAG) != 0)
@@ -1211,7 +1211,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 	rxq_info->refill_mb_pool = mb_pool;
 
 	if (rss->hash_support == EFX_RX_HASH_AVAILABLE && rss->channels > 0 &&
-	    (offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	    (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		rxq_info->rxq_flags = SFC_RXQ_FLAG_RSS_HASH;
 	else
 		rxq_info->rxq_flags = 0;
@@ -1313,19 +1313,19 @@ sfc_rx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
  * Mapping between RTE RSS hash functions and their EFX counterparts.
  */
 static const struct sfc_rss_hf_rte_to_efx sfc_rss_hf_map[] = {
-	{ ETH_RSS_NONFRAG_IPV4_TCP,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 	  EFX_RX_HASH(IPV4_TCP, 4TUPLE) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 	  EFX_RX_HASH(IPV4_UDP, 4TUPLE) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX,
 	  EFX_RX_HASH(IPV6_TCP, 4TUPLE) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX,
 	  EFX_RX_HASH(IPV6_UDP, 4TUPLE) },
-	{ ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER,
+	{ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	  EFX_RX_HASH(IPV4_TCP, 2TUPLE) | EFX_RX_HASH(IPV4_UDP, 2TUPLE) |
 	  EFX_RX_HASH(IPV4, 2TUPLE) },
-	{ ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER |
-	  ETH_RSS_IPV6_EX,
+	{ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+	  RTE_ETH_RSS_IPV6_EX,
 	  EFX_RX_HASH(IPV6_TCP, 2TUPLE) | EFX_RX_HASH(IPV6_UDP, 2TUPLE) |
 	  EFX_RX_HASH(IPV6, 2TUPLE) }
 };
@@ -1645,10 +1645,10 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
 	int rc = 0;
 
 	switch (rxmode->mq_mode) {
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 		/* No special checks are required */
 		break;
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		if (rss->context_type == EFX_RX_SCALE_UNAVAILABLE) {
 			sfc_err(sa, "RSS is not available");
 			rc = EINVAL;
@@ -1665,16 +1665,16 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
 	 * so unsupported offloads cannot be added as the result of
 	 * below check.
 	 */
-	if ((rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM) !=
-	    (offloads_supported & DEV_RX_OFFLOAD_CHECKSUM)) {
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM) !=
+	    (offloads_supported & RTE_ETH_RX_OFFLOAD_CHECKSUM)) {
 		sfc_warn(sa, "Rx checksum offloads cannot be disabled - always on (IPv4/TCP/UDP)");
-		rxmode->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 	}
 
-	if ((offloads_supported & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
-	    (~rxmode->offloads & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+	if ((offloads_supported & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+	    (~rxmode->offloads & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 		sfc_warn(sa, "Rx outer IPv4 checksum offload cannot be disabled - always on");
-		rxmode->offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 	}
 
 	return rc;
@@ -1820,7 +1820,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	}
 
 configure_rss:
-	rss->channels = (dev_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) ?
+	rss->channels = (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) ?
 			 MIN(sas->ethdev_rxq_count, EFX_MAXRSS) : 0;
 
 	if (rss->channels > 0) {
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 13392cdd5a09..0273788c20ce 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -54,23 +54,23 @@ sfc_tx_get_offload_mask(struct sfc_adapter *sa)
 	uint64_t no_caps = 0;
 
 	if (!encp->enc_hw_tx_insert_vlan_enabled)
-		no_caps |= DEV_TX_OFFLOAD_VLAN_INSERT;
+		no_caps |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if (!encp->enc_tunnel_encapsulations_supported)
-		no_caps |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+		no_caps |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 	if (!sa->tso)
-		no_caps |= DEV_TX_OFFLOAD_TCP_TSO;
+		no_caps |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (!sa->tso_encap ||
 	    (encp->enc_tunnel_encapsulations_supported &
 	     (1u << EFX_TUNNEL_PROTOCOL_VXLAN)) == 0)
-		no_caps |= DEV_TX_OFFLOAD_VXLAN_TNL_TSO;
+		no_caps |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
 
 	if (!sa->tso_encap ||
 	    (encp->enc_tunnel_encapsulations_supported &
 	     (1u << EFX_TUNNEL_PROTOCOL_GENEVE)) == 0)
-		no_caps |= DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+		no_caps |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
 
 	return ~no_caps;
 }
@@ -114,8 +114,8 @@ sfc_tx_qcheck_conf(struct sfc_adapter *sa, unsigned int txq_max_fill_level,
 	}
 
 	/* We either perform both TCP and UDP offload, or no offload at all */
-	if (((offloads & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) !=
-	    ((offloads & DEV_TX_OFFLOAD_UDP_CKSUM) == 0)) {
+	if (((offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) !=
+	    ((offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0)) {
 		sfc_err(sa, "TCP and UDP offloads can't be set independently");
 		rc = EINVAL;
 	}
@@ -309,7 +309,7 @@ sfc_tx_check_mode(struct sfc_adapter *sa, const struct rte_eth_txmode *txmode)
 	int rc = 0;
 
 	switch (txmode->mq_mode) {
-	case ETH_MQ_TX_NONE:
+	case RTE_ETH_MQ_TX_NONE:
 		break;
 	default:
 		sfc_err(sa, "Tx multi-queue mode %u not supported",
@@ -529,23 +529,23 @@ sfc_tx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 	if (rc != 0)
 		goto fail_ev_qstart;
 
-	if (txq_info->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+	if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		flags |= EFX_TXQ_CKSUM_IPV4;
 
-	if (txq_info->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+	if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 		flags |= EFX_TXQ_CKSUM_INNER_IPV4;
 
-	if ((txq_info->offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
-	    (txq_info->offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+	if ((txq_info->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+	    (txq_info->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
 		flags |= EFX_TXQ_CKSUM_TCPUDP;
 
-		if (offloads_supported & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+		if (offloads_supported & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 			flags |= EFX_TXQ_CKSUM_INNER_TCPUDP;
 	}
 
-	if (txq_info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+	if (txq_info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
 		flags |= EFX_TXQ_FATSOV2;
 
 	rc = efx_tx_qcreate(sa->nic, txq->hw_index, 0, &txq->mem,
@@ -876,9 +876,9 @@ sfc_efx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		/*
 		 * Here VLAN TCI is expected to be zero in case if no
-		 * DEV_TX_OFFLOAD_VLAN_INSERT capability is advertised;
+		 * RTE_ETH_TX_OFFLOAD_VLAN_INSERT capability is advertised;
 		 * if the calling app ignores the absence of
-		 * DEV_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
+		 * RTE_ETH_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
 		 * TX_ERROR will occur
 		 */
 		pkt_descs += sfc_efx_tx_maybe_insert_tag(txq, m_seg, &pend);
@@ -1242,13 +1242,13 @@ struct sfc_dp_tx sfc_efx_tx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_TX_EFX,
 	},
 	.features		= 0,
-	.dev_offload_capa	= DEV_TX_OFFLOAD_VLAN_INSERT |
-				  DEV_TX_OFFLOAD_MULTI_SEGS,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_TSO,
+	.dev_offload_capa	= RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				  RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_TSO,
 	.qsize_up_rings		= sfc_efx_tx_qsize_up_rings,
 	.qcreate		= sfc_efx_tx_qcreate,
 	.qdestroy		= sfc_efx_tx_qdestroy,
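
The capability masking done in sfc_tx_get_offload_mask() has a natural
application-side counterpart; a hedged sketch (port_id is a placeholder)
that keeps only the checksum offloads the port actually advertises:

#include <rte_ethdev.h>

static uint64_t
usable_tx_cksum_offloads(uint16_t port_id)
{
	struct rte_eth_dev_info info;
	uint64_t want = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
			RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
			RTE_ETH_TX_OFFLOAD_UDP_CKSUM;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return 0;
	/* Mask out anything the port does not advertise. */
	return want & info.tx_offload_capa;
}
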
diff --git a/drivers/net/softnic/rte_eth_softnic.c b/drivers/net/softnic/rte_eth_softnic.c
index b3b55b9035b1..3ef33818a9e0 100644
--- a/drivers/net/softnic/rte_eth_softnic.c
+++ b/drivers/net/softnic/rte_eth_softnic.c
@@ -173,7 +173,7 @@ pmd_dev_start(struct rte_eth_dev *dev)
 		return status;
 
 	/* Link UP */
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -184,7 +184,7 @@ pmd_dev_stop(struct rte_eth_dev *dev)
 	struct pmd_internals *p = dev->data->dev_private;
 
 	/* Link DOWN */
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	/* Firmware */
 	softnic_pipeline_disable_all(p);
@@ -386,10 +386,10 @@ pmd_ethdev_register(struct rte_vdev_device *vdev,
 
 	/* dev->data */
 	dev->data->dev_private = dev_private;
-	dev->data->dev_link.link_speed = ETH_SPEED_NUM_100G;
-	dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+	dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	dev->data->mac_addrs = &eth_addr;
 	dev->data->promiscuous = 1;
 	dev->data->numa_node = params->cpu_id;
diff --git a/drivers/net/szedata2/rte_eth_szedata2.c b/drivers/net/szedata2/rte_eth_szedata2.c
index 3c6a285e3c5e..6a084e3e1b1b 100644
--- a/drivers/net/szedata2/rte_eth_szedata2.c
+++ b/drivers/net/szedata2/rte_eth_szedata2.c
@@ -1042,7 +1042,7 @@ static int
 eth_dev_configure(struct rte_eth_dev *dev)
 {
 	struct rte_eth_dev_data *data = dev->data;
-	if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		dev->rx_pkt_burst = eth_szedata2_rx_scattered;
 		data->scattered_rx = 1;
 	} else {
@@ -1064,11 +1064,11 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_rx_queues = internals->max_rx_queues;
 	dev_info->max_tx_queues = internals->max_tx_queues;
 	dev_info->min_rx_bufsize = 0;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
 	dev_info->tx_offload_capa = 0;
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->tx_queue_offload_capa = 0;
-	dev_info->speed_capa = ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -1202,10 +1202,10 @@ eth_link_update(struct rte_eth_dev *dev,
 
 	memset(&link, 0, sizeof(link));
 
-	link.link_speed = ETH_SPEED_NUM_100G;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_status = ETH_LINK_UP;
-	link.link_autoneg = ETH_LINK_FIXED;
+	link.link_speed = RTE_ETH_SPEED_NUM_100G;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 
 	rte_eth_linkstatus_set(dev, &link);
 	return 0;
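
On the application side, the renamed link constants surface through
struct rte_eth_link; a small sketch (port_id is a placeholder) querying
and printing the state a PMD published as above:

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;
	printf("port %u: %s, %s\n", port_id,
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
	       rte_eth_link_speed_to_str(link.link_speed));
}
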
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index e4f1ad45219e..5d5350d78e03 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -70,16 +70,16 @@
 
 #define TAP_IOV_DEFAULT_MAX 1024
 
-#define TAP_RX_OFFLOAD (DEV_RX_OFFLOAD_SCATTER |	\
-			DEV_RX_OFFLOAD_IPV4_CKSUM |	\
-			DEV_RX_OFFLOAD_UDP_CKSUM |	\
-			DEV_RX_OFFLOAD_TCP_CKSUM)
+#define TAP_RX_OFFLOAD (RTE_ETH_RX_OFFLOAD_SCATTER |	\
+			RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	\
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
 
-#define TAP_TX_OFFLOAD (DEV_TX_OFFLOAD_MULTI_SEGS |	\
-			DEV_TX_OFFLOAD_IPV4_CKSUM |	\
-			DEV_TX_OFFLOAD_UDP_CKSUM |	\
-			DEV_TX_OFFLOAD_TCP_CKSUM |	\
-			DEV_TX_OFFLOAD_TCP_TSO)
+#define TAP_TX_OFFLOAD (RTE_ETH_TX_OFFLOAD_MULTI_SEGS |	\
+			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |	\
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |	\
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM |	\
+			RTE_ETH_TX_OFFLOAD_TCP_TSO)
 
 static int tap_devices_count;
 
@@ -97,10 +97,10 @@ static const char *valid_arguments[] = {
 static volatile uint32_t tap_trigger;	/* Rx trigger */
 
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 static void
@@ -433,7 +433,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 		len = readv(process_private->rxq_fds[rxq->queue_id],
 			*rxq->iovecs,
-			1 + (rxq->rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ?
+			1 + (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ?
 			     rxq->nb_rx_desc : 1));
 		if (len < (int)sizeof(struct tun_pi))
 			break;
@@ -489,7 +489,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		seg->next = NULL;
 		mbuf->packet_type = rte_net_get_ptype(mbuf, NULL,
 						      RTE_PTYPE_ALL_MASK);
-		if (rxq->rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+		if (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 			tap_verify_csum(mbuf);
 
 		/* account for the receive frame */
@@ -866,7 +866,7 @@ tap_link_set_down(struct rte_eth_dev *dev)
 	struct pmd_internals *pmd = dev->data->dev_private;
 	struct ifreq ifr = { .ifr_flags = IFF_UP };
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 0, LOCAL_ONLY);
 }
 
@@ -876,7 +876,7 @@ tap_link_set_up(struct rte_eth_dev *dev)
 	struct pmd_internals *pmd = dev->data->dev_private;
 	struct ifreq ifr = { .ifr_flags = IFF_UP };
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 1, LOCAL_AND_REMOTE);
 }
 
@@ -956,30 +956,30 @@ tap_dev_speed_capa(void)
 	uint32_t speed = pmd_link.link_speed;
 	uint32_t capa = 0;
 
-	if (speed >= ETH_SPEED_NUM_10M)
-		capa |= ETH_LINK_SPEED_10M;
-	if (speed >= ETH_SPEED_NUM_100M)
-		capa |= ETH_LINK_SPEED_100M;
-	if (speed >= ETH_SPEED_NUM_1G)
-		capa |= ETH_LINK_SPEED_1G;
-	if (speed >= ETH_SPEED_NUM_5G)
-		capa |= ETH_LINK_SPEED_2_5G;
-	if (speed >= ETH_SPEED_NUM_5G)
-		capa |= ETH_LINK_SPEED_5G;
-	if (speed >= ETH_SPEED_NUM_10G)
-		capa |= ETH_LINK_SPEED_10G;
-	if (speed >= ETH_SPEED_NUM_20G)
-		capa |= ETH_LINK_SPEED_20G;
-	if (speed >= ETH_SPEED_NUM_25G)
-		capa |= ETH_LINK_SPEED_25G;
-	if (speed >= ETH_SPEED_NUM_40G)
-		capa |= ETH_LINK_SPEED_40G;
-	if (speed >= ETH_SPEED_NUM_50G)
-		capa |= ETH_LINK_SPEED_50G;
-	if (speed >= ETH_SPEED_NUM_56G)
-		capa |= ETH_LINK_SPEED_56G;
-	if (speed >= ETH_SPEED_NUM_100G)
-		capa |= ETH_LINK_SPEED_100G;
+	if (speed >= RTE_ETH_SPEED_NUM_10M)
+		capa |= RTE_ETH_LINK_SPEED_10M;
+	if (speed >= RTE_ETH_SPEED_NUM_100M)
+		capa |= RTE_ETH_LINK_SPEED_100M;
+	if (speed >= RTE_ETH_SPEED_NUM_1G)
+		capa |= RTE_ETH_LINK_SPEED_1G;
+	if (speed >= RTE_ETH_SPEED_NUM_5G)
+		capa |= RTE_ETH_LINK_SPEED_2_5G;
+	if (speed >= RTE_ETH_SPEED_NUM_5G)
+		capa |= RTE_ETH_LINK_SPEED_5G;
+	if (speed >= RTE_ETH_SPEED_NUM_10G)
+		capa |= RTE_ETH_LINK_SPEED_10G;
+	if (speed >= RTE_ETH_SPEED_NUM_20G)
+		capa |= RTE_ETH_LINK_SPEED_20G;
+	if (speed >= RTE_ETH_SPEED_NUM_25G)
+		capa |= RTE_ETH_LINK_SPEED_25G;
+	if (speed >= RTE_ETH_SPEED_NUM_40G)
+		capa |= RTE_ETH_LINK_SPEED_40G;
+	if (speed >= RTE_ETH_SPEED_NUM_50G)
+		capa |= RTE_ETH_LINK_SPEED_50G;
+	if (speed >= RTE_ETH_SPEED_NUM_56G)
+		capa |= RTE_ETH_LINK_SPEED_56G;
+	if (speed >= RTE_ETH_SPEED_NUM_100G)
+		capa |= RTE_ETH_LINK_SPEED_100G;
 
 	return capa;
 }
@@ -1196,15 +1196,15 @@ tap_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 		tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, REMOTE_ONLY);
 		if (!(ifr.ifr_flags & IFF_UP) ||
 		    !(ifr.ifr_flags & IFF_RUNNING)) {
-			dev_link->link_status = ETH_LINK_DOWN;
+			dev_link->link_status = RTE_ETH_LINK_DOWN;
 			return 0;
 		}
 	}
 	tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, LOCAL_ONLY);
 	dev_link->link_status =
 		((ifr.ifr_flags & IFF_UP) && (ifr.ifr_flags & IFF_RUNNING) ?
-		 ETH_LINK_UP :
-		 ETH_LINK_DOWN);
+		 RTE_ETH_LINK_UP :
+		 RTE_ETH_LINK_DOWN);
 	return 0;
 }
 
@@ -1391,7 +1391,7 @@ tap_gso_ctx_setup(struct rte_gso_ctx *gso_ctx, struct rte_eth_dev *dev)
 	int ret;
 
 	/* initialize GSO context */
-	gso_types = DEV_TX_OFFLOAD_TCP_TSO;
+	gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	if (!pmd->gso_ctx_mp) {
 		/*
 		 * Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE
@@ -1606,9 +1606,9 @@ tap_tx_queue_setup(struct rte_eth_dev *dev,
 
 	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
 	txq->csum = !!(offloads &
-			(DEV_TX_OFFLOAD_IPV4_CKSUM |
-			 DEV_TX_OFFLOAD_UDP_CKSUM |
-			 DEV_TX_OFFLOAD_TCP_CKSUM));
+			(RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			 RTE_ETH_TX_OFFLOAD_TCP_CKSUM));
 
 	ret = tap_setup_queue(dev, internals, tx_queue_id, 0);
 	if (ret == -1)
@@ -1760,7 +1760,7 @@ static int
 tap_flow_ctrl_get(struct rte_eth_dev *dev __rte_unused,
 		  struct rte_eth_fc_conf *fc_conf)
 {
-	fc_conf->mode = RTE_FC_NONE;
+	fc_conf->mode = RTE_ETH_FC_NONE;
 	return 0;
 }
 
@@ -1768,7 +1768,7 @@ static int
 tap_flow_ctrl_set(struct rte_eth_dev *dev __rte_unused,
 		  struct rte_eth_fc_conf *fc_conf)
 {
-	if (fc_conf->mode != RTE_FC_NONE)
+	if (fc_conf->mode != RTE_ETH_FC_NONE)
 		return -ENOTSUP;
 	return 0;
 }
@@ -2262,7 +2262,7 @@ rte_pmd_tun_probe(struct rte_vdev_device *dev)
 			}
 		}
 	}
-	pmd_link.link_speed = ETH_SPEED_NUM_10G;
+	pmd_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 
 	TAP_LOG(DEBUG, "Initializing pmd_tun for %s", name);
 
@@ -2436,7 +2436,7 @@ rte_pmd_tap_probe(struct rte_vdev_device *dev)
 		return 0;
 	}
 
-	speed = ETH_SPEED_NUM_10G;
+	speed = RTE_ETH_SPEED_NUM_10G;
 
 	/* use tap%d which causes kernel to choose next available */
 	strlcpy(tap_name, DEFAULT_TAP_NAME "%d", RTE_ETH_NAME_MAX_LEN);
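
One detail worth noting in tap_dev_speed_capa() above: the
RTE_ETH_LINK_SPEED_2_5G rung is gated on RTE_ETH_SPEED_NUM_5G, a
pre-existing threshold quirk that the mechanical rename faithfully
preserves. A table-driven sketch of the same mapping, illustrative only
(it gates 2.5G on RTE_ETH_SPEED_NUM_2_5G and trims some rungs):

#include <stdint.h>
#include <rte_common.h>
#include <rte_ethdev.h>

static uint32_t
speed_to_capa(uint32_t speed)
{
	static const struct { uint32_t num; uint32_t capa; } map[] = {
		{ RTE_ETH_SPEED_NUM_10M,  RTE_ETH_LINK_SPEED_10M },
		{ RTE_ETH_SPEED_NUM_100M, RTE_ETH_LINK_SPEED_100M },
		{ RTE_ETH_SPEED_NUM_1G,   RTE_ETH_LINK_SPEED_1G },
		{ RTE_ETH_SPEED_NUM_2_5G, RTE_ETH_LINK_SPEED_2_5G },
		{ RTE_ETH_SPEED_NUM_5G,   RTE_ETH_LINK_SPEED_5G },
		{ RTE_ETH_SPEED_NUM_10G,  RTE_ETH_LINK_SPEED_10G },
		{ RTE_ETH_SPEED_NUM_25G,  RTE_ETH_LINK_SPEED_25G },
		{ RTE_ETH_SPEED_NUM_40G,  RTE_ETH_LINK_SPEED_40G },
		{ RTE_ETH_SPEED_NUM_100G, RTE_ETH_LINK_SPEED_100G },
	};
	uint32_t capa = 0;
	size_t i;

	/* Advertise every speed at or below the current link speed. */
	for (i = 0; i < RTE_DIM(map); i++)
		if (speed >= map[i].num)
			capa |= map[i].capa;
	return capa;
}
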
diff --git a/drivers/net/tap/tap_rss.h b/drivers/net/tap/tap_rss.h
index 176e7180bdaa..48c151cf6b68 100644
--- a/drivers/net/tap/tap_rss.h
+++ b/drivers/net/tap/tap_rss.h
@@ -13,7 +13,7 @@
 #define TAP_RSS_HASH_KEY_SIZE 40
 
 /* Supported RSS */
-#define TAP_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP))
+#define TAP_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP))
 
 /* hashed fields for RSS */
 enum hash_field {
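
TAP_RSS_HF_MASK is a negative mask; a sketch of the kind of check a PMD
applies with it (illustrative only, not tap's actual code path, and the
supported set here is just an example):

#include <errno.h>
#include <stdint.h>
#include <rte_ethdev.h>

/* Hash types the (hypothetical) device supports. */
#define SUPPORTED_RSS_HF (RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP)

static int
check_rss_hf(uint64_t rss_hf)
{
	/* Any requested bit outside the supported set is rejected. */
	if (rss_hf & ~SUPPORTED_RSS_HF)
		return -ENOTSUP;
	return 0;
}
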
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 328d6d56d921..38a2ddc633b5 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -61,14 +61,14 @@ nicvf_link_status_update(struct nicvf *nic,
 {
 	memset(link, 0, sizeof(*link));
 
-	link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	if (nic->duplex == NICVF_HALF_DUPLEX)
-		link->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	else if (nic->duplex == NICVF_FULL_DUPLEX)
-		link->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link->link_speed = nic->speed;
-	link->link_autoneg = ETH_LINK_AUTONEG;
+	link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 }
 
 static void
@@ -134,7 +134,7 @@ nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		/* rte_eth_link_get() might need to wait up to 9 seconds */
 		for (i = 0; i < MAX_CHECK_TIME; i++) {
 			nicvf_link_status_update(nic, &link);
-			if (link.link_status == ETH_LINK_UP)
+			if (link.link_status == RTE_ETH_LINK_UP)
 				break;
 			rte_delay_ms(CHECK_INTERVAL);
 		}
@@ -390,35 +390,35 @@ nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
 {
 	uint64_t nic_rss = 0;
 
-	if (ethdev_rss & ETH_RSS_IPV4)
+	if (ethdev_rss & RTE_ETH_RSS_IPV4)
 		nic_rss |= RSS_IP_ENA;
 
-	if (ethdev_rss & ETH_RSS_IPV6)
+	if (ethdev_rss & RTE_ETH_RSS_IPV6)
 		nic_rss |= RSS_IP_ENA;
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
 
-	if (ethdev_rss & ETH_RSS_PORT)
+	if (ethdev_rss & RTE_ETH_RSS_PORT)
 		nic_rss |= RSS_L2_EXTENDED_HASH_ENA;
 
 	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
-		if (ethdev_rss & ETH_RSS_VXLAN)
+		if (ethdev_rss & RTE_ETH_RSS_VXLAN)
 			nic_rss |= RSS_TUN_VXLAN_ENA;
 
-		if (ethdev_rss & ETH_RSS_GENEVE)
+		if (ethdev_rss & RTE_ETH_RSS_GENEVE)
 			nic_rss |= RSS_TUN_GENEVE_ENA;
 
-		if (ethdev_rss & ETH_RSS_NVGRE)
+		if (ethdev_rss & RTE_ETH_RSS_NVGRE)
 			nic_rss |= RSS_TUN_NVGRE_ENA;
 	}
 
@@ -431,28 +431,28 @@ nicvf_rss_nic_to_ethdev(struct nicvf *nic,  uint64_t nic_rss)
 	uint64_t ethdev_rss = 0;
 
 	if (nic_rss & RSS_IP_ENA)
-		ethdev_rss |= (ETH_RSS_IPV4 | ETH_RSS_IPV6);
+		ethdev_rss |= (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6);
 
 	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_TCP_ENA))
-		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_TCP |
-				ETH_RSS_NONFRAG_IPV6_TCP);
+		ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP);
 
 	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_UDP_ENA))
-		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_UDP |
-				ETH_RSS_NONFRAG_IPV6_UDP);
+		ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP);
 
 	if (nic_rss & RSS_L2_EXTENDED_HASH_ENA)
-		ethdev_rss |= ETH_RSS_PORT;
+		ethdev_rss |= RTE_ETH_RSS_PORT;
 
 	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
 		if (nic_rss & RSS_TUN_VXLAN_ENA)
-			ethdev_rss |= ETH_RSS_VXLAN;
+			ethdev_rss |= RTE_ETH_RSS_VXLAN;
 
 		if (nic_rss & RSS_TUN_GENEVE_ENA)
-			ethdev_rss |= ETH_RSS_GENEVE;
+			ethdev_rss |= RTE_ETH_RSS_GENEVE;
 
 		if (nic_rss & RSS_TUN_NVGRE_ENA)
-			ethdev_rss |= ETH_RSS_NVGRE;
+			ethdev_rss |= RTE_ETH_RSS_NVGRE;
 	}
 	return ethdev_rss;
 }
@@ -479,8 +479,8 @@ nicvf_dev_reta_query(struct rte_eth_dev *dev,
 		return ret;
 
 	/* Copy RETA table */
-	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = tbl[j];
 	}
@@ -509,8 +509,8 @@ nicvf_dev_reta_update(struct rte_eth_dev *dev,
 		return ret;
 
 	/* Copy RETA table */
-	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				tbl[j] = reta_conf[i].reta[j];
 	}
@@ -807,9 +807,9 @@ nicvf_configure_rss(struct rte_eth_dev *dev)
 		    dev->data->nb_rx_queues,
 		    dev->data->dev_conf.lpbk_mode, rsshf);
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
 		ret = nicvf_rss_term(nic);
-	else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+	else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 		ret = nicvf_rss_config(nic, dev->data->nb_rx_queues, rsshf);
 	if (ret)
 		PMD_INIT_LOG(ERR, "Failed to configure RSS %d", ret);
@@ -870,7 +870,7 @@ nicvf_set_tx_function(struct rte_eth_dev *dev)
 
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		txq = dev->data->tx_queues[i];
-		if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
 			multiseg = true;
 			break;
 		}
@@ -992,7 +992,7 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
 	txq->offloads = offloads;
 
-	is_single_pool = !!(offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE);
+	is_single_pool = !!(offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE);
 
 	/* Choose optimum free threshold value for multipool case */
 	if (!is_single_pool) {
@@ -1382,11 +1382,11 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	PMD_INIT_FUNC_TRACE();
 
 	/* Autonegotiation may be disabled */
-	dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
-	dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
-				 ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+				 RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
 	if (nicvf_hw_version(nic) != PCI_SUB_DEVICE_ID_CN81XX_NICVF)
-		dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
 
 	dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU;
 	dev_info->max_rx_pktlen = NIC_HW_MAX_MTU + RTE_ETHER_HDR_LEN;
@@ -1415,10 +1415,10 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = NICVF_DEFAULT_TX_FREE_THRESH,
-		.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE |
-			DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM   |
-			DEV_TX_OFFLOAD_UDP_CKSUM          |
-			DEV_TX_OFFLOAD_TCP_CKSUM,
+		.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+			RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM   |
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM          |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM,
 	};
 
 	return 0;
@@ -1582,8 +1582,8 @@ nicvf_vf_start(struct rte_eth_dev *dev, struct nicvf *nic, uint32_t rbdrsz)
 		     nic->rbdr->tail, nb_rbdr_desc, nic->vf_id);
 
 	/* Configure VLAN Strip */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	ret = nicvf_vlan_offload_config(dev, mask);
 
 	/* Based on the packet type(IPv4 or IPv6), the nicvf HW aligns L3 data
@@ -1711,7 +1711,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
 	/* Setup scatter mode if needed by jumbo */
 	if (dev->data->mtu + (uint32_t)NIC_HW_L2_OVERHEAD + 2 * VLAN_TAG_SIZE > buffsz)
 		dev->data->scattered_rx = 1;
-	if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
+	if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) != 0)
 		dev->data->scattered_rx = 1;
 
 	/* Setup MTU */
@@ -1896,8 +1896,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (!rte_eal_has_hugepages()) {
 		PMD_INIT_LOG(INFO, "Huge page is not configured");
@@ -1909,8 +1909,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-		rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+		rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		PMD_INIT_LOG(INFO, "Unsupported rx qmode %d", rxmode->mq_mode);
 		return -EINVAL;
 	}
@@ -1920,7 +1920,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(INFO, "Setting link speed/duplex not supported");
 		return -EINVAL;
 	}
@@ -1955,7 +1955,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		nic->offload_cksum = 1;
 
 	PMD_INIT_LOG(DEBUG, "Configured ethdev port%d hwcap=0x%" PRIx64,
@@ -2032,8 +2032,8 @@ nicvf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	struct nicvf *nic = nicvf_pmd_priv(dev);
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			nicvf_vlan_hw_strip(nic, true);
 		else
 			nicvf_vlan_hw_strip(nic, false);
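
The reta_query/reta_update hunks above rely on the renamed group-size
constant; a sketch of the indexing convention from the application side
(an identity spread over nb_rxq queues; names are placeholders):

#include <stdint.h>
#include <rte_ethdev.h>

static void
fill_identity_reta(struct rte_eth_rss_reta_entry64 *reta_conf,
		   uint16_t reta_size, uint16_t nb_rxq)
{
	uint16_t i;

	for (i = 0; i < reta_size; i++) {
		/* Entry i lives in group i / RTE_ETH_RETA_GROUP_SIZE,
		 * slot i % RTE_ETH_RETA_GROUP_SIZE; the mask bit selects
		 * which slots the update touches.
		 */
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

		reta_conf[idx].mask |= UINT64_C(1) << shift;
		reta_conf[idx].reta[shift] = i % nb_rxq;
	}
}
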
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index 5d38750d6313..cb474e26b81e 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -16,32 +16,32 @@
 #define NICVF_UNKNOWN_DUPLEX		0xff
 
 #define NICVF_RSS_OFFLOAD_PASS1 ( \
-	ETH_RSS_PORT | \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_PORT | \
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define NICVF_RSS_OFFLOAD_TUNNEL ( \
-	ETH_RSS_VXLAN | \
-	ETH_RSS_GENEVE | \
-	ETH_RSS_NVGRE)
+	RTE_ETH_RSS_VXLAN | \
+	RTE_ETH_RSS_GENEVE | \
+	RTE_ETH_RSS_NVGRE)
 
 #define NICVF_TX_OFFLOAD_CAPA ( \
-	DEV_TX_OFFLOAD_IPV4_CKSUM       | \
-	DEV_TX_OFFLOAD_UDP_CKSUM        | \
-	DEV_TX_OFFLOAD_TCP_CKSUM        | \
-	DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-	DEV_TX_OFFLOAD_MBUF_FAST_FREE   | \
-	DEV_TX_OFFLOAD_MULTI_SEGS)
+	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM       | \
+	RTE_ETH_TX_OFFLOAD_UDP_CKSUM        | \
+	RTE_ETH_TX_OFFLOAD_TCP_CKSUM        | \
+	RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+	RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE   | \
+	RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define NICVF_RX_OFFLOAD_CAPA ( \
-	DEV_RX_OFFLOAD_CHECKSUM    | \
-	DEV_RX_OFFLOAD_VLAN_STRIP  | \
-	DEV_RX_OFFLOAD_SCATTER     | \
-	DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RX_OFFLOAD_CHECKSUM    | \
+	RTE_ETH_RX_OFFLOAD_VLAN_STRIP  | \
+	RTE_ETH_RX_OFFLOAD_SCATTER     | \
+	RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define NICVF_DEFAULT_RX_FREE_THRESH    224
 #define NICVF_DEFAULT_TX_FREE_THRESH    224
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 7b46ffb68635..0b0f9db7cb2a 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -998,7 +998,7 @@ txgbe_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 	rxbal = rd32(hw, TXGBE_RXBAL(rxq->reg_idx));
 	rxbah = rd32(hw, TXGBE_RXBAH(rxq->reg_idx));
 	rxcfg = rd32(hw, TXGBE_RXCFG(rxq->reg_idx));
-	if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 		restart = (rxcfg & TXGBE_RXCFG_ENA) &&
 			!(rxcfg & TXGBE_RXCFG_VLAN);
 		rxcfg |= TXGBE_RXCFG_VLAN;
@@ -1033,7 +1033,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 	vlan_ext = (portctrl & TXGBE_PORTCTL_VLANEXT);
 	qinq = vlan_ext && (portctrl & TXGBE_PORTCTL_QINQ);
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
+	case RTE_ETH_VLAN_TYPE_INNER:
 		if (vlan_ext) {
 			wr32m(hw, TXGBE_VLANCTL,
 				TXGBE_VLANCTL_TPID_MASK,
@@ -1053,7 +1053,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 				TXGBE_TAGTPID_LSB(tpid));
 		}
 		break;
-	case ETH_VLAN_TYPE_OUTER:
+	case RTE_ETH_VLAN_TYPE_OUTER:
 		if (vlan_ext) {
 			/* Only the high 16-bits is valid */
 			wr32m(hw, TXGBE_EXTAG,
@@ -1138,10 +1138,10 @@ txgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 
 	if (on) {
 		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
-		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
 		rxq->vlan_flags = PKT_RX_VLAN;
-		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
 
@@ -1240,7 +1240,7 @@ txgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
 
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			txgbe_vlan_strip_queue_set(dev, i, 1);
 		else
 			txgbe_vlan_strip_queue_set(dev, i, 0);
@@ -1254,17 +1254,17 @@ txgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	struct txgbe_rx_queue *rxq;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		rxmode = &dev->data->dev_conf.rxmode;
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 		else
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 	}
 }
@@ -1275,25 +1275,25 @@ txgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	rxmode = &dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_STRIP_MASK)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK)
 		txgbe_vlan_hw_strip_config(dev);
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			txgbe_vlan_hw_filter_enable(dev);
 		else
 			txgbe_vlan_hw_filter_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			txgbe_vlan_hw_extend_enable(dev);
 		else
 			txgbe_vlan_hw_extend_disable(dev);
 	}
 
-	if (mask & ETH_QINQ_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+	if (mask & RTE_ETH_QINQ_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
 			txgbe_qinq_hw_strip_enable(dev);
 		else
 			txgbe_qinq_hw_strip_disable(dev);
@@ -1331,10 +1331,10 @@ txgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
 	switch (nb_rx_q) {
 	case 1:
 	case 2:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
 		break;
 	case 4:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
 		break;
 	default:
 		return -EINVAL;
@@ -1357,18 +1357,18 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
 		/* check multi-queue mode */
 		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
 			break;
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
 			/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
 			PMD_INIT_LOG(ERR, "SRIOV active,"
 					" unsupported mq_mode rx %d.",
 					dev_conf->rxmode.mq_mode);
 			return -EINVAL;
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
 			if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
 				if (txgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
 					PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -1378,13 +1378,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 					return -EINVAL;
 				}
 			break;
-		case ETH_MQ_RX_VMDQ_ONLY:
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_NONE:
 			/* if nothing mq mode configure, use default scheme */
 			dev->data->dev_conf.rxmode.mq_mode =
-				ETH_MQ_RX_VMDQ_ONLY;
+				RTE_ETH_MQ_RX_VMDQ_ONLY;
 			break;
-		default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+		default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB*/
 			/* SRIOV only works in VMDq enable mode */
 			PMD_INIT_LOG(ERR, "SRIOV is active,"
 					" wrong mq_mode rx %d.",
@@ -1393,13 +1393,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 		}
 
 		switch (dev_conf->txmode.mq_mode) {
-		case ETH_MQ_TX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+		case RTE_ETH_MQ_TX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+			dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
 			break;
-		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
+		default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
 			dev->data->dev_conf.txmode.mq_mode =
-				ETH_MQ_TX_VMDQ_ONLY;
+				RTE_ETH_MQ_TX_VMDQ_ONLY;
 			break;
 		}
 
@@ -1414,13 +1414,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 			return -EINVAL;
 		}
 	} else {
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
 			PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
 					  " not supported.");
 			return -EINVAL;
 		}
 		/* check configuration for vmdb+dcb mode */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_conf *conf;
 
 			if (nb_rx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1429,15 +1429,15 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools must be %d or %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_tx_conf *conf;
 
 			if (nb_tx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1446,39 +1446,39 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools != %d and"
 						" nb_queue_pools != %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
 
 		/* For DCB mode check our configuration before we go further */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
 			const struct rte_eth_dcb_rx_conf *conf;
 
 			conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
 
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 			const struct rte_eth_dcb_tx_conf *conf;
 
 			conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
@@ -1495,8 +1495,8 @@ txgbe_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multiple queue mode checking */
 	ret  = txgbe_check_mq_mode(dev);
@@ -1694,15 +1694,15 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 		goto error;
 	}
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = txgbe_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
 		goto error;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
 		/* Enable vlan filtering for VMDq */
 		txgbe_vmdq_vlan_hw_filter_enable(dev);
 	}
@@ -1763,8 +1763,8 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 	if (err)
 		goto error;
 
-	allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_10G;
+	allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G;
 
 	link_speeds = &dev->data->dev_conf.link_speeds;
 	if (((*link_speeds) >> 1) & ~(allowed_speeds >> 1)) {
@@ -1773,20 +1773,20 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 	}
 
 	speed = 0x0;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		speed = (TXGBE_LINK_SPEED_100M_FULL |
 			 TXGBE_LINK_SPEED_1GB_FULL |
 			 TXGBE_LINK_SPEED_10GB_FULL);
 	} else {
-		if (*link_speeds & ETH_LINK_SPEED_10G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
 			speed |= TXGBE_LINK_SPEED_10GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
 			speed |= TXGBE_LINK_SPEED_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_2_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
 			speed |= TXGBE_LINK_SPEED_2_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_1G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed |= TXGBE_LINK_SPEED_1GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_100M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed |= TXGBE_LINK_SPEED_100M_FULL;
 	}
 
@@ -2601,7 +2601,7 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
-	dev_info->max_vmdq_pools = ETH_64_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->vmdq_queue_num = dev_info->max_rx_queues;
 	dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
@@ -2634,11 +2634,11 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->tx_desc_lim = tx_desc_lim;
 
 	dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
-	dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
 
 	/* Driver-preferred Rx/Tx parameters */
 	dev_info->default_rxportconf.burst_size = 32;
@@ -2695,11 +2695,11 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	int wait = 1;
 
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 
 	hw->mac.get_link_status = true;
 
@@ -2713,8 +2713,8 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
 
 	if (err != 0) {
-		link.link_speed = ETH_SPEED_NUM_100M;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
@@ -2733,34 +2733,34 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	}
 
 	intr->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (link_speed) {
 	default:
 	case TXGBE_LINK_SPEED_UNKNOWN:
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	case TXGBE_LINK_SPEED_100M_FULL:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	case TXGBE_LINK_SPEED_1GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case TXGBE_LINK_SPEED_2_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_2_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 
 	case TXGBE_LINK_SPEED_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 
 	case TXGBE_LINK_SPEED_10GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	}
 
@@ -2990,7 +2990,7 @@ txgbe_dev_link_status_print(struct rte_eth_dev *dev)
 		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned int)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3221,13 +3221,13 @@ txgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		tx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -3359,16 +3359,16 @@ txgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 		return -ENOTSUP;
 	}
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += 4) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
 		if (!mask)
 			continue;
@@ -3400,16 +3400,16 @@ txgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += 4) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
 		if (!mask)
 			continue;
@@ -3576,12 +3576,12 @@ txgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
 		return -ENOTSUP;
 
 	if (on) {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = ~0;
 			wr32(hw, TXGBE_UCADDRTBL(i), ~0);
 		}
 	} else {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = 0;
 			wr32(hw, TXGBE_UCADDRTBL(i), 0);
 		}
@@ -3605,15 +3605,15 @@ txgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
 {
 	uint32_t new_val = orig_val;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
 		new_val |= TXGBE_POOLETHCTL_UTA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
 		new_val |= TXGBE_POOLETHCTL_MCHA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 		new_val |= TXGBE_POOLETHCTL_UCHA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 		new_val |= TXGBE_POOLETHCTL_BCA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 		new_val |= TXGBE_POOLETHCTL_MCP;
 
 	return new_val;
@@ -4264,15 +4264,15 @@ txgbe_start_timecounters(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 
 	switch (link.link_speed) {
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		incval = TXGBE_INCVAL_100;
 		shift = TXGBE_INCVAL_SHIFT_100;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		incval = TXGBE_INCVAL_1GB;
 		shift = TXGBE_INCVAL_SHIFT_1GB;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 	default:
 		incval = TXGBE_INCVAL_10GB;
 		shift = TXGBE_INCVAL_SHIFT_10GB;
@@ -4628,7 +4628,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	uint8_t nb_tcs;
 	uint8_t i, j;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
 	else
 		dcb_info->nb_tcs = 1;
@@ -4639,7 +4639,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	if (dcb_config->vt_mode) { /* vt is enabled */
 		struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
 		if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
 			for (j = 0; j < nb_tcs; j++) {
@@ -4663,9 +4663,9 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	} else { /* vt is disabled */
 		struct rte_eth_dcb_rx_conf *rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
-		if (dcb_info->nb_tcs == ETH_4_TCS) {
+		if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4678,7 +4678,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 			dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
 			dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
 			dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
-		} else if (dcb_info->nb_tcs == ETH_8_TCS) {
+		} else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4908,7 +4908,7 @@ txgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = txgbe_e_tag_filter_add(dev, l2_tunnel);
 		break;
 	default:
@@ -4939,7 +4939,7 @@ txgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 		return ret;
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = txgbe_e_tag_filter_del(dev, l2_tunnel);
 		break;
 	default:
@@ -4979,7 +4979,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
 			ret = -EINVAL;
@@ -4987,7 +4987,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_VXLANPORT, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add Geneve port 0 is not allowed.");
 			ret = -EINVAL;
@@ -4995,7 +4995,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_GENEVEPORT, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add Teredo port 0 is not allowed.");
 			ret = -EINVAL;
@@ -5003,7 +5003,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_TEREDOPORT, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
 			ret = -EINVAL;
@@ -5035,7 +5035,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORT);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5045,7 +5045,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_VXLANPORT, 0);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		cur_port = (uint16_t)rd32(hw, TXGBE_GENEVEPORT);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5055,7 +5055,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_GENEVEPORT, 0);
 		break;
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		cur_port = (uint16_t)rd32(hw, TXGBE_TEREDOPORT);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5065,7 +5065,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_TEREDOPORT, 0);
 		break;
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORTGPE);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
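
For reference, the application-side call that lands in the tunnel-port
handlers above; a sketch using the renamed tunnel type and the
IANA-assigned VXLAN port as an example value:

#include <rte_ethdev.h>

static int
add_vxlan_port(uint16_t port_id)
{
	struct rte_eth_udp_tunnel tunnel = {
		.udp_port = 4789,	/* IANA-assigned VXLAN port */
		.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN,
	};

	return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
}
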
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index fd65d89ffe7d..8304b68292da 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -60,15 +60,15 @@
 #define TXGBE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
 
 #define TXGBE_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define TXGBE_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
 #define TXGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 43dc0ed39b75..283b52e8f3db 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -486,14 +486,14 @@ txgbevf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
-	dev_info->max_vmdq_pools = ETH_64_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
 				     dev_info->rx_queue_offload_capa);
 	dev_info->tx_queue_offload_capa = txgbe_get_tx_queue_offloads(dev);
 	dev_info->tx_offload_capa = txgbe_get_tx_port_offloads(dev);
 	dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -574,22 +574,22 @@ txgbevf_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
 		     dev->data->port_id);
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/*
 	 * VF has no ability to enable/disable HW CRC
 	 * Keep the persistent behavior the same as Host PF
 	 */
 #ifndef RTE_LIBRTE_TXGBE_PF_DISABLE_STRIP_CRC
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
-		conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #else
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
 		PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #endif
 
@@ -647,8 +647,8 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
 	txgbevf_set_vfta_all(dev, 1);
 
 	/* Set HW strip */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = txgbevf_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -891,10 +891,10 @@ txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	int on = 0;
 
 	/* VF function only support hw strip feature, others are not support */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
-			on = !!(rxq->offloads &	DEV_RX_OFFLOAD_VLAN_STRIP);
+			on = !!(rxq->offloads &	RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 			txgbevf_vlan_strip_queue_set(dev, i, on);
 		}
 	}
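
txgbevf only honours the strip bit of the mask above; a sketch of the
matching application-side runtime toggle (port_id is a placeholder):

#include <rte_ethdev.h>

static int
enable_vlan_strip(uint16_t port_id)
{
	int cur = rte_eth_dev_get_vlan_offload(port_id);

	if (cur < 0)
		return cur;
	/* Set only the strip flag; filter/extend state is preserved. */
	return rte_eth_dev_set_vlan_offload(port_id,
			cur | RTE_ETH_VLAN_STRIP_OFFLOAD);
}
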
diff --git a/drivers/net/txgbe/txgbe_fdir.c b/drivers/net/txgbe/txgbe_fdir.c
index 8abb86228608..e303d87176ed 100644
--- a/drivers/net/txgbe/txgbe_fdir.c
+++ b/drivers/net/txgbe/txgbe_fdir.c
@@ -102,22 +102,22 @@ txgbe_fdir_enable(struct txgbe_hw *hw, uint32_t fdirctrl)
  * flexbytes matching field, and drop queue (only for perfect matching mode).
  */
 static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf,
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf,
 		     uint32_t *fdirctrl, uint32_t *flex)
 {
 	*fdirctrl = 0;
 	*flex = 0;
 
 	switch (conf->pballoc) {
-	case RTE_FDIR_PBALLOC_64K:
+	case RTE_ETH_FDIR_PBALLOC_64K:
 		/* 8k - 1 signature filters */
 		*fdirctrl |= TXGBE_FDIRCTL_BUF_64K;
 		break;
-	case RTE_FDIR_PBALLOC_128K:
+	case RTE_ETH_FDIR_PBALLOC_128K:
 		/* 16k - 1 signature filters */
 		*fdirctrl |= TXGBE_FDIRCTL_BUF_128K;
 		break;
-	case RTE_FDIR_PBALLOC_256K:
+	case RTE_ETH_FDIR_PBALLOC_256K:
 		/* 32k - 1 signature filters */
 		*fdirctrl |= TXGBE_FDIRCTL_BUF_256K;
 		break;
@@ -521,15 +521,15 @@ txgbe_atr_compute_hash(struct txgbe_atr_input *atr_input,
 
 static uint32_t
 atr_compute_perfect_hash(struct txgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
 	uint32_t bucket_hash;
 
 	bucket_hash = txgbe_atr_compute_hash(input,
 				TXGBE_ATR_BUCKET_HASH_KEY);
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		bucket_hash &= PERFECT_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		bucket_hash &= PERFECT_BUCKET_128KB_HASH_MASK;
 	else
 		bucket_hash &= PERFECT_BUCKET_64KB_HASH_MASK;
@@ -564,15 +564,15 @@ txgbe_fdir_check_cmd_complete(struct txgbe_hw *hw, uint32_t *fdircmd)
  */
 static uint32_t
 atr_compute_signature_hash(struct txgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
 	uint32_t bucket_hash, sig_hash;
 
 	bucket_hash = txgbe_atr_compute_hash(input,
 				TXGBE_ATR_BUCKET_HASH_KEY);
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		bucket_hash &= SIG_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		bucket_hash &= SIG_BUCKET_128KB_HASH_MASK;
 	else
 		bucket_hash &= SIG_BUCKET_64KB_HASH_MASK;
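
The renamed flow-director types are consumed through rte_eth_conf; a
sketch of the legacy configuration path (example values only — this API
was already deprecated in favour of rte_flow):

#include <rte_ethdev.h>

static void
set_fdir_conf(struct rte_eth_conf *conf)
{
	conf->fdir_conf.mode = RTE_FDIR_MODE_PERFECT;
	conf->fdir_conf.pballoc = RTE_ETH_FDIR_PBALLOC_64K;
	conf->fdir_conf.drop_queue = 127;	/* example drop queue */
}
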
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index eae400b14176..6d7fd1842843 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -1215,7 +1215,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
 		return -rte_errno;
 	}
 
-	filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+	filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
 	/**
 	 * grp and e_cid_base are bit fields and only use 14 bits.
 	 * e-tag id is taken as little endian by HW.
diff --git a/drivers/net/txgbe/txgbe_ipsec.c b/drivers/net/txgbe/txgbe_ipsec.c
index ccd747973ba2..445733f3ba46 100644
--- a/drivers/net/txgbe/txgbe_ipsec.c
+++ b/drivers/net/txgbe/txgbe_ipsec.c
@@ -372,7 +372,7 @@ txgbe_crypto_create_session(void *device,
 	aead_xform = &conf->crypto_xform->aead;
 
 	if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 			ic_session->op = TXGBE_OP_AUTHENTICATED_DECRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -380,7 +380,7 @@ txgbe_crypto_create_session(void *device,
 			return -ENOTSUP;
 		}
 	} else {
-		if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+		if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 			ic_session->op = TXGBE_OP_AUTHENTICATED_ENCRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -611,11 +611,11 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	tx_offloads = dev->data->dev_conf.txmode.offloads;
 
 	/* sanity checks */
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
 		return -1;
 	}
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
 		return -1;
 	}
@@ -634,7 +634,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	reg |= TXGBE_SECRXCTL_CRCSTRIP;
 	wr32(hw, TXGBE_SECRXCTL, reg);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		wr32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA, 0);
 		reg = rd32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA);
 		if (reg != 0) {
@@ -642,7 +642,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 			return -1;
 		}
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 		wr32(hw, TXGBE_SECTXCTL, TXGBE_SECTXCTL_STFWD);
 		reg = rd32(hw, TXGBE_SECTXCTL);
 		if (reg != TXGBE_SECTXCTL_STFWD) {
diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c
index a48972b1a381..30be2873307a 100644
--- a/drivers/net/txgbe/txgbe_pf.c
+++ b/drivers/net/txgbe/txgbe_pf.c
@@ -101,15 +101,15 @@ int txgbe_pf_host_init(struct rte_eth_dev *eth_dev)
 	memset(uta_info, 0, sizeof(struct txgbe_uta_info));
 	hw->mac.mc_filter_type = 0;
 
-	if (vf_num >= ETH_32_POOLS) {
+	if (vf_num >= RTE_ETH_32_POOLS) {
 		nb_queue = 2;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
-	} else if (vf_num >= ETH_16_POOLS) {
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+	} else if (vf_num >= RTE_ETH_16_POOLS) {
 		nb_queue = 4;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
 	} else {
 		nb_queue = 8;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
 	}
 
 	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -256,13 +256,13 @@ int txgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
 	gcr_ext &= ~TXGBE_PORTCTL_NUMVT_MASK;
 
 	switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		gcr_ext |= TXGBE_PORTCTL_NUMVT_64;
 		break;
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		gcr_ext |= TXGBE_PORTCTL_NUMVT_32;
 		break;
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		gcr_ext |= TXGBE_PORTCTL_NUMVT_16;
 		break;
 	}
@@ -611,29 +611,29 @@ txgbe_get_vf_queues(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf)
 	/* Notify VF of number of DCB traffic classes */
 	eth_conf = &eth_dev->data->dev_conf;
 	switch (eth_conf->txmode.mq_mode) {
-	case ETH_MQ_TX_NONE:
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_NONE:
+	case RTE_ETH_MQ_TX_DCB:
 		PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
 			", but its tx mode = %d\n", vf,
 			eth_conf->txmode.mq_mode);
 		return -1;
 
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
 		switch (vmdq_dcb_tx_conf->nb_queue_pools) {
-		case ETH_16_POOLS:
-			num_tcs = ETH_8_TCS;
+		case RTE_ETH_16_POOLS:
+			num_tcs = RTE_ETH_8_TCS;
 			break;
-		case ETH_32_POOLS:
-			num_tcs = ETH_4_TCS;
+		case RTE_ETH_32_POOLS:
+			num_tcs = RTE_ETH_4_TCS;
 			break;
 		default:
 			return -1;
 		}
 		break;
 
-	/* ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
-	case ETH_MQ_TX_VMDQ_ONLY:
+	/* RTE_ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
+	case RTE_ETH_MQ_TX_VMDQ_ONLY:
 		hw = TXGBE_DEV_HW(eth_dev);
 		vmvir = rd32(hw, TXGBE_POOLTAG(vf));
 		vlana = vmvir & TXGBE_POOLTAG_ACT_MASK;
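
Note the RTE_ETH_*_POOLS constants keep their numeric values (16/32/64), so
pool arithmetic such as the PF init above is unaffected by the rename. A small
sketch of the same queues-per-pool mapping (illustrative helper, not part of
the patch):

#include <rte_ethdev.h>

/* Mirror of the mapping used in txgbe_pf_host_init() above. */
static uint16_t
queues_per_pool(uint16_t vf_num)
{
	if (vf_num >= RTE_ETH_32_POOLS)	/* 32 or more VFs -> 64 pools */
		return 2;
	if (vf_num >= RTE_ETH_16_POOLS)	/* 16..31 VFs -> 32 pools */
		return 4;
	return 8;			/* fewer than 16 VFs -> 16 pools */
}
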
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 7e18dcce0a86..1204dc5499a5 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1960,7 +1960,7 @@ txgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
 uint64_t
 txgbe_get_rx_queue_offloads(struct rte_eth_dev *dev __rte_unused)
 {
-	return DEV_RX_OFFLOAD_VLAN_STRIP;
+	return RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 }
 
 uint64_t
@@ -1970,34 +1970,34 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
 	struct rte_eth_dev_sriov *sriov = &RTE_ETH_DEV_SRIOV(dev);
 
-	offloads = DEV_RX_OFFLOAD_IPV4_CKSUM  |
-		   DEV_RX_OFFLOAD_UDP_CKSUM   |
-		   DEV_RX_OFFLOAD_TCP_CKSUM   |
-		   DEV_RX_OFFLOAD_KEEP_CRC    |
-		   DEV_RX_OFFLOAD_VLAN_FILTER |
-		   DEV_RX_OFFLOAD_RSS_HASH |
-		   DEV_RX_OFFLOAD_SCATTER;
+	offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+		   RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+		   RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		   RTE_ETH_RX_OFFLOAD_RSS_HASH |
+		   RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	if (!txgbe_is_vf(dev))
-		offloads |= (DEV_RX_OFFLOAD_VLAN_FILTER |
-			     DEV_RX_OFFLOAD_QINQ_STRIP |
-			     DEV_RX_OFFLOAD_VLAN_EXTEND);
+		offloads |= (RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+			     RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+			     RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
 
 	/*
 	 * RSC is only supported by PF devices in a non-SR-IOV
 	 * mode.
 	 */
 	if (hw->mac.type == txgbe_mac_raptor && !sriov->active)
-		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+		offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 
 	if (hw->mac.type == txgbe_mac_raptor)
-		offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
 
-	offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+	offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		offloads |= DEV_RX_OFFLOAD_SECURITY;
+		offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
 #endif
 
 	return offloads;
@@ -2222,32 +2222,32 @@ txgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
 	uint64_t tx_offload_capa;
 
 	tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM   |
-		DEV_TX_OFFLOAD_SCTP_CKSUM  |
-		DEV_TX_OFFLOAD_TCP_TSO     |
-		DEV_TX_OFFLOAD_UDP_TSO	   |
-		DEV_TX_OFFLOAD_UDP_TNL_TSO	|
-		DEV_TX_OFFLOAD_IP_TNL_TSO	|
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO	|
-		DEV_TX_OFFLOAD_GRE_TNL_TSO	|
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO	|
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO	|
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO     |
+		RTE_ETH_TX_OFFLOAD_UDP_TSO	   |
+		RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_IP_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	if (!txgbe_is_vf(dev))
-		tx_offload_capa |= DEV_TX_OFFLOAD_QINQ_INSERT;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
 
-	tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+	tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 
-	tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-			   DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+	tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
 #endif
 	return tx_offload_capa;
 }
@@ -2349,7 +2349,7 @@ txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_deferred_start = tx_conf->tx_deferred_start;
 #ifdef RTE_LIB_SECURITY
 	txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SECURITY);
+			RTE_ETH_TX_OFFLOAD_SECURITY);
 #endif
 
 	/* Modification to set tail pointer for virtual function
@@ -2599,7 +2599,7 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
 		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -2900,20 +2900,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 	if (hw->mac.type == txgbe_mac_raptor_vf) {
 		mrqc = rd32(hw, TXGBE_VFPLCFG);
 		mrqc &= ~TXGBE_VFPLCFG_RSSMASK;
-		if (rss_hf & ETH_RSS_IPV4)
+		if (rss_hf & RTE_ETH_RSS_IPV4)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV4TCP;
-		if (rss_hf & ETH_RSS_IPV6 ||
-		    rss_hf & ETH_RSS_IPV6_EX)
+		if (rss_hf & RTE_ETH_RSS_IPV6 ||
+		    rss_hf & RTE_ETH_RSS_IPV6_EX)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV6;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
-		    rss_hf & ETH_RSS_IPV6_TCP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV6TCP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV4UDP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
-		    rss_hf & ETH_RSS_IPV6_UDP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV6UDP;
 
 		if (rss_hf)
@@ -2930,20 +2930,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 	} else {
 		mrqc = rd32(hw, TXGBE_RACTL);
 		mrqc &= ~TXGBE_RACTL_RSSMASK;
-		if (rss_hf & ETH_RSS_IPV4)
+		if (rss_hf & RTE_ETH_RSS_IPV4)
 			mrqc |= TXGBE_RACTL_RSSIPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			mrqc |= TXGBE_RACTL_RSSIPV4TCP;
-		if (rss_hf & ETH_RSS_IPV6 ||
-		    rss_hf & ETH_RSS_IPV6_EX)
+		if (rss_hf & RTE_ETH_RSS_IPV6 ||
+		    rss_hf & RTE_ETH_RSS_IPV6_EX)
 			mrqc |= TXGBE_RACTL_RSSIPV6;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
-		    rss_hf & ETH_RSS_IPV6_TCP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 			mrqc |= TXGBE_RACTL_RSSIPV6TCP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 			mrqc |= TXGBE_RACTL_RSSIPV4UDP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
-		    rss_hf & ETH_RSS_IPV6_UDP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 			mrqc |= TXGBE_RACTL_RSSIPV6UDP;
 
 		if (rss_hf)
@@ -2984,39 +2984,39 @@ txgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	if (hw->mac.type == txgbe_mac_raptor_vf) {
 		mrqc = rd32(hw, TXGBE_VFPLCFG);
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV4)
-			rss_hf |= ETH_RSS_IPV4;
+			rss_hf |= RTE_ETH_RSS_IPV4;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV4TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV6)
-			rss_hf |= ETH_RSS_IPV6 |
-				  ETH_RSS_IPV6_EX;
+			rss_hf |= RTE_ETH_RSS_IPV6 |
+				  RTE_ETH_RSS_IPV6_EX;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV6TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
-				  ETH_RSS_IPV6_TCP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				  RTE_ETH_RSS_IPV6_TCP_EX;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV4UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV6UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
-				  ETH_RSS_IPV6_UDP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				  RTE_ETH_RSS_IPV6_UDP_EX;
 		if (!(mrqc & TXGBE_VFPLCFG_RSSENA))
 			rss_hf = 0;
 	} else {
 		mrqc = rd32(hw, TXGBE_RACTL);
 		if (mrqc & TXGBE_RACTL_RSSIPV4)
-			rss_hf |= ETH_RSS_IPV4;
+			rss_hf |= RTE_ETH_RSS_IPV4;
 		if (mrqc & TXGBE_RACTL_RSSIPV4TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		if (mrqc & TXGBE_RACTL_RSSIPV6)
-			rss_hf |= ETH_RSS_IPV6 |
-				  ETH_RSS_IPV6_EX;
+			rss_hf |= RTE_ETH_RSS_IPV6 |
+				  RTE_ETH_RSS_IPV6_EX;
 		if (mrqc & TXGBE_RACTL_RSSIPV6TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
-				  ETH_RSS_IPV6_TCP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				  RTE_ETH_RSS_IPV6_TCP_EX;
 		if (mrqc & TXGBE_RACTL_RSSIPV4UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 		if (mrqc & TXGBE_RACTL_RSSIPV6UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
-				  ETH_RSS_IPV6_UDP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				  RTE_ETH_RSS_IPV6_UDP_EX;
 		if (!(mrqc & TXGBE_RACTL_RSSENA))
 			rss_hf = 0;
 	}
@@ -3046,7 +3046,7 @@ txgbe_rss_configure(struct rte_eth_dev *dev)
 	 */
 	if (adapter->rss_reta_updated == 0) {
 		reta = 0;
-		for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+		for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
 			if (j == dev->data->nb_rx_queues)
 				j = 0;
 			reta = (reta >> 8) | LS32(j, 24, 0xFF);
@@ -3083,12 +3083,12 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
 	num_pools = cfg->nb_queue_pools;
 	/* Check we have a valid number of pools */
-	if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+	if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
 		txgbe_rss_disable(dev);
 		return;
 	}
 	/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
-	nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+	nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
 
 	/*
 	 * split rx buffer up into sections, each for 1 traffic class
@@ -3103,7 +3103,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
 	}
 	/* zero alloc all unused TCs */
-	for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		uint32_t rxpbsize = rd32(hw, TXGBE_PBRXSIZE(i));
 
 		rxpbsize &= (~(0x3FF << 10));
@@ -3111,7 +3111,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
 	}
 
-	if (num_pools == ETH_16_POOLS) {
+	if (num_pools == RTE_ETH_16_POOLS) {
 		mrqc = TXGBE_PORTCTL_NUMTC_8;
 		mrqc |= TXGBE_PORTCTL_NUMVT_16;
 	} else {
@@ -3130,7 +3130,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	wr32(hw, TXGBE_POOLCTL, vt_ctl);
 
 	queue_mapping = 0;
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 		/*
 		 * mapping is done with 3 bits per priority,
 		 * so shift by i*3 each time
@@ -3151,7 +3151,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		wr32(hw, TXGBE_VLANTBL(i), 0xFFFFFFFF);
 
 	wr32(hw, TXGBE_POOLRXENA(0),
-			num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+			num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	wr32(hw, TXGBE_ETHADDRIDX, 0);
 	wr32(hw, TXGBE_ETHADDRASSL, 0xFFFFFFFF);
@@ -3221,7 +3221,7 @@ txgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
 	/*PF VF Transmit Enable*/
 	wr32(hw, TXGBE_POOLTXENA(0),
 		vmdq_tx_conf->nb_queue_pools ==
-				ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+				RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	/*Configure general DCB TX parameters*/
 	txgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3237,12 +3237,12 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
-	if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3252,7 +3252,7 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3270,12 +3270,12 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
-	if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3285,7 +3285,7 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3312,7 +3312,7 @@ txgbe_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3339,7 +3339,7 @@ txgbe_dcb_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3475,7 +3475,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(dev);
 
 	switch (dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_VMDQ_DCB:
+	case RTE_ETH_MQ_RX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/*
@@ -3486,8 +3486,8 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		/*Configure general VMDQ and DCB RX parameters*/
 		txgbe_vmdq_dcb_configure(dev);
 		break;
-	case ETH_MQ_RX_DCB:
-	case ETH_MQ_RX_DCB_RSS:
+	case RTE_ETH_MQ_RX_DCB:
+	case RTE_ETH_MQ_RX_DCB_RSS:
 		dcb_config->vt_mode = false;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -3500,7 +3500,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		break;
 	}
 	switch (dev->data->dev_conf.txmode.mq_mode) {
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/* get DCB and VT TX configuration parameters
@@ -3511,7 +3511,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		txgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
 		break;
 
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_DCB:
 		dcb_config->vt_mode = false;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/* get DCB TX configuration parameters from rte_eth_conf */
@@ -3527,15 +3527,15 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	nb_tcs = dcb_config->num_tcs.pfc_tcs;
 	/* Unpack map */
 	txgbe_dcb_unpack_map_cee(dcb_config, TXGBE_DCB_RX_CONFIG, map);
-	if (nb_tcs == ETH_4_TCS) {
+	if (nb_tcs == RTE_ETH_4_TCS) {
 		/* Avoid un-configured priority mapping to TC0 */
 		uint8_t j = 4;
 		uint8_t mask = 0xFF;
 
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
 			mask = (uint8_t)(mask & (~(1 << map[i])));
 		for (i = 0; mask && (i < TXGBE_DCB_TC_MAX); i++) {
-			if ((mask & 0x1) && j < ETH_DCB_NUM_USER_PRIORITIES)
+			if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
 				map[j++] = i;
 			mask >>= 1;
 		}
@@ -3576,7 +3576,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
 
 		/* zero alloc all unused TCs */
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			wr32(hw, TXGBE_PBRXSIZE(i), 0);
 	}
 	if (config_dcb_tx) {
@@ -3592,7 +3592,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			wr32(hw, TXGBE_PBTXDMATH(i), txpbthresh);
 		}
 		/* Clear unused TCs, if any, to zero buffer size */
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			wr32(hw, TXGBE_PBTXSIZE(i), 0);
 			wr32(hw, TXGBE_PBTXDMATH(i), 0);
 		}
@@ -3634,7 +3634,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	txgbe_dcb_config_tc_stats_raptor(hw, dcb_config);
 
 	/* Check if the PFC is supported */
-	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
 		for (i = 0; i < nb_tcs; i++) {
 			/* If the TC count is 8,
@@ -3648,7 +3648,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			tc->pfc = txgbe_dcb_pfc_enabled;
 		}
 		txgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
-		if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+		if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
 			pfc_en &= 0x0F;
 		ret = txgbe_dcb_config_pfc(hw, pfc_en, map);
 	}
@@ -3719,12 +3719,12 @@ void txgbe_configure_dcb(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	/* check support mq_mode for DCB */
-	if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB &&
-	    dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB &&
-	    dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS)
+	if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
 		return;
 
-	if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+	if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
 		return;
 
 	/** Configure DCB hardware **/
@@ -3780,7 +3780,7 @@ txgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 
 	/* pool enabling for receive - 64 */
 	wr32(hw, TXGBE_POOLRXENA(0), UINT32_MAX);
-	if (num_pools == ETH_64_POOLS)
+	if (num_pools == RTE_ETH_64_POOLS)
 		wr32(hw, TXGBE_POOLRXENA(1), UINT32_MAX);
 
 	/*
@@ -3904,11 +3904,11 @@ txgbe_config_vf_rss(struct rte_eth_dev *dev)
 	mrqc = rd32(hw, TXGBE_PORTCTL);
 	mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_64;
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_32;
 		break;
 
@@ -3931,15 +3931,15 @@ txgbe_config_vf_default(struct rte_eth_dev *dev)
 	mrqc = rd32(hw, TXGBE_PORTCTL);
 	mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_64;
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_32;
 		break;
 
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_16;
 		break;
 	default:
@@ -3962,21 +3962,21 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * any DCB/RSS w/o VMDq multi-queue setting
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_DCB_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			txgbe_rss_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
 			txgbe_vmdq_dcb_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
 			txgbe_vmdq_rx_hw_configure(dev);
 			break;
 
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_NONE:
 		default:
 			/* if mq_mode is none, disable rss mode.*/
 			txgbe_rss_disable(dev);
@@ -3987,18 +3987,18 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * Support RSS together with SRIOV.
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			txgbe_config_vf_rss(dev);
 			break;
-		case ETH_MQ_RX_VMDQ_DCB:
-		case ETH_MQ_RX_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_DCB:
 		/* In SRIOV, the configuration is the same as VMDq case */
 			txgbe_vmdq_dcb_configure(dev);
 			break;
 		/* DCB/RSS together with SRIOV is not supported */
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
-		case ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
 			PMD_INIT_LOG(ERR,
 				"Could not support DCB/RSS with VMDq & SRIOV");
 			return -1;
@@ -4028,7 +4028,7 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV inactive scheme
 		 * any DCB w/o VMDq multi-queue setting
 		 */
-		if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+		if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
 			txgbe_vmdq_tx_hw_configure(hw);
 		else
 			wr32m(hw, TXGBE_PORTCTL, TXGBE_PORTCTL_NUMVT_MASK, 0);
@@ -4038,13 +4038,13 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV active scheme
 		 * FIXME if support DCB together with VMDq & SRIOV
 		 */
-		case ETH_64_POOLS:
+		case RTE_ETH_64_POOLS:
 			mtqc = TXGBE_PORTCTL_NUMVT_64;
 			break;
-		case ETH_32_POOLS:
+		case RTE_ETH_32_POOLS:
 			mtqc = TXGBE_PORTCTL_NUMVT_32;
 			break;
-		case ETH_16_POOLS:
+		case RTE_ETH_16_POOLS:
 			mtqc = TXGBE_PORTCTL_NUMVT_16;
 			break;
 		default:
@@ -4107,10 +4107,10 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* Sanity check */
 	dev->dev_ops->dev_infos_get(dev, &dev_info);
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		rsc_capable = true;
 
-	if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
 				   "support it");
 		return -EINVAL;
@@ -4118,22 +4118,22 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* RSC global configuration */
 
-	if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
-	     (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+	     (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		PMD_INIT_LOG(CRIT, "LRO can't be enabled when HW CRC "
 				    "is disabled");
 		return -EINVAL;
 	}
 
 	rfctl = rd32(hw, TXGBE_PSRCTL);
-	if (rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if (rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		rfctl &= ~TXGBE_PSRCTL_RSCDIA;
 	else
 		rfctl |= TXGBE_PSRCTL_RSCDIA;
 	wr32(hw, TXGBE_PSRCTL, rfctl);
 
 	/* If LRO hasn't been requested - we are done here. */
-	if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		return 0;
 
 	/* Set PSRCTL.RSCACK bit */
@@ -4273,7 +4273,7 @@ txgbe_set_rx_function(struct rte_eth_dev *dev)
 		struct txgbe_rx_queue *rxq = dev->data->rx_queues[i];
 
 		rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_SECURITY);
+				RTE_ETH_RX_OFFLOAD_SECURITY);
 	}
 #endif
 }
@@ -4316,7 +4316,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Configure CRC stripping, if any.
 	 */
 	hlreg0 = rd32(hw, TXGBE_SECRXCTL);
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		hlreg0 &= ~TXGBE_SECRXCTL_CRCSTRIP;
 	else
 		hlreg0 |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4344,7 +4344,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first.
 	 */
-	rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -4354,7 +4354,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 * call to configure.
 		 */
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -4391,11 +4391,11 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 		if (dev->data->mtu + TXGBE_ETH_OVERHEAD +
 				2 * TXGBE_VLAN_TAG_SIZE > buf_size)
 			dev->data->scattered_rx = 1;
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		dev->data->scattered_rx = 1;
 
 	/*
@@ -4410,7 +4410,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 */
 	rxcsum = rd32(hw, TXGBE_PSRCTL);
 	rxcsum |= TXGBE_PSRCTL_PCSD;
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= TXGBE_PSRCTL_L4CSUM;
 	else
 		rxcsum &= ~TXGBE_PSRCTL_L4CSUM;
@@ -4419,7 +4419,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 
 	if (hw->mac.type == txgbe_mac_raptor) {
 		rdrxctl = rd32(hw, TXGBE_SECRXCTL);
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rdrxctl &= ~TXGBE_SECRXCTL_CRCSTRIP;
 		else
 			rdrxctl |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4542,8 +4542,8 @@ txgbe_dev_rxtx_start(struct rte_eth_dev *dev)
 		txgbe_setup_loopback_link_raptor(hw);
 
 #ifdef RTE_LIB_SECURITY
-	if ((dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) ||
-	    (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_SECURITY)) {
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) ||
+	    (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY)) {
 		ret = txgbe_crypto_enable_ipsec(dev);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR,
@@ -4851,7 +4851,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first.
 	 */
-	rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	/* Set PSR type for VF RSS according to max Rx queue */
 	psrtype = TXGBE_VFPLCFG_PSRL4HDR |
@@ -4903,7 +4903,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
 		 */
 		wr32(hw, TXGBE_RXCFG(i), srrctl);
 
-		if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
 		    /* It adds dual VLAN length for supporting dual VLAN */
 		    (dev->data->mtu + TXGBE_ETH_OVERHEAD +
 				2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
@@ -4912,8 +4912,8 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
 			dev->data->scattered_rx = 1;
 		}
 
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	/*
@@ -5084,7 +5084,7 @@ txgbe_config_rss_filter(struct rte_eth_dev *dev,
 	 * little-endian order.
 	 */
 	reta = 0;
-	for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+	for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
 		if (j == conf->conf.queue_num)
 			j = 0;
 		reta = (reta >> 8) | LS32(conf->conf.queue[j], 24, 0xFF);
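
For reference, the renamed RSS bits as seen from an application; a minimal
sketch of configuring a port for RSS (names illustrative, error handling
trimmed):

#include <rte_ethdev.h>

static int
configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_dev_info info;
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
		.txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE },
	};
	int ret = rte_eth_dev_info_get(port_id, &info);

	if (ret != 0)
		return ret;
	/* Clip the requested hash types to what the port supports. */
	conf.rx_adv_conf.rss_conf.rss_hf =
		(RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
		info.flow_type_rss_offloads;
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}
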
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index b96f58a3f848..27d4c842c0e7 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -309,7 +309,7 @@ struct txgbe_rx_queue {
 	uint8_t             rx_deferred_start; /**< not in global dev start. */
 	/** flags to set in mbuf when a vlan is detected. */
 	uint64_t            vlan_flags;
-	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
 	/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
 	struct rte_mbuf fake_mbuf;
 	/** hold packets to return to application */
@@ -392,7 +392,7 @@ struct txgbe_tx_queue {
 	uint8_t             pthresh;       /**< Prefetch threshold register. */
 	uint8_t             hthresh;       /**< Host threshold register. */
 	uint8_t             wthresh;       /**< Write-back threshold reg. */
-	uint64_t            offloads; /* Tx offload flags of DEV_TX_OFFLOAD_* */
+	uint64_t            offloads; /* Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 	uint32_t            ctx_curr;      /**< Hardware context states. */
 	/** Hardware context0 history. */
 	struct txgbe_ctx_info ctx_cache[TXGBE_CTX_NUM];
diff --git a/drivers/net/txgbe/txgbe_tm.c b/drivers/net/txgbe/txgbe_tm.c
index 3abe3959eb1a..3171be73d05d 100644
--- a/drivers/net/txgbe/txgbe_tm.c
+++ b/drivers/net/txgbe/txgbe_tm.c
@@ -118,14 +118,14 @@ txgbe_tc_nb_get(struct rte_eth_dev *dev)
 	uint8_t nb_tcs = 0;
 
 	eth_conf = &dev->data->dev_conf;
-	if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+	if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 		nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
-	} else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	} else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
-		    ETH_32_POOLS)
-			nb_tcs = ETH_4_TCS;
+		    RTE_ETH_32_POOLS)
+			nb_tcs = RTE_ETH_4_TCS;
 		else
-			nb_tcs = ETH_8_TCS;
+			nb_tcs = RTE_ETH_8_TCS;
 	} else {
 		nb_tcs = 1;
 	}
@@ -364,10 +364,10 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 	if (vf_num) {
 		/* no DCB */
 		if (nb_tcs == 1) {
-			if (vf_num >= ETH_32_POOLS) {
+			if (vf_num >= RTE_ETH_32_POOLS) {
 				*nb = 2;
 				*base = vf_num * 2;
-			} else if (vf_num >= ETH_16_POOLS) {
+			} else if (vf_num >= RTE_ETH_16_POOLS) {
 				*nb = 4;
 				*base = vf_num * 4;
 			} else {
@@ -381,7 +381,7 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 		}
 	} else {
 		/* VT off */
-		if (nb_tcs == ETH_8_TCS) {
+		if (nb_tcs == RTE_ETH_8_TCS) {
 			switch (tc_node_no) {
 			case 0:
 				*base = 0;
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a7935a716de9..27f81a5cafc5 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -125,8 +125,8 @@ static pthread_mutex_t internal_list_lock = PTHREAD_MUTEX_INITIALIZER;
 
 static struct rte_eth_link pmd_link = {
 		.link_speed = 10000,
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_status = ETH_LINK_DOWN
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_status = RTE_ETH_LINK_DOWN
 };
 
 struct rte_vhost_vring_state {
@@ -823,7 +823,7 @@ new_device(int vid)
 
 	rte_vhost_get_mtu(vid, &eth_dev->data->mtu);
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	rte_atomic32_set(&internal->dev_attached, 1);
 	update_queuing_status(eth_dev);
@@ -858,7 +858,7 @@ destroy_device(int vid)
 	rte_atomic32_set(&internal->dev_attached, 0);
 	update_queuing_status(eth_dev);
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	if (eth_dev->data->rx_queues && eth_dev->data->tx_queues) {
 		for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1124,7 +1124,7 @@ eth_dev_configure(struct rte_eth_dev *dev)
 	if (vhost_driver_setup(dev) < 0)
 		return -1;
 
-	internal->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	internal->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 	return 0;
 }
@@ -1273,9 +1273,9 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_tx_queues = internal->max_queues;
 	dev_info->min_rx_bufsize = 0;
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-				DEV_TX_OFFLOAD_VLAN_INSERT;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	return 0;
 }
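
The RTE_ETH_LINK_* constants above are what applications compare against when
polling link state; a minimal sketch:

#include <rte_ethdev.h>

static int
port_is_up(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return 0;
	return link.link_status == RTE_ETH_LINK_UP;
}
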
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 047d3f43a3cf..74ede2aeccc1 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -712,7 +712,7 @@ int
 virtio_dev_close(struct rte_eth_dev *dev)
 {
 	struct virtio_hw *hw = dev->data->dev_private;
-	struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+	struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
 
 	PMD_INIT_LOG(DEBUG, "virtio_dev_close");
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1771,7 +1771,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 		     hw->mac_addr[0], hw->mac_addr[1], hw->mac_addr[2],
 		     hw->mac_addr[3], hw->mac_addr[4], hw->mac_addr[5]);
 
-	if (hw->speed == ETH_SPEED_NUM_UNKNOWN) {
+	if (hw->speed == RTE_ETH_SPEED_NUM_UNKNOWN) {
 		if (virtio_with_feature(hw, VIRTIO_NET_F_SPEED_DUPLEX)) {
 			config = &local_config;
 			virtio_read_dev_config(hw,
@@ -1785,7 +1785,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 		}
 	}
 	if (hw->duplex == DUPLEX_UNKNOWN)
-		hw->duplex = ETH_LINK_FULL_DUPLEX;
+		hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	PMD_INIT_LOG(DEBUG, "link speed = %d, duplex = %d",
 		hw->speed, hw->duplex);
 	if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ)) {
@@ -1884,7 +1884,7 @@ int
 eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
 {
 	struct virtio_hw *hw = eth_dev->data->dev_private;
-	uint32_t speed = ETH_SPEED_NUM_UNKNOWN;
+	uint32_t speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	int vectorized = 0;
 	int ret;
 
@@ -1955,22 +1955,22 @@ static uint32_t
 virtio_dev_speed_capa_get(uint32_t speed)
 {
 	switch (speed) {
-	case ETH_SPEED_NUM_10G:
-		return ETH_LINK_SPEED_10G;
-	case ETH_SPEED_NUM_20G:
-		return ETH_LINK_SPEED_20G;
-	case ETH_SPEED_NUM_25G:
-		return ETH_LINK_SPEED_25G;
-	case ETH_SPEED_NUM_40G:
-		return ETH_LINK_SPEED_40G;
-	case ETH_SPEED_NUM_50G:
-		return ETH_LINK_SPEED_50G;
-	case ETH_SPEED_NUM_56G:
-		return ETH_LINK_SPEED_56G;
-	case ETH_SPEED_NUM_100G:
-		return ETH_LINK_SPEED_100G;
-	case ETH_SPEED_NUM_200G:
-		return ETH_LINK_SPEED_200G;
+	case RTE_ETH_SPEED_NUM_10G:
+		return RTE_ETH_LINK_SPEED_10G;
+	case RTE_ETH_SPEED_NUM_20G:
+		return RTE_ETH_LINK_SPEED_20G;
+	case RTE_ETH_SPEED_NUM_25G:
+		return RTE_ETH_LINK_SPEED_25G;
+	case RTE_ETH_SPEED_NUM_40G:
+		return RTE_ETH_LINK_SPEED_40G;
+	case RTE_ETH_SPEED_NUM_50G:
+		return RTE_ETH_LINK_SPEED_50G;
+	case RTE_ETH_SPEED_NUM_56G:
+		return RTE_ETH_LINK_SPEED_56G;
+	case RTE_ETH_SPEED_NUM_100G:
+		return RTE_ETH_LINK_SPEED_100G;
+	case RTE_ETH_SPEED_NUM_200G:
+		return RTE_ETH_LINK_SPEED_200G;
 	default:
 		return 0;
 	}
@@ -2086,14 +2086,14 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "configure");
 	req_features = VIRTIO_PMD_DEFAULT_GUEST_FEATURES;
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) {
 		PMD_DRV_LOG(ERR,
 			"Unsupported Rx multi queue mode %d",
 			rxmode->mq_mode);
 		return -EINVAL;
 	}
 
-	if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		PMD_DRV_LOG(ERR,
 			"Unsupported Tx multi queue mode %d",
 			txmode->mq_mode);
@@ -2111,20 +2111,20 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 
 	hw->max_rx_pkt_len = ether_hdr_len + rxmode->mtu;
 
-	if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-			   DEV_RX_OFFLOAD_TCP_CKSUM))
+	if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
 		req_features |= (1ULL << VIRTIO_NET_F_GUEST_CSUM);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		req_features |=
 			(1ULL << VIRTIO_NET_F_GUEST_TSO4) |
 			(1ULL << VIRTIO_NET_F_GUEST_TSO6);
 
-	if (tx_offloads & (DEV_TX_OFFLOAD_UDP_CKSUM |
-			   DEV_TX_OFFLOAD_TCP_CKSUM))
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM))
 		req_features |= (1ULL << VIRTIO_NET_F_CSUM);
 
-	if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		req_features |=
 			(1ULL << VIRTIO_NET_F_HOST_TSO4) |
 			(1ULL << VIRTIO_NET_F_HOST_TSO6);
@@ -2136,15 +2136,15 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 			return ret;
 	}
 
-	if ((rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-			    DEV_RX_OFFLOAD_TCP_CKSUM)) &&
+	if ((rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			    RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) &&
 		!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_CSUM)) {
 		PMD_DRV_LOG(ERR,
 			"rx checksum not available on this host");
 		return -ENOTSUP;
 	}
 
-	if ((rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) &&
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
 		(!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO4) ||
 		 !virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO6))) {
 		PMD_DRV_LOG(ERR,
@@ -2156,12 +2156,12 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 	if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
 		virtio_dev_cq_start(dev);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 		hw->vlan_strip = 1;
 
-	hw->rx_ol_scatter = (rx_offloads & DEV_RX_OFFLOAD_SCATTER);
+	hw->rx_ol_scatter = (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
 
-	if ((rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 			!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
 		PMD_DRV_LOG(ERR,
 			    "vlan filtering not available on this host");
@@ -2214,7 +2214,7 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 				hw->use_vec_rx = 0;
 			}
 
-			if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+			if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 				PMD_DRV_LOG(INFO,
 					"disabled packed ring vectorized rx for TCP_LRO enabled");
 				hw->use_vec_rx = 0;
@@ -2241,10 +2241,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 				hw->use_vec_rx = 0;
 			}
 
-			if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-					   DEV_RX_OFFLOAD_TCP_CKSUM |
-					   DEV_RX_OFFLOAD_TCP_LRO |
-					   DEV_RX_OFFLOAD_VLAN_STRIP)) {
+			if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+					   RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+					   RTE_ETH_RX_OFFLOAD_TCP_LRO |
+					   RTE_ETH_RX_OFFLOAD_VLAN_STRIP)) {
 				PMD_DRV_LOG(INFO,
 					"disabled split ring vectorized rx for offloading enabled");
 				hw->use_vec_rx = 0;
@@ -2437,7 +2437,7 @@ virtio_dev_stop(struct rte_eth_dev *dev)
 {
 	struct virtio_hw *hw = dev->data->dev_private;
 	struct rte_eth_link link;
-	struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+	struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
 
 	PMD_INIT_LOG(DEBUG, "stop");
 	dev->data->dev_started = 0;
@@ -2478,28 +2478,28 @@ virtio_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complet
 	memset(&link, 0, sizeof(link));
 	link.link_duplex = hw->duplex;
 	link.link_speed  = hw->speed;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	if (!hw->started) {
-		link.link_status = ETH_LINK_DOWN;
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	} else if (virtio_with_feature(hw, VIRTIO_NET_F_STATUS)) {
 		PMD_INIT_LOG(DEBUG, "Get link status from hw");
 		virtio_read_dev_config(hw,
 				offsetof(struct virtio_net_config, status),
 				&status, sizeof(status));
 		if ((status & VIRTIO_NET_S_LINK_UP) == 0) {
-			link.link_status = ETH_LINK_DOWN;
-			link.link_speed = ETH_SPEED_NUM_NONE;
+			link.link_status = RTE_ETH_LINK_DOWN;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 			PMD_INIT_LOG(DEBUG, "Port %d is down",
 				     dev->data->port_id);
 		} else {
-			link.link_status = ETH_LINK_UP;
+			link.link_status = RTE_ETH_LINK_UP;
 			PMD_INIT_LOG(DEBUG, "Port %d is up",
 				     dev->data->port_id);
 		}
 	} else {
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -2512,8 +2512,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct virtio_hw *hw = dev->data->dev_private;
 	uint64_t offloads = rxmode->offloads;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if ((offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if ((offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 				!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
 
 			PMD_DRV_LOG(NOTICE,
@@ -2523,8 +2523,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		}
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK)
-		hw->vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	if (mask & RTE_ETH_VLAN_STRIP_MASK)
+		hw->vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 	return 0;
 }
@@ -2546,32 +2546,32 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = hw->max_mtu;
 
 	host_features = VIRTIO_OPS(hw)->get_features(hw);
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	if (host_features & (1ULL << VIRTIO_NET_F_MRG_RXBUF))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SCATTER;
 	if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
 		dev_info->rx_offload_capa |=
-			DEV_RX_OFFLOAD_TCP_CKSUM |
-			DEV_RX_OFFLOAD_UDP_CKSUM;
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
 	}
 	if (host_features & (1ULL << VIRTIO_NET_F_CTRL_VLAN))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_FILTER;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	tso_mask = (1ULL << VIRTIO_NET_F_GUEST_TSO4) |
 		(1ULL << VIRTIO_NET_F_GUEST_TSO6);
 	if ((host_features & tso_mask) == tso_mask)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_LRO;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-				    DEV_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				    RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	if (host_features & (1ULL << VIRTIO_NET_F_CSUM)) {
 		dev_info->tx_offload_capa |=
-			DEV_TX_OFFLOAD_UDP_CKSUM |
-			DEV_TX_OFFLOAD_TCP_CKSUM;
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 	}
 	tso_mask = (1ULL << VIRTIO_NET_F_HOST_TSO4) |
 		(1ULL << VIRTIO_NET_F_HOST_TSO6);
 	if ((host_features & tso_mask) == tso_mask)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (host_features & (1ULL << VIRTIO_F_RING_PACKED)) {
 		/*
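
Aside: ethdev exports rte_eth_speed_bitflag() for the NUM -> bitflag mapping
that virtio_dev_speed_capa_get() above open-codes; with the renamed constants
it reads (sketch only):

#include <rte_ethdev.h>

static uint32_t
speed_capa_10g(void)
{
	/* Returns RTE_ETH_LINK_SPEED_10G. */
	return rte_eth_speed_bitflag(RTE_ETH_SPEED_NUM_10G,
			RTE_ETH_LINK_FULL_DUPLEX);
}
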
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index a19895af1f17..26d9edf5319c 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -41,20 +41,20 @@
 #define	VMXNET3_TX_MAX_SEG	UINT8_MAX
 
 #define VMXNET3_TX_OFFLOAD_CAP		\
-	(DEV_TX_OFFLOAD_VLAN_INSERT |	\
-	 DEV_TX_OFFLOAD_TCP_CKSUM |	\
-	 DEV_TX_OFFLOAD_UDP_CKSUM |	\
-	 DEV_TX_OFFLOAD_TCP_TSO |	\
-	 DEV_TX_OFFLOAD_MULTI_SEGS)
+	(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |	\
+	 RTE_ETH_TX_OFFLOAD_TCP_CKSUM |	\
+	 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |	\
+	 RTE_ETH_TX_OFFLOAD_TCP_TSO |	\
+	 RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define VMXNET3_RX_OFFLOAD_CAP		\
-	(DEV_RX_OFFLOAD_VLAN_STRIP |	\
-	 DEV_RX_OFFLOAD_VLAN_FILTER |   \
-	 DEV_RX_OFFLOAD_SCATTER |	\
-	 DEV_RX_OFFLOAD_UDP_CKSUM |	\
-	 DEV_RX_OFFLOAD_TCP_CKSUM |	\
-	 DEV_RX_OFFLOAD_TCP_LRO |	\
-	 DEV_RX_OFFLOAD_RSS_HASH)
+	(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |	\
+	 RTE_ETH_RX_OFFLOAD_VLAN_FILTER |   \
+	 RTE_ETH_RX_OFFLOAD_SCATTER |	\
+	 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+	 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |	\
+	 RTE_ETH_RX_OFFLOAD_TCP_LRO |	\
+	 RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 int vmxnet3_segs_dynfield_offset = -1;
 
@@ -398,9 +398,9 @@ eth_vmxnet3_dev_init(struct rte_eth_dev *eth_dev)
 
 	/* set the initial link status */
 	memset(&link, 0, sizeof(link));
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_speed = ETH_SPEED_NUM_10G;
-	link.link_autoneg = ETH_LINK_FIXED;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 	rte_eth_linkstatus_set(eth_dev, &link);
 
 	return 0;
@@ -486,8 +486,8 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (dev->data->nb_tx_queues > VMXNET3_MAX_TX_QUEUES ||
 	    dev->data->nb_rx_queues > VMXNET3_MAX_RX_QUEUES) {
@@ -547,7 +547,7 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
 	hw->queueDescPA = mz->iova;
 	hw->queue_desc_len = (uint16_t)size;
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		/* Allocate memory structure for UPT1_RSSConf and configure */
 		mz = gpa_zone_reserve(dev, sizeof(struct VMXNET3_RSSConf),
 				      "rss_conf", rte_socket_id(),
@@ -843,15 +843,15 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
 	devRead->rxFilterConf.rxMode = 0;
 
 	/* Setting up feature flags */
-	if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		devRead->misc.uptFeatures |= VMXNET3_F_RXCSUM;
 
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		devRead->misc.uptFeatures |= VMXNET3_F_LRO;
 		devRead->misc.maxNumRxSG = 0;
 	}
 
-	if (port_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (port_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		ret = vmxnet3_rss_configure(dev);
 		if (ret != VMXNET3_SUCCESS)
 			return ret;
@@ -863,7 +863,7 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
 	}
 
 	ret = vmxnet3_dev_vlan_offload_set(dev,
-			ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+			RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
 	if (ret)
 		return ret;
 
@@ -930,7 +930,7 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
 	}
 
 	if (VMXNET3_VERSION_GE_4(hw) &&
-	    dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	    dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		/* Check for additional RSS  */
 		ret = vmxnet3_v4_rss_configure(dev);
 		if (ret != VMXNET3_SUCCESS) {
@@ -1039,9 +1039,9 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
 
 	/* Clear recorded link status */
 	memset(&link, 0, sizeof(link));
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_speed = ETH_SPEED_NUM_10G;
-	link.link_autoneg = ETH_LINK_FIXED;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 	rte_eth_linkstatus_set(dev, &link);
 
 	hw->adapter_stopped = 1;
@@ -1365,7 +1365,7 @@ vmxnet3_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
 	dev_info->min_mtu = VMXNET3_MIN_MTU;
 	dev_info->max_mtu = VMXNET3_MAX_MTU;
-	dev_info->speed_capa = ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
 	dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
 
 	dev_info->flow_type_rss_offloads = VMXNET3_RSS_OFFLOAD_ALL;
@@ -1447,10 +1447,10 @@ __vmxnet3_dev_link_update(struct rte_eth_dev *dev,
 	ret = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
 
 	if (ret & 0x1)
-		link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_speed = ETH_SPEED_NUM_10G;
-	link.link_autoneg = ETH_LINK_FIXED;
+		link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 
 	return rte_eth_linkstatus_set(dev, &link);
 }
@@ -1503,7 +1503,7 @@ vmxnet3_dev_promiscuous_disable(struct rte_eth_dev *dev)
 	uint32_t *vf_table = hw->shared->devRead.rxFilterConf.vfTable;
 	uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 		memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
 	else
 		memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
@@ -1573,8 +1573,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	uint32_t *vf_table = devRead->rxFilterConf.vfTable;
 	uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			devRead->misc.uptFeatures |= UPT1_F_RXVLAN;
 		else
 			devRead->misc.uptFeatures &= ~UPT1_F_RXVLAN;
@@ -1583,8 +1583,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 				       VMXNET3_CMD_UPDATE_FEATURE);
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
 		else
 			memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.h b/drivers/net/vmxnet3/vmxnet3_ethdev.h
index 8950175460f0..ef858ac9512f 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.h
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.h
@@ -32,18 +32,18 @@
 				VMXNET3_MAX_RX_QUEUES + 1)
 
 #define VMXNET3_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 
 #define VMXNET3_V4_RSS_MASK ( \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define VMXNET3_MANDATORY_V4_RSS ( \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP)
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 
 /* RSS configuration structure - shared with device through GPA */
 typedef struct VMXNET3_RSSConf {
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index b01c4c01f9c9..870100fa4f11 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -1326,13 +1326,13 @@ vmxnet3_v4_rss_configure(struct rte_eth_dev *dev)
 	rss_hf = port_rss_conf->rss_hf &
 		(VMXNET3_V4_RSS_MASK | VMXNET3_RSS_OFFLOAD_ALL);
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP6;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP6;
 
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
@@ -1389,13 +1389,13 @@ vmxnet3_rss_configure(struct rte_eth_dev *dev)
 	/* loading hashType */
 	dev_rss_conf->hashType = 0;
 	rss_hf = port_rss_conf->rss_hf & VMXNET3_RSS_OFFLOAD_ALL;
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV4;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV6;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV6;
 
 	return VMXNET3_SUCCESS;
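
Note: the RSS hunks in this file keep the flag values and only change the spelling. Applications normally intersect the hash functions they request with the device capability mask first; a short sketch under that assumption (clamp_rss_hf is an illustrative name):

#include <stdint.h>
#include <rte_ethdev.h>

/* Restrict requested RSS hash functions to what the port supports,
 * the usual companion to masks like VMXNET3_RSS_OFFLOAD_ALL above. */
static uint64_t
clamp_rss_hf(uint16_t port_id, uint64_t requested)
{
	struct rte_eth_dev_info info;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return 0;
	return requested & info.flow_type_rss_offloads;
}
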
diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
index 68e3c13730ad..a9fef2297842 100644
--- a/examples/bbdev_app/main.c
+++ b/examples/bbdev_app/main.c
@@ -71,11 +71,11 @@ mbuf_input(struct rte_mbuf *mbuf)
 
 static const struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -328,7 +328,7 @@ check_port_link_status(uint16_t port_id)
 
 		if (link_get_err >= 0 && link.link_status) {
 			const char *dp = (link.link_duplex ==
-				ETH_LINK_FULL_DUPLEX) ?
+				RTE_ETH_LINK_FULL_DUPLEX) ?
 				"full-duplex" : "half-duplex";
 			printf("\nPort %u Link Up - speed %s - %s\n",
 				port_id,
diff --git a/examples/bond/main.c b/examples/bond/main.c
index 6352a715c0d9..3f41d8e5965d 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -115,17 +115,17 @@ static struct rte_mempool *mbuf_pool;
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -149,9 +149,9 @@ slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
 			"Error during getting device (port %u) info: %s\n",
 			portid, strerror(-retval));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
@@ -241,9 +241,9 @@ bond_port_init(struct rte_mempool *mbuf_pool)
 			"Error during getting device (port %u) info: %s\n",
 			BOND_PORT, strerror(-retval));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	retval = rte_eth_dev_configure(BOND_PORT, 1, 1, &local_port_conf);
 	if (retval != 0)
 		rte_exit(EXIT_FAILURE, "port %u: configuration failed (res=%d)\n",
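
Note: this capability check is repeated in almost every example touched below; factored out it is a one-liner. A minimal sketch with the renamed flag (the helper name is mine, not part of the patch):

#include <rte_ethdev.h>

/* Request fast mbuf free only when the PMD advertises it. */
static void
maybe_enable_fast_free(struct rte_eth_conf *conf,
		       const struct rte_eth_dev_info *info)
{
	if (info->tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
}

Called between rte_eth_dev_info_get() and rte_eth_dev_configure(), this would keep the per-example boilerplate in one place.
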
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index 8c4a8feec0c2..c681e237ea46 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -80,15 +80,15 @@ struct app_stats prev_app_stats;
 
 static const struct rte_eth_conf port_conf_default = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
-			.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
-				ETH_RSS_TCP | ETH_RSS_SCTP,
+			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+				RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
 		}
 	},
 };
@@ -126,9 +126,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
diff --git a/examples/ethtool/ethtool-app/main.c b/examples/ethtool/ethtool-app/main.c
index 1bc675962bf3..cdd9e9b60bd8 100644
--- a/examples/ethtool/ethtool-app/main.c
+++ b/examples/ethtool/ethtool-app/main.c
@@ -98,7 +98,7 @@ static void setup_ports(struct app_config *app_cfg, int cnt_ports)
 	int ret;
 
 	memset(&cfg_port, 0, sizeof(cfg_port));
-	cfg_port.txmode.mq_mode = ETH_MQ_TX_NONE;
+	cfg_port.txmode.mq_mode = RTE_ETH_MQ_TX_NONE;
 
 	for (idx_port = 0; idx_port < cnt_ports; idx_port++) {
 		struct app_port *ptr_port = &app_cfg->ports[idx_port];
diff --git a/examples/ethtool/lib/rte_ethtool.c b/examples/ethtool/lib/rte_ethtool.c
index 413251630709..e7cdf8d5775b 100644
--- a/examples/ethtool/lib/rte_ethtool.c
+++ b/examples/ethtool/lib/rte_ethtool.c
@@ -233,13 +233,13 @@ rte_ethtool_get_pauseparam(uint16_t port_id,
 	pause_param->tx_pause = 0;
 	pause_param->rx_pause = 0;
 	switch (fc_conf.mode) {
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		pause_param->rx_pause = 1;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		pause_param->tx_pause = 1;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		pause_param->rx_pause = 1;
 		pause_param->tx_pause = 1;
 	default:
@@ -277,14 +277,14 @@ rte_ethtool_set_pauseparam(uint16_t port_id,
 
 	if (pause_param->tx_pause) {
 		if (pause_param->rx_pause)
-			fc_conf.mode = RTE_FC_FULL;
+			fc_conf.mode = RTE_ETH_FC_FULL;
 		else
-			fc_conf.mode = RTE_FC_TX_PAUSE;
+			fc_conf.mode = RTE_ETH_FC_TX_PAUSE;
 	} else {
 		if (pause_param->rx_pause)
-			fc_conf.mode = RTE_FC_RX_PAUSE;
+			fc_conf.mode = RTE_ETH_FC_RX_PAUSE;
 		else
-			fc_conf.mode = RTE_FC_NONE;
+			fc_conf.mode = RTE_ETH_FC_NONE;
 	}
 
 	status = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
@@ -398,12 +398,12 @@ rte_ethtool_net_set_rx_mode(uint16_t port_id)
 	for (vf = 0; vf < num_vfs; vf++) {
 #ifdef RTE_NET_IXGBE
 		rte_pmd_ixgbe_set_vf_rxmode(port_id, vf,
-			ETH_VMDQ_ACCEPT_UNTAG, 0);
+			RTE_ETH_VMDQ_ACCEPT_UNTAG, 0);
 #endif
 	}
 
 	/* Enable Rx vlan filter, VF unsupported status is discarded */
-	ret = rte_eth_dev_set_vlan_offload(port_id, ETH_VLAN_FILTER_MASK);
+	ret = rte_eth_dev_set_vlan_offload(port_id, RTE_ETH_VLAN_FILTER_MASK);
 	if (ret != 0)
 		return ret;
 
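
Note: the pauseparam hunks above are a straight rename of the four flow-control modes; the rx/tx pause bits map onto them as in this sketch (fc_mode_from_pause is an illustrative name):

#include <rte_ethdev.h>

/* Combine independent rx/tx pause requests into one
 * rte_eth_fc_mode, mirroring rte_ethtool_set_pauseparam(). */
static enum rte_eth_fc_mode
fc_mode_from_pause(int tx_pause, int rx_pause)
{
	if (tx_pause)
		return rx_pause ? RTE_ETH_FC_FULL : RTE_ETH_FC_TX_PAUSE;
	return rx_pause ? RTE_ETH_FC_RX_PAUSE : RTE_ETH_FC_NONE;
}
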
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index e26be8edf28f..193a16463449 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -283,13 +283,13 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 	struct rte_eth_rxconf rx_conf;
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
-				.rss_hf = ETH_RSS_IP |
-					  ETH_RSS_TCP |
-					  ETH_RSS_UDP,
+				.rss_hf = RTE_ETH_RSS_IP |
+					  RTE_ETH_RSS_TCP |
+					  RTE_ETH_RSS_UDP,
 			}
 		}
 	};
@@ -311,12 +311,12 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_RSS_HASH)
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_RSS_HASH)
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	rx_conf = dev_info.default_rxconf;
 	rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index 476b147bdfcc..1b841d46ad93 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -614,13 +614,13 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 	struct rte_eth_rxconf rx_conf;
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
-				.rss_hf = ETH_RSS_IP |
-					  ETH_RSS_TCP |
-					  ETH_RSS_UDP,
+				.rss_hf = RTE_ETH_RSS_IP |
+					  RTE_ETH_RSS_TCP |
+					  RTE_ETH_RSS_UDP,
 			}
 		}
 	};
@@ -642,9 +642,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	rx_conf = dev_info.default_rxconf;
 	rx_conf.offloads = port_conf.rxmode.offloads;
 
diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
index 8a43f6ac0f92..6185b340600c 100644
--- a/examples/flow_classify/flow_classify.c
+++ b/examples/flow_classify/flow_classify.c
@@ -212,9 +212,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/flow_filtering/main.c b/examples/flow_filtering/main.c
index dd8a33d036ee..bfc1949c8428 100644
--- a/examples/flow_filtering/main.c
+++ b/examples/flow_filtering/main.c
@@ -113,7 +113,7 @@ assert_link_status(void)
 	memset(&link, 0, sizeof(link));
 	do {
 		link_get_err = rte_eth_link_get(port_id, &link);
-		if (link_get_err == 0 && link.link_status == ETH_LINK_UP)
+		if (link_get_err == 0 && link.link_status == RTE_ETH_LINK_UP)
 			break;
 		rte_delay_ms(CHECK_INTERVAL);
 	} while (--rep_cnt);
@@ -121,7 +121,7 @@ assert_link_status(void)
 	if (link_get_err < 0)
 		rte_exit(EXIT_FAILURE, ":: error: link get is failing: %s\n",
 			 rte_strerror(-link_get_err));
-	if (link.link_status == ETH_LINK_DOWN)
+	if (link.link_status == RTE_ETH_LINK_DOWN)
 		rte_exit(EXIT_FAILURE, ":: error: link is still down\n");
 }
 
@@ -138,12 +138,12 @@ init_port(void)
 		},
 		.txmode = {
 			.offloads =
-				DEV_TX_OFFLOAD_VLAN_INSERT |
-				DEV_TX_OFFLOAD_IPV4_CKSUM  |
-				DEV_TX_OFFLOAD_UDP_CKSUM   |
-				DEV_TX_OFFLOAD_TCP_CKSUM   |
-				DEV_TX_OFFLOAD_SCTP_CKSUM  |
-				DEV_TX_OFFLOAD_TCP_TSO,
+				RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_TCP_TSO,
 		},
 	};
 	struct rte_eth_txconf txq_conf;
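
Note: assert_link_status() above shows the renamed RTE_ETH_LINK_UP/RTE_ETH_LINK_DOWN in the common polling loop. Standalone it looks like this sketch (wait_link_up and the 100 ms interval are illustrative choices):

#include <string.h>
#include <rte_cycles.h>
#include <rte_ethdev.h>

/* Poll the port until the link is up or max_tries runs out;
 * returns 0 on link up, -1 otherwise. */
static int
wait_link_up(uint16_t port_id, int max_tries)
{
	struct rte_eth_link link;

	while (max_tries-- > 0) {
		memset(&link, 0, sizeof(link));
		if (rte_eth_link_get(port_id, &link) == 0 &&
		    link.link_status == RTE_ETH_LINK_UP)
			return 0;
		rte_delay_ms(100);
	}
	return -1;
}
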
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index ccfee585f850..b1aa2767a0af 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -819,12 +819,12 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
 	/* Configuring port to use RSS for multiple RX queues. 8< */
 	static const struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_PROTO_MASK,
+				.rss_hf = RTE_ETH_RSS_PROTO_MASK,
 			}
 		}
 	};
@@ -852,9 +852,9 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
 
 	local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(portid, nb_queues, 1, &local_port_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Cannot configure device:"
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 8644454a9aef..0307709f2b4a 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -149,13 +149,13 @@ static struct rte_eth_conf port_conf = {
 		.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
 			RTE_ETHER_CRC_LEN,
 		.split_hdr_size = 0,
-		.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
-			     DEV_RX_OFFLOAD_SCATTER),
+		.offloads = (RTE_ETH_RX_OFFLOAD_CHECKSUM |
+			     RTE_ETH_RX_OFFLOAD_SCATTER),
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_MULTI_SEGS),
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
 	},
 };
 
@@ -624,7 +624,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 9ba02e687adb..0290767af473 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -45,7 +45,7 @@ link_next(struct link *link)
 static struct rte_eth_conf port_conf_default = {
 	.link_speeds = 0,
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
 		.split_hdr_size = 0, /* Header split buffer size */
 	},
@@ -57,12 +57,12 @@ static struct rte_eth_conf port_conf_default = {
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
 
-#define RETA_CONF_SIZE     (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE     (RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE)
 
 static int
 rss_setup(uint16_t port_id,
@@ -77,11 +77,11 @@ rss_setup(uint16_t port_id,
 	memset(reta_conf, 0, sizeof(reta_conf));
 
 	for (i = 0; i < reta_size; i++)
-		reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
 
 	for (i = 0; i < reta_size; i++) {
-		uint32_t reta_id = i / RTE_RETA_GROUP_SIZE;
-		uint32_t reta_pos = i % RTE_RETA_GROUP_SIZE;
+		uint32_t reta_id = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint32_t reta_pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint32_t rss_qs_pos = i % rss->n_queues;
 
 		reta_conf[reta_id].reta[reta_pos] =
@@ -139,7 +139,7 @@ link_create(const char *name, struct link_params *params)
 	rss = params->rx.rss;
 	if (rss) {
 		if ((port_info.reta_size == 0) ||
-			(port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+			(port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
 			return NULL;
 
 		if ((rss->n_queues == 0) ||
@@ -157,9 +157,9 @@ link_create(const char *name, struct link_params *params)
 	/* Port */
 	memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
 	if (rss) {
-		port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+		port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
 		port_conf.rx_adv_conf.rss_conf.rss_hf =
-			(ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+			(RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
 			port_info.flow_type_rss_offloads;
 	}
 
@@ -267,5 +267,5 @@ link_is_up(const char *name)
 	if (rte_eth_link_get(link->port_id, &link_params) < 0)
 		return 0;
 
-	return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+	return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
 }
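
Note: rss_setup() above is the canonical use of the renamed RTE_ETH_RETA_GROUP_SIZE: the redirection table is addressed as a group index plus a position within the group. A condensed sketch of the same indexing (fill_reta is an illustrative name):

#include <stdint.h>
#include <rte_ethdev.h>

/* Spread queues 0..n_queues-1 round-robin over a RETA of
 * reta_size entries, using group/position indexing. */
static void
fill_reta(struct rte_eth_rss_reta_entry64 *conf,
	  uint32_t reta_size, uint32_t n_queues)
{
	uint32_t i;

	for (i = 0; i < reta_size; i++) {
		uint32_t grp = i / RTE_ETH_RETA_GROUP_SIZE;
		uint32_t pos = i % RTE_ETH_RETA_GROUP_SIZE;

		conf[grp].mask = UINT64_MAX;        /* update every entry */
		conf[grp].reta[pos] = i % n_queues; /* round-robin queue */
	}
}
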
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 4f0e12e62447..a9f9bd477007 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -161,22 +161,22 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_RSS,
+		.mq_mode        = RTE_ETH_MQ_RX_RSS,
 		.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
 			RTE_ETHER_CRC_LEN,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_MULTI_SEGS),
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
 	},
 };
 
@@ -738,7 +738,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -1096,9 +1096,9 @@ main(int argc, char **argv)
 		n_tx_queue = nb_lcores;
 		if (n_tx_queue > MAX_TX_QUEUE_PER_PORT)
 			n_tx_queue = MAX_TX_QUEUE_PER_PORT;
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 5f5ec260f315..feddd84d1551 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -234,19 +234,19 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode	= ETH_MQ_RX_RSS,
+		.mq_mode	= RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
-				ETH_RSS_TCP | ETH_RSS_SCTP,
+			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+				RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -1455,10 +1455,10 @@ print_usage(const char *prgname)
 		"               \"parallel\" : Parallel\n"
 		"  --" CMD_LINE_OPT_RX_OFFLOAD
 		": bitmask of the RX HW offload capabilities to enable/use\n"
-		"                         (DEV_RX_OFFLOAD_*)\n"
+		"                         (RTE_ETH_RX_OFFLOAD_*)\n"
 		"  --" CMD_LINE_OPT_TX_OFFLOAD
 		": bitmask of the TX HW offload capabilities to enable/use\n"
-		"                         (DEV_TX_OFFLOAD_*)\n"
+		"                         (RTE_ETH_TX_OFFLOAD_*)\n"
 		"  --" CMD_LINE_OPT_REASSEMBLE " NUM"
 		": max number of entries in reassemble(fragment) table\n"
 		"    (zero (default value) disables reassembly)\n"
@@ -1909,7 +1909,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2212,8 +2212,8 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 	local_port_conf.rxmode.mtu = mtu_size;
 
 	if (multi_seg_required()) {
-		local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
-		local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		local_port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+		local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	}
 
 	local_port_conf.rxmode.offloads |= req_rx_offloads;
@@ -2236,12 +2236,12 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 			portid, local_port_conf.txmode.offloads,
 			dev_info.tx_offload_capa);
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM)
-		local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
+		local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 
 	printf("port %u configurng rx_offloads=0x%" PRIx64
 		", tx_offloads=0x%" PRIx64 "\n",
@@ -2299,7 +2299,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 		/* Pre-populate pkt offloads based on capabilities */
 		qconf->outbound.ipv4_offloads = PKT_TX_IPV4;
 		qconf->outbound.ipv6_offloads = PKT_TX_IPV6;
-		if (local_port_conf.txmode.offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+		if (local_port_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 			qconf->outbound.ipv4_offloads |= PKT_TX_IP_CKSUM;
 
 		tx_queueid++;
@@ -2660,7 +2660,7 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads)
 	struct rte_flow *flow;
 	int ret;
 
-	if (!(rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+	if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
 		return;
 
 	/* Add the default rte_flow to enable SECURITY for all ESP packets */
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 17a28556c971..5cdd794f017f 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -986,7 +986,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
 
 	if (inbound) {
 		if ((dev_info.rx_offload_capa &
-				DEV_RX_OFFLOAD_SECURITY) == 0) {
+				RTE_ETH_RX_OFFLOAD_SECURITY) == 0) {
 			RTE_LOG(WARNING, PORT,
 				"hardware RX IPSec offload is not supported\n");
 			return -EINVAL;
@@ -994,7 +994,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
 
 	} else { /* outbound */
 		if ((dev_info.tx_offload_capa &
-				DEV_TX_OFFLOAD_SECURITY) == 0) {
+				RTE_ETH_TX_OFFLOAD_SECURITY) == 0) {
 			RTE_LOG(WARNING, PORT,
 				"hardware TX IPSec offload is not supported\n");
 			return -EINVAL;
@@ -1628,7 +1628,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
 				rule_type ==
 				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
 				&& rule->portid == port_id)
-			*rx_offloads |= DEV_RX_OFFLOAD_SECURITY;
+			*rx_offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
 	}
 
 	/* Check for outbound rules that use offloads and use this port */
@@ -1639,7 +1639,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
 				rule_type ==
 				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
 				&& rule->portid == port_id)
-			*tx_offloads |= DEV_TX_OFFLOAD_SECURITY;
+			*tx_offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
 	}
 	return 0;
 }
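
Note: sa.c above keys both directions off the renamed security offload bits; the check reduces to this sketch (port_supports_inline_ipsec is an illustrative name):

#include <errno.h>
#include <stdint.h>
#include <rte_ethdev.h>

/* 0 when the port advertises inline IPsec in the requested
 * direction, -ENOTSUP when it does not, -EIO on query failure. */
static int
port_supports_inline_ipsec(uint16_t port_id, int inbound)
{
	struct rte_eth_dev_info info;
	uint64_t capa;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return -EIO;
	capa = inbound ? info.rx_offload_capa : info.tx_offload_capa;
	if (!(capa & (inbound ? RTE_ETH_RX_OFFLOAD_SECURITY :
			RTE_ETH_TX_OFFLOAD_SECURITY)))
		return -ENOTSUP;
	return 0;
}
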
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 87538dccc879..32670f80bc2b 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -115,8 +115,8 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
 	},
 };
 
@@ -620,7 +620,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/kni/main.c b/examples/kni/main.c
index 1790ec024072..f780be712ec0 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -95,7 +95,7 @@ static struct kni_port_params *kni_port_params_array[RTE_MAX_ETHPORTS];
 /* Options for configuring ethernet port */
 static struct rte_eth_conf port_conf = {
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -608,9 +608,9 @@ init_port(uint16_t port)
 			"Error during getting device (port %u) info: %s\n",
 			port, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(port, 1, 1, &local_port_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Could not configure port%u (%d)\n",
@@ -688,7 +688,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index c646f1748ca7..42c04abbbb34 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -216,11 +216,11 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -1808,7 +1808,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2632,9 +2632,9 @@ initialize_ports(struct l2fwd_crypto_options *options)
 			return retval;
 		}
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		retval = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (retval < 0) {
 			printf("Cannot configure device: err=%d, port=%u\n",
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index 9040be5ed9b6..cf3d1b8aaf40 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -14,7 +14,7 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 			.split_hdr_size = 0,
 		},
 		.txmode = {
-			.mq_mode = ETH_MQ_TX_NONE,
+			.mq_mode = RTE_ETH_MQ_TX_NONE,
 		},
 	};
 	uint16_t nb_ports_available = 0;
@@ -22,9 +22,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 	int ret;
 
 	if (rsrc->event_mode) {
-		port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+		port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
 		port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
-		port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
+		port_conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;
 	}
 
 	/* Initialise each port */
@@ -60,9 +60,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 				local_port_conf.rx_adv_conf.rss_conf.rss_hf);
 		}
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure RX and TX queue. 8< */
 		ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 1db89f2bd139..9806204b81d1 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -395,7 +395,7 @@ check_all_ports_link_status(struct l2fwd_resources *rsrc,
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c
index 06280321b1f2..092ea0189c7f 100644
--- a/examples/l2fwd-jobstats/main.c
+++ b/examples/l2fwd-jobstats/main.c
@@ -94,7 +94,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -726,7 +726,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -869,9 +869,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure the RX and TX queues. 8< */
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/l2fwd-keepalive/main.c b/examples/l2fwd-keepalive/main.c
index 07271affb4a9..78e43f9c091e 100644
--- a/examples/l2fwd-keepalive/main.c
+++ b/examples/l2fwd-keepalive/main.c
@@ -83,7 +83,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -478,7 +478,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -650,9 +650,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
 			rte_exit(EXIT_FAILURE,
diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index f3deeba0a665..3edabd1dd19b 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -95,7 +95,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -606,7 +606,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -792,9 +792,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure the number of queues for a port. */
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 1890c88a5b01..fea414ae5929 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -124,19 +124,19 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode	= ETH_MQ_RX_RSS,
+		.mq_mode	= RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
-				ETH_RSS_TCP | ETH_RSS_SCTP,
+			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+				RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -1936,7 +1936,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2004,7 +2004,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -2088,9 +2088,9 @@ main(int argc, char **argv)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
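
Note: config_port_max_pkt_len() above recurs in several l3fwd variants; its core computation is this sketch (set_mtu_from_max_pkt_len is an illustrative name):

#include <stdint.h>
#include <rte_ether.h>
#include <rte_ethdev.h>

/* Derive the MTU from a max frame length and request
 * multi-segment Tx once the MTU exceeds the standard one. */
static void
set_mtu_from_max_pkt_len(struct rte_eth_conf *conf,
			 uint32_t max_pkt_len, uint32_t overhead_len)
{
	conf->rxmode.mtu = max_pkt_len - overhead_len;
	if (conf->rxmode.mtu > RTE_ETHER_MTU)
		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
}
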
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index 05385807e83e..7f00c65609ed 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -111,17 +111,17 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -607,7 +607,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* Clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -731,7 +731,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -828,9 +828,9 @@ main(int argc, char **argv)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 6aa1b66ecfcc..5a4359a368b5 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -250,18 +250,18 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_RSS,
+		.mq_mode        = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_UDP,
+			.rss_hf = RTE_ETH_RSS_UDP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	}
 };
 
@@ -2197,7 +2197,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2510,7 +2510,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -2638,9 +2638,9 @@ main(int argc, char **argv)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 961860ea18ef..7c7613a83aad 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -75,9 +75,9 @@ l3fwd_eth_dev_port_setup(struct rte_eth_conf *port_conf)
 			rte_panic("Error during getting device (port %u) info:"
 				  "%s\n", port_id, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+						RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 						dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index f27c76bb7a73..51cbf81f1afa 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -120,18 +120,18 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -903,7 +903,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -988,7 +988,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -1053,15 +1053,15 @@ l3fwd_poll_resource_setup(void)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
 
 		if (dev_info.max_rx_queues == 1)
-			local_port_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+			local_port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
 
 		if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
 				port_conf.rx_adv_conf.rss_conf.rss_hf) {
diff --git a/examples/link_status_interrupt/main.c b/examples/link_status_interrupt/main.c
index e4542df11f87..8714acddd110 100644
--- a/examples/link_status_interrupt/main.c
+++ b/examples/link_status_interrupt/main.c
@@ -83,7 +83,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.intr_conf = {
 		.lsc = 1, /**< lsc interrupt feature enabled */
@@ -147,7 +147,7 @@ print_stats(void)
 			   link_get_err < 0 ? "0" :
 			   rte_eth_link_speed_to_str(link.link_speed),
 			   link_get_err < 0 ? "Link get failed" :
-			   (link.link_duplex == ETH_LINK_FULL_DUPLEX ? \
+			   (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex"),
 			   port_statistics[portid].tx,
 			   port_statistics[portid].rx,
@@ -507,7 +507,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -634,9 +634,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure RX and TX queues. 8< */
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/multi_process/client_server_mp/mp_server/init.c b/examples/multi_process/client_server_mp/mp_server/init.c
index 1ad71ca7ec5f..23307073c904 100644
--- a/examples/multi_process/client_server_mp/mp_server/init.c
+++ b/examples/multi_process/client_server_mp/mp_server/init.c
@@ -94,7 +94,7 @@ init_port(uint16_t port_num)
 	/* for port configuration all features are off by default */
 	const struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS
+			.mq_mode = RTE_ETH_MQ_RX_RSS
 		}
 	};
 	const uint16_t rx_rings = 1, tx_rings = num_clients;
@@ -213,7 +213,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/multi_process/symmetric_mp/main.c b/examples/multi_process/symmetric_mp/main.c
index 01dc3acf34d5..85955375f1bf 100644
--- a/examples/multi_process/symmetric_mp/main.c
+++ b/examples/multi_process/symmetric_mp/main.c
@@ -176,18 +176,18 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
 {
 	struct rte_eth_conf port_conf = {
 			.rxmode = {
-				.mq_mode	= ETH_MQ_RX_RSS,
+				.mq_mode	= RTE_ETH_MQ_RX_RSS,
 				.split_hdr_size = 0,
-				.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+				.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 			},
 			.rx_adv_conf = {
 				.rss_conf = {
 					.rss_key = NULL,
-					.rss_hf = ETH_RSS_IP,
+					.rss_hf = RTE_ETH_RSS_IP,
 				},
 			},
 			.txmode = {
-				.mq_mode = ETH_MQ_TX_NONE,
+				.mq_mode = RTE_ETH_MQ_TX_NONE,
 			}
 	};
 	const uint16_t rx_rings = num_queues, tx_rings = num_queues;
@@ -218,9 +218,9 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
 
 	info.default_rxconf.rx_drop_en = 1;
 
-	if (info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
 	port_conf.rx_adv_conf.rss_conf.rss_hf &= info.flow_type_rss_offloads;
@@ -392,7 +392,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
index e9a388710647..f110fc129f55 100644
--- a/examples/ntb/ntb_fwd.c
+++ b/examples/ntb/ntb_fwd.c
@@ -89,17 +89,17 @@ static uint16_t pkt_burst = NTB_DFLT_PKT_BURST;
 
 static struct rte_eth_conf eth_port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c
index 4f6982bc1289..b01ac60fd196 100644
--- a/examples/packet_ordering/main.c
+++ b/examples/packet_ordering/main.c
@@ -294,9 +294,9 @@ configure_eth_port(uint16_t port_id)
 		return ret;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(port_id, rxRings, txRings, &port_conf);
 	if (ret != 0)
 		return ret;
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 5de5df997ee9..baeee9298d57 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -307,18 +307,18 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_TCP,
+			.rss_hf = RTE_ETH_RSS_TCP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -3441,7 +3441,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -3494,7 +3494,7 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
 	conf->rxmode.mtu = max_pkt_len - overhead_len;
 
 	if (conf->rxmode.mtu > RTE_ETHER_MTU)
-		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return 0;
 }
@@ -3593,9 +3593,9 @@ main(int argc, char **argv)
 				"Invalid max packet length: %u (port %u)\n",
 				max_pkt_len, portid);
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 4f20dfc4be06..569207a79d62 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -133,7 +133,7 @@ mempool_find(struct obj *obj, const char *name)
 static struct rte_eth_conf port_conf_default = {
 	.link_speeds = 0,
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
 		.split_hdr_size = 0, /* Header split buffer size */
 	},
@@ -145,12 +145,12 @@ static struct rte_eth_conf port_conf_default = {
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
 
-#define RETA_CONF_SIZE     (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE     (RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE)
 
 static int
 rss_setup(uint16_t port_id,
@@ -165,11 +165,11 @@ rss_setup(uint16_t port_id,
 	memset(reta_conf, 0, sizeof(reta_conf));
 
 	for (i = 0; i < reta_size; i++)
-		reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
 
 	for (i = 0; i < reta_size; i++) {
-		uint32_t reta_id = i / RTE_RETA_GROUP_SIZE;
-		uint32_t reta_pos = i % RTE_RETA_GROUP_SIZE;
+		uint32_t reta_id = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint32_t reta_pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint32_t rss_qs_pos = i % rss->n_queues;
 
 		reta_conf[reta_id].reta[reta_pos] =
@@ -227,7 +227,7 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
 	rss = params->rx.rss;
 	if (rss) {
 		if ((port_info.reta_size == 0) ||
-			(port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+			(port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
 			return NULL;
 
 		if ((rss->n_queues == 0) ||
@@ -245,9 +245,9 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
 	/* Port */
 	memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
 	if (rss) {
-		port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+		port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
 		port_conf.rx_adv_conf.rss_conf.rss_hf =
-			(ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+			(RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
 			port_info.flow_type_rss_offloads;
 	}
 
@@ -356,7 +356,7 @@ link_is_up(struct obj *obj, const char *name)
 	if (rte_eth_link_get(link->port_id, &link_params) < 0)
 		return 0;
 
-	return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+	return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
 }
 
 struct link *
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 229a277032cb..979d9eb9e9d0 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -193,14 +193,14 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	/* Force full Tx path in the driver, required for IEEE1588 */
-	port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
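
Note: ptpclient's port_init() above gates the timestamp offload on the capability bit before forcing the full Tx path. In isolation (maybe_enable_rx_timestamp is an illustrative name):

#include <rte_ethdev.h>

/* Request Rx timestamping only when the PMD advertises it. */
static void
maybe_enable_rx_timestamp(struct rte_eth_conf *conf,
			  const struct rte_eth_dev_info *info)
{
	if (info->rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
}
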
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index c32d2e12e633..743bae2da50a 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -51,18 +51,18 @@ static struct rte_mempool *pool = NULL;
  ***/
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode	= ETH_MQ_RX_RSS,
+		.mq_mode	= RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_DCB_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -332,8 +332,8 @@ main(int argc, char **argv)
 			"Error during getting device (port %u) info: %s\n",
 			port_rx, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
-		conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+		conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
 	if (conf.rx_adv_conf.rss_conf.rss_hf !=
@@ -378,8 +378,8 @@ main(int argc, char **argv)
 			"Error during getting device (port %u) info: %s\n",
 			port_tx, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
-		conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+		conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
 	if (conf.rx_adv_conf.rss_conf.rss_hf !=
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 1367569c65db..9b34e4a76b1b 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -60,7 +60,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_DCB_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -105,9 +105,9 @@ app_init_port(uint16_t portid, struct rte_mempool *mp)
 			"Error during getting device (port %u) info: %s\n",
 			portid, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE,
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index 6845c396b8d9..1903d8b095a1 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -141,17 +141,17 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	if (hw_timestamping) {
-		if (!(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)) {
+		if (!(dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
 			printf("\nERROR: Port %u does not support hardware timestamping\n"
 					, port);
 			return -1;
 		}
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 		rte_mbuf_dyn_rx_timestamp_register(&hwts_dynfield_offset, NULL);
 		if (hwts_dynfield_offset < 0) {
 			printf("ERROR: Failed to register timestamp field\n");
diff --git a/examples/server_node_efd/server/init.c b/examples/server_node_efd/server/init.c
index 9ebd88bac20e..074fee5b26b2 100644
--- a/examples/server_node_efd/server/init.c
+++ b/examples/server_node_efd/server/init.c
@@ -96,7 +96,7 @@ init_port(uint16_t port_num)
 	/* for port configuration all features are off by default */
 	struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 	};
 	const uint16_t rx_rings = 1, tx_rings = num_nodes;
@@ -115,9 +115,9 @@ init_port(uint16_t port_num)
 	if (retval != 0)
 		return retval;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/*
 	 * Standard DPDK port initialisation - config port, then set up
@@ -277,7 +277,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index fd7207aee758..16435ee3ccc2 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -49,9 +49,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 999809e6ed41..49c134a3042f 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -110,23 +110,23 @@ static int nb_sockets;
 /* empty vmdq configuration structure. Filled in programmatically */
 static struct rte_eth_conf vmdq_conf_default = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
+		.mq_mode        = RTE_ETH_MQ_RX_VMDQ_ONLY,
 		.split_hdr_size = 0,
 		/*
 		 * VLAN strip is necessary for 1G NICs such as I350;
 		 * it fixes a bug where IPv4 forwarding in the guest could not
 		 * forward packets from one virtio dev to another virtio dev.
 		 */
-		.offloads = DEV_RX_OFFLOAD_VLAN_STRIP,
+		.offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP,
 	},
 
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_TCP_CKSUM |
-			     DEV_TX_OFFLOAD_VLAN_INSERT |
-			     DEV_TX_OFFLOAD_MULTI_SEGS |
-			     DEV_TX_OFFLOAD_TCP_TSO),
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+			     RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+			     RTE_ETH_TX_OFFLOAD_TCP_TSO),
 	},
 	.rx_adv_conf = {
 		/*
@@ -134,7 +134,7 @@ static struct rte_eth_conf vmdq_conf_default = {
 		 * appropriate values
 		 */
 		.vmdq_rx_conf = {
-			.nb_queue_pools = ETH_8_POOLS,
+			.nb_queue_pools = RTE_ETH_8_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -291,9 +291,9 @@ port_init(uint16_t port)
 		return -1;
 
 	rx_rings = (uint16_t)dev_info.max_rx_queues;
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	/* Configure ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
 	if (retval != 0) {
@@ -557,8 +557,8 @@ us_vhost_parse_args(int argc, char **argv)
 		case 'P':
 			promiscuous = 1;
 			vmdq_conf_default.rx_adv_conf.vmdq_rx_conf.rx_mode =
-				ETH_VMDQ_ACCEPT_BROADCAST |
-				ETH_VMDQ_ACCEPT_MULTICAST;
+				RTE_ETH_VMDQ_ACCEPT_BROADCAST |
+				RTE_ETH_VMDQ_ACCEPT_MULTICAST;
 			break;
 
 		case OPT_VM2VM_NUM:
diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
index e19d79a40802..b159291d77ce 100644
--- a/examples/vm_power_manager/main.c
+++ b/examples/vm_power_manager/main.c
@@ -73,9 +73,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
@@ -270,7 +270,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 		       /* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index ee7f4324e141..1f336082e5c1 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -66,12 +66,12 @@ static uint8_t rss_enable;
 /* empty vmdq configuration structure. Filled in programmatically */
 static const struct rte_eth_conf vmdq_conf_default = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
+		.mq_mode        = RTE_ETH_MQ_RX_VMDQ_ONLY,
 		.split_hdr_size = 0,
 	},
 
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.rx_adv_conf = {
 		/*
@@ -79,7 +79,7 @@ static const struct rte_eth_conf vmdq_conf_default = {
 		 * appropriate values
 		 */
 		.vmdq_rx_conf = {
-			.nb_queue_pools = ETH_8_POOLS,
+			.nb_queue_pools = RTE_ETH_8_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -157,11 +157,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t num_pools)
 	(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_rx_conf, &conf,
 		   sizeof(eth_conf->rx_adv_conf.vmdq_rx_conf)));
 	if (rss_enable) {
-		eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
-		eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
-							ETH_RSS_UDP |
-							ETH_RSS_TCP |
-							ETH_RSS_SCTP;
+		eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
+		eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+							RTE_ETH_RSS_UDP |
+							RTE_ETH_RSS_TCP |
+							RTE_ETH_RSS_SCTP;
 	}
 	return 0;
 }
@@ -259,9 +259,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	retval = rte_eth_dev_configure(port, rxRings, txRings, &port_conf);
 	if (retval != 0)
 		return retval;
diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c
index 14c20e6a8b26..1a19f1799bd2 100644
--- a/examples/vmdq_dcb/main.c
+++ b/examples/vmdq_dcb/main.c
@@ -60,8 +60,8 @@ static uint16_t ports[RTE_MAX_ETHPORTS];
 static unsigned num_ports;
 
 /* number of pools (if user does not specify any, 32 by default) */
-static enum rte_eth_nb_pools num_pools = ETH_32_POOLS;
-static enum rte_eth_nb_tcs   num_tcs   = ETH_4_TCS;
+static enum rte_eth_nb_pools num_pools = RTE_ETH_32_POOLS;
+static enum rte_eth_nb_tcs   num_tcs   = RTE_ETH_4_TCS;
 static uint16_t num_queues, num_vmdq_queues;
 static uint16_t vmdq_pool_base, vmdq_queue_base;
 static uint8_t rss_enable;
@@ -69,11 +69,11 @@ static uint8_t rss_enable;
 /* Empty vmdq+dcb configuration structure. Filled in programmatically. 8< */
 static const struct rte_eth_conf vmdq_dcb_conf_default = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_VMDQ_DCB,
+		.mq_mode        = RTE_ETH_MQ_RX_VMDQ_DCB,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_VMDQ_DCB,
+		.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB,
 	},
 	/*
 	 * should be overridden separately in code with
@@ -81,7 +81,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 	 */
 	.rx_adv_conf = {
 		.vmdq_dcb_conf = {
-			.nb_queue_pools = ETH_32_POOLS,
+			.nb_queue_pools = RTE_ETH_32_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -89,12 +89,12 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 			.dcb_tc = {0},
 		},
 		.dcb_rx_conf = {
-				.nb_tcs = ETH_4_TCS,
+				.nb_tcs = RTE_ETH_4_TCS,
 				/** Traffic class each UP mapped to. */
 				.dcb_tc = {0},
 		},
 		.vmdq_rx_conf = {
-			.nb_queue_pools = ETH_32_POOLS,
+			.nb_queue_pools = RTE_ETH_32_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -103,7 +103,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 	},
 	.tx_adv_conf = {
 		.vmdq_dcb_tx_conf = {
-			.nb_queue_pools = ETH_32_POOLS,
+			.nb_queue_pools = RTE_ETH_32_POOLS,
 			.dcb_tc = {0},
 		},
 	},
@@ -157,7 +157,7 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
 		conf.pool_map[i].pools = 1UL << i;
 		vmdq_conf.pool_map[i].pools = 1UL << i;
 	}
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		conf.dcb_tc[i] = i % num_tcs;
 		dcb_conf.dcb_tc[i] = i % num_tcs;
 		tx_conf.dcb_tc[i] = i % num_tcs;
@@ -173,11 +173,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
 	(void)(rte_memcpy(&eth_conf->tx_adv_conf.vmdq_dcb_tx_conf, &tx_conf,
 			  sizeof(tx_conf)));
 	if (rss_enable) {
-		eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
-		eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
-							ETH_RSS_UDP |
-							ETH_RSS_TCP |
-							ETH_RSS_SCTP;
+		eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
+		eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+							RTE_ETH_RSS_UDP |
+							RTE_ETH_RSS_TCP |
+							RTE_ETH_RSS_SCTP;
 	}
 	return 0;
 }
@@ -271,9 +271,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
 	port_conf.rx_adv_conf.rss_conf.rss_hf &=
@@ -382,9 +382,9 @@ vmdq_parse_num_pools(const char *q_arg)
 	if (n != 16 && n != 32)
 		return -1;
 	if (n == 16)
-		num_pools = ETH_16_POOLS;
+		num_pools = RTE_ETH_16_POOLS;
 	else
-		num_pools = ETH_32_POOLS;
+		num_pools = RTE_ETH_32_POOLS;
 
 	return 0;
 }
@@ -404,9 +404,9 @@ vmdq_parse_num_tcs(const char *q_arg)
 	if (n != 4 && n != 8)
 		return -1;
 	if (n == 4)
-		num_tcs = ETH_4_TCS;
+		num_tcs = RTE_ETH_4_TCS;
 	else
-		num_tcs = ETH_8_TCS;
+		num_tcs = RTE_ETH_8_TCS;
 
 	return 0;
 }
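
Both parsers above accept only the two pool counts the hardware supports and map them onto the renamed enum. A condensed sketch of the same mapping (parse_pools() is a hypothetical helper, not part of the patch):

  #include <stdlib.h>
  #include <rte_ethdev.h>

  /* Map "16"/"32" onto the RTE_ETH_*_POOLS enum; reject anything else. */
  static int
  parse_pools(const char *arg, enum rte_eth_nb_pools *pools)
  {
  	char *end;
  	long n = strtol(arg, &end, 10);

  	if (*end != '\0')
  		return -1;
  	if (n == 16)
  		*pools = RTE_ETH_16_POOLS;
  	else if (n == 32)
  		*pools = RTE_ETH_32_POOLS;
  	else
  		return -1;
  	return 0;
  }
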
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 0174ba03d7f3..c134b878684e 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -116,7 +116,7 @@ struct rte_eth_dev_data {
 			/**< Device Ethernet link address.
 			 *   @see rte_eth_dev_release_port()
 			 */
-	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+	uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
 			/**< Bitmap associating MAC addresses to pools. */
 	struct rte_ether_addr *hash_mac_addrs;
 			/**< Device Ethernet MAC addresses of hash filtering.
@@ -1657,23 +1657,23 @@ struct rte_eth_syn_filter {
 /**
  * filter type of tunneling packet
  */
-#define ETH_TUNNEL_FILTER_OMAC  0x01 /**< filter by outer MAC addr */
-#define ETH_TUNNEL_FILTER_OIP   0x02 /**< filter by outer IP Addr */
-#define ETH_TUNNEL_FILTER_TENID 0x04 /**< filter by tenant ID */
-#define ETH_TUNNEL_FILTER_IMAC  0x08 /**< filter by inner MAC addr */
-#define ETH_TUNNEL_FILTER_IVLAN 0x10 /**< filter by inner VLAN ID */
-#define ETH_TUNNEL_FILTER_IIP   0x20 /**< filter by inner IP addr */
-
-#define RTE_TUNNEL_FILTER_IMAC_IVLAN (ETH_TUNNEL_FILTER_IMAC | \
-					ETH_TUNNEL_FILTER_IVLAN)
-#define RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID (ETH_TUNNEL_FILTER_IMAC | \
-					ETH_TUNNEL_FILTER_IVLAN | \
-					ETH_TUNNEL_FILTER_TENID)
-#define RTE_TUNNEL_FILTER_IMAC_TENID (ETH_TUNNEL_FILTER_IMAC | \
-					ETH_TUNNEL_FILTER_TENID)
-#define RTE_TUNNEL_FILTER_OMAC_TENID_IMAC (ETH_TUNNEL_FILTER_OMAC | \
-					ETH_TUNNEL_FILTER_TENID | \
-					ETH_TUNNEL_FILTER_IMAC)
+#define RTE_ETH_TUNNEL_FILTER_OMAC  0x01 /**< filter by outer MAC addr */
+#define RTE_ETH_TUNNEL_FILTER_OIP   0x02 /**< filter by outer IP Addr */
+#define RTE_ETH_TUNNEL_FILTER_TENID 0x04 /**< filter by tenant ID */
+#define RTE_ETH_TUNNEL_FILTER_IMAC  0x08 /**< filter by inner MAC addr */
+#define RTE_ETH_TUNNEL_FILTER_IVLAN 0x10 /**< filter by inner VLAN ID */
+#define RTE_ETH_TUNNEL_FILTER_IIP   0x20 /**< filter by inner IP addr */
+
+#define RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN (RTE_ETH_TUNNEL_FILTER_IMAC | \
+					  RTE_ETH_TUNNEL_FILTER_IVLAN)
+#define RTE_ETH_TUNNEL_FILTER_IMAC_IVLAN_TENID (RTE_ETH_TUNNEL_FILTER_IMAC | \
+						RTE_ETH_TUNNEL_FILTER_IVLAN | \
+						RTE_ETH_TUNNEL_FILTER_TENID)
+#define RTE_ETH_TUNNEL_FILTER_IMAC_TENID (RTE_ETH_TUNNEL_FILTER_IMAC | \
+					  RTE_ETH_TUNNEL_FILTER_TENID)
+#define RTE_ETH_TUNNEL_FILTER_OMAC_TENID_IMAC (RTE_ETH_TUNNEL_FILTER_OMAC | \
+					       RTE_ETH_TUNNEL_FILTER_TENID | \
+					       RTE_ETH_TUNNEL_FILTER_IMAC)
 
 /**
  *  Select IPv4 or IPv6 for tunnel filters.
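
The renamed filter-type bits are designed to be OR-ed together, as the composite RTE_ETH_TUNNEL_FILTER_* masks above show. A small sketch that decodes such a mask (describe_filter() is hypothetical; the include assumes a driver-side build):

  #include <stdio.h>
  #include <stdint.h>
  #include <ethdev_driver.h>

  /* Print which packet fields a tunnel filter type matches on. */
  static void
  describe_filter(uint16_t type)
  {
  	if (type & RTE_ETH_TUNNEL_FILTER_OMAC)
  		printf("outer MAC ");
  	if (type & RTE_ETH_TUNNEL_FILTER_IMAC)
  		printf("inner MAC ");
  	if (type & RTE_ETH_TUNNEL_FILTER_IVLAN)
  		printf("inner VLAN ");
  	if (type & RTE_ETH_TUNNEL_FILTER_TENID)
  		printf("tenant ID ");
  	printf("\n");
  }
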
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 3b8ef9ef22e7..49ff506851cf 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -101,9 +101,6 @@ static const struct rte_eth_xstats_name_off eth_dev_txq_stats_strings[] = {
 #define RTE_NB_TXQ_STATS RTE_DIM(eth_dev_txq_stats_strings)
 
 #define RTE_RX_OFFLOAD_BIT2STR(_name)	\
-	{ DEV_RX_OFFLOAD_##_name, #_name }
-
-#define RTE_ETH_RX_OFFLOAD_BIT2STR(_name)	\
 	{ RTE_ETH_RX_OFFLOAD_##_name, #_name }
 
 static const struct {
@@ -128,14 +125,14 @@ static const struct {
 	RTE_RX_OFFLOAD_BIT2STR(SCTP_CKSUM),
 	RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
 	RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
-	RTE_ETH_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
+	RTE_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
 };
 
 #undef RTE_RX_OFFLOAD_BIT2STR
 #undef RTE_ETH_RX_OFFLOAD_BIT2STR
 
 #define RTE_TX_OFFLOAD_BIT2STR(_name)	\
-	{ DEV_TX_OFFLOAD_##_name, #_name }
+	{ RTE_ETH_TX_OFFLOAD_##_name, #_name }
 
 static const struct {
 	uint64_t offload;
@@ -1173,32 +1170,32 @@ uint32_t
 rte_eth_speed_bitflag(uint32_t speed, int duplex)
 {
 	switch (speed) {
-	case ETH_SPEED_NUM_10M:
-		return duplex ? ETH_LINK_SPEED_10M : ETH_LINK_SPEED_10M_HD;
-	case ETH_SPEED_NUM_100M:
-		return duplex ? ETH_LINK_SPEED_100M : ETH_LINK_SPEED_100M_HD;
-	case ETH_SPEED_NUM_1G:
-		return ETH_LINK_SPEED_1G;
-	case ETH_SPEED_NUM_2_5G:
-		return ETH_LINK_SPEED_2_5G;
-	case ETH_SPEED_NUM_5G:
-		return ETH_LINK_SPEED_5G;
-	case ETH_SPEED_NUM_10G:
-		return ETH_LINK_SPEED_10G;
-	case ETH_SPEED_NUM_20G:
-		return ETH_LINK_SPEED_20G;
-	case ETH_SPEED_NUM_25G:
-		return ETH_LINK_SPEED_25G;
-	case ETH_SPEED_NUM_40G:
-		return ETH_LINK_SPEED_40G;
-	case ETH_SPEED_NUM_50G:
-		return ETH_LINK_SPEED_50G;
-	case ETH_SPEED_NUM_56G:
-		return ETH_LINK_SPEED_56G;
-	case ETH_SPEED_NUM_100G:
-		return ETH_LINK_SPEED_100G;
-	case ETH_SPEED_NUM_200G:
-		return ETH_LINK_SPEED_200G;
+	case RTE_ETH_SPEED_NUM_10M:
+		return duplex ? RTE_ETH_LINK_SPEED_10M : RTE_ETH_LINK_SPEED_10M_HD;
+	case RTE_ETH_SPEED_NUM_100M:
+		return duplex ? RTE_ETH_LINK_SPEED_100M : RTE_ETH_LINK_SPEED_100M_HD;
+	case RTE_ETH_SPEED_NUM_1G:
+		return RTE_ETH_LINK_SPEED_1G;
+	case RTE_ETH_SPEED_NUM_2_5G:
+		return RTE_ETH_LINK_SPEED_2_5G;
+	case RTE_ETH_SPEED_NUM_5G:
+		return RTE_ETH_LINK_SPEED_5G;
+	case RTE_ETH_SPEED_NUM_10G:
+		return RTE_ETH_LINK_SPEED_10G;
+	case RTE_ETH_SPEED_NUM_20G:
+		return RTE_ETH_LINK_SPEED_20G;
+	case RTE_ETH_SPEED_NUM_25G:
+		return RTE_ETH_LINK_SPEED_25G;
+	case RTE_ETH_SPEED_NUM_40G:
+		return RTE_ETH_LINK_SPEED_40G;
+	case RTE_ETH_SPEED_NUM_50G:
+		return RTE_ETH_LINK_SPEED_50G;
+	case RTE_ETH_SPEED_NUM_56G:
+		return RTE_ETH_LINK_SPEED_56G;
+	case RTE_ETH_SPEED_NUM_100G:
+		return RTE_ETH_LINK_SPEED_100G;
+	case RTE_ETH_SPEED_NUM_200G:
+		return RTE_ETH_LINK_SPEED_200G;
 	default:
 		return 0;
 	}
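
rte_eth_speed_bitflag() converts a numeric RTE_ETH_SPEED_NUM_* value into the corresponding RTE_ETH_LINK_SPEED_* bit for rte_eth_conf.link_speeds. A usage sketch that pins a port to 10G (fix_speed_10g() is a hypothetical helper):

  #include <rte_ethdev.h>

  /* Disable autonegotiation and force the port to 10 Gbps. */
  static void
  fix_speed_10g(struct rte_eth_conf *conf)
  {
  	conf->link_speeds = RTE_ETH_LINK_SPEED_FIXED |
  			    rte_eth_speed_bitflag(RTE_ETH_SPEED_NUM_10G,
  						  RTE_ETH_LINK_FULL_DUPLEX);
  }
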
@@ -1503,7 +1500,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 * If LRO is enabled, check that the maximum aggregated packet
 	 * size is supported by the configured device.
 	 */
-	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		uint32_t max_rx_pktlen;
 		uint32_t overhead_len;
 
@@ -1560,12 +1557,12 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	}
 
 	/* Check if Rx RSS distribution is disabled but RSS hash is enabled. */
-	if (((dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) == 0) &&
-	    (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
+	    (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		RTE_ETHDEV_LOG(ERR,
 			"Ethdev port_id=%u config invalid Rx mq_mode without RSS but %s offload is requested\n",
 			port_id,
-			rte_eth_dev_rx_offload_name(DEV_RX_OFFLOAD_RSS_HASH));
+			rte_eth_dev_rx_offload_name(RTE_ETH_RX_OFFLOAD_RSS_HASH));
 		ret = -EINVAL;
 		goto rollback;
 	}
@@ -2174,7 +2171,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	 * size is supported by the configured device.
 	 */
 	/* Get the real Ethernet overhead length */
-	if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (local_conf.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		uint32_t overhead_len;
 		uint32_t max_rx_pktlen;
 		int ret;
@@ -2754,21 +2751,21 @@ const char *
 rte_eth_link_speed_to_str(uint32_t link_speed)
 {
 	switch (link_speed) {
-	case ETH_SPEED_NUM_NONE: return "None";
-	case ETH_SPEED_NUM_10M:  return "10 Mbps";
-	case ETH_SPEED_NUM_100M: return "100 Mbps";
-	case ETH_SPEED_NUM_1G:   return "1 Gbps";
-	case ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
-	case ETH_SPEED_NUM_5G:   return "5 Gbps";
-	case ETH_SPEED_NUM_10G:  return "10 Gbps";
-	case ETH_SPEED_NUM_20G:  return "20 Gbps";
-	case ETH_SPEED_NUM_25G:  return "25 Gbps";
-	case ETH_SPEED_NUM_40G:  return "40 Gbps";
-	case ETH_SPEED_NUM_50G:  return "50 Gbps";
-	case ETH_SPEED_NUM_56G:  return "56 Gbps";
-	case ETH_SPEED_NUM_100G: return "100 Gbps";
-	case ETH_SPEED_NUM_200G: return "200 Gbps";
-	case ETH_SPEED_NUM_UNKNOWN: return "Unknown";
+	case RTE_ETH_SPEED_NUM_NONE: return "None";
+	case RTE_ETH_SPEED_NUM_10M:  return "10 Mbps";
+	case RTE_ETH_SPEED_NUM_100M: return "100 Mbps";
+	case RTE_ETH_SPEED_NUM_1G:   return "1 Gbps";
+	case RTE_ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
+	case RTE_ETH_SPEED_NUM_5G:   return "5 Gbps";
+	case RTE_ETH_SPEED_NUM_10G:  return "10 Gbps";
+	case RTE_ETH_SPEED_NUM_20G:  return "20 Gbps";
+	case RTE_ETH_SPEED_NUM_25G:  return "25 Gbps";
+	case RTE_ETH_SPEED_NUM_40G:  return "40 Gbps";
+	case RTE_ETH_SPEED_NUM_50G:  return "50 Gbps";
+	case RTE_ETH_SPEED_NUM_56G:  return "56 Gbps";
+	case RTE_ETH_SPEED_NUM_100G: return "100 Gbps";
+	case RTE_ETH_SPEED_NUM_200G: return "200 Gbps";
+	case RTE_ETH_SPEED_NUM_UNKNOWN: return "Unknown";
 	default: return "Invalid";
 	}
 }
@@ -2792,14 +2789,14 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
 		return -EINVAL;
 	}
 
-	if (eth_link->link_status == ETH_LINK_DOWN)
+	if (eth_link->link_status == RTE_ETH_LINK_DOWN)
 		return snprintf(str, len, "Link down");
 	else
 		return snprintf(str, len, "Link up at %s %s %s",
 			rte_eth_link_speed_to_str(eth_link->link_speed),
-			(eth_link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+			(eth_link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 			"FDX" : "HDX",
-			(eth_link->link_autoneg == ETH_LINK_AUTONEG) ?
+			(eth_link->link_autoneg == RTE_ETH_LINK_AUTONEG) ?
 			"Autoneg" : "Fixed");
 }
 
@@ -3706,7 +3703,7 @@ rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on)
 	dev = &rte_eth_devices[port_id];
 
 	if (!(dev->data->dev_conf.rxmode.offloads &
-	      DEV_RX_OFFLOAD_VLAN_FILTER)) {
+	      RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
 		RTE_ETHDEV_LOG(ERR, "Port %u: vlan-filtering disabled\n",
 			port_id);
 		return -ENOSYS;
@@ -3793,44 +3790,44 @@ rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask)
 	dev_offloads = orig_offloads;
 
 	/* check which option changed by application */
-	cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	cur = !!(offload_mask & RTE_ETH_VLAN_STRIP_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
-		mask |= ETH_VLAN_STRIP_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+		mask |= RTE_ETH_VLAN_STRIP_MASK;
 	}
 
-	cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+	cur = !!(offload_mask & RTE_ETH_VLAN_FILTER_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
-		mask |= ETH_VLAN_FILTER_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+		mask |= RTE_ETH_VLAN_FILTER_MASK;
 	}
 
-	cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND);
+	cur = !!(offload_mask & RTE_ETH_VLAN_EXTEND_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
-		mask |= ETH_VLAN_EXTEND_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
+		mask |= RTE_ETH_VLAN_EXTEND_MASK;
 	}
 
-	cur = !!(offload_mask & ETH_QINQ_STRIP_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP);
+	cur = !!(offload_mask & RTE_ETH_QINQ_STRIP_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
-		mask |= ETH_QINQ_STRIP_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
+		mask |= RTE_ETH_QINQ_STRIP_MASK;
 	}
 
 	/*no change*/
@@ -3875,17 +3872,17 @@ rte_eth_dev_get_vlan_offload(uint16_t port_id)
 	dev = &rte_eth_devices[port_id];
 	dev_offloads = &dev->data->dev_conf.rxmode.offloads;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-		ret |= ETH_VLAN_STRIP_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+		ret |= RTE_ETH_VLAN_STRIP_OFFLOAD;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
-		ret |= ETH_VLAN_FILTER_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+		ret |= RTE_ETH_VLAN_FILTER_OFFLOAD;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
-		ret |= ETH_VLAN_EXTEND_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
+		ret |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
-		ret |= ETH_QINQ_STRIP_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+		ret |= RTE_ETH_QINQ_STRIP_OFFLOAD;
 
 	return ret;
 }
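
The *_OFFLOAD bits above are what applications pass to rte_eth_dev_set_vlan_offload(); the matching *_MASK bits track which settings changed. A sketch that enables VLAN stripping without disturbing the other VLAN offloads (enable_vlan_strip() is hypothetical):

  #include <rte_ethdev.h>

  /* Turn on VLAN stripping while preserving the other VLAN settings. */
  static int
  enable_vlan_strip(uint16_t port_id)
  {
  	int mask = rte_eth_dev_get_vlan_offload(port_id);

  	if (mask < 0)
  		return mask;
  	return rte_eth_dev_set_vlan_offload(port_id,
  			mask | RTE_ETH_VLAN_STRIP_OFFLOAD);
  }
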
@@ -3962,7 +3959,7 @@ rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (pfc_conf->priority > (ETH_DCB_NUM_USER_PRIORITIES - 1)) {
+	if (pfc_conf->priority > (RTE_ETH_DCB_NUM_USER_PRIORITIES - 1)) {
 		RTE_ETHDEV_LOG(ERR, "Invalid priority, only 0-7 allowed\n");
 		return -EINVAL;
 	}
@@ -3980,7 +3977,7 @@ eth_check_reta_mask(struct rte_eth_rss_reta_entry64 *reta_conf,
 {
 	uint16_t i, num;
 
-	num = (reta_size + RTE_RETA_GROUP_SIZE - 1) / RTE_RETA_GROUP_SIZE;
+	num = (reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) / RTE_ETH_RETA_GROUP_SIZE;
 	for (i = 0; i < num; i++) {
 		if (reta_conf[i].mask)
 			return 0;
@@ -4002,8 +3999,8 @@ eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & (1ULL << shift)) &&
 			(reta_conf[idx].reta[shift] >= max_rxq)) {
 			RTE_ETHDEV_LOG(ERR,
@@ -4159,7 +4156,7 @@ rte_eth_dev_udp_tunnel_port_add(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+	if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
 		RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
 		return -EINVAL;
 	}
@@ -4185,7 +4182,7 @@ rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+	if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
 		RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
 		return -EINVAL;
 	}
@@ -4326,8 +4323,8 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr,
 			port_id);
 		return -EINVAL;
 	}
-	if (pool >= ETH_64_POOLS) {
-		RTE_ETHDEV_LOG(ERR, "Pool id must be 0-%d\n", ETH_64_POOLS - 1);
+	if (pool >= RTE_ETH_64_POOLS) {
+		RTE_ETHDEV_LOG(ERR, "Pool id must be 0-%d\n", RTE_ETH_64_POOLS - 1);
 		return -EINVAL;
 	}
 
@@ -6236,7 +6233,7 @@ eth_dev_handle_port_link_status(const char *cmd __rte_unused,
 	rte_tel_data_add_dict_string(d, status_str, "UP");
 	rte_tel_data_add_dict_u64(d, "speed", link.link_speed);
 	rte_tel_data_add_dict_string(d, "duplex",
-			(link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+			(link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 				"full-duplex" : "half-duplex");
 	return 0;
 }
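
The pool bound enforced in rte_eth_dev_mac_addr_add() above caps at RTE_ETH_64_POOLS. A sketch of a caller that checks the bound itself first (add_mac_to_pool() is hypothetical):

  #include <errno.h>
  #include <rte_ethdev.h>
  #include <rte_ether.h>

  /* Add a MAC address to a VMDq pool, rejecting out-of-range pool ids. */
  static int
  add_mac_to_pool(uint16_t port_id, struct rte_ether_addr *addr, uint32_t pool)
  {
  	if (pool >= RTE_ETH_64_POOLS)
  		return -EINVAL;
  	return rte_eth_dev_mac_addr_add(port_id, addr, pool);
  }
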
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 69766eaae2d4..5f9fe0f55953 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -249,7 +249,7 @@ void rte_eth_iterator_cleanup(struct rte_dev_iterator *iter);
  * field is not supported, its value is 0.
  * All byte-related statistics do not include Ethernet FCS regardless
  * of whether these bytes have been delivered to the application
- * (see DEV_RX_OFFLOAD_KEEP_CRC).
+ * (see RTE_ETH_RX_OFFLOAD_KEEP_CRC).
  */
 struct rte_eth_stats {
 	uint64_t ipackets;  /**< Total number of successfully received packets. */
@@ -279,43 +279,75 @@ struct rte_eth_stats {
 /**@{@name Link speed capabilities
  * Device supported speeds bitmap flags
  */
-#define ETH_LINK_SPEED_AUTONEG  (0 <<  0)  /**< Autonegotiate (all speeds) */
-#define ETH_LINK_SPEED_FIXED    (1 <<  0)  /**< Disable autoneg (fixed speed) */
-#define ETH_LINK_SPEED_10M_HD   (1 <<  1)  /**<  10 Mbps half-duplex */
-#define ETH_LINK_SPEED_10M      (1 <<  2)  /**<  10 Mbps full-duplex */
-#define ETH_LINK_SPEED_100M_HD  (1 <<  3)  /**< 100 Mbps half-duplex */
-#define ETH_LINK_SPEED_100M     (1 <<  4)  /**< 100 Mbps full-duplex */
-#define ETH_LINK_SPEED_1G       (1 <<  5)  /**<   1 Gbps */
-#define ETH_LINK_SPEED_2_5G     (1 <<  6)  /**< 2.5 Gbps */
-#define ETH_LINK_SPEED_5G       (1 <<  7)  /**<   5 Gbps */
-#define ETH_LINK_SPEED_10G      (1 <<  8)  /**<  10 Gbps */
-#define ETH_LINK_SPEED_20G      (1 <<  9)  /**<  20 Gbps */
-#define ETH_LINK_SPEED_25G      (1 << 10)  /**<  25 Gbps */
-#define ETH_LINK_SPEED_40G      (1 << 11)  /**<  40 Gbps */
-#define ETH_LINK_SPEED_50G      (1 << 12)  /**<  50 Gbps */
-#define ETH_LINK_SPEED_56G      (1 << 13)  /**<  56 Gbps */
-#define ETH_LINK_SPEED_100G     (1 << 14)  /**< 100 Gbps */
-#define ETH_LINK_SPEED_200G     (1 << 15)  /**< 200 Gbps */
+#define RTE_ETH_LINK_SPEED_AUTONEG  (0 <<  0)  /**< Autonegotiate (all speeds) */
+#define ETH_LINK_SPEED_AUTONEG	RTE_ETH_LINK_SPEED_AUTONEG
+#define RTE_ETH_LINK_SPEED_FIXED    (1 <<  0)  /**< Disable autoneg (fixed speed) */
+#define ETH_LINK_SPEED_FIXED	RTE_ETH_LINK_SPEED_FIXED
+#define RTE_ETH_LINK_SPEED_10M_HD   (1 <<  1)  /**<  10 Mbps half-duplex */
+#define ETH_LINK_SPEED_10M_HD	RTE_ETH_LINK_SPEED_10M_HD
+#define RTE_ETH_LINK_SPEED_10M      (1 <<  2)  /**<  10 Mbps full-duplex */
+#define ETH_LINK_SPEED_10M	RTE_ETH_LINK_SPEED_10M
+#define RTE_ETH_LINK_SPEED_100M_HD  (1 <<  3)  /**< 100 Mbps half-duplex */
+#define ETH_LINK_SPEED_100M_HD	RTE_ETH_LINK_SPEED_100M_HD
+#define RTE_ETH_LINK_SPEED_100M     (1 <<  4)  /**< 100 Mbps full-duplex */
+#define ETH_LINK_SPEED_100M	RTE_ETH_LINK_SPEED_100M
+#define RTE_ETH_LINK_SPEED_1G       (1 <<  5)  /**<   1 Gbps */
+#define ETH_LINK_SPEED_1G	RTE_ETH_LINK_SPEED_1G
+#define RTE_ETH_LINK_SPEED_2_5G     (1 <<  6)  /**< 2.5 Gbps */
+#define ETH_LINK_SPEED_2_5G	RTE_ETH_LINK_SPEED_2_5G
+#define RTE_ETH_LINK_SPEED_5G       (1 <<  7)  /**<   5 Gbps */
+#define ETH_LINK_SPEED_5G	RTE_ETH_LINK_SPEED_5G
+#define RTE_ETH_LINK_SPEED_10G      (1 <<  8)  /**<  10 Gbps */
+#define ETH_LINK_SPEED_10G	RTE_ETH_LINK_SPEED_10G
+#define RTE_ETH_LINK_SPEED_20G      (1 <<  9)  /**<  20 Gbps */
+#define ETH_LINK_SPEED_20G	RTE_ETH_LINK_SPEED_20G
+#define RTE_ETH_LINK_SPEED_25G      (1 << 10)  /**<  25 Gbps */
+#define ETH_LINK_SPEED_25G	RTE_ETH_LINK_SPEED_25G
+#define RTE_ETH_LINK_SPEED_40G      (1 << 11)  /**<  40 Gbps */
+#define ETH_LINK_SPEED_40G	RTE_ETH_LINK_SPEED_40G
+#define RTE_ETH_LINK_SPEED_50G      (1 << 12)  /**<  50 Gbps */
+#define ETH_LINK_SPEED_50G	RTE_ETH_LINK_SPEED_50G
+#define RTE_ETH_LINK_SPEED_56G      (1 << 13)  /**<  56 Gbps */
+#define ETH_LINK_SPEED_56G	RTE_ETH_LINK_SPEED_56G
+#define RTE_ETH_LINK_SPEED_100G     (1 << 14)  /**< 100 Gbps */
+#define ETH_LINK_SPEED_100G	RTE_ETH_LINK_SPEED_100G
+#define RTE_ETH_LINK_SPEED_200G     (1 << 15)  /**< 200 Gbps */
+#define ETH_LINK_SPEED_200G	RTE_ETH_LINK_SPEED_200G
 /**@}*/
 
 /**@{@name Link speed
  * Ethernet numeric link speeds in Mbps
  */
-#define ETH_SPEED_NUM_NONE         0 /**< Not defined */
-#define ETH_SPEED_NUM_10M         10 /**<  10 Mbps */
-#define ETH_SPEED_NUM_100M       100 /**< 100 Mbps */
-#define ETH_SPEED_NUM_1G        1000 /**<   1 Gbps */
-#define ETH_SPEED_NUM_2_5G      2500 /**< 2.5 Gbps */
-#define ETH_SPEED_NUM_5G        5000 /**<   5 Gbps */
-#define ETH_SPEED_NUM_10G      10000 /**<  10 Gbps */
-#define ETH_SPEED_NUM_20G      20000 /**<  20 Gbps */
-#define ETH_SPEED_NUM_25G      25000 /**<  25 Gbps */
-#define ETH_SPEED_NUM_40G      40000 /**<  40 Gbps */
-#define ETH_SPEED_NUM_50G      50000 /**<  50 Gbps */
-#define ETH_SPEED_NUM_56G      56000 /**<  56 Gbps */
-#define ETH_SPEED_NUM_100G    100000 /**< 100 Gbps */
-#define ETH_SPEED_NUM_200G    200000 /**< 200 Gbps */
-#define ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define RTE_ETH_SPEED_NUM_NONE         0 /**< Not defined */
+#define ETH_SPEED_NUM_NONE	RTE_ETH_SPEED_NUM_NONE
+#define RTE_ETH_SPEED_NUM_10M         10 /**<  10 Mbps */
+#define ETH_SPEED_NUM_10M	RTE_ETH_SPEED_NUM_10M
+#define RTE_ETH_SPEED_NUM_100M       100 /**< 100 Mbps */
+#define ETH_SPEED_NUM_100M	RTE_ETH_SPEED_NUM_100M
+#define RTE_ETH_SPEED_NUM_1G        1000 /**<   1 Gbps */
+#define ETH_SPEED_NUM_1G	RTE_ETH_SPEED_NUM_1G
+#define RTE_ETH_SPEED_NUM_2_5G      2500 /**< 2.5 Gbps */
+#define ETH_SPEED_NUM_2_5G	RTE_ETH_SPEED_NUM_2_5G
+#define RTE_ETH_SPEED_NUM_5G        5000 /**<   5 Gbps */
+#define ETH_SPEED_NUM_5G	RTE_ETH_SPEED_NUM_5G
+#define RTE_ETH_SPEED_NUM_10G      10000 /**<  10 Gbps */
+#define ETH_SPEED_NUM_10G	RTE_ETH_SPEED_NUM_10G
+#define RTE_ETH_SPEED_NUM_20G      20000 /**<  20 Gbps */
+#define ETH_SPEED_NUM_20G	RTE_ETH_SPEED_NUM_20G
+#define RTE_ETH_SPEED_NUM_25G      25000 /**<  25 Gbps */
+#define ETH_SPEED_NUM_25G	RTE_ETH_SPEED_NUM_25G
+#define RTE_ETH_SPEED_NUM_40G      40000 /**<  40 Gbps */
+#define ETH_SPEED_NUM_40G	RTE_ETH_SPEED_NUM_40G
+#define RTE_ETH_SPEED_NUM_50G      50000 /**<  50 Gbps */
+#define ETH_SPEED_NUM_50G	RTE_ETH_SPEED_NUM_50G
+#define RTE_ETH_SPEED_NUM_56G      56000 /**<  56 Gbps */
+#define ETH_SPEED_NUM_56G	RTE_ETH_SPEED_NUM_56G
+#define RTE_ETH_SPEED_NUM_100G    100000 /**< 100 Gbps */
+#define ETH_SPEED_NUM_100G	RTE_ETH_SPEED_NUM_100G
+#define RTE_ETH_SPEED_NUM_200G    200000 /**< 200 Gbps */
+#define ETH_SPEED_NUM_200G	RTE_ETH_SPEED_NUM_200G
+#define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define ETH_SPEED_NUM_UNKNOWN	RTE_ETH_SPEED_NUM_UNKNOWN
 /**@}*/
 
 /**
@@ -323,21 +355,27 @@ struct rte_eth_stats {
  */
 __extension__
 struct rte_eth_link {
-	uint32_t link_speed;        /**< ETH_SPEED_NUM_ */
-	uint16_t link_duplex  : 1;  /**< ETH_LINK_[HALF/FULL]_DUPLEX */
-	uint16_t link_autoneg : 1;  /**< ETH_LINK_[AUTONEG/FIXED] */
-	uint16_t link_status  : 1;  /**< ETH_LINK_[DOWN/UP] */
+	uint32_t link_speed;        /**< RTE_ETH_SPEED_NUM_ */
+	uint16_t link_duplex  : 1;  /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+	uint16_t link_autoneg : 1;  /**< RTE_ETH_LINK_[AUTONEG/FIXED] */
+	uint16_t link_status  : 1;  /**< RTE_ETH_LINK_[DOWN/UP] */
 } __rte_aligned(8);      /**< aligned for atomic64 read/write */
 
 /**@{@name Link negotiation
  * Constants used in link management.
  */
-#define ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
-#define ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
-#define ETH_LINK_DOWN        0 /**< Link is down (see link_status). */
-#define ETH_LINK_UP          1 /**< Link is up (see link_status). */
-#define ETH_LINK_FIXED       0 /**< No autonegotiation (see link_autoneg). */
-#define ETH_LINK_AUTONEG     1 /**< Autonegotiated (see link_autoneg). */
+#define RTE_ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
+#define ETH_LINK_HALF_DUPLEX	RTE_ETH_LINK_HALF_DUPLEX
+#define RTE_ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
+#define ETH_LINK_FULL_DUPLEX	RTE_ETH_LINK_FULL_DUPLEX
+#define RTE_ETH_LINK_DOWN        0 /**< Link is down (see link_status). */
+#define ETH_LINK_DOWN		RTE_ETH_LINK_DOWN
+#define RTE_ETH_LINK_UP          1 /**< Link is up (see link_status). */
+#define ETH_LINK_UP		RTE_ETH_LINK_UP
+#define RTE_ETH_LINK_FIXED       0 /**< No autonegotiation (see link_autoneg). */
+#define ETH_LINK_FIXED		RTE_ETH_LINK_FIXED
+#define RTE_ETH_LINK_AUTONEG     1 /**< Autonegotiated (see link_autoneg). */
+#define ETH_LINK_AUTONEG	RTE_ETH_LINK_AUTONEG
 #define RTE_ETH_LINK_MAX_STR_LEN 40 /**< Max length of default link string. */
 /**@}*/
 
@@ -354,9 +392,12 @@ struct rte_eth_thresh {
 /**@{@name Multi-queue mode
  * @see rte_eth_conf.rxmode.mq_mode.
  */
-#define ETH_MQ_RX_RSS_FLAG  0x1 /**< Enable RSS. @see rte_eth_rss_conf */
-#define ETH_MQ_RX_DCB_FLAG  0x2 /**< Enable DCB. */
-#define ETH_MQ_RX_VMDQ_FLAG 0x4 /**< Enable VMDq. */
+#define RTE_ETH_MQ_RX_RSS_FLAG  0x1
+#define ETH_MQ_RX_RSS_FLAG	RTE_ETH_MQ_RX_RSS_FLAG
+#define RTE_ETH_MQ_RX_DCB_FLAG  0x2
+#define ETH_MQ_RX_DCB_FLAG	RTE_ETH_MQ_RX_DCB_FLAG
+#define RTE_ETH_MQ_RX_VMDQ_FLAG 0x4
+#define ETH_MQ_RX_VMDQ_FLAG	RTE_ETH_MQ_RX_VMDQ_FLAG
 /**@}*/
 
 /**
@@ -365,50 +406,49 @@ struct rte_eth_thresh {
  */
 enum rte_eth_rx_mq_mode {
 	/** None of DCB, RSS or VMDQ mode */
-	ETH_MQ_RX_NONE = 0,
+	RTE_ETH_MQ_RX_NONE = 0,
 
 	/** For RX side, only RSS is on */
-	ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
+	RTE_ETH_MQ_RX_RSS = RTE_ETH_MQ_RX_RSS_FLAG,
 	/** For RX side, only DCB is on. */
-	ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
+	RTE_ETH_MQ_RX_DCB = RTE_ETH_MQ_RX_DCB_FLAG,
 	/** Both DCB and RSS enable */
-	ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG,
+	RTE_ETH_MQ_RX_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
 
 	/** Only VMDQ, no RSS nor DCB */
-	ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_ONLY = RTE_ETH_MQ_RX_VMDQ_FLAG,
 	/** RSS mode with VMDQ */
-	ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG,
 	/** Use VMDQ+DCB to route traffic to queues */
-	ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_DCB = RTE_ETH_MQ_RX_VMDQ_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
 	/** Enable both VMDQ and DCB in VMDq */
-	ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG |
-				 ETH_MQ_RX_VMDQ_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG |
+				     RTE_ETH_MQ_RX_VMDQ_FLAG,
 };
 
-/**
- * for rx mq mode backward compatible
- */
-#define ETH_RSS                       ETH_MQ_RX_RSS
-#define VMDQ_DCB                      ETH_MQ_RX_VMDQ_DCB
-#define ETH_DCB_RX                    ETH_MQ_RX_DCB
+#define ETH_MQ_RX_NONE		RTE_ETH_MQ_RX_NONE
+#define ETH_MQ_RX_RSS		RTE_ETH_MQ_RX_RSS
+#define ETH_MQ_RX_DCB		RTE_ETH_MQ_RX_DCB
+#define ETH_MQ_RX_DCB_RSS	RTE_ETH_MQ_RX_DCB_RSS
+#define ETH_MQ_RX_VMDQ_ONLY	RTE_ETH_MQ_RX_VMDQ_ONLY
+#define ETH_MQ_RX_VMDQ_RSS	RTE_ETH_MQ_RX_VMDQ_RSS
+#define ETH_MQ_RX_VMDQ_DCB	RTE_ETH_MQ_RX_VMDQ_DCB
+#define ETH_MQ_RX_VMDQ_DCB_RSS	RTE_ETH_MQ_RX_VMDQ_DCB_RSS
 
 /**
  * A set of values to identify what method is to be used to transmit
  * packets using multi-TCs.
  */
 enum rte_eth_tx_mq_mode {
-	ETH_MQ_TX_NONE    = 0,  /**< It is in neither DCB nor VT mode. */
-	ETH_MQ_TX_DCB,          /**< For TX side,only DCB is on. */
-	ETH_MQ_TX_VMDQ_DCB,	/**< For TX side,both DCB and VT is on. */
-	ETH_MQ_TX_VMDQ_ONLY,    /**< Only VT on, no DCB */
+	RTE_ETH_MQ_TX_NONE    = 0,  /**< It is in neither DCB nor VT mode. */
+	RTE_ETH_MQ_TX_DCB,          /**< For TX side, only DCB is on. */
+	RTE_ETH_MQ_TX_VMDQ_DCB,	/**< For TX side, both DCB and VT are on. */
+	RTE_ETH_MQ_TX_VMDQ_ONLY,    /**< Only VT on, no DCB */
 };
-
-/**
- * for tx mq mode backward compatible
- */
-#define ETH_DCB_NONE                ETH_MQ_TX_NONE
-#define ETH_VMDQ_DCB_TX             ETH_MQ_TX_VMDQ_DCB
-#define ETH_DCB_TX                  ETH_MQ_TX_DCB
+#define ETH_MQ_TX_NONE		RTE_ETH_MQ_TX_NONE
+#define ETH_MQ_TX_DCB		RTE_ETH_MQ_TX_DCB
+#define ETH_MQ_TX_VMDQ_DCB	RTE_ETH_MQ_TX_VMDQ_DCB
+#define ETH_MQ_TX_VMDQ_ONLY	RTE_ETH_MQ_TX_VMDQ_ONLY
 
 /**
  * A structure used to configure the RX features of an Ethernet port.
@@ -421,7 +461,7 @@ struct rte_eth_rxmode {
 	uint32_t max_lro_pkt_size;
 	uint16_t split_hdr_size;  /**< hdr buf size (header_split enabled).*/
 	/**
-	 * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Per-port Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
 	 * Only offloads set on rx_offload_capa field on rte_eth_dev_info
 	 * structure are allowed to be set.
 	 */
@@ -436,12 +476,17 @@ struct rte_eth_rxmode {
  * Note that single VLAN is treated the same as inner VLAN.
  */
 enum rte_vlan_type {
-	ETH_VLAN_TYPE_UNKNOWN = 0,
-	ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
-	ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
-	ETH_VLAN_TYPE_MAX,
+	RTE_ETH_VLAN_TYPE_UNKNOWN = 0,
+	RTE_ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
+	RTE_ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
+	RTE_ETH_VLAN_TYPE_MAX,
 };
 
+#define ETH_VLAN_TYPE_UNKNOWN	RTE_ETH_VLAN_TYPE_UNKNOWN
+#define ETH_VLAN_TYPE_INNER	RTE_ETH_VLAN_TYPE_INNER
+#define ETH_VLAN_TYPE_OUTER	RTE_ETH_VLAN_TYPE_OUTER
+#define ETH_VLAN_TYPE_MAX	RTE_ETH_VLAN_TYPE_MAX
+
 /**
  * A structure used to describe a vlan filter.
  * If the bit corresponding to a VID is set, such VID is on.
@@ -512,74 +557,113 @@ struct rte_eth_rss_conf {
  * Below macros are defined for RSS offload types, they can be used to
  * fill rte_eth_rss_conf.rss_hf or rte_flow_action_rss.types.
  */
-#define ETH_RSS_IPV4               (1ULL << 2)
-#define ETH_RSS_FRAG_IPV4          (1ULL << 3)
-#define ETH_RSS_NONFRAG_IPV4_TCP   (1ULL << 4)
-#define ETH_RSS_NONFRAG_IPV4_UDP   (1ULL << 5)
-#define ETH_RSS_NONFRAG_IPV4_SCTP  (1ULL << 6)
-#define ETH_RSS_NONFRAG_IPV4_OTHER (1ULL << 7)
-#define ETH_RSS_IPV6               (1ULL << 8)
-#define ETH_RSS_FRAG_IPV6          (1ULL << 9)
-#define ETH_RSS_NONFRAG_IPV6_TCP   (1ULL << 10)
-#define ETH_RSS_NONFRAG_IPV6_UDP   (1ULL << 11)
-#define ETH_RSS_NONFRAG_IPV6_SCTP  (1ULL << 12)
-#define ETH_RSS_NONFRAG_IPV6_OTHER (1ULL << 13)
-#define ETH_RSS_L2_PAYLOAD         (1ULL << 14)
-#define ETH_RSS_IPV6_EX            (1ULL << 15)
-#define ETH_RSS_IPV6_TCP_EX        (1ULL << 16)
-#define ETH_RSS_IPV6_UDP_EX        (1ULL << 17)
-#define ETH_RSS_PORT               (1ULL << 18)
-#define ETH_RSS_VXLAN              (1ULL << 19)
-#define ETH_RSS_GENEVE             (1ULL << 20)
-#define ETH_RSS_NVGRE              (1ULL << 21)
-#define ETH_RSS_GTPU               (1ULL << 23)
-#define ETH_RSS_ETH                (1ULL << 24)
-#define ETH_RSS_S_VLAN             (1ULL << 25)
-#define ETH_RSS_C_VLAN             (1ULL << 26)
-#define ETH_RSS_ESP                (1ULL << 27)
-#define ETH_RSS_AH                 (1ULL << 28)
-#define ETH_RSS_L2TPV3             (1ULL << 29)
-#define ETH_RSS_PFCP               (1ULL << 30)
-#define ETH_RSS_PPPOE		   (1ULL << 31)
-#define ETH_RSS_ECPRI		   (1ULL << 32)
-#define ETH_RSS_MPLS		   (1ULL << 33)
-#define ETH_RSS_IPV4_CHKSUM	   (1ULL << 34)
-
-/**
- * The ETH_RSS_L4_CHKSUM works on checksum field of any L4 header.
- * It is similar to ETH_RSS_PORT that they don't specify the specific type of
+#define RTE_ETH_RSS_IPV4               (1ULL << 2)
+#define ETH_RSS_IPV4		RTE_ETH_RSS_IPV4
+#define RTE_ETH_RSS_FRAG_IPV4          (1ULL << 3)
+#define ETH_RSS_FRAG_IPV4	RTE_ETH_RSS_FRAG_IPV4
+#define RTE_ETH_RSS_NONFRAG_IPV4_TCP   (1ULL << 4)
+#define ETH_RSS_NONFRAG_IPV4_TCP	RTE_ETH_RSS_NONFRAG_IPV4_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV4_UDP   (1ULL << 5)
+#define ETH_RSS_NONFRAG_IPV4_UDP	RTE_ETH_RSS_NONFRAG_IPV4_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV4_SCTP  (1ULL << 6)
+#define ETH_RSS_NONFRAG_IPV4_SCTP	RTE_ETH_RSS_NONFRAG_IPV4_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV4_OTHER (1ULL << 7)
+#define ETH_RSS_NONFRAG_IPV4_OTHER	RTE_ETH_RSS_NONFRAG_IPV4_OTHER
+#define RTE_ETH_RSS_IPV6               (1ULL << 8)
+#define ETH_RSS_IPV6		RTE_ETH_RSS_IPV6
+#define RTE_ETH_RSS_FRAG_IPV6          (1ULL << 9)
+#define ETH_RSS_FRAG_IPV6	RTE_ETH_RSS_FRAG_IPV6
+#define RTE_ETH_RSS_NONFRAG_IPV6_TCP   (1ULL << 10)
+#define ETH_RSS_NONFRAG_IPV6_TCP	RTE_ETH_RSS_NONFRAG_IPV6_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV6_UDP   (1ULL << 11)
+#define ETH_RSS_NONFRAG_IPV6_UDP	RTE_ETH_RSS_NONFRAG_IPV6_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV6_SCTP  (1ULL << 12)
+#define ETH_RSS_NONFRAG_IPV6_SCTP	RTE_ETH_RSS_NONFRAG_IPV6_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV6_OTHER (1ULL << 13)
+#define ETH_RSS_NONFRAG_IPV6_OTHER	RTE_ETH_RSS_NONFRAG_IPV6_OTHER
+#define RTE_ETH_RSS_L2_PAYLOAD         (1ULL << 14)
+#define ETH_RSS_L2_PAYLOAD	RTE_ETH_RSS_L2_PAYLOAD
+#define RTE_ETH_RSS_IPV6_EX            (1ULL << 15)
+#define ETH_RSS_IPV6_EX		RTE_ETH_RSS_IPV6_EX
+#define RTE_ETH_RSS_IPV6_TCP_EX        (1ULL << 16)
+#define ETH_RSS_IPV6_TCP_EX	RTE_ETH_RSS_IPV6_TCP_EX
+#define RTE_ETH_RSS_IPV6_UDP_EX        (1ULL << 17)
+#define ETH_RSS_IPV6_UDP_EX	RTE_ETH_RSS_IPV6_UDP_EX
+#define RTE_ETH_RSS_PORT               (1ULL << 18)
+#define ETH_RSS_PORT		RTE_ETH_RSS_PORT
+#define RTE_ETH_RSS_VXLAN              (1ULL << 19)
+#define ETH_RSS_VXLAN		RTE_ETH_RSS_VXLAN
+#define RTE_ETH_RSS_GENEVE             (1ULL << 20)
+#define ETH_RSS_GENEVE		RTE_ETH_RSS_GENEVE
+#define RTE_ETH_RSS_NVGRE              (1ULL << 21)
+#define ETH_RSS_NVGRE		RTE_ETH_RSS_NVGRE
+#define RTE_ETH_RSS_GTPU               (1ULL << 23)
+#define ETH_RSS_GTPU		RTE_ETH_RSS_GTPU
+#define RTE_ETH_RSS_ETH                (1ULL << 24)
+#define ETH_RSS_ETH		RTE_ETH_RSS_ETH
+#define RTE_ETH_RSS_S_VLAN             (1ULL << 25)
+#define ETH_RSS_S_VLAN		RTE_ETH_RSS_S_VLAN
+#define RTE_ETH_RSS_C_VLAN             (1ULL << 26)
+#define ETH_RSS_C_VLAN		RTE_ETH_RSS_C_VLAN
+#define RTE_ETH_RSS_ESP                (1ULL << 27)
+#define ETH_RSS_ESP		RTE_ETH_RSS_ESP
+#define RTE_ETH_RSS_AH                 (1ULL << 28)
+#define ETH_RSS_AH		RTE_ETH_RSS_AH
+#define RTE_ETH_RSS_L2TPV3             (1ULL << 29)
+#define ETH_RSS_L2TPV3		RTE_ETH_RSS_L2TPV3
+#define RTE_ETH_RSS_PFCP               (1ULL << 30)
+#define ETH_RSS_PFCP		RTE_ETH_RSS_PFCP
+#define RTE_ETH_RSS_PPPOE              (1ULL << 31)
+#define ETH_RSS_PPPOE		RTE_ETH_RSS_PPPOE
+#define RTE_ETH_RSS_ECPRI              (1ULL << 32)
+#define ETH_RSS_ECPRI		RTE_ETH_RSS_ECPRI
+#define RTE_ETH_RSS_MPLS               (1ULL << 33)
+#define ETH_RSS_MPLS		RTE_ETH_RSS_MPLS
+#define RTE_ETH_RSS_IPV4_CHKSUM        (1ULL << 34)
+#define ETH_RSS_IPV4_CHKSUM	RTE_ETH_RSS_IPV4_CHKSUM
+
+/**
+ * RTE_ETH_RSS_L4_CHKSUM works on the checksum field of any L4 header.
+ * It is similar to RTE_ETH_RSS_PORT in that neither specifies a particular
  * L4 header. This macro is defined to replace some specific L4 (TCP/UDP/SCTP)
  * checksum type for constructing the use of RSS offload bits.
  *
  * Due to the above reason, some old APIs (and configuration) don't support
- * ETH_RSS_L4_CHKSUM. The rte_flow RSS API supports it.
+ * RTE_ETH_RSS_L4_CHKSUM. The rte_flow RSS API supports it.
  *
  * For the case that checksum is not used in a UDP header,
  * it takes the reserved value 0 as input for the hash function.
  */
-#define ETH_RSS_L4_CHKSUM          (1ULL << 35)
+#define RTE_ETH_RSS_L4_CHKSUM          (1ULL << 35)
+#define ETH_RSS_L4_CHKSUM	RTE_ETH_RSS_L4_CHKSUM
 
 /*
- * We use the following macros to combine with above ETH_RSS_* for
+ * We use the following macros to combine with above RTE_ETH_RSS_* for
  * more specific input set selection. These bits are defined starting
  * from the high end of the 64 bits.
- * Note: If we use above ETH_RSS_* without SRC/DST_ONLY, it represents
+ * Note: If we use above RTE_ETH_RSS_* without SRC/DST_ONLY, it represents
  * both SRC and DST are taken into account. If SRC_ONLY and DST_ONLY of
  * the same level are used simultaneously, it is the same case as none of
  * them are added.
  */
-#define ETH_RSS_L3_SRC_ONLY        (1ULL << 63)
-#define ETH_RSS_L3_DST_ONLY        (1ULL << 62)
-#define ETH_RSS_L4_SRC_ONLY        (1ULL << 61)
-#define ETH_RSS_L4_DST_ONLY        (1ULL << 60)
-#define ETH_RSS_L2_SRC_ONLY        (1ULL << 59)
-#define ETH_RSS_L2_DST_ONLY        (1ULL << 58)
+#define RTE_ETH_RSS_L3_SRC_ONLY        (1ULL << 63)
+#define ETH_RSS_L3_SRC_ONLY	RTE_ETH_RSS_L3_SRC_ONLY
+#define RTE_ETH_RSS_L3_DST_ONLY        (1ULL << 62)
+#define ETH_RSS_L3_DST_ONLY	RTE_ETH_RSS_L3_DST_ONLY
+#define RTE_ETH_RSS_L4_SRC_ONLY        (1ULL << 61)
+#define ETH_RSS_L4_SRC_ONLY	RTE_ETH_RSS_L4_SRC_ONLY
+#define RTE_ETH_RSS_L4_DST_ONLY        (1ULL << 60)
+#define ETH_RSS_L4_DST_ONLY	RTE_ETH_RSS_L4_DST_ONLY
+#define RTE_ETH_RSS_L2_SRC_ONLY        (1ULL << 59)
+#define ETH_RSS_L2_SRC_ONLY	RTE_ETH_RSS_L2_SRC_ONLY
+#define RTE_ETH_RSS_L2_DST_ONLY        (1ULL << 58)
+#define ETH_RSS_L2_DST_ONLY	RTE_ETH_RSS_L2_DST_ONLY
 
 /*
  * Only select IPV6 address prefix as RSS input set according to
- * https://tools.ietf.org/html/rfc6052
- * Must be combined with ETH_RSS_IPV6, ETH_RSS_NONFRAG_IPV6_UDP,
- * ETH_RSS_NONFRAG_IPV6_TCP, ETH_RSS_NONFRAG_IPV6_SCTP.
+ * https://tools.ietf.org/html/rfc6052
+ * Must be combined with RTE_ETH_RSS_IPV6, RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ * RTE_ETH_RSS_NONFRAG_IPV6_TCP, RTE_ETH_RSS_NONFRAG_IPV6_SCTP.
  */
 #define RTE_ETH_RSS_L3_PRE32	   (1ULL << 57)
 #define RTE_ETH_RSS_L3_PRE40	   (1ULL << 56)
@@ -601,22 +685,27 @@ struct rte_eth_rss_conf {
  * It basically stands for the innermost encapsulation level RSS
  * can be performed on according to PMD and device capabilities.
  */
-#define ETH_RSS_LEVEL_PMD_DEFAULT       (0ULL << 50)
+#define RTE_ETH_RSS_LEVEL_PMD_DEFAULT       (0ULL << 50)
+#define ETH_RSS_LEVEL_PMD_DEFAULT	RTE_ETH_RSS_LEVEL_PMD_DEFAULT
 
 /**
  * level 1, requests RSS to be performed on the outermost packet
  * encapsulation level.
  */
-#define ETH_RSS_LEVEL_OUTERMOST         (1ULL << 50)
+#define RTE_ETH_RSS_LEVEL_OUTERMOST         (1ULL << 50)
+#define ETH_RSS_LEVEL_OUTERMOST	RTE_ETH_RSS_LEVEL_OUTERMOST
 
 /**
  * level 2, requests RSS to be performed on the specified inner packet
  * encapsulation level, from outermost to innermost (lower to higher values).
  */
-#define ETH_RSS_LEVEL_INNERMOST         (2ULL << 50)
-#define ETH_RSS_LEVEL_MASK              (3ULL << 50)
+#define RTE_ETH_RSS_LEVEL_INNERMOST         (2ULL << 50)
+#define ETH_RSS_LEVEL_INNERMOST	RTE_ETH_RSS_LEVEL_INNERMOST
+#define RTE_ETH_RSS_LEVEL_MASK              (3ULL << 50)
+#define ETH_RSS_LEVEL_MASK	RTE_ETH_RSS_LEVEL_MASK
 
-#define ETH_RSS_LEVEL(rss_hf) ((rss_hf & ETH_RSS_LEVEL_MASK) >> 50)
+#define RTE_ETH_RSS_LEVEL(rss_hf) ((rss_hf & RTE_ETH_RSS_LEVEL_MASK) >> 50)
+#define ETH_RSS_LEVEL(rss_hf)	RTE_ETH_RSS_LEVEL(rss_hf)
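
The level is a 2-bit field stored at bit 50 of rss_hf, and RTE_ETH_RSS_LEVEL() recovers it. A worked sketch (outer_vxlan_rss() is hypothetical):

  #include <rte_ethdev.h>

  /* Request RSS on the outermost encapsulation level of VXLAN traffic. */
  static uint64_t
  outer_vxlan_rss(void)
  {
  	uint64_t rss_hf = RTE_ETH_RSS_VXLAN | RTE_ETH_RSS_LEVEL_OUTERMOST;

  	/* RTE_ETH_RSS_LEVEL(rss_hf) now evaluates to 1 (outermost). */
  	return rss_hf;
  }
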
 
 /**
  * For input set change of hash filter, if SRC_ONLY and DST_ONLY of
@@ -631,219 +720,312 @@ struct rte_eth_rss_conf {
 static inline uint64_t
 rte_eth_rss_hf_refine(uint64_t rss_hf)
 {
-	if ((rss_hf & ETH_RSS_L3_SRC_ONLY) && (rss_hf & ETH_RSS_L3_DST_ONLY))
-		rss_hf &= ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+	if ((rss_hf & RTE_ETH_RSS_L3_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L3_DST_ONLY))
+		rss_hf &= ~(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
 
-	if ((rss_hf & ETH_RSS_L4_SRC_ONLY) && (rss_hf & ETH_RSS_L4_DST_ONLY))
-		rss_hf &= ~(ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+	if ((rss_hf & RTE_ETH_RSS_L4_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L4_DST_ONLY))
+		rss_hf &= ~(RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
 
 	return rss_hf;
 }
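
A worked example of the cancellation rule implemented above: requesting both SRC_ONLY and DST_ONLY at the same layer is equivalent to requesting neither (refine_example() is hypothetical):

  #include <rte_ethdev.h>

  static uint64_t
  refine_example(void)
  {
  	uint64_t hf = RTE_ETH_RSS_IPV4 |
  		      RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY;

  	/* Both L3 *_ONLY bits are dropped; the result is RTE_ETH_RSS_IPV4. */
  	return rte_eth_rss_hf_refine(hf);
  }
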
 
-#define ETH_RSS_IPV6_PRE32 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE32 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32	RTE_ETH_RSS_IPV6_PRE32
 
-#define ETH_RSS_IPV6_PRE40 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE40 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40	RTE_ETH_RSS_IPV6_PRE40
 
-#define ETH_RSS_IPV6_PRE48 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE48 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48	RTE_ETH_RSS_IPV6_PRE48
 
-#define ETH_RSS_IPV6_PRE56 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE56 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56	RTE_ETH_RSS_IPV6_PRE56
 
-#define ETH_RSS_IPV6_PRE64 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE64 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64	RTE_ETH_RSS_IPV6_PRE64
 
-#define ETH_RSS_IPV6_PRE96 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE96 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96	RTE_ETH_RSS_IPV6_PRE96
 
-#define ETH_RSS_IPV6_PRE32_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE32_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_UDP	RTE_ETH_RSS_IPV6_PRE32_UDP
 
-#define ETH_RSS_IPV6_PRE40_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE40_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_UDP	RTE_ETH_RSS_IPV6_PRE40_UDP
 
-#define ETH_RSS_IPV6_PRE48_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE48_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_UDP	RTE_ETH_RSS_IPV6_PRE48_UDP
 
-#define ETH_RSS_IPV6_PRE56_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE56_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_UDP	RTE_ETH_RSS_IPV6_PRE56_UDP
 
-#define ETH_RSS_IPV6_PRE64_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE64_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_UDP	RTE_ETH_RSS_IPV6_PRE64_UDP
 
-#define ETH_RSS_IPV6_PRE96_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE96_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_UDP	RTE_ETH_RSS_IPV6_PRE96_UDP
 
-#define ETH_RSS_IPV6_PRE32_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE32_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_TCP	RTE_ETH_RSS_IPV6_PRE32_TCP
 
-#define ETH_RSS_IPV6_PRE40_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE40_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_TCP	RTE_ETH_RSS_IPV6_PRE40_TCP
 
-#define ETH_RSS_IPV6_PRE48_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE48_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_TCP	RTE_ETH_RSS_IPV6_PRE48_TCP
 
-#define ETH_RSS_IPV6_PRE56_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE56_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_TCP	RTE_ETH_RSS_IPV6_PRE56_TCP
 
-#define ETH_RSS_IPV6_PRE64_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE64_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_TCP	RTE_ETH_RSS_IPV6_PRE64_TCP
 
-#define ETH_RSS_IPV6_PRE96_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE96_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_TCP	RTE_ETH_RSS_IPV6_PRE96_TCP
 
-#define ETH_RSS_IPV6_PRE32_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE32_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_SCTP	RTE_ETH_RSS_IPV6_PRE32_SCTP
 
-#define ETH_RSS_IPV6_PRE40_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE40_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_SCTP	RTE_ETH_RSS_IPV6_PRE40_SCTP
 
-#define ETH_RSS_IPV6_PRE48_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE48_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_SCTP	RTE_ETH_RSS_IPV6_PRE48_SCTP
 
-#define ETH_RSS_IPV6_PRE56_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE56_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_SCTP	RTE_ETH_RSS_IPV6_PRE56_SCTP
 
-#define ETH_RSS_IPV6_PRE64_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE64_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_SCTP	RTE_ETH_RSS_IPV6_PRE64_SCTP
 
-#define ETH_RSS_IPV6_PRE96_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE96_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE96)
-
-#define ETH_RSS_IP ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_IPV6_EX)
-
-#define ETH_RSS_UDP ( \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_UDP_EX)
-
-#define ETH_RSS_TCP ( \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_IPV6_TCP_EX)
-
-#define ETH_RSS_SCTP ( \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP)
-
-#define ETH_RSS_TUNNEL ( \
-	ETH_RSS_VXLAN  | \
-	ETH_RSS_GENEVE | \
-	ETH_RSS_NVGRE)
-
-#define ETH_RSS_VLAN ( \
-	ETH_RSS_S_VLAN  | \
-	ETH_RSS_C_VLAN)
+#define ETH_RSS_IPV6_PRE96_SCTP	RTE_ETH_RSS_IPV6_PRE96_SCTP
+
+#define RTE_ETH_RSS_IP ( \
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_IPV6_EX)
+#define ETH_RSS_IP	RTE_ETH_RSS_IP
+
+#define RTE_ETH_RSS_UDP ( \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
+#define ETH_RSS_UDP	RTE_ETH_RSS_UDP
+
+#define RTE_ETH_RSS_TCP ( \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_IPV6_TCP_EX)
+#define ETH_RSS_TCP	RTE_ETH_RSS_TCP
+
+#define RTE_ETH_RSS_SCTP ( \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+#define ETH_RSS_SCTP	RTE_ETH_RSS_SCTP
+
+#define RTE_ETH_RSS_TUNNEL ( \
+	RTE_ETH_RSS_VXLAN  | \
+	RTE_ETH_RSS_GENEVE | \
+	RTE_ETH_RSS_NVGRE)
+#define ETH_RSS_TUNNEL	RTE_ETH_RSS_TUNNEL
+
+#define RTE_ETH_RSS_VLAN ( \
+	RTE_ETH_RSS_S_VLAN  | \
+	RTE_ETH_RSS_C_VLAN)
+#define ETH_RSS_VLAN	RTE_ETH_RSS_VLAN
 
 /**< Mask of valid RSS hash protocols */
-#define ETH_RSS_PROTO_MASK ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX | \
-	ETH_RSS_PORT  | \
-	ETH_RSS_VXLAN | \
-	ETH_RSS_GENEVE | \
-	ETH_RSS_NVGRE | \
-	ETH_RSS_MPLS)
+#define RTE_ETH_RSS_PROTO_MASK ( \
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L2_PAYLOAD | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX | \
+	RTE_ETH_RSS_PORT  | \
+	RTE_ETH_RSS_VXLAN | \
+	RTE_ETH_RSS_GENEVE | \
+	RTE_ETH_RSS_NVGRE | \
+	RTE_ETH_RSS_MPLS)
+#define ETH_RSS_PROTO_MASK	RTE_ETH_RSS_PROTO_MASK
 
 /*
  * Definitions used for redirection table entry size.
  * Some RSS RETA sizes may not be supported by some drivers, check the
  * documentation or the description of relevant functions for more details.
  */
-#define ETH_RSS_RETA_SIZE_64  64
-#define ETH_RSS_RETA_SIZE_128 128
-#define ETH_RSS_RETA_SIZE_256 256
-#define ETH_RSS_RETA_SIZE_512 512
-#define RTE_RETA_GROUP_SIZE   64
+#define RTE_ETH_RSS_RETA_SIZE_64  64
+#define ETH_RSS_RETA_SIZE_64	RTE_ETH_RSS_RETA_SIZE_64
+#define RTE_ETH_RSS_RETA_SIZE_128 128
+#define ETH_RSS_RETA_SIZE_128	RTE_ETH_RSS_RETA_SIZE_128
+#define RTE_ETH_RSS_RETA_SIZE_256 256
+#define ETH_RSS_RETA_SIZE_256	RTE_ETH_RSS_RETA_SIZE_256
+#define RTE_ETH_RSS_RETA_SIZE_512 512
+#define ETH_RSS_RETA_SIZE_512	RTE_ETH_RSS_RETA_SIZE_512
+#define RTE_ETH_RETA_GROUP_SIZE   64
+#define RTE_RETA_GROUP_SIZE	RTE_ETH_RETA_GROUP_SIZE
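
The redirection table is exchanged in 64-entry groups, which is where the idx/shift arithmetic in eth_check_reta_entry() earlier in this patch comes from. A sketch of building an identity RETA with the renamed group size (setup_identity_reta() is hypothetical and assumes reta_size fits the stack array):

  #include <rte_ethdev.h>

  /* Spread reta_size table entries round-robin over nb_queues queues. */
  static int
  setup_identity_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_queues)
  {
  	struct rte_eth_rss_reta_entry64 reta[RTE_ETH_RSS_RETA_SIZE_512 /
  					     RTE_ETH_RETA_GROUP_SIZE] = { 0 };
  	uint16_t i, idx, shift;

  	for (i = 0; i < reta_size; i++) {
  		idx = i / RTE_ETH_RETA_GROUP_SIZE;
  		shift = i % RTE_ETH_RETA_GROUP_SIZE;
  		reta[idx].mask |= 1ULL << shift;
  		reta[idx].reta[shift] = i % nb_queues;
  	}
  	return rte_eth_dev_rss_reta_update(port_id, reta, reta_size);
  }
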
 
 /**@{@name VMDq and DCB maximums */
-#define ETH_VMDQ_MAX_VLAN_FILTERS   64 /**< Maximum nb. of VMDQ vlan filters. */
-#define ETH_DCB_NUM_USER_PRIORITIES 8  /**< Maximum nb. of DCB priorities. */
-#define ETH_VMDQ_DCB_NUM_QUEUES     128 /**< Maximum nb. of VMDQ DCB queues. */
-#define ETH_DCB_NUM_QUEUES          128 /**< Maximum nb. of DCB queues. */
+#define RTE_ETH_VMDQ_MAX_VLAN_FILTERS   64 /**< Maximum nb. of VMDQ vlan filters. */
+#define ETH_VMDQ_MAX_VLAN_FILTERS	RTE_ETH_VMDQ_MAX_VLAN_FILTERS
+#define RTE_ETH_DCB_NUM_USER_PRIORITIES 8  /**< Maximum nb. of DCB priorities. */
+#define ETH_DCB_NUM_USER_PRIORITIES	RTE_ETH_DCB_NUM_USER_PRIORITIES
+#define RTE_ETH_VMDQ_DCB_NUM_QUEUES     128 /**< Maximum nb. of VMDQ DCB queues. */
+#define ETH_VMDQ_DCB_NUM_QUEUES	RTE_ETH_VMDQ_DCB_NUM_QUEUES
+#define RTE_ETH_DCB_NUM_QUEUES          128 /**< Maximum nb. of DCB queues. */
+#define ETH_DCB_NUM_QUEUES	RTE_ETH_DCB_NUM_QUEUES
 /**@}*/
 
 /**@{@name DCB capabilities */
-#define ETH_DCB_PG_SUPPORT      0x00000001 /**< Priority Group(ETS) support. */
-#define ETH_DCB_PFC_SUPPORT     0x00000002 /**< Priority Flow Control support. */
+#define RTE_ETH_DCB_PG_SUPPORT      0x00000001 /**< Priority Group(ETS) support. */
+#define ETH_DCB_PG_SUPPORT	RTE_ETH_DCB_PG_SUPPORT
+#define RTE_ETH_DCB_PFC_SUPPORT     0x00000002 /**< Priority Flow Control support. */
+#define ETH_DCB_PFC_SUPPORT	RTE_ETH_DCB_PFC_SUPPORT
 /**@}*/
 
 /**@{@name VLAN offload bits */
-#define ETH_VLAN_STRIP_OFFLOAD   0x0001 /**< VLAN Strip  On/Off */
-#define ETH_VLAN_FILTER_OFFLOAD  0x0002 /**< VLAN Filter On/Off */
-#define ETH_VLAN_EXTEND_OFFLOAD  0x0004 /**< VLAN Extend On/Off */
-#define ETH_QINQ_STRIP_OFFLOAD   0x0008 /**< QINQ Strip On/Off */
-
-#define ETH_VLAN_STRIP_MASK   0x0001 /**< VLAN Strip  setting mask */
-#define ETH_VLAN_FILTER_MASK  0x0002 /**< VLAN Filter  setting mask*/
-#define ETH_VLAN_EXTEND_MASK  0x0004 /**< VLAN Extend  setting mask*/
-#define ETH_QINQ_STRIP_MASK   0x0008 /**< QINQ Strip  setting mask */
-#define ETH_VLAN_ID_MAX       0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define RTE_ETH_VLAN_STRIP_OFFLOAD   0x0001 /**< VLAN Strip  On/Off */
+#define ETH_VLAN_STRIP_OFFLOAD	RTE_ETH_VLAN_STRIP_OFFLOAD
+#define RTE_ETH_VLAN_FILTER_OFFLOAD  0x0002 /**< VLAN Filter On/Off */
+#define ETH_VLAN_FILTER_OFFLOAD	RTE_ETH_VLAN_FILTER_OFFLOAD
+#define RTE_ETH_VLAN_EXTEND_OFFLOAD  0x0004 /**< VLAN Extend On/Off */
+#define ETH_VLAN_EXTEND_OFFLOAD	RTE_ETH_VLAN_EXTEND_OFFLOAD
+#define RTE_ETH_QINQ_STRIP_OFFLOAD   0x0008 /**< QINQ Strip On/Off */
+#define ETH_QINQ_STRIP_OFFLOAD	RTE_ETH_QINQ_STRIP_OFFLOAD
+
+#define RTE_ETH_VLAN_STRIP_MASK   0x0001 /**< VLAN Strip  setting mask */
+#define ETH_VLAN_STRIP_MASK	RTE_ETH_VLAN_STRIP_MASK
+#define RTE_ETH_VLAN_FILTER_MASK  0x0002 /**< VLAN Filter  setting mask*/
+#define ETH_VLAN_FILTER_MASK	RTE_ETH_VLAN_FILTER_MASK
+#define RTE_ETH_VLAN_EXTEND_MASK  0x0004 /**< VLAN Extend  setting mask*/
+#define ETH_VLAN_EXTEND_MASK	RTE_ETH_VLAN_EXTEND_MASK
+#define RTE_ETH_QINQ_STRIP_MASK   0x0008 /**< QINQ Strip  setting mask */
+#define ETH_QINQ_STRIP_MASK	RTE_ETH_QINQ_STRIP_MASK
+#define RTE_ETH_VLAN_ID_MAX       0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define ETH_VLAN_ID_MAX		RTE_ETH_VLAN_ID_MAX
 /**@}*/
 
 /* Definitions used for receive MAC address   */
-#define ETH_NUM_RECEIVE_MAC_ADDR  128 /**< Maximum nb. of receive mac addr. */
+#define RTE_ETH_NUM_RECEIVE_MAC_ADDR  128 /**< Maximum nb. of receive mac addr. */
+#define ETH_NUM_RECEIVE_MAC_ADDR	RTE_ETH_NUM_RECEIVE_MAC_ADDR
 
 /* Definitions used for unicast hash  */
-#define ETH_VMDQ_NUM_UC_HASH_ARRAY  128 /**< Maximum nb. of UC hash array. */
+#define RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY  128 /**< Maximum nb. of UC hash array. */
+#define ETH_VMDQ_NUM_UC_HASH_ARRAY	RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY
 
 /**@{@name VMDq Rx mode
  * @see rte_eth_vmdq_rx_conf.rx_mode
  */
-#define ETH_VMDQ_ACCEPT_UNTAG   0x0001 /**< accept untagged packets. */
-#define ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table . */
-#define ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
-#define ETH_VMDQ_ACCEPT_BROADCAST   0x0008 /**< accept broadcast packets. */
-#define ETH_VMDQ_ACCEPT_MULTICAST   0x0010 /**< multicast promiscuous. */
+#define RTE_ETH_VMDQ_ACCEPT_UNTAG   0x0001 /**< accept untagged packets. */
+#define ETH_VMDQ_ACCEPT_UNTAG	RTE_ETH_VMDQ_ACCEPT_UNTAG
+#define RTE_ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table. */
+#define ETH_VMDQ_ACCEPT_HASH_MC	RTE_ETH_VMDQ_ACCEPT_HASH_MC
+#define RTE_ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
+#define ETH_VMDQ_ACCEPT_HASH_UC	RTE_ETH_VMDQ_ACCEPT_HASH_UC
+#define RTE_ETH_VMDQ_ACCEPT_BROADCAST   0x0008 /**< accept broadcast packets. */
+#define ETH_VMDQ_ACCEPT_BROADCAST	RTE_ETH_VMDQ_ACCEPT_BROADCAST
+#define RTE_ETH_VMDQ_ACCEPT_MULTICAST   0x0010 /**< multicast promiscuous. */
+#define ETH_VMDQ_ACCEPT_MULTICAST	RTE_ETH_VMDQ_ACCEPT_MULTICAST
 /**@}*/
 
+/** Maximum nb. of vlan per mirror rule */
+#define RTE_ETH_MIRROR_MAX_VLANS       64
+#define ETH_MIRROR_MAX_VLANS	RTE_ETH_MIRROR_MAX_VLANS
+
+#define RTE_ETH_MIRROR_VIRTUAL_POOL_UP     0x01  /**< Virtual Pool uplink Mirroring. */
+#define ETH_MIRROR_VIRTUAL_POOL_UP	RTE_ETH_MIRROR_VIRTUAL_POOL_UP
+#define RTE_ETH_MIRROR_UPLINK_PORT         0x02  /**< Uplink Port Mirroring. */
+#define ETH_MIRROR_UPLINK_PORT	RTE_ETH_MIRROR_UPLINK_PORT
+#define RTE_ETH_MIRROR_DOWNLINK_PORT       0x04  /**< Downlink Port Mirroring. */
+#define ETH_MIRROR_DOWNLINK_PORT	RTE_ETH_MIRROR_DOWNLINK_PORT
+#define RTE_ETH_MIRROR_VLAN                0x08  /**< VLAN Mirroring. */
+#define ETH_MIRROR_VLAN		RTE_ETH_MIRROR_VLAN
+#define RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN   0x10  /**< Virtual Pool downlink Mirroring. */
+#define ETH_MIRROR_VIRTUAL_POOL_DOWN	RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN
+
+/**
+ * A structure used to configure VLAN traffic mirror of an Ethernet port.
+ */
+struct rte_eth_vlan_mirror {
+	uint64_t vlan_mask; /**< mask for valid VLAN ID. */
+	/** VLAN ID list for vlan mirroring. */
+	uint16_t vlan_id[RTE_ETH_MIRROR_MAX_VLANS];
+};
+
+/**
+ * A structure used to configure traffic mirror of an Ethernet port.
+ */
+struct rte_eth_mirror_conf {
+	uint8_t rule_type; /**< Mirroring rule type */
+	uint8_t dst_pool;  /**< Destination pool for this mirror rule. */
+	uint64_t pool_mask; /**< Bitmap of pool for pool mirroring */
+	/** VLAN ID setting for VLAN mirroring. */
+	struct rte_eth_vlan_mirror vlan;
+};
+
 /**
  * A structure used to configure 64 entries of Redirection Table of the
  * Receive Side Scaling (RSS) feature of an Ethernet port. To configure
@@ -853,7 +1035,7 @@ rte_eth_rss_hf_refine(uint64_t rss_hf)
 struct rte_eth_rss_reta_entry64 {
 	uint64_t mask;
 	/**< Mask bits indicate which entries need to be updated/queried. */
-	uint16_t reta[RTE_RETA_GROUP_SIZE];
+	uint16_t reta[RTE_ETH_RETA_GROUP_SIZE];
 	/**< Group of 64 redirection table entries. */
 };
 
@@ -862,38 +1044,44 @@ struct rte_eth_rss_reta_entry64 {
  * in DCB configurations
  */
 enum rte_eth_nb_tcs {
-	ETH_4_TCS = 4, /**< 4 TCs with DCB. */
-	ETH_8_TCS = 8  /**< 8 TCs with DCB. */
+	RTE_ETH_4_TCS = 4, /**< 4 TCs with DCB. */
+	RTE_ETH_8_TCS = 8  /**< 8 TCs with DCB. */
 };
+#define ETH_4_TCS RTE_ETH_4_TCS
+#define ETH_8_TCS RTE_ETH_8_TCS
 
 /**
  * This enum indicates the possible number of queue pools
  * in VMDQ configurations.
  */
 enum rte_eth_nb_pools {
-	ETH_8_POOLS = 8,    /**< 8 VMDq pools. */
-	ETH_16_POOLS = 16,  /**< 16 VMDq pools. */
-	ETH_32_POOLS = 32,  /**< 32 VMDq pools. */
-	ETH_64_POOLS = 64   /**< 64 VMDq pools. */
+	RTE_ETH_8_POOLS = 8,    /**< 8 VMDq pools. */
+	RTE_ETH_16_POOLS = 16,  /**< 16 VMDq pools. */
+	RTE_ETH_32_POOLS = 32,  /**< 32 VMDq pools. */
+	RTE_ETH_64_POOLS = 64   /**< 64 VMDq pools. */
 };
+#define ETH_8_POOLS	RTE_ETH_8_POOLS
+#define ETH_16_POOLS	RTE_ETH_16_POOLS
+#define ETH_32_POOLS	RTE_ETH_32_POOLS
+#define ETH_64_POOLS	RTE_ETH_64_POOLS
 
 /* This structure may be extended in future. */
 struct rte_eth_dcb_rx_conf {
 	enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs */
 	/** Traffic class each UP mapped to. */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_vmdq_dcb_tx_conf {
 	enum rte_eth_nb_pools nb_queue_pools; /**< With DCB, 16 or 32 pools. */
 	/** Traffic class each UP mapped to. */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_dcb_tx_conf {
 	enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs. */
 	/** Traffic class each UP mapped to. */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_vmdq_tx_conf {
@@ -919,8 +1107,8 @@ struct rte_eth_vmdq_dcb_conf {
 	struct {
 		uint16_t vlan_id; /**< The vlan id of the received frame */
 		uint64_t pools;   /**< Bitmask of pools for packet rx */
-	} pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	} pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 	/**< Selects a queue in a pool */
 };
 
@@ -931,7 +1119,7 @@ struct rte_eth_vmdq_dcb_conf {
  * Using this feature, packets are routed to a pool of queues. By default,
  * the pool selection is based on the MAC address, the vlan id in the
  * vlan tag as specified in the pool_map array.
- * Passing the ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool
+ * Passing the RTE_ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool
  * selection using only the MAC address. MAC address to pool mapping is done
  * using the rte_eth_dev_mac_addr_add function, with the pool parameter
  * corresponding to the pool id.
@@ -952,7 +1140,7 @@ struct rte_eth_vmdq_rx_conf {
 	struct {
 		uint16_t vlan_id; /**< The vlan id of the received frame */
 		uint64_t pools;   /**< Bitmask of pools for packet rx */
-	} pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
+	} pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
 };
 
 /**
@@ -961,7 +1149,7 @@ struct rte_eth_vmdq_rx_conf {
 struct rte_eth_txmode {
 	enum rte_eth_tx_mq_mode mq_mode; /**< TX multi-queues mode. */
 	/**
-	 * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+	 * Per-port Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
 	 * Only offloads set on tx_offload_capa field on rte_eth_dev_info
 	 * structure are allowed to be set.
 	 */
@@ -1045,7 +1233,7 @@ struct rte_eth_rxconf {
 	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
 	uint16_t rx_nseg; /**< Number of descriptions in rx_seg array. */
 	/**
-	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Per-queue Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
 	 * Only offloads set on rx_queue_offload_capa or rx_offload_capa
 	 * fields on rte_eth_dev_info structure are allowed to be set.
 	 */
@@ -1074,7 +1262,7 @@ struct rte_eth_txconf {
 
 	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
 	/**
-	 * Per-queue Tx offloads to be set  using DEV_TX_OFFLOAD_* flags.
+	 * Per-queue Tx offloads to be set  using RTE_ETH_TX_OFFLOAD_* flags.
 	 * Only offloads set on tx_queue_offload_capa or tx_offload_capa
 	 * fields on rte_eth_dev_info structure are allowed to be set.
 	 */
@@ -1185,12 +1373,17 @@ struct rte_eth_desc_lim {
  * This enum indicates the flow control mode
  */
 enum rte_eth_fc_mode {
-	RTE_FC_NONE = 0, /**< Disable flow control. */
-	RTE_FC_RX_PAUSE, /**< RX pause frame, enable flowctrl on TX side. */
-	RTE_FC_TX_PAUSE, /**< TX pause frame, enable flowctrl on RX side. */
-	RTE_FC_FULL      /**< Enable flow control on both side. */
+	RTE_ETH_FC_NONE = 0, /**< Disable flow control. */
+	RTE_ETH_FC_RX_PAUSE, /**< RX pause frame, enable flowctrl on TX side. */
+	RTE_ETH_FC_TX_PAUSE, /**< TX pause frame, enable flowctrl on RX side. */
+	RTE_ETH_FC_FULL      /**< Enable flow control on both side. */
 };
 
+#define RTE_FC_NONE	RTE_ETH_FC_NONE
+#define RTE_FC_RX_PAUSE	RTE_ETH_FC_RX_PAUSE
+#define RTE_FC_TX_PAUSE	RTE_ETH_FC_TX_PAUSE
+#define RTE_FC_FULL	RTE_ETH_FC_FULL
+
 /**
  * A structure used to configure Ethernet flow control parameter.
  * These parameters will be configured into the register of the NIC.
@@ -1221,18 +1414,29 @@ struct rte_eth_pfc_conf {
  * @see rte_eth_udp_tunnel
  */
 enum rte_eth_tunnel_type {
-	RTE_TUNNEL_TYPE_NONE = 0,
-	RTE_TUNNEL_TYPE_VXLAN,
-	RTE_TUNNEL_TYPE_GENEVE,
-	RTE_TUNNEL_TYPE_TEREDO,
-	RTE_TUNNEL_TYPE_NVGRE,
-	RTE_TUNNEL_TYPE_IP_IN_GRE,
-	RTE_L2_TUNNEL_TYPE_E_TAG,
-	RTE_TUNNEL_TYPE_VXLAN_GPE,
-	RTE_TUNNEL_TYPE_ECPRI,
-	RTE_TUNNEL_TYPE_MAX,
+	RTE_ETH_TUNNEL_TYPE_NONE = 0,
+	RTE_ETH_TUNNEL_TYPE_VXLAN,
+	RTE_ETH_TUNNEL_TYPE_GENEVE,
+	RTE_ETH_TUNNEL_TYPE_TEREDO,
+	RTE_ETH_TUNNEL_TYPE_NVGRE,
+	RTE_ETH_TUNNEL_TYPE_IP_IN_GRE,
+	RTE_ETH_L2_TUNNEL_TYPE_E_TAG,
+	RTE_ETH_TUNNEL_TYPE_VXLAN_GPE,
+	RTE_ETH_TUNNEL_TYPE_ECPRI,
+	RTE_ETH_TUNNEL_TYPE_MAX,
 };
 
+#define RTE_TUNNEL_TYPE_NONE		RTE_ETH_TUNNEL_TYPE_NONE
+#define RTE_TUNNEL_TYPE_VXLAN		RTE_ETH_TUNNEL_TYPE_VXLAN
+#define RTE_TUNNEL_TYPE_GENEVE		RTE_ETH_TUNNEL_TYPE_GENEVE
+#define RTE_TUNNEL_TYPE_TEREDO		RTE_ETH_TUNNEL_TYPE_TEREDO
+#define RTE_TUNNEL_TYPE_NVGRE		RTE_ETH_TUNNEL_TYPE_NVGRE
+#define RTE_TUNNEL_TYPE_IP_IN_GRE	RTE_ETH_TUNNEL_TYPE_IP_IN_GRE
+#define RTE_L2_TUNNEL_TYPE_E_TAG	RTE_ETH_L2_TUNNEL_TYPE_E_TAG
+#define RTE_TUNNEL_TYPE_VXLAN_GPE	RTE_ETH_TUNNEL_TYPE_VXLAN_GPE
+#define RTE_TUNNEL_TYPE_ECPRI		RTE_ETH_TUNNEL_TYPE_ECPRI
+#define RTE_TUNNEL_TYPE_MAX		RTE_ETH_TUNNEL_TYPE_MAX
+
 /* Deprecated API file for rte_eth_dev_filter_* functions */
 #include "rte_eth_ctrl.h"
 
@@ -1240,11 +1444,16 @@ enum rte_eth_tunnel_type {
  *  Memory space that can be configured to store Flow Director filters
  *  in the board memory.
  */
-enum rte_fdir_pballoc_type {
-	RTE_FDIR_PBALLOC_64K = 0,  /**< 64k. */
-	RTE_FDIR_PBALLOC_128K,     /**< 128k. */
-	RTE_FDIR_PBALLOC_256K,     /**< 256k. */
+enum rte_eth_fdir_pballoc_type {
+	RTE_ETH_FDIR_PBALLOC_64K = 0,  /**< 64k. */
+	RTE_ETH_FDIR_PBALLOC_128K,     /**< 128k. */
+	RTE_ETH_FDIR_PBALLOC_256K,     /**< 256k. */
 };
+#define rte_fdir_pballoc_type	rte_eth_fdir_pballoc_type
+
+#define RTE_FDIR_PBALLOC_64K	RTE_ETH_FDIR_PBALLOC_64K
+#define RTE_FDIR_PBALLOC_128K	RTE_ETH_FDIR_PBALLOC_128K
+#define RTE_FDIR_PBALLOC_256K	RTE_ETH_FDIR_PBALLOC_256K
 
 /**
  *  Select report mode of FDIR hash information in RX descriptors.
@@ -1261,9 +1470,9 @@ enum rte_fdir_status_mode {
  *
  * If mode is RTE_FDIR_MODE_NONE, the pballoc value is ignored.
  */
-struct rte_fdir_conf {
+struct rte_eth_fdir_conf {
 	enum rte_fdir_mode mode; /**< Flow Director mode. */
-	enum rte_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
+	enum rte_eth_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
 	enum rte_fdir_status_mode status;  /**< How to report FDIR hash. */
 	/** RX queue of packets matching a "drop" filter in perfect mode. */
 	uint8_t drop_queue;
@@ -1272,6 +1481,8 @@ struct rte_fdir_conf {
 	/**< Flex payload configuration. */
 };
 
+#define rte_fdir_conf rte_eth_fdir_conf
+
 /**
  * UDP tunneling configuration.
  *
@@ -1289,7 +1500,7 @@ struct rte_eth_udp_tunnel {
 /**
  * A structure used to enable/disable specific device interrupts.
  */
-struct rte_intr_conf {
+struct rte_eth_intr_conf {
 	/** enable/disable lsc interrupt. 0 (default) - disable, 1 enable */
 	uint32_t lsc:1;
 	/** enable/disable rxq interrupt. 0 (default) - disable, 1 enable */
@@ -1298,18 +1509,20 @@ struct rte_intr_conf {
 	uint32_t rmv:1;
 };
 
+#define rte_intr_conf rte_eth_intr_conf
+
 /**
  * A structure used to configure an Ethernet port.
  * Depending upon the RX multi-queue mode, extra advanced
  * configuration settings may be needed.
  */
 struct rte_eth_conf {
-	uint32_t link_speeds; /**< bitmap of ETH_LINK_SPEED_XXX of speeds to be
-				used. ETH_LINK_SPEED_FIXED disables link
+	uint32_t link_speeds; /**< bitmap of RTE_ETH_LINK_SPEED_XXX of speeds to be
+				used. RTE_ETH_LINK_SPEED_FIXED disables link
 				autonegotiation, and a unique speed shall be
 				set. Otherwise, the bitmap defines the set of
 				speeds to be advertised. If the special value
-				ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
+				RTE_ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
 				supported are advertised. */
 	struct rte_eth_rxmode rxmode; /**< Port RX configuration. */
 	struct rte_eth_txmode txmode; /**< Port TX configuration. */
@@ -1335,48 +1548,70 @@ struct rte_eth_conf {
 		struct rte_eth_vmdq_tx_conf vmdq_tx_conf;
 		/**< Port vmdq TX configuration. */
 	} tx_adv_conf; /**< Port TX DCB configuration (union). */
-	/** Currently,Priority Flow Control(PFC) are supported,if DCB with PFC
-	    is needed,and the variable must be set ETH_DCB_PFC_SUPPORT. */
+	/**
+	 * Currently, Priority Flow Control (PFC) is supported; if DCB with PFC
+	 * is needed, the variable must be set to RTE_ETH_DCB_PFC_SUPPORT.
+	 */
 	uint32_t dcb_capability_en;
-	struct rte_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
-	struct rte_intr_conf intr_conf; /**< Interrupt mode configuration. */
+	struct rte_eth_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
+	struct rte_eth_intr_conf intr_conf; /**< Interrupt mode configuration. */
 };
 
 /**
  * RX offload capabilities of a device.
  */
-#define DEV_RX_OFFLOAD_VLAN_STRIP  0x00000001
-#define DEV_RX_OFFLOAD_IPV4_CKSUM  0x00000002
-#define DEV_RX_OFFLOAD_UDP_CKSUM   0x00000004
-#define DEV_RX_OFFLOAD_TCP_CKSUM   0x00000008
-#define DEV_RX_OFFLOAD_TCP_LRO     0x00000010
-#define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000020
-#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
-#define DEV_RX_OFFLOAD_MACSEC_STRIP     0x00000080
-#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
-#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
-#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
-#define DEV_RX_OFFLOAD_SCATTER		0x00002000
+#define RTE_ETH_RX_OFFLOAD_VLAN_STRIP  0x00000001
+#define DEV_RX_OFFLOAD_VLAN_STRIP	RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+#define RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  0x00000002
+#define DEV_RX_OFFLOAD_IPV4_CKSUM	RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_UDP_CKSUM   0x00000004
+#define DEV_RX_OFFLOAD_UDP_CKSUM	RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_CKSUM   0x00000008
+#define DEV_RX_OFFLOAD_TCP_CKSUM	RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_LRO     0x00000010
+#define DEV_RX_OFFLOAD_TCP_LRO		RTE_ETH_RX_OFFLOAD_TCP_LRO
+#define RTE_ETH_RX_OFFLOAD_QINQ_STRIP  0x00000020
+#define DEV_RX_OFFLOAD_QINQ_STRIP	RTE_ETH_RX_OFFLOAD_QINQ_STRIP
+#define RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
+#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM	RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_MACSEC_STRIP     0x00000080
+#define DEV_RX_OFFLOAD_MACSEC_STRIP	RTE_ETH_RX_OFFLOAD_MACSEC_STRIP
+#define RTE_ETH_RX_OFFLOAD_HEADER_SPLIT	0x00000100
+#define DEV_RX_OFFLOAD_HEADER_SPLIT	RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
+#define RTE_ETH_RX_OFFLOAD_VLAN_FILTER	0x00000200
+#define DEV_RX_OFFLOAD_VLAN_FILTER	RTE_ETH_RX_OFFLOAD_VLAN_FILTER
+#define RTE_ETH_RX_OFFLOAD_VLAN_EXTEND	0x00000400
+#define DEV_RX_OFFLOAD_VLAN_EXTEND	RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
+#define RTE_ETH_RX_OFFLOAD_SCATTER	0x00002000
+#define DEV_RX_OFFLOAD_SCATTER		RTE_ETH_RX_OFFLOAD_SCATTER
 /**
  * Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
  * and RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME is set in ol_flags.
  * The mbuf field and flag are registered when the offload is configured.
  */
-#define DEV_RX_OFFLOAD_TIMESTAMP	0x00004000
-#define DEV_RX_OFFLOAD_SECURITY         0x00008000
-#define DEV_RX_OFFLOAD_KEEP_CRC		0x00010000
-#define DEV_RX_OFFLOAD_SCTP_CKSUM	0x00020000
-#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
-#define DEV_RX_OFFLOAD_RSS_HASH		0x00080000
+#define RTE_ETH_RX_OFFLOAD_TIMESTAMP	0x00004000
+#define DEV_RX_OFFLOAD_TIMESTAMP	RTE_ETH_RX_OFFLOAD_TIMESTAMP
+#define RTE_ETH_RX_OFFLOAD_SECURITY     0x00008000
+#define DEV_RX_OFFLOAD_SECURITY		RTE_ETH_RX_OFFLOAD_SECURITY
+#define RTE_ETH_RX_OFFLOAD_KEEP_CRC	0x00010000
+#define DEV_RX_OFFLOAD_KEEP_CRC		RTE_ETH_RX_OFFLOAD_KEEP_CRC
+#define RTE_ETH_RX_OFFLOAD_SCTP_CKSUM	0x00020000
+#define DEV_RX_OFFLOAD_SCTP_CKSUM	RTE_ETH_RX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
+#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM	RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_RSS_HASH	0x00080000
+#define DEV_RX_OFFLOAD_RSS_HASH	RTE_ETH_RX_OFFLOAD_RSS_HASH
 #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
 
-#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
-				 DEV_RX_OFFLOAD_UDP_CKSUM | \
-				 DEV_RX_OFFLOAD_TCP_CKSUM)
-#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
-			     DEV_RX_OFFLOAD_VLAN_FILTER | \
-			     DEV_RX_OFFLOAD_VLAN_EXTEND | \
-			     DEV_RX_OFFLOAD_QINQ_STRIP)
+#define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+				 RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+				 RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_CHECKSUM	RTE_ETH_RX_OFFLOAD_CHECKSUM
+#define RTE_ETH_RX_OFFLOAD_VLAN (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+			     RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+			     RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+			     RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+#define DEV_RX_OFFLOAD_VLAN	RTE_ETH_RX_OFFLOAD_VLAN
 
 /*
  * If new Rx offload capabilities are defined, they also must be
@@ -1386,52 +1621,74 @@ struct rte_eth_conf {
 /**
  * TX offload capabilities of a device.
  */
-#define DEV_TX_OFFLOAD_VLAN_INSERT 0x00000001
-#define DEV_TX_OFFLOAD_IPV4_CKSUM  0x00000002
-#define DEV_TX_OFFLOAD_UDP_CKSUM   0x00000004
-#define DEV_TX_OFFLOAD_TCP_CKSUM   0x00000008
-#define DEV_TX_OFFLOAD_SCTP_CKSUM  0x00000010
-#define DEV_TX_OFFLOAD_TCP_TSO     0x00000020
-#define DEV_TX_OFFLOAD_UDP_TSO     0x00000040
-#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_QINQ_INSERT 0x00000100
-#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO    0x00000200    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GRE_TNL_TSO      0x00000400    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_IPIP_TNL_TSO     0x00000800    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO   0x00001000    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_MACSEC_INSERT    0x00002000
-#define DEV_TX_OFFLOAD_MT_LOCKFREE      0x00004000
+#define RTE_ETH_TX_OFFLOAD_VLAN_INSERT 0x00000001
+#define DEV_TX_OFFLOAD_VLAN_INSERT	RTE_ETH_TX_OFFLOAD_VLAN_INSERT
+#define RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  0x00000002
+#define DEV_TX_OFFLOAD_IPV4_CKSUM	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_UDP_CKSUM   0x00000004
+#define DEV_TX_OFFLOAD_UDP_CKSUM	RTE_ETH_TX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_CKSUM   0x00000008
+#define DEV_TX_OFFLOAD_TCP_CKSUM	RTE_ETH_TX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  0x00000010
+#define DEV_TX_OFFLOAD_SCTP_CKSUM	RTE_ETH_TX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_TSO     0x00000020
+#define DEV_TX_OFFLOAD_TCP_TSO		RTE_ETH_TX_OFFLOAD_TCP_TSO
+#define RTE_ETH_TX_OFFLOAD_UDP_TSO     0x00000040
+#define DEV_TX_OFFLOAD_UDP_TSO		RTE_ETH_TX_OFFLOAD_UDP_TSO
+#define RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM	RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_QINQ_INSERT 0x00000100
+#define DEV_TX_OFFLOAD_QINQ_INSERT	RTE_ETH_TX_OFFLOAD_QINQ_INSERT
+#define RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO    0x00000200    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO	RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO      0x00000400    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GRE_TNL_TSO	RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO     0x00000800    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_IPIP_TNL_TSO	RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO   0x00001000    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO	RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_MACSEC_INSERT    0x00002000
+#define DEV_TX_OFFLOAD_MACSEC_INSERT	RTE_ETH_TX_OFFLOAD_MACSEC_INSERT
+#define RTE_ETH_TX_OFFLOAD_MT_LOCKFREE      0x00004000
+#define DEV_TX_OFFLOAD_MT_LOCKFREE	RTE_ETH_TX_OFFLOAD_MT_LOCKFREE
 /**< Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
  * tx queue without SW lock.
  */
-#define DEV_TX_OFFLOAD_MULTI_SEGS	0x00008000
+#define RTE_ETH_TX_OFFLOAD_MULTI_SEGS	0x00008000
+#define DEV_TX_OFFLOAD_MULTI_SEGS	RTE_ETH_TX_OFFLOAD_MULTI_SEGS
 /**< Device supports multi segment send. */
-#define DEV_TX_OFFLOAD_MBUF_FAST_FREE	0x00010000
+#define RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE	0x00010000
+#define DEV_TX_OFFLOAD_MBUF_FAST_FREE	RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
 /**< Device supports optimization for fast release of mbufs.
  *   When set application must guarantee that per-queue all mbufs comes from
  *   the same mempool and has refcnt = 1.
  */
-#define DEV_TX_OFFLOAD_SECURITY         0x00020000
+#define RTE_ETH_TX_OFFLOAD_SECURITY         0x00020000
+#define DEV_TX_OFFLOAD_SECURITY	RTE_ETH_TX_OFFLOAD_SECURITY
 /**
  * Device supports generic UDP tunneled packet TSO.
  * Application must set PKT_TX_TUNNEL_UDP and other mbuf fields required
  * for tunnel TSO.
  */
-#define DEV_TX_OFFLOAD_UDP_TNL_TSO      0x00040000
+#define RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO      0x00040000
+#define DEV_TX_OFFLOAD_UDP_TNL_TSO	RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO
 /**
  * Device supports generic IP tunneled packet TSO.
  * Application must set PKT_TX_TUNNEL_IP and other mbuf fields required
  * for tunnel TSO.
  */
-#define DEV_TX_OFFLOAD_IP_TNL_TSO       0x00080000
+#define RTE_ETH_TX_OFFLOAD_IP_TNL_TSO       0x00080000
+#define DEV_TX_OFFLOAD_IP_TNL_TSO	RTE_ETH_TX_OFFLOAD_IP_TNL_TSO
 /** Device supports outer UDP checksum */
-#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM  0x00100000
+#define RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM  0x00100000
+#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM	RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM
 /**
  * Device sends on time read from RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
  * if RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME is set in ol_flags.
  * The mbuf field and flag are registered when the offload is configured.
  */
-#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP	RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP
 /*
  * If new Tx offload capabilities are defined, they also must be
  * mentioned in rte_tx_offload_names in rte_ethdev.c file.
@@ -1563,7 +1820,7 @@ struct rte_eth_dev_info {
 	uint16_t vmdq_pool_base;  /**< First ID of VMDQ pools. */
 	struct rte_eth_desc_lim rx_desc_lim;  /**< RX descriptors limits */
 	struct rte_eth_desc_lim tx_desc_lim;  /**< TX descriptors limits */
-	uint32_t speed_capa;  /**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+	uint32_t speed_capa;  /**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
 	/** Configured number of rx/tx queues */
 	uint16_t nb_rx_queues; /**< Number of RX queues. */
 	uint16_t nb_tx_queues; /**< Number of TX queues. */
@@ -1667,8 +1924,10 @@ struct rte_eth_xstat_name {
 	char name[RTE_ETH_XSTATS_NAME_SIZE]; /**< The statistic name. */
 };
 
-#define ETH_DCB_NUM_TCS    8
-#define ETH_MAX_VMDQ_POOL  64
+#define RTE_ETH_DCB_NUM_TCS    8
+#define ETH_DCB_NUM_TCS	RTE_ETH_DCB_NUM_TCS
+#define RTE_ETH_MAX_VMDQ_POOL  64
+#define ETH_MAX_VMDQ_POOL	RTE_ETH_MAX_VMDQ_POOL
 
 /**
  * A structure used to get the information of queue and
@@ -1679,12 +1938,12 @@ struct rte_eth_dcb_tc_queue_mapping {
 	struct {
 		uint16_t base;
 		uint16_t nb_queue;
-	} tc_rxq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+	} tc_rxq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
 	/** rx queues assigned to tc per Pool */
 	struct {
 		uint16_t base;
 		uint16_t nb_queue;
-	} tc_txq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+	} tc_txq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
 };
 
 /**
@@ -1693,8 +1952,8 @@ struct rte_eth_dcb_tc_queue_mapping {
  */
 struct rte_eth_dcb_info {
 	uint8_t nb_tcs;        /**< number of TCs */
-	uint8_t prio_tc[ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
-	uint8_t tc_bws[ETH_DCB_NUM_TCS]; /**< TX BW percentage for each TC */
+	uint8_t prio_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
+	uint8_t tc_bws[RTE_ETH_DCB_NUM_TCS]; /**< TX BW percentage for each TC */
 	/** rx queues assigned to tc */
 	struct rte_eth_dcb_tc_queue_mapping tc_queue;
 };
@@ -1718,7 +1977,7 @@ enum rte_eth_fec_mode {
 
 /* A structure used to get capabilities per link speed */
 struct rte_eth_fec_capa {
-	uint32_t speed; /**< Link speed (see ETH_SPEED_NUM_*) */
+	uint32_t speed; /**< Link speed (see RTE_ETH_SPEED_NUM_*) */
 	uint32_t capa;  /**< FEC capabilities bitmask */
 };
 
@@ -1741,13 +2000,17 @@ struct rte_eth_fec_capa {
 
 /**@{@name L2 tunnel configuration */
 /**< l2 tunnel enable mask */
-#define ETH_L2_TUNNEL_ENABLE_MASK       0x00000001
+#define RTE_ETH_L2_TUNNEL_ENABLE_MASK       0x00000001
+#define ETH_L2_TUNNEL_ENABLE_MASK	RTE_ETH_L2_TUNNEL_ENABLE_MASK
 /**< l2 tunnel insertion mask */
-#define ETH_L2_TUNNEL_INSERTION_MASK    0x00000002
+#define RTE_ETH_L2_TUNNEL_INSERTION_MASK    0x00000002
+#define ETH_L2_TUNNEL_INSERTION_MASK	RTE_ETH_L2_TUNNEL_INSERTION_MASK
 /**< l2 tunnel stripping mask */
-#define ETH_L2_TUNNEL_STRIPPING_MASK    0x00000004
+#define RTE_ETH_L2_TUNNEL_STRIPPING_MASK    0x00000004
+#define ETH_L2_TUNNEL_STRIPPING_MASK	RTE_ETH_L2_TUNNEL_STRIPPING_MASK
 /**< l2 tunnel forwarding mask */
-#define ETH_L2_TUNNEL_FORWARDING_MASK   0x00000008
+#define RTE_ETH_L2_TUNNEL_FORWARDING_MASK   0x00000008
+#define ETH_L2_TUNNEL_FORWARDING_MASK	RTE_ETH_L2_TUNNEL_FORWARDING_MASK
 /**@}*/
 
 /**
@@ -2058,14 +2321,14 @@ uint16_t rte_eth_dev_count_total(void);
  * @param speed
  *   Numerical speed value in Mbps
  * @param duplex
- *   ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
+ *   RTE_ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
  * @return
  *   0 if the speed cannot be mapped
  */
 uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
 
 /**
- * Get DEV_RX_OFFLOAD_* flag name.
+ * Get RTE_ETH_RX_OFFLOAD_* flag name.
  *
  * @param offload
  *   Offload flag.
@@ -2075,7 +2338,7 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
 const char *rte_eth_dev_rx_offload_name(uint64_t offload);
 
 /**
- * Get DEV_TX_OFFLOAD_* flag name.
+ * Get RTE_ETH_TX_OFFLOAD_* flag name.
  *
  * @param offload
  *   Offload flag.
@@ -2169,7 +2432,7 @@ rte_eth_dev_is_removed(uint16_t port_id);
  *   of the Prefetch, Host, and Write-Back threshold registers of the receive
  *   ring.
  *   In addition it contains the hardware offloads features to activate using
- *   the DEV_RX_OFFLOAD_* flags.
+ *   the RTE_ETH_RX_OFFLOAD_* flags.
  *   If an offloading set in rx_conf->offloads
  *   hasn't been set in the input argument eth_conf->rxmode.offloads
  *   to rte_eth_dev_configure(), it is a new added offloading, it must be
@@ -2746,7 +3009,7 @@ const char *rte_eth_link_speed_to_str(uint32_t link_speed);
  *
  * @param str
  *   A pointer to a string to be filled with textual representation of
- *   device status. At least ETH_LINK_MAX_STR_LEN bytes should be allocated to
+ *   device status. At least RTE_ETH_LINK_MAX_STR_LEN bytes should be allocated to
  *   store default link status text.
  * @param len
  *   Length of available memory at 'str' string.
@@ -3292,10 +3555,10 @@ int rte_eth_dev_set_vlan_ether_type(uint16_t port_id,
  *   The port identifier of the Ethernet device.
  * @param offload_mask
  *   The VLAN Offload bit mask can be mixed use with "OR"
- *       ETH_VLAN_STRIP_OFFLOAD
- *       ETH_VLAN_FILTER_OFFLOAD
- *       ETH_VLAN_EXTEND_OFFLOAD
- *       ETH_QINQ_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_FILTER_OFFLOAD
+ *       RTE_ETH_VLAN_EXTEND_OFFLOAD
+ *       RTE_ETH_QINQ_STRIP_OFFLOAD
  * @return
  *   - (0) if successful.
  *   - (-ENOTSUP) if hardware-assisted VLAN filtering not configured.
@@ -3311,10 +3574,10 @@ int rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask);
  *   The port identifier of the Ethernet device.
  * @return
  *   - (>0) if successful. Bit mask to indicate
- *       ETH_VLAN_STRIP_OFFLOAD
- *       ETH_VLAN_FILTER_OFFLOAD
- *       ETH_VLAN_EXTEND_OFFLOAD
- *       ETH_QINQ_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_FILTER_OFFLOAD
+ *       RTE_ETH_VLAN_EXTEND_OFFLOAD
+ *       RTE_ETH_QINQ_STRIP_OFFLOAD
  *   - (-ENODEV) if *port_id* invalid.
  */
 int rte_eth_dev_get_vlan_offload(uint16_t port_id);
@@ -5339,7 +5602,7 @@ uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
  * rte_eth_tx_burst() function must [attempt to] free the *rte_mbuf*  buffers
  * of those packets whose transmission was effectively completed.
  *
- * If the PMD is DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+ * If the PMD is RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
  * invoke this function concurrently on the same tx queue without SW lock.
  * @see rte_eth_dev_info_get, struct rte_eth_txconf::offloads
  *
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index d5bfdaaaf2ec..2b320092990b 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2710,7 +2710,7 @@ struct rte_flow_action_rss {
 	 * through.
 	 */
 	uint32_t level;
-	uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+	uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
 	uint32_t key_len; /**< Hash key length in bytes. */
 	uint32_t queue_num; /**< Number of entries in @p queue. */
 	const uint8_t *key; /**< Hash key. */
diff --git a/lib/gso/rte_gso.c b/lib/gso/rte_gso.c
index 0d02ec3cee05..119fdcac0b7f 100644
--- a/lib/gso/rte_gso.c
+++ b/lib/gso/rte_gso.c
@@ -15,13 +15,13 @@
 #include "gso_udp4.h"
 
 #define ILLEGAL_UDP_GSO_CTX(ctx) \
-	((((ctx)->gso_types & DEV_TX_OFFLOAD_UDP_TSO) == 0) || \
+	((((ctx)->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO) == 0) || \
 	 (ctx)->gso_size < RTE_GSO_UDP_SEG_SIZE_MIN)
 
 #define ILLEGAL_TCP_GSO_CTX(ctx) \
-	((((ctx)->gso_types & (DEV_TX_OFFLOAD_TCP_TSO | \
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-		DEV_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
+	((((ctx)->gso_types & (RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
 		(ctx)->gso_size < RTE_GSO_SEG_SIZE_MIN)
 
 int
@@ -54,28 +54,28 @@ rte_gso_segment(struct rte_mbuf *pkt,
 	ol_flags = pkt->ol_flags;
 
 	if ((IS_IPV4_VXLAN_TCP4(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
 			((IS_IPV4_GRE_TCP4(pkt->ol_flags) &&
-			 (gso_ctx->gso_types & DEV_TX_OFFLOAD_GRE_TNL_TSO)))) {
+			 (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))) {
 		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
 		ret = gso_tunnel_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_VXLAN_UDP4(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) &&
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
 		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
 		ret = gso_tunnel_udp4_segment(pkt, gso_size,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_TCP(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_TCP_TSO)) {
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
 		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
 		ret = gso_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_UDP(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
 		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
 		ret = gso_udp4_segment(pkt, gso_size, direct_pool,
 				indirect_pool, pkts_out, nb_pkts_out);
diff --git a/lib/gso/rte_gso.h b/lib/gso/rte_gso.h
index d93ee8e5b171..0a65afc11e64 100644
--- a/lib/gso/rte_gso.h
+++ b/lib/gso/rte_gso.h
@@ -52,11 +52,11 @@ struct rte_gso_ctx {
 	uint32_t gso_types;
 	/**< the bit mask of required GSO types. The GSO library
 	 * uses the same macros as that of describing device TX
-	 * offloading capabilities (i.e. DEV_TX_OFFLOAD_*_TSO) for
+	 * offloading capabilities (i.e. RTE_ETH_TX_OFFLOAD_*_TSO) for
 	 * gso_types.
 	 *
 	 * For example, if applications want to segment TCP/IPv4
-	 * packets, set DEV_TX_OFFLOAD_TCP_TSO in gso_types.
+	 * packets, set RTE_ETH_TX_OFFLOAD_TCP_TSO in gso_types.
 	 */
 	uint16_t gso_size;
 	/**< maximum size of an output GSO segment, including packet
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index d6f167994411..5a5b6b1e33c1 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -185,7 +185,7 @@ extern "C" {
  * The detection of PKT_RX_OUTER_L4_CKSUM_GOOD shall be based on the given
  * HW capability, At minimum, the PMD should support
  * PKT_RX_OUTER_L4_CKSUM_UNKNOWN and PKT_RX_OUTER_L4_CKSUM_BAD states
- * if the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
+ * if the RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
  */
 #define PKT_RX_OUTER_L4_CKSUM_MASK	((1ULL << 21) | (1ULL << 22))
 
@@ -208,7 +208,7 @@ extern "C" {
  * a) Fill outer_l2_len and outer_l3_len in mbuf.
  * b) Set the PKT_TX_OUTER_UDP_CKSUM flag.
  * c) Set the PKT_TX_OUTER_IPV4 or PKT_TX_OUTER_IPV6 flag.
- * 2) Configure DEV_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
+ * 2) Configure RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
  */
 #define PKT_TX_OUTER_UDP_CKSUM     (1ULL << 41)
 
@@ -253,7 +253,7 @@ extern "C" {
  * It can be used for tunnels which are not standards or listed above.
  * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_GRE
  * or PKT_TX_TUNNEL_IPIP if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_IP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_IP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
  * PKT_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
@@ -266,7 +266,7 @@ extern "C" {
  * It can be used for tunnels which are not standards or listed above.
  * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_VXLAN
  * if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_UDP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
  * PKT_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
diff --git a/lib/mbuf/rte_mbuf_dyn.h b/lib/mbuf/rte_mbuf_dyn.h
index fb03cf1dcf90..29abe8da53cf 100644
--- a/lib/mbuf/rte_mbuf_dyn.h
+++ b/lib/mbuf/rte_mbuf_dyn.h
@@ -37,7 +37,7 @@
  *   of the dynamic field to be registered:
  *   const struct rte_mbuf_dynfield rte_dynfield_my_feature = { ... };
  * - The application initializes the PMD, and asks for this feature
- *   at port initialization by passing DEV_RX_OFFLOAD_MY_FEATURE in
+ *   at port initialization by passing RTE_ETH_RX_OFFLOAD_MY_FEATURE in
  *   rxconf. This will make the PMD to register the field by calling
  *   rte_mbuf_dynfield_register(&rte_dynfield_my_feature). The PMD
  *   stores the returned offset.
-- 
2.31.1


^ permalink raw reply	[relevance 1%]
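
The mechanical pattern used throughout the patch above deserves a note:
every renamed macro keeps its old name as an alias, so existing
applications keep compiling while new code adopts the RTE_ETH_ prefix.
A minimal sketch of the idea (the names below are illustrative, not
taken from the patch):

#define RTE_ETH_EXAMPLE_FLAG	0x0001			/* new, namespaced name */
#define ETH_EXAMPLE_FLAG	RTE_ETH_EXAMPLE_FLAG	/* legacy alias */

/* Old and new spellings expand to the same value: */
_Static_assert(ETH_EXAMPLE_FLAG == RTE_ETH_EXAMPLE_FLAG, "alias mismatch");

Since the aliases are plain #defines, they cost nothing at run time and
can be dropped in one sweep once a deprecation period has passed.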

* Re: [dpdk-dev] [PATCH v3 2/2] security: add reserved bitfields
  2021-10-18  5:22  3%   ` [dpdk-dev] [PATCH v3 2/2] security: add reserved bitfields Akhil Goyal
@ 2021-10-18 15:39  0%     ` Akhil Goyal
  0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-10-18 15:39 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas, david.marchand, hemant.agrawal, Anoob Joseph,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
	konstantin.ananyev, radu.nicolau, ajit.khaparde,
	Nagadheeraj Rottela, Ankur Dwivedi, ciara.power, Ray Kinsella

> In struct rte_security_ipsec_sa_options, every new option added
> causes an ABI breakage. To avoid this, a reserved_opts bitfield
> is added for the remaining bits available in the structure.
> Now, for every new SA option, reserved_opts can be reduced and
> the new option added in its place.
> 
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Ray Kinsella <mdr@ashroe.eu>
> ---
> v3:
> - added a comment for requesting user to clear reserved_opts.
> - removed LIST_END enumerators patch. It will be handled separately.
> 
Series applied to dpdk-next-crypto
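
For context, the reserved-bits pattern described in the commit message
can be sketched as follows (struct and field names are illustrative,
not the actual rte_security layout):

struct example_sa_options {
	uint32_t esn : 1;		/* existing option bits */
	uint32_t udp_encap : 1;
	uint32_t reserved_opts : 30;	/* application must clear these */
};

/*
 * A later release consumes a reserved bit instead of growing the
 * structure, keeping size and layout - hence the ABI - unchanged:
 *
 *	uint32_t new_opt : 1;
 *	uint32_t reserved_opts : 29;
 */

Requiring applications to zero reserved_opts today is what lets the
library assign meaning to those bits tomorrow without breakage.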

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [EXT] Re: [PATCH v4 14/14] eventdev: mark trace variables as internal
  2021-10-17  5:58  0%     ` Jerin Jacob
@ 2021-10-18 15:06  0%       ` Pavan Nikhilesh Bhagavatula
  2021-10-19  7:01  3%         ` David Marchand
  0 siblings, 1 reply; 200+ results
From: Pavan Nikhilesh Bhagavatula @ 2021-10-18 15:06 UTC (permalink / raw)
  To: Jerin Jacob, Ray Kinsella, David Marchand
  Cc: Jerin Jacob Kollanukkaran, dpdk-dev

>On Sat, Oct 16, 2021 at 12:34 AM <pbhagavatula@marvell.com> wrote:
>>
>> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>>
>> Mark rte_trace global variables as internal, i.e. remove them
>> from the experimental section of the version map.
>> Some of them are used in inline APIs; mark those as global.
>>
>> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>> Acked-by: Ray Kinsella <mdr@ashroe.eu>
>> ---
>>  doc/guides/rel_notes/release_21_11.rst | 12 +++++
>>  lib/eventdev/version.map               | 71 ++++++++++++--------------
>>  2 files changed, 44 insertions(+), 39 deletions(-)
>>
>> diff --git a/doc/guides/rel_notes/release_21_11.rst
>b/doc/guides/rel_notes/release_21_11.rst
>> index 38e601c236..5b4a05c3ae 100644
>> --- a/doc/guides/rel_notes/release_21_11.rst
>> +++ b/doc/guides/rel_notes/release_21_11.rst
>> @@ -226,6 +226,9 @@ API Changes
>>    the crypto/security operation. This field will be used to communicate
>>    events such as soft expiry with IPsec in lookaside mode.
>>
>> +* eventdev: Event vector configuration APIs have been made stable.
>> +  Move memory used by timer adapters to hugepage. This will
>prevent TLB misses
>> +  if any and aligns to memory structure of other subsystems.
>>
>>  ABI Changes
>>  -----------
>> @@ -277,6 +280,15 @@ ABI Changes
>>    were added in structure ``rte_event_eth_rx_adapter_stats`` to get
>additional
>>    status.
>>
>> +* eventdev: A new structure ``rte_event_fp_ops`` has been added
>which is now used
>> +  by the fastpath inline functions. The structures ``rte_eventdev``,
>> +  ``rte_eventdev_data`` have been made internal.
>``rte_eventdevs[]`` can't be
>> +  accessed directly by user any more. This change is transparent to
>both
>> +  applications and PMDs.
>> +
>> +* eventdev: Re-arrange fields in ``rte_event_timer`` to remove
>holes.
>> +  ``rte_event_timer_adapter_pmd.h`` has been made internal.
>
>Looks good. Please fix the following. If there are no objections, I
>will merge the next version.
>
>1) Please move the doc update to respective patches

Ack, will move in next version.

>2) Following checkpath issue
>[for-main]dell[dpdk-next-eventdev] $ ./devtools/checkpatches.sh -n
>14
>
>### eventdev: move inline APIs into separate structure
>
>INFO: symbol event_dev_fp_ops_reset has been added to the
>INTERNAL
>section of the version map
>INFO: symbol event_dev_fp_ops_set has been added to the INTERNAL
>section of the version map
>INFO: symbol event_dev_probing_finish has been added to the
>INTERNAL
>section of the version map

These can be ignored as they are internal

>ERROR: symbol rte_event_fp_ops is added in the DPDK_22 section, but
>is
>expected to be added in the EXPERIMENTAL section of the version map

This is a replacement for rte_eventdevs; the ethdev rework also doesn’t mark
it as experimental. @David Marchand @Ray Kinsella, any opinions?
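
For readers following along: a DPDK version map distinguishes stable,
experimental and internal symbols, and the checker flags additions
accordingly. A schematic example (symbol names are placeholders):

DPDK_22 {	# stable section, covered by ABI compatibility guarantees
	rte_example_stable_func;
};

EXPERIMENTAL {	# may change without notice
	rte_example_new_func;
};

INTERNAL {	# for use between DPDK libraries and drivers only
	example_internal_helper;
};

New public symbols normally start in EXPERIMENTAL; the open question
above is whether a symbol that merely replaces an existing stable one
(rte_eventdevs) may land directly in the versioned section.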


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] ring: fix size of name array in ring structure
  @ 2021-10-18 14:54  0% ` Honnappa Nagarahalli
  2021-10-19  7:01  0%   ` Tu, Lijuan
  2021-10-20 23:06  0% ` Ananyev, Konstantin
  1 sibling, 1 reply; 200+ results
From: Honnappa Nagarahalli @ 2021-10-18 14:54 UTC (permalink / raw)
  To: Honnappa Nagarahalli, dev, andrew.rybchenko, konstantin.ananyev
  Cc: nd, zoltan.kiss, Lijuan Tu, nd

This patch has a CI failure in DTS, in test_scatter_mbuf_2048, for the Fortville_Spirit NIC. I am not sure how this change is related to the failure. The log is as follows:

TestScatter: Test Case test_scatter_mbuf_2048 Result FAILED: 'packet receive error'

Has anyone seen this error? Is this a known issue?

Thanks,
Honnappa

> -----Original Message-----
> From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Sent: Thursday, October 14, 2021 3:56 PM
> To: dev@dpdk.org; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>; andrew.rybchenko@oktetlabs.ru;
> konstantin.ananyev@intel.com
> Cc: nd <nd@arm.com>; zoltan.kiss@schaman.hu
> Subject: [PATCH] ring: fix size of name array in ring structure
> 
> Use correct define for the name array size. The change breaks ABI and hence
> cannot be backported to stable branches.
> 
> Fixes: 38c9817ee1d8 ("mempool: adjust name size in related data types")
> Cc: zoltan.kiss@schaman.hu
> 
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> ---
>  lib/ring/rte_ring_core.h | 7 +------
>  1 file changed, 1 insertion(+), 6 deletions(-)
> 
> diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h index
> 31f7200fa9..46ad584f9c 100644
> --- a/lib/ring/rte_ring_core.h
> +++ b/lib/ring/rte_ring_core.h
> @@ -118,12 +118,7 @@ struct rte_ring_hts_headtail {
>   * a problem.
>   */
>  struct rte_ring {
> -	/*
> -	 * Note: this field kept the RTE_MEMZONE_NAMESIZE size due to ABI
> -	 * compatibility requirements, it could be changed to
> RTE_RING_NAMESIZE
> -	 * next time the ABI changes
> -	 */
> -	char name[RTE_MEMZONE_NAMESIZE] __rte_cache_aligned;
> +	char name[RTE_RING_NAMESIZE] __rte_cache_aligned;
>  	/**< Name of the ring. */
>  	int flags;               /**< Flags supplied at creation. */
>  	const struct rte_memzone *memzone;
> --
> 2.25.1


^ permalink raw reply	[relevance 0%]
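
To make concrete why a one-line array-size change like the quoted patch
is ABI-breaking: resizing an embedded array shifts the offset of every
member behind it, so binaries built against the old layout read the
wrong bytes. A deliberately simplified illustration (not the real
rte_ring layout):

struct ring_v1 { char name[32]; int flags; };	/* flags at offset 32 */
struct ring_v2 { char name[64]; int flags; };	/* flags at offset 64 */

/* An app compiled against ring_v1 but running with a ring_v2 library
 * would read flags from offset 32 - i.e. from inside name[].
 */

Hence such fixes wait for an ABI-breaking release and are not
backported, as the commit message notes.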

* [dpdk-dev] [PATCH v3 6/7] cryptodev: update fast path APIs to use new flat array
    2021-10-18 14:41  2%     ` [dpdk-dev] [PATCH v3 3/7] cryptodev: move inline APIs into separate structure Akhil Goyal
@ 2021-10-18 14:42  3%     ` Akhil Goyal
  2021-10-19 12:28  0%       ` Ananyev, Konstantin
    2 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-18 14:42 UTC (permalink / raw)
  To: dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
	konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
	adwivedi, ciara.power, Akhil Goyal

Rework fast-path cryptodev functions to use rte_crypto_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in user applications are required)
and PMD developers (no changes in PMDs are required).

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
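
To illustrate the transparency claim, a hypothetical caller (burst size
and error handling are illustrative):

	struct rte_crypto_op *deq_ops[32];
	uint16_t nb_deq;

	/* Identical call before and after the rework; internally it now
	 * reads rte_crypto_fp_ops[dev_id] instead of dereferencing
	 * rte_cryptodevs[dev_id].
	 */
	nb_deq = rte_cryptodev_dequeue_burst(dev_id, qp_id, deq_ops,
					     RTE_DIM(deq_ops));

Because rte_crypto_fp_ops holds only what the fast path needs (the two
burst pointers plus queue-pair data) and is cache aligned, the compiler
can fetch it without touching the rest of struct rte_cryptodev.
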
---
 lib/cryptodev/rte_cryptodev.h | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index ce0dca72be..56e3868ada 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -1832,13 +1832,18 @@ static inline uint16_t
 rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
 		struct rte_crypto_op **ops, uint16_t nb_ops)
 {
-	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+	const struct rte_crypto_fp_ops *fp_ops;
+	void *qp;
 
 	rte_cryptodev_trace_dequeue_burst(dev_id, qp_id, (void **)ops, nb_ops);
-	nb_ops = (*dev->dequeue_burst)
-			(dev->data->queue_pairs[qp_id], ops, nb_ops);
+
+	fp_ops = &rte_crypto_fp_ops[dev_id];
+	qp = fp_ops->qp.data[qp_id];
+
+	nb_ops = fp_ops->dequeue_burst(qp, ops, nb_ops);
+
 #ifdef RTE_CRYPTO_CALLBACKS
-	if (unlikely(dev->deq_cbs != NULL)) {
+	if (unlikely(fp_ops->qp.deq_cb != NULL)) {
 		struct rte_cryptodev_cb_rcu *list;
 		struct rte_cryptodev_cb *cb;
 
@@ -1848,7 +1853,7 @@ rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
 		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 		 * not required.
 		 */
-		list = &dev->deq_cbs[qp_id];
+		list = &fp_ops->qp.deq_cb[qp_id];
 		rte_rcu_qsbr_thread_online(list->qsbr, 0);
 		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
 
@@ -1899,10 +1904,13 @@ static inline uint16_t
 rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
 		struct rte_crypto_op **ops, uint16_t nb_ops)
 {
-	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+	const struct rte_crypto_fp_ops *fp_ops;
+	void *qp;
 
+	fp_ops = &rte_crypto_fp_ops[dev_id];
+	qp = fp_ops->qp.data[qp_id];
 #ifdef RTE_CRYPTO_CALLBACKS
-	if (unlikely(dev->enq_cbs != NULL)) {
+	if (unlikely(fp_ops->qp.enq_cb != NULL)) {
 		struct rte_cryptodev_cb_rcu *list;
 		struct rte_cryptodev_cb *cb;
 
@@ -1912,7 +1920,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
 		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 		 * not required.
 		 */
-		list = &dev->enq_cbs[qp_id];
+		list = &fp_ops->qp.enq_cb[qp_id];
 		rte_rcu_qsbr_thread_online(list->qsbr, 0);
 		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
 
@@ -1927,8 +1935,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
 #endif
 
 	rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops, nb_ops);
-	return (*dev->enqueue_burst)(
-			dev->data->queue_pairs[qp_id], ops, nb_ops);
+	return fp_ops->enqueue_burst(qp, ops, nb_ops);
 }
 
 
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v3 3/7] cryptodev: move inline APIs into separate structure
  @ 2021-10-18 14:41  2%     ` Akhil Goyal
  2021-10-19 16:00  0%       ` Zhang, Roy Fan
  2021-10-18 14:42  3%     ` [dpdk-dev] [PATCH v3 6/7] cryptodev: update fast path APIs to use new flat array Akhil Goyal
    2 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-18 14:41 UTC (permalink / raw)
  To: dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
	konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
	adwivedi, ciara.power, Akhil Goyal, Rebecca Troy

Move fastpath inline function pointers from rte_cryptodev into a
separate structure accessed via a flat array.
The intention is to make rte_cryptodev and related structures private
to avoid future API/ABI breakages.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Tested-by: Rebecca Troy <rebecca.troy@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
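
One user-visible consequence of the dummy handlers installed by
cryptodev_fp_ops_reset() is that a burst call on a device that was
never started (or has been stopped) fails gracefully rather than
dereferencing an invalid pointer. A hypothetical check (the recovery
policy is up to the application):

	uint16_t n = rte_cryptodev_dequeue_burst(dev_id, qp_id, ops, nb_ops);

	if (n == 0 && rte_errno == ENOTSUP) {
		/* fp_ops still point at dummy_crypto_dequeue_burst():
		 * the device is not started.
		 */
	}

rte_errno is only meaningful here because the dummy handlers set it; a
started device legitimately returning zero ops is not an error.
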
---
 lib/cryptodev/cryptodev_pmd.c      | 53 +++++++++++++++++++++++++++++-
 lib/cryptodev/cryptodev_pmd.h      | 11 +++++++
 lib/cryptodev/rte_cryptodev.c      | 19 +++++++++++
 lib/cryptodev/rte_cryptodev_core.h | 29 ++++++++++++++++
 lib/cryptodev/version.map          |  5 +++
 5 files changed, 116 insertions(+), 1 deletion(-)

diff --git a/lib/cryptodev/cryptodev_pmd.c b/lib/cryptodev/cryptodev_pmd.c
index 44a70ecb35..fd74543682 100644
--- a/lib/cryptodev/cryptodev_pmd.c
+++ b/lib/cryptodev/cryptodev_pmd.c
@@ -3,7 +3,7 @@
  */
 
 #include <sys/queue.h>
-
+#include <rte_errno.h>
 #include <rte_string_fns.h>
 #include <rte_malloc.h>
 
@@ -160,3 +160,54 @@ rte_cryptodev_pmd_destroy(struct rte_cryptodev *cryptodev)
 
 	return 0;
 }
+
+static uint16_t
+dummy_crypto_enqueue_burst(__rte_unused void *qp,
+			   __rte_unused struct rte_crypto_op **ops,
+			   __rte_unused uint16_t nb_ops)
+{
+	CDEV_LOG_ERR(
+		"crypto enqueue burst requested for unconfigured device");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+dummy_crypto_dequeue_burst(__rte_unused void *qp,
+			   __rte_unused struct rte_crypto_op **ops,
+			   __rte_unused uint16_t nb_ops)
+{
+	CDEV_LOG_ERR(
+		"crypto dequeue burst requested for unconfigured device");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+void
+cryptodev_fp_ops_reset(struct rte_crypto_fp_ops *fp_ops)
+{
+	static struct rte_cryptodev_cb_rcu dummy_cb[RTE_MAX_QUEUES_PER_PORT];
+	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+	static const struct rte_crypto_fp_ops dummy = {
+		.enqueue_burst = dummy_crypto_enqueue_burst,
+		.dequeue_burst = dummy_crypto_dequeue_burst,
+		.qp = {
+			.data = dummy_data,
+			.enq_cb = dummy_cb,
+			.deq_cb = dummy_cb,
+		},
+	};
+
+	*fp_ops = dummy;
+}
+
+void
+cryptodev_fp_ops_set(struct rte_crypto_fp_ops *fp_ops,
+		     const struct rte_cryptodev *dev)
+{
+	fp_ops->enqueue_burst = dev->enqueue_burst;
+	fp_ops->dequeue_burst = dev->dequeue_burst;
+	fp_ops->qp.data = dev->data->queue_pairs;
+	fp_ops->qp.enq_cb = dev->enq_cbs;
+	fp_ops->qp.deq_cb = dev->deq_cbs;
+}
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index 36606dd10b..a71edbb991 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -516,6 +516,17 @@ RTE_INIT(init_ ##driver_id)\
 	driver_id = rte_cryptodev_allocate_driver(&crypto_drv, &(drv));\
 }
 
+/* Reset crypto device fastpath APIs to dummy values. */
+__rte_internal
+void
+cryptodev_fp_ops_reset(struct rte_crypto_fp_ops *fp_ops);
+
+/* Setup crypto device fastpath APIs. */
+__rte_internal
+void
+cryptodev_fp_ops_set(struct rte_crypto_fp_ops *fp_ops,
+		     const struct rte_cryptodev *dev);
+
 static inline void *
 get_sym_session_private_data(const struct rte_cryptodev_sym_session *sess,
 		uint8_t driver_id) {
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index eb86e629aa..305e013ebb 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -53,6 +53,9 @@ static struct rte_cryptodev_global cryptodev_globals = {
 		.nb_devs		= 0
 };
 
+/* Public fastpath APIs. */
+struct rte_crypto_fp_ops rte_crypto_fp_ops[RTE_CRYPTO_MAX_DEVS];
+
 /* spinlock for crypto device callbacks */
 static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
 
@@ -917,6 +920,8 @@ rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
 
 	dev_id = cryptodev->data->dev_id;
 
+	cryptodev_fp_ops_reset(rte_crypto_fp_ops + dev_id);
+
 	/* Close device only if device operations have been set */
 	if (cryptodev->dev_ops) {
 		ret = rte_cryptodev_close(dev_id);
@@ -1080,6 +1085,9 @@ rte_cryptodev_start(uint8_t dev_id)
 	}
 
 	diag = (*dev->dev_ops->dev_start)(dev);
+	/* expose selection of PMD fast-path functions */
+	cryptodev_fp_ops_set(rte_crypto_fp_ops + dev_id, dev);
+
 	rte_cryptodev_trace_start(dev_id, diag);
 	if (diag == 0)
 		dev->data->dev_started = 1;
@@ -1109,6 +1117,9 @@ rte_cryptodev_stop(uint8_t dev_id)
 		return;
 	}
 
+	/* point fast-path functions to dummy ones */
+	cryptodev_fp_ops_reset(rte_crypto_fp_ops + dev_id);
+
 	(*dev->dev_ops->dev_stop)(dev);
 	rte_cryptodev_trace_stop(dev_id);
 	dev->data->dev_started = 0;
@@ -2411,3 +2422,11 @@ rte_cryptodev_allocate_driver(struct cryptodev_driver *crypto_drv,
 
 	return nb_drivers++;
 }
+
+RTE_INIT(cryptodev_init_fp_ops)
+{
+	uint32_t i;
+
+	for (i = 0; i != RTE_DIM(rte_crypto_fp_ops); i++)
+		cryptodev_fp_ops_reset(rte_crypto_fp_ops + i);
+}
diff --git a/lib/cryptodev/rte_cryptodev_core.h b/lib/cryptodev/rte_cryptodev_core.h
index 1633e55889..e9e9a44b3c 100644
--- a/lib/cryptodev/rte_cryptodev_core.h
+++ b/lib/cryptodev/rte_cryptodev_core.h
@@ -25,6 +25,35 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
 		struct rte_crypto_op **ops,	uint16_t nb_ops);
 /**< Enqueue packets for processing on queue pair of a device. */
 
+/**
+ * @internal
+ * Structure used to hold opaque pointers to internal ethdev Rx/Tx
+ * queues data.
+ * The main purpose to expose these pointers at all - allow compiler
+ * to fetch this data for fast-path cryptodev inline functions in advance.
+ */
+struct rte_cryptodev_qpdata {
+	/** points to array of internal queue pair data pointers. */
+	void **data;
+	/** points to array of enqueue callback data pointers */
+	struct rte_cryptodev_cb_rcu *enq_cb;
+	/** points to array of dequeue callback data pointers */
+	struct rte_cryptodev_cb_rcu *deq_cb;
+};
+
+struct rte_crypto_fp_ops {
+	/** PMD enqueue burst function. */
+	enqueue_pkt_burst_t enqueue_burst;
+	/** PMD dequeue burst function. */
+	dequeue_pkt_burst_t dequeue_burst;
+	/** Internal queue pair data pointers. */
+	struct rte_cryptodev_qpdata qp;
+	/** Reserved for future ops. */
+	uintptr_t reserved[4];
+} __rte_cache_aligned;
+
+extern struct rte_crypto_fp_ops rte_crypto_fp_ops[RTE_CRYPTO_MAX_DEVS];
+
 /**
  * @internal
  * The data part, with no function pointers, associated with each device.
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index 43cf937e40..ed62ced221 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -45,6 +45,9 @@ DPDK_22 {
 	rte_cryptodev_sym_session_init;
 	rte_cryptodevs;
 
+	#added in 21.11
+	rte_crypto_fp_ops;
+
 	local: *;
 };
 
@@ -109,6 +112,8 @@ EXPERIMENTAL {
 INTERNAL {
 	global:
 
+	cryptodev_fp_ops_reset;
+	cryptodev_fp_ops_set;
 	rte_cryptodev_allocate_driver;
 	rte_cryptodev_pmd_allocate;
 	rte_cryptodev_pmd_callback_process;
-- 
2.25.1


^ permalink raw reply	[relevance 2%]

* [dpdk-dev] [PATCH v8 2/4] mempool: add non-IO flag
  @ 2021-10-18 14:40  3%           ` Dmitry Kozlyuk
    1 sibling, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-18 14:40 UTC (permalink / raw)
  To: dev
  Cc: David Marchand, Matan Azrad, Andrew Rybchenko, Maryam Tahhan,
	Reshma Pattan, Olivier Matz

Mempool is a generic allocator: it is not necessarily used for device
IO operations, nor is its memory necessarily used for DMA.
Add MEMPOOL_F_NON_IO flag to mark such mempools automatically
a) if their objects are not contiguous;
b) if IOVA is not available for any object.
Other components can inspect this flag
in order to optimize their memory management.

Discussion: https://mails.dpdk.org/archives/dev/2021-August/216654.html

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 app/proc-info/main.c                   |   6 +-
 app/test/test_mempool.c                | 114 +++++++++++++++++++++++++
 doc/guides/rel_notes/release_21_11.rst |   3 +
 lib/mempool/rte_mempool.c              |  10 +++
 lib/mempool/rte_mempool.h              |   2 +
 5 files changed, 133 insertions(+), 2 deletions(-)

diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index a8e928fa9f..8ec9cadd79 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -1295,7 +1295,8 @@ show_mempool(char *name)
 				"\t  -- No cache align (%c)\n"
 				"\t  -- SP put (%c), SC get (%c)\n"
 				"\t  -- Pool created (%c)\n"
-				"\t  -- No IOVA config (%c)\n",
+				"\t  -- No IOVA config (%c)\n"
+				"\t  -- Not used for IO (%c)\n",
 				ptr->name,
 				ptr->socket_id,
 				(flags & MEMPOOL_F_NO_SPREAD) ? 'y' : 'n',
@@ -1303,7 +1304,8 @@ show_mempool(char *name)
 				(flags & MEMPOOL_F_SP_PUT) ? 'y' : 'n',
 				(flags & MEMPOOL_F_SC_GET) ? 'y' : 'n',
 				(flags & MEMPOOL_F_POOL_CREATED) ? 'y' : 'n',
-				(flags & MEMPOOL_F_NO_IOVA_CONTIG) ? 'y' : 'n');
+				(flags & MEMPOOL_F_NO_IOVA_CONTIG) ? 'y' : 'n',
+				(flags & MEMPOOL_F_NON_IO) ? 'y' : 'n');
 			printf("  - Size %u Cache %u element %u\n"
 				"  - header %u trailer %u\n"
 				"  - private data size %u\n",
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index c39c83256e..81800b7122 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -12,6 +12,7 @@
 #include <sys/queue.h>
 
 #include <rte_common.h>
+#include <rte_eal_paging.h>
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_errno.h>
@@ -729,6 +730,111 @@ test_mempool_events_safety(void)
 #pragma pop_macro("RTE_TEST_TRACE_FAILURE")
 }
 
+#pragma push_macro("RTE_TEST_TRACE_FAILURE")
+#undef RTE_TEST_TRACE_FAILURE
+#define RTE_TEST_TRACE_FAILURE(...) do { \
+		ret = TEST_FAILED; \
+		goto exit; \
+	} while (0)
+
+static int
+test_mempool_flag_non_io_set_when_no_iova_contig_set(void)
+{
+	struct rte_mempool *mp = NULL;
+	int ret;
+
+	mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+				      MEMPOOL_ELT_SIZE, 0, 0,
+				      SOCKET_ID_ANY, MEMPOOL_F_NO_IOVA_CONTIG);
+	RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+				 rte_strerror(rte_errno));
+	rte_mempool_set_ops_byname(mp, rte_mbuf_best_mempool_ops(), NULL);
+	ret = rte_mempool_populate_default(mp);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(mp->flags & MEMPOOL_F_NON_IO,
+			"NON_IO flag is not set when NO_IOVA_CONTIG is set");
+	ret = TEST_SUCCESS;
+exit:
+	rte_mempool_free(mp);
+	return ret;
+}
+
+static int
+test_mempool_flag_non_io_unset_when_populated_with_valid_iova(void)
+{
+	void *virt = NULL;
+	rte_iova_t iova;
+	size_t page_size = RTE_PGSIZE_2M;
+	struct rte_mempool *mp = NULL;
+	int ret;
+
+	/*
+	 * Since objects from the pool are never used in the test,
+	 * we don't care about contiguous IOVA; on the other hand,
+	 * requiring it could cause spurious test failures.
+	 */
+	virt = rte_malloc("test_mempool", 3 * page_size, page_size);
+	RTE_TEST_ASSERT_NOT_NULL(virt, "Cannot allocate memory");
+	iova = rte_mem_virt2iova(virt);
+	RTE_TEST_ASSERT_NOT_EQUAL(iova,  RTE_BAD_IOVA, "Cannot get IOVA");
+	mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+				      MEMPOOL_ELT_SIZE, 0, 0,
+				      SOCKET_ID_ANY, 0);
+	RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+				 rte_strerror(rte_errno));
+
+	ret = rte_mempool_populate_iova(mp, RTE_PTR_ADD(virt, 1 * page_size),
+					RTE_BAD_IOVA, page_size, NULL, NULL);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(mp->flags & MEMPOOL_F_NON_IO,
+			"NON_IO flag is not set when mempool is populated with only RTE_BAD_IOVA");
+
+	ret = rte_mempool_populate_iova(mp, virt, iova, page_size, NULL, NULL);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(!(mp->flags & MEMPOOL_F_NON_IO),
+			"NON_IO flag is not unset when mempool is populated with valid IOVA");
+
+	ret = rte_mempool_populate_iova(mp, RTE_PTR_ADD(virt, 2 * page_size),
+					RTE_BAD_IOVA, page_size, NULL, NULL);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(!(mp->flags & MEMPOOL_F_NON_IO),
+			"NON_IO flag is set even when some objects have valid IOVA");
+	ret = TEST_SUCCESS;
+
+exit:
+	rte_mempool_free(mp);
+	rte_free(virt);
+	return ret;
+}
+
+static int
+test_mempool_flag_non_io_unset_by_default(void)
+{
+	struct rte_mempool *mp;
+	int ret;
+
+	mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+				      MEMPOOL_ELT_SIZE, 0, 0,
+				      SOCKET_ID_ANY, 0);
+	RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+				 rte_strerror(rte_errno));
+	ret = rte_mempool_populate_default(mp);
+	RTE_TEST_ASSERT_EQUAL(ret, (int)mp->size, "Failed to populate mempool: %s",
+			      rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(!(mp->flags & MEMPOOL_F_NON_IO),
+			"NON_IO flag is set by default");
+	ret = TEST_SUCCESS;
+exit:
+	rte_mempool_free(mp);
+	return ret;
+}
+
+#pragma pop_macro("RTE_TEST_TRACE_FAILURE")
+
 static int
 test_mempool(void)
 {
@@ -914,6 +1020,14 @@ test_mempool(void)
 	if (test_mempool_events_safety() < 0)
 		GOTO_ERR(ret, err);
 
+	/* test NON_IO flag inference */
+	if (test_mempool_flag_non_io_unset_by_default() < 0)
+		GOTO_ERR(ret, err);
+	if (test_mempool_flag_non_io_set_when_no_iova_contig_set() < 0)
+		GOTO_ERR(ret, err);
+	if (test_mempool_flag_non_io_unset_when_populated_with_valid_iova() < 0)
+		GOTO_ERR(ret, err);
+
 	rte_mempool_list_dump(stdout);
 
 	ret = 0;
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d5435a64aa..f6bb5adeff 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -237,6 +237,9 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* mempool: Added ``MEMPOOL_F_NON_IO`` flag to give a hint to DPDK components
+  that objects from this pool will not be used for device IO (e.g. DMA).
+
 
 ABI Changes
 -----------
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 8810d08ab5..7d7d97d85d 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -372,6 +372,10 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next);
 	mp->nb_mem_chunks++;
 
+	/* At least some objects in the pool can now be used for IO. */
+	if (iova != RTE_BAD_IOVA)
+		mp->flags &= ~MEMPOOL_F_NON_IO;
+
 	/* Report the mempool as ready only when fully populated. */
 	if (mp->populated_size >= mp->size)
 		mempool_event_callback_invoke(RTE_MEMPOOL_EVENT_READY, mp);
@@ -851,6 +855,12 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		return NULL;
 	}
 
+	/*
+	 * No objects in the pool can be used for IO until it's populated
+	 * with at least some objects with valid IOVA.
+	 */
+	flags |= MEMPOOL_F_NON_IO;
+
 	/* "no cache align" imply "no spread" */
 	if (flags & MEMPOOL_F_NO_CACHE_ALIGN)
 		flags |= MEMPOOL_F_NO_SPREAD;
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 5799d4a705..b2e20c8855 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -257,6 +257,8 @@ struct rte_mempool {
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
+/** Internal: no object from the pool can be used for device IO (DMA). */
+#define MEMPOOL_F_NON_IO         0x0040
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v5] lib/cmdline: release cl when cmdline exit
  @ 2021-10-18 13:58  4% ` zhihongx.peng
  2021-10-20  9:22  0%   ` Peng, ZhihongX
  0 siblings, 1 reply; 200+ results
From: zhihongx.peng @ 2021-10-18 13:58 UTC (permalink / raw)
  To: olivier.matz, dmitry.kozliuk; +Cc: dev, Zhihong Peng

From: Zhihong Peng <zhihongx.peng@intel.com>

cl is allocated in the cmdline_stdin_new function, so releasing it in
the cmdline_stdin_exit function is logical; this way cl will not have
to be released separately.
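
With this change, typical usage becomes (sketch; main_ctx stands for
the application's parse context):

	struct cmdline *cl;

	cl = cmdline_stdin_new(main_ctx, "example> ");
	if (cl == NULL)
		return -1;
	cmdline_interact(cl);
	cmdline_stdin_exit(cl);	/* also frees cl, no cmdline_free() needed */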

Fixes: af75078fece3 ("first public release")
Cc: intel.com

Signed-off-by: Zhihong Peng <zhihongx.peng@intel.com>
---
 app/test/test.c                        | 1 -
 app/test/test_cmdline_lib.c            | 1 -
 doc/guides/rel_notes/release_21_11.rst | 3 +++
 lib/cmdline/cmdline_socket.c           | 1 +
 4 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/app/test/test.c b/app/test/test.c
index 173d202e47..5194131026 100644
--- a/app/test/test.c
+++ b/app/test/test.c
@@ -233,7 +233,6 @@ main(int argc, char **argv)
 
 		cmdline_interact(cl);
 		cmdline_stdin_exit(cl);
-		cmdline_free(cl);
 	}
 #endif
 	ret = 0;
diff --git a/app/test/test_cmdline_lib.c b/app/test/test_cmdline_lib.c
index d5a09b4541..6bcfa6511e 100644
--- a/app/test/test_cmdline_lib.c
+++ b/app/test/test_cmdline_lib.c
@@ -174,7 +174,6 @@ test_cmdline_socket_fns(void)
 	/* void functions */
 	cmdline_stdin_exit(NULL);
 
-	cmdline_free(cl);
 	return 0;
 error:
 	printf("Error: function accepted null parameter!\n");
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d5435a64aa..6aa98d1e34 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -237,6 +237,9 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* cmdline: ``cmdline_stdin_exit()`` now frees the ``cmdline`` structure.
+  Calls to ``cmdline_free()`` after it need to be deleted from applications.
+
 
 ABI Changes
 -----------
diff --git a/lib/cmdline/cmdline_socket.c b/lib/cmdline/cmdline_socket.c
index 998e8ade25..ebd5343754 100644
--- a/lib/cmdline/cmdline_socket.c
+++ b/lib/cmdline/cmdline_socket.c
@@ -53,4 +53,5 @@ cmdline_stdin_exit(struct cmdline *cl)
 		return;
 
 	terminal_restore(cl);
+	cmdline_free(cl);
 }
-- 
2.25.1


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v2 1/5] hash: add new toeplitz hash implementation
  2021-10-15 16:58  3%   ` Stephen Hemminger
  2021-10-18 10:40  3%     ` Ananyev, Konstantin
@ 2021-10-18 11:08  0%     ` Medvedkin, Vladimir
  1 sibling, 0 replies; 200+ results
From: Medvedkin, Vladimir @ 2021-10-18 11:08 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: dev, yipeng1.wang, sameh.gobriel, bruce.richardson, konstantin.ananyev

Hi Stephen,

Thanks for reviewing

On 15/10/2021 18:58, Stephen Hemminger wrote:
> On Fri, 15 Oct 2021 10:30:02 +0100
> Vladimir Medvedkin <vladimir.medvedkin@intel.com> wrote:
> 
>> +			m[i * 8 + j] = (rss_key[i] << j)|
>> +				(uint8_t)((uint16_t)(rss_key[i + 1]) >>
>> +				(8 - j));
>> +		}
> 
> This ends up being harder than necessary to read. Maybe split into
> multiple statements and/or use temporary variable.
> 
>> +RTE_INIT(rte_thash_gfni_init)
>> +{
>> +	rte_thash_gfni_supported = 0;
> 
> Not necessary in C globals are initialized to zero by default.
> 
> By removing that the constructor can be totally behind #ifdef
> 
>> +__rte_internal
>> +static inline __m512i
>> +__rte_thash_gfni(const uint64_t *mtrx, const uint8_t *tuple,
>> +	const uint8_t *secondary_tuple, int len)
>> +{
>> +	__m512i permute_idx = _mm512_set_epi8(7, 6, 5, 4, 7, 6, 5, 4,
>> +						6, 5, 4, 3, 6, 5, 4, 3,
>> +						5, 4, 3, 2, 5, 4, 3, 2,
>> +						4, 3, 2, 1, 4, 3, 2, 1,
>> +						3, 2, 1, 0, 3, 2, 1, 0,
>> +						2, 1, 0, -1, 2, 1, 0, -1,
>> +						1, 0, -1, -2, 1, 0, -1, -2,
>> +						0, -1, -2, -3, 0, -1, -2, -3);
> 
> NAK
> 
> Please don't put the implementation in an inline. This makes it harder
> to support (API/ABI) and blocks other architectures from implementing
> same thing with different instructions.
> 

By making this function non-inline, its performance drops by about
2x. Compiler optimization (at least with respect to the len argument)
helps a lot in the implementation.


-- 
Regards,
Vladimir

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2 1/5] hash: add new toeplitz hash implementation
  2021-10-15 16:58  3%   ` Stephen Hemminger
@ 2021-10-18 10:40  3%     ` Ananyev, Konstantin
  2021-10-19  1:15  0%       ` Stephen Hemminger
  2021-10-18 11:08  0%     ` Medvedkin, Vladimir
  1 sibling, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-10-18 10:40 UTC (permalink / raw)
  To: Stephen Hemminger, Medvedkin, Vladimir
  Cc: dev, Wang, Yipeng1, Gobriel, Sameh, Richardson, Bruce


> On Fri, 15 Oct 2021 10:30:02 +0100
> Vladimir Medvedkin <vladimir.medvedkin@intel.com> wrote:
> 
> > +			m[i * 8 + j] = (rss_key[i] << j)|
> > +				(uint8_t)((uint16_t)(rss_key[i + 1]) >>
> > +				(8 - j));
> > +		}
> 
> This ends up being harder than necessary to read. Maybe split into
> multiple statements and/or use temporary variable.
> 
> > +RTE_INIT(rte_thash_gfni_init)
> > +{
> > +	rte_thash_gfni_supported = 0;
> 
> Not necessary in C globals are initialized to zero by default.
> 
> By removing that the constructor can be totally behind #ifdef
> 
> > +__rte_internal
> > +static inline __m512i
> > +__rte_thash_gfni(const uint64_t *mtrx, const uint8_t *tuple,
> > +	const uint8_t *secondary_tuple, int len)
> > +{
> > +	__m512i permute_idx = _mm512_set_epi8(7, 6, 5, 4, 7, 6, 5, 4,
> > +						6, 5, 4, 3, 6, 5, 4, 3,
> > +						5, 4, 3, 2, 5, 4, 3, 2,
> > +						4, 3, 2, 1, 4, 3, 2, 1,
> > +						3, 2, 1, 0, 3, 2, 1, 0,
> > +						2, 1, 0, -1, 2, 1, 0, -1,
> > +						1, 0, -1, -2, 1, 0, -1, -2,
> > +						0, -1, -2, -3, 0, -1, -2, -3);
> 
> NAK
> 
> Please don't put the implementation in an inline. This makes it harder
> to support (API/ABI) and blocks other architectures from implementing
> same thing with different instructions.

I don't really understand your reasoning here.
rte_thash_gfni.h is an arch-specific header, which provides
arch-specific optimizations for RSS hash calculation
(Vladimir pls correct me if I am wrong here).
We do have dozens of inline functions that do use arch-specific instructions (both x86 and arm)
for different purposes:
sync primitives, memory-ordering, cache manipulations, LPM lookup, TSX, power-saving, etc.
That's a usual trade-off taken for performance reasons, when an extra function call
costs too much compared to the operation itself.
Why did it suddenly become a problem for this particular case, and how exactly does it block other architectures?
Also I don't understand how it makes things harder in terms of API/ABI stability.
As I can see this patch doesn't introduce any public structs/unions.
All functions take as arguments just raw data buffers and length.
To summarize - in general, I don't see any good reason why this patch shouldn't be allowed.
Konstantin
 




^ permalink raw reply	[relevance 3%]

* [dpdk-dev] Minutes of Technical Board Meeting, 2021-Oct-06
@ 2021-10-18 10:26  3% Kevin Traynor
  0 siblings, 0 replies; 200+ results
From: Kevin Traynor @ 2021-10-18 10:26 UTC (permalink / raw)
  To: dev

Minutes of Technical Board Meeting, 2021-Oct-06

Members Attending
-----------------
-Aaron
-Bruce
-Ferruh
-Hemant
-Honnappa
-Jerin
-Kevin (Chair)
-Konstantin
-Maxime
-Olivier
-Stephen
-Thomas

NOTE: The technical board meetings every second Wednesday at
https://meet.jit.si/DPDK at 3 pm UTC.
Meetings are public, and DPDK community members are welcome to attend.

NOTE: Next meeting will be on Wednesday 2021-Oct-20 @3pm UTC, and will
be chaired by Konstantin.

# Techboard membership - discussed requests from gov board to clarify
- TB discussed the min and max size of the board
-- TB voted to have min of 7 and max of 12
- TB discussed the process to include more members when one resigns
-- Kevin to write up steps for this which will be presented to gov
    board and then shared

# DTS WG Presentation
- Honnappa presented proposals for integrating DTS more with DPDK [1]
- Work items identified by WG are here [2]
- Further discussion will be needed at the next TB
- Any feedback welcome [3]

# Any unannounced ABI/API breaks that need approval?
- If required, will do over email as ran out of time

[1]
https://docs.google.com/presentation/d/1gTMJGP40FlWoSxMwdZsE2ydmd5SrMvGfWA9wtqvdbbM/edit?usp=sharing
[2]
https://docs.google.com/spreadsheets/d/1s_Y3Ph1sVptYs6YjOxkUI8rVzrPFGpTZzd75gkpgIXY/edit?usp=sharing
[3]
http://inbox.dpdk.org/dev/DBAPR08MB581492CA7AF1556B1B3770B598AA9@DBAPR08MB5814.eurprd08.prod.outlook.com/


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v7 2/4] mempool: add non-IO flag
  @ 2021-10-18 10:01  3%         ` Dmitry Kozlyuk
    1 sibling, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-18 10:01 UTC (permalink / raw)
  To: dev
  Cc: David Marchand, Matan Azrad, Andrew Rybchenko, Maryam Tahhan,
	Reshma Pattan, Olivier Matz

Mempool is a generic allocator that is not necessarily used
for device IO operations, nor is its memory necessarily used for DMA.
Add MEMPOOL_F_NON_IO flag to mark such mempools automatically
a) if their objects are not contiguous;
b) if IOVA is not available for any object.
Other components can inspect this flag
in order to optimize their memory management.

Discussion: https://mails.dpdk.org/archives/dev/2021-August/216654.html

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 app/proc-info/main.c                   |   6 +-
 app/test/test_mempool.c                | 112 +++++++++++++++++++++++++
 doc/guides/rel_notes/release_21_11.rst |   3 +
 lib/mempool/rte_mempool.c              |  10 +++
 lib/mempool/rte_mempool.h              |   2 +
 5 files changed, 131 insertions(+), 2 deletions(-)

diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index a8e928fa9f..8ec9cadd79 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -1295,7 +1295,8 @@ show_mempool(char *name)
 				"\t  -- No cache align (%c)\n"
 				"\t  -- SP put (%c), SC get (%c)\n"
 				"\t  -- Pool created (%c)\n"
-				"\t  -- No IOVA config (%c)\n",
+				"\t  -- No IOVA config (%c)\n"
+				"\t  -- Not used for IO (%c)\n",
 				ptr->name,
 				ptr->socket_id,
 				(flags & MEMPOOL_F_NO_SPREAD) ? 'y' : 'n',
@@ -1303,7 +1304,8 @@ show_mempool(char *name)
 				(flags & MEMPOOL_F_SP_PUT) ? 'y' : 'n',
 				(flags & MEMPOOL_F_SC_GET) ? 'y' : 'n',
 				(flags & MEMPOOL_F_POOL_CREATED) ? 'y' : 'n',
-				(flags & MEMPOOL_F_NO_IOVA_CONTIG) ? 'y' : 'n');
+				(flags & MEMPOOL_F_NO_IOVA_CONTIG) ? 'y' : 'n',
+				(flags & MEMPOOL_F_NON_IO) ? 'y' : 'n');
 			printf("  - Size %u Cache %u element %u\n"
 				"  - header %u trailer %u\n"
 				"  - private data size %u\n",
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index c39c83256e..9136e17374 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -12,6 +12,7 @@
 #include <sys/queue.h>
 
 #include <rte_common.h>
+#include <rte_eal_paging.h>
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_errno.h>
@@ -729,6 +730,109 @@ test_mempool_events_safety(void)
 #pragma pop_macro("RTE_TEST_TRACE_FAILURE")
 }
 
+#pragma push_macro("RTE_TEST_TRACE_FAILURE")
+#undef RTE_TEST_TRACE_FAILURE
+#define RTE_TEST_TRACE_FAILURE(...) do { \
+		ret = TEST_FAILED; \
+		goto exit; \
+	} while (0)
+
+static int
+test_mempool_flag_non_io_set_when_no_iova_contig_set(void)
+{
+	struct rte_mempool *mp = NULL;
+	int ret;
+
+	mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+				      MEMPOOL_ELT_SIZE, 0, 0,
+				      SOCKET_ID_ANY, MEMPOOL_F_NO_IOVA_CONTIG);
+	RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+				 rte_strerror(rte_errno));
+	rte_mempool_set_ops_byname(mp, rte_mbuf_best_mempool_ops(), NULL);
+	ret = rte_mempool_populate_default(mp);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(mp->flags & MEMPOOL_F_NON_IO,
+			"NON_IO flag is not set when NO_IOVA_CONTIG is set");
+	ret = TEST_SUCCESS;
+exit:
+	rte_mempool_free(mp);
+	return ret;
+}
+
+static int
+test_mempool_flag_non_io_unset_when_populated_with_valid_iova(void)
+{
+	const struct rte_memzone *mz;
+	void *virt;
+	rte_iova_t iova;
+	size_t page_size = RTE_PGSIZE_2M;
+	struct rte_mempool *mp = NULL;
+	int ret;
+
+	mz = rte_memzone_reserve("test_mempool", 3 * page_size, SOCKET_ID_ANY,
+				 RTE_MEMZONE_IOVA_CONTIG);
+	RTE_TEST_ASSERT_NOT_NULL(mz, "Cannot allocate memory");
+	virt = mz->addr;
+	iova = rte_mem_virt2iova(virt);
+	RTE_TEST_ASSERT_NOT_EQUAL(iova,  RTE_BAD_IOVA, "Cannot get IOVA");
+	mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+				      MEMPOOL_ELT_SIZE, 0, 0,
+				      SOCKET_ID_ANY, 0);
+	RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+				 rte_strerror(rte_errno));
+
+	ret = rte_mempool_populate_iova(mp, RTE_PTR_ADD(virt, 1 * page_size),
+					RTE_BAD_IOVA, page_size, NULL, NULL);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(mp->flags & MEMPOOL_F_NON_IO,
+			"NON_IO flag is not set when mempool is populated with only RTE_BAD_IOVA");
+
+	ret = rte_mempool_populate_iova(mp, virt, iova, page_size, NULL, NULL);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(!(mp->flags & MEMPOOL_F_NON_IO),
+			"NON_IO flag is not unset when mempool is populated with valid IOVA");
+
+	ret = rte_mempool_populate_iova(mp, RTE_PTR_ADD(virt, 2 * page_size),
+					RTE_BAD_IOVA, page_size, NULL, NULL);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(!(mp->flags & MEMPOOL_F_NON_IO),
+			"NON_IO flag is set even when some objects have valid IOVA");
+	ret = TEST_SUCCESS;
+
+exit:
+	rte_mempool_free(mp);
+	rte_memzone_free(mz);
+	return ret;
+}
+
+static int
+test_mempool_flag_non_io_unset_by_default(void)
+{
+	struct rte_mempool *mp;
+	int ret;
+
+	mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+				      MEMPOOL_ELT_SIZE, 0, 0,
+				      SOCKET_ID_ANY, 0);
+	RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+				 rte_strerror(rte_errno));
+	ret = rte_mempool_populate_default(mp);
+	RTE_TEST_ASSERT_EQUAL(ret, (int)mp->size, "Failed to populate mempool: %s",
+			      rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(!(mp->flags & MEMPOOL_F_NON_IO),
+			"NON_IO flag is set by default");
+	ret = TEST_SUCCESS;
+exit:
+	rte_mempool_free(mp);
+	return ret;
+}
+
+#pragma pop_macro("RTE_TEST_TRACE_FAILURE")
+
 static int
 test_mempool(void)
 {
@@ -914,6 +1018,14 @@ test_mempool(void)
 	if (test_mempool_events_safety() < 0)
 		GOTO_ERR(ret, err);
 
+	/* test NON_IO flag inference */
+	if (test_mempool_flag_non_io_unset_by_default() < 0)
+		GOTO_ERR(ret, err);
+	if (test_mempool_flag_non_io_set_when_no_iova_contig_set() < 0)
+		GOTO_ERR(ret, err);
+	if (test_mempool_flag_non_io_unset_when_populated_with_valid_iova() < 0)
+		GOTO_ERR(ret, err);
+
 	rte_mempool_list_dump(stdout);
 
 	ret = 0;
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 4c56cdfeaa..39a8a3d950 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -229,6 +229,9 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* mempool: Added ``MEMPOOL_F_NON_IO`` flag to give a hint to DPDK components
+  that objects from this pool will not be used for device IO (e.g. DMA).
+
 
 ABI Changes
 -----------
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 8810d08ab5..7d7d97d85d 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -372,6 +372,10 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next);
 	mp->nb_mem_chunks++;
 
+	/* At least some objects in the pool can now be used for IO. */
+	if (iova != RTE_BAD_IOVA)
+		mp->flags &= ~MEMPOOL_F_NON_IO;
+
 	/* Report the mempool as ready only when fully populated. */
 	if (mp->populated_size >= mp->size)
 		mempool_event_callback_invoke(RTE_MEMPOOL_EVENT_READY, mp);
@@ -851,6 +855,12 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		return NULL;
 	}
 
+	/*
+	 * No objects in the pool can be used for IO until it's populated
+	 * with at least some objects with valid IOVA.
+	 */
+	flags |= MEMPOOL_F_NON_IO;
+
 	/* "no cache align" imply "no spread" */
 	if (flags & MEMPOOL_F_NO_CACHE_ALIGN)
 		flags |= MEMPOOL_F_NO_SPREAD;
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 5799d4a705..b2e20c8855 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -257,6 +257,8 @@ struct rte_mempool {
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
+/** Internal: no object from the pool can be used for device IO (DMA). */
+#define MEMPOOL_F_NON_IO         0x0040
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v2 3/5] cryptodev: move inline APIs into separate structure
  @ 2021-10-18  7:02  0%       ` Akhil Goyal
  0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-10-18  7:02 UTC (permalink / raw)
  To: Zhang, Roy Fan, dev
  Cc: thomas, david.marchand, hemant.agrawal, Anoob Joseph,
	De Lara Guarch, Pablo, Trahe, Fiona, Doherty, Declan, matan,
	g.singh, jianjay.zhou, asomalap, ruifeng.wang, Ananyev,
	Konstantin, Nicolau, Radu, ajit.khaparde, Nagadheeraj Rottela,
	Ankur Dwivedi, Power, Ciara

> Hi Akhil,
> 
> > Move fastpath inline function pointers from rte_cryptodev into a
> > separate structure accessed via a flat array.
> > The intention is to make rte_cryptodev and related structures private
> > to avoid future API/ABI breakages.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > ---
> >  lib/cryptodev/cryptodev_pmd.c      | 51
> > ++++++++++++++++++++++++++++++
> >  lib/cryptodev/cryptodev_pmd.h      | 11 +++++++
> >  lib/cryptodev/rte_cryptodev.c      | 29 +++++++++++++++++
> >  lib/cryptodev/rte_cryptodev_core.h | 29 +++++++++++++++++
> >  lib/cryptodev/version.map          |  5 +++
> >  5 files changed, 125 insertions(+)
> >
> > diff --git a/lib/cryptodev/cryptodev_pmd.c
> > b/lib/cryptodev/cryptodev_pmd.c
> > index 44a70ecb35..4646708045 100644
> > --- a/lib/cryptodev/cryptodev_pmd.c
> > +++ b/lib/cryptodev/cryptodev_pmd.c
> > @@ -4,6 +4,7 @@
> >
> >  #include <sys/queue.h>
> >
> > +#include <rte_errno.h>
> >  #include <rte_string_fns.h>
> >  #include <rte_malloc.h>
> >
> > @@ -160,3 +161,53 @@ rte_cryptodev_pmd_destroy(struct rte_cryptodev
> > *cryptodev)
> >
> 
> When a device is removed - aka when rte_pci_remove() is called
> cryptodev_fp_ops_reset() will never be called. This may expose a problem.
> Looks like cryptodev_fp_ops_reset() needs to be called here too.
> 
rte_cryptodev_pmd_destroy() internally calls rte_cryptodev_pmd_release_device(),
and the fast-path ops reset is done there.
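
For reference, the call chain covering removal is roughly (sketch
based on this series):

	rte_pci_remove()
	  -> ... -> rte_cryptodev_pmd_destroy(cryptodev)
	       -> rte_cryptodev_pmd_release_device(cryptodev)
	            -> cryptodev_fp_ops_reset(rte_crypto_fp_ops + dev_id)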

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v3 2/2] security: add reserved bitfields
  @ 2021-10-18  5:22  3%   ` Akhil Goyal
  2021-10-18 15:39  0%     ` Akhil Goyal
  0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-18  5:22 UTC (permalink / raw)
  To: dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
	konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
	adwivedi, ciara.power, Akhil Goyal, Ray Kinsella

In struct rte_security_ipsec_sa_options, every new option added
causes an ABI breakage. To avoid this, a reserved_opts bitfield
is added for the remaining bits available in the structure.
Now, for every new SA option, reserved_opts can be reduced
and the new option can be added.
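
For example, a future release could then consume one reserved bit
without growing the structure (new_option is a hypothetical name):

	uint32_t l4_csum_enable : 1;
	/* hypothetical option carved out of the reserved space */
	uint32_t new_option : 1;
	/* shrunk from 18 to 17 bits to keep the layout */
	uint32_t reserved_opts : 17;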

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
v3:
- added a comment for requesting user to clear reserved_opts.
- removed LIST_END enumerators patch. It will be handled separately.


 lib/security/rte_security.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 17d0e95412..4c55dcd744 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -263,6 +263,15 @@ struct rte_security_ipsec_sa_options {
 	 * PKT_TX_UDP_CKSUM or PKT_TX_L4_MASK in mbuf.
 	 */
 	uint32_t l4_csum_enable : 1;
+
+	/** Reserved bit fields for future extension
+	 *
+	 * User should ensure reserved_opts is cleared as it may change in
+	 * subsequent releases to support new options.
+	 *
+	 * Note: Reduce number of bits in reserved_opts for every new option.
+	 */
+	uint32_t reserved_opts : 18;
 };
 
 /** IPSec security association direction */
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v5 03/15] crypto: add dest_sgl in raw vector APIs
    2021-10-17 16:16  4%   ` [dpdk-dev] [PATCH v5 02/15] crypto: add total raw buffer length Hemant Agrawal
@ 2021-10-17 16:16  4%   ` Hemant Agrawal
  1 sibling, 0 replies; 200+ results
From: Hemant Agrawal @ 2021-10-17 16:16 UTC (permalink / raw)
  To: dev, gakhil; +Cc: konstantin.ananyev, roy.fan.zhang

The structure rte_crypto_sym_vec is updated to
add dest_sgl to support out of place processing.
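
Usage-wise this means something like the following (sketch; in_sgl
and out_sgl are placeholder struct rte_crypto_sgl objects):

	struct rte_crypto_sym_vec vec = {
		.num = 1,
		.src_sgl = &in_sgl,
		.dest_sgl = NULL,	/* in-place processing */
	};
	/* for out-of-place processing instead: */
	vec.dest_sgl = &out_sgl;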

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/deprecation.rst   | 5 -----
 doc/guides/rel_notes/release_21_11.rst | 5 ++++-
 lib/cryptodev/rte_crypto_sym.h         | 2 ++
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 0e04ecf743..1d743c3a17 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -174,11 +174,6 @@ Deprecation Notices
   and ``rte_vhost_driver_set_protocol_features`` functions will be removed
   and the API functions will be made stable in DPDK 21.11.
 
-* cryptodev: The structure ``rte_crypto_sym_vec`` would be updated to add
-  ``dest_sgl`` to support out of place processing.
-  This field will be null for inplace processing.
-  This change is targeted for DPDK 21.11.
-
 * cryptodev: Hide structures ``rte_cryptodev_sym_session`` and
   ``rte_cryptodev_asym_session`` to remove unnecessary indirection between
   session and the private data of session. An opaque pointer can be exposed
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index ba036c5b3f..6e274a131f 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -232,12 +232,15 @@ API Changes
 * cryptodev: The field ``dataunit_len`` of the ``struct rte_crypto_cipher_xform``
   moved to the end of the structure and extended to ``uint32_t``.
 
-* cryptodev: The structure ``rte_crypto_vec`` updated to add ``tot_len`` to
+* cryptodev: The structure ``rte_crypto_vec`` is updated to add ``tot_len`` to
   support total buffer length. This is required for security cases like IPsec
   and PDCP encryption offload to know how much additional memory space is
   available in buffer other than data length so that driver/HW can write
   expanded size data after encryption.
 
+* cryptodev: The structure ``rte_crypto_sym_vec`` is updated to add ``dest_sgl``
+  to support out of place processing. This field will be null for inplace
+  processing.
 
 ABI Changes
 -----------
diff --git a/lib/cryptodev/rte_crypto_sym.h b/lib/cryptodev/rte_crypto_sym.h
index 1f2f0a572c..c907b1646d 100644
--- a/lib/cryptodev/rte_crypto_sym.h
+++ b/lib/cryptodev/rte_crypto_sym.h
@@ -72,6 +72,8 @@ struct rte_crypto_sym_vec {
 	uint32_t num;
 	/** array of SGL vectors */
 	struct rte_crypto_sgl *src_sgl;
+	/** array of SGL vectors for OOP, keep it NULL for in-place */
+	struct rte_crypto_sgl *dest_sgl;
 	/** array of pointers to cipher IV */
 	struct rte_crypto_va_iova_ptr *iv;
 	/** array of pointers to digest */
-- 
2.17.1


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v5 02/15] crypto: add total raw buffer length
  @ 2021-10-17 16:16  4%   ` Hemant Agrawal
  2021-10-17 16:16  4%   ` [dpdk-dev] [PATCH v5 03/15] crypto: add dest_sgl in raw vector APIs Hemant Agrawal
  1 sibling, 0 replies; 200+ results
From: Hemant Agrawal @ 2021-10-17 16:16 UTC (permalink / raw)
  To: dev, gakhil; +Cc: konstantin.ananyev, roy.fan.zhang, Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

The current crypto raw data vectors are extended to support
rte_security use cases, where the total data length is needed to know
how much additional memory space is available in the buffer beyond
the data length, so that the driver/HW can write expanded-size
data after encryption.
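
Mirroring the rte_crypto_mbuf_to_vec() change below, a driver or
application can derive the room left for expansion per segment as
(sketch for a single-segment mbuf mb):

	struct rte_crypto_vec v;

	v.base = rte_pktmbuf_mtod(mb, void *);
	v.iova = rte_pktmbuf_iova(mb);
	v.len = rte_pktmbuf_data_len(mb);
	v.tot_len = mb->buf_len - rte_pktmbuf_headroom(mb);
	/* space available beyond the data: v.tot_len - v.len */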

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/deprecation.rst   | 7 -------
 doc/guides/rel_notes/release_21_11.rst | 6 ++++++
 lib/cryptodev/rte_crypto_sym.h         | 4 ++++
 3 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 09b54fdef3..0e04ecf743 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -179,13 +179,6 @@ Deprecation Notices
   This field will be null for inplace processing.
   This change is targeted for DPDK 21.11.
 
-* cryptodev: The structure ``rte_crypto_vec`` would be updated to add
-  ``tot_len`` to support total buffer length.
-  This is required for security cases like IPsec and PDCP encryption offload
-  to know how much additional memory space is available in buffer other than
-  data length so that driver/HW can write expanded size data after encryption.
-  This change is targeted for DPDK 21.11.
-
 * cryptodev: Hide structures ``rte_cryptodev_sym_session`` and
   ``rte_cryptodev_asym_session`` to remove unnecessary indirection between
   session and the private data of session. An opaque pointer can be exposed
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 256037b639..ba036c5b3f 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -232,6 +232,12 @@ API Changes
 * cryptodev: The field ``dataunit_len`` of the ``struct rte_crypto_cipher_xform``
   moved to the end of the structure and extended to ``uint32_t``.
 
+* cryptodev: The structure ``rte_crypto_vec`` updated to add ``tot_len`` to
+  support total buffer length. This is required for security cases like IPsec
+  and PDCP encryption offload to know how much additional memory space is
+  available in buffer other than data length so that driver/HW can write
+  expanded size data after encryption.
+
 
 ABI Changes
 -----------
diff --git a/lib/cryptodev/rte_crypto_sym.h b/lib/cryptodev/rte_crypto_sym.h
index f8cb2ccca0..1f2f0a572c 100644
--- a/lib/cryptodev/rte_crypto_sym.h
+++ b/lib/cryptodev/rte_crypto_sym.h
@@ -37,6 +37,8 @@ struct rte_crypto_vec {
 	rte_iova_t iova;
 	/** length of the data buffer */
 	uint32_t len;
+	/** total buffer length */
+	uint32_t tot_len;
 };
 
 /**
@@ -963,6 +965,7 @@ rte_crypto_mbuf_to_vec(const struct rte_mbuf *mb, uint32_t ofs, uint32_t len,
 
 	vec[0].base = rte_pktmbuf_mtod_offset(mb, void *, ofs);
 	vec[0].iova = rte_pktmbuf_iova_offset(mb, ofs);
+	vec[0].tot_len = mb->buf_len - rte_pktmbuf_headroom(mb) - ofs;
 
 	/* whole data lies in the first segment */
 	seglen = mb->data_len - ofs;
@@ -978,6 +981,7 @@ rte_crypto_mbuf_to_vec(const struct rte_mbuf *mb, uint32_t ofs, uint32_t len,
 
 		vec[i].base = rte_pktmbuf_mtod(nseg, void *);
 		vec[i].iova = rte_pktmbuf_iova(nseg);
+		vec[i].tot_len = mb->buf_len - rte_pktmbuf_headroom(mb) - ofs;
 
 		seglen = nseg->data_len;
 		if (left <= seglen) {
-- 
2.17.1


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v11 1/8] bbdev: add device info related to data endianness
  @ 2021-10-17  6:53  4%   ` nipun.gupta
  0 siblings, 0 replies; 200+ results
From: nipun.gupta @ 2021-10-17  6:53 UTC (permalink / raw)
  To: dev, gakhil, nicolas.chautru; +Cc: david.marchand, hemant.agrawal, Nipun Gupta

From: Nicolas Chautru <nicolas.chautru@intel.com>

Add device information to capture explicitly the assumed
byte endianness of the input/output data being processed.
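
An application can then check whether byte swapping is needed before
enqueueing data, e.g. (sketch):

	struct rte_bbdev_info info;

	rte_bbdev_info_get(dev_id, &info);
	if (info.drv.data_endianness != RTE_BYTE_ORDER) {
		/* swap input/output data bytes around enqueue/dequeue */
	}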

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 doc/guides/rel_notes/release_21_11.rst             | 1 +
 drivers/baseband/acc100/rte_acc100_pmd.c           | 1 +
 drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c       | 1 +
 drivers/baseband/null/bbdev_null.c                 | 6 ++++++
 drivers/baseband/turbo_sw/bbdev_turbo_software.c   | 1 +
 lib/bbdev/rte_bbdev.h                              | 4 ++++
 7 files changed, 15 insertions(+)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 4c56cdfeaa..957bd78d61 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -229,6 +229,7 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* bbdev: Added device info related to data byte endianness processing.
 
 ABI Changes
 -----------
diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 4e2feefc3c..05fe6f8b6f 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -1089,6 +1089,7 @@ acc100_dev_info_get(struct rte_bbdev *dev,
 #else
 	dev_info->harq_buffer_size = 0;
 #endif
+	dev_info->data_endianness = RTE_LITTLE_ENDIAN;
 	acc100_check_ir(d);
 }
 
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 6485cc824a..ee457f3071 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -372,6 +372,7 @@ fpga_dev_info_get(struct rte_bbdev *dev,
 	dev_info->default_queue_conf = default_queue_conf;
 	dev_info->capabilities = bbdev_capabilities;
 	dev_info->cpu_flag_reqs = NULL;
+	dev_info->data_endianness = RTE_LITTLE_ENDIAN;
 
 	/* Calculates number of queues assigned to device */
 	dev_info->max_num_queues = 0;
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 350c4248eb..703bb611a0 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -644,6 +644,7 @@ fpga_dev_info_get(struct rte_bbdev *dev,
 	dev_info->default_queue_conf = default_queue_conf;
 	dev_info->capabilities = bbdev_capabilities;
 	dev_info->cpu_flag_reqs = NULL;
+	dev_info->data_endianness = RTE_LITTLE_ENDIAN;
 
 	/* Calculates number of queues assigned to device */
 	dev_info->max_num_queues = 0;
diff --git a/drivers/baseband/null/bbdev_null.c b/drivers/baseband/null/bbdev_null.c
index 53c538ba44..753d920e18 100644
--- a/drivers/baseband/null/bbdev_null.c
+++ b/drivers/baseband/null/bbdev_null.c
@@ -77,6 +77,12 @@ info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
 	dev_info->cpu_flag_reqs = NULL;
 	dev_info->min_alignment = 0;
 
+	/* BBDEV null device does not process the data, so
+	 * the endianness setting is not relevant, but it is set
+	 * here for code completeness.
+	 */
+	dev_info->data_endianness = RTE_LITTLE_ENDIAN;
+
 	rte_bbdev_log_debug("got device info from %u", dev->data->dev_id);
 }
 
diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
index e1db2bf205..b234bb751a 100644
--- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
+++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
@@ -253,6 +253,7 @@ info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
 	dev_info->capabilities = bbdev_capabilities;
 	dev_info->min_alignment = 64;
 	dev_info->harq_buffer_size = 0;
+	dev_info->data_endianness = RTE_LITTLE_ENDIAN;
 
 	rte_bbdev_log_debug("got device info from %u\n", dev->data->dev_id);
 }
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index 3ebf62e697..e863bd913f 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -309,6 +309,10 @@ struct rte_bbdev_driver_info {
 	uint16_t min_alignment;
 	/** HARQ memory available in kB */
 	uint32_t harq_buffer_size;
+	/** Byte endianness (RTE_BIG_ENDIAN/RTE_LITTLE_ENDIAN) supported
+	 *  for input/output data
+	 */
+	uint8_t data_endianness;
 	/** Default queue configuration used if none is supplied  */
 	struct rte_bbdev_queue_conf default_queue_conf;
 	/** Device operation capabilities */
-- 
2.17.1


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v4 14/14] eventdev: mark trace variables as internal
  2021-10-15 19:02  9%   ` [dpdk-dev] [PATCH v4 14/14] eventdev: mark trace variables as internal pbhagavatula
@ 2021-10-17  5:58  0%     ` Jerin Jacob
  2021-10-18 15:06  0%       ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-10-17  5:58 UTC (permalink / raw)
  To: Pavan Nikhilesh; +Cc: Jerin Jacob, Ray Kinsella, dpdk-dev

On Sat, Oct 16, 2021 at 12:34 AM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Mark rte_trace global variables as internal i.e. remove them
> from experimental section of version map.
> Some of them are used in inline APIs, mark those as global.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> Acked-by: Ray Kinsella <mdr@ashroe.eu>
> ---
>  doc/guides/rel_notes/release_21_11.rst | 12 +++++
>  lib/eventdev/version.map               | 71 ++++++++++++--------------
>  2 files changed, 44 insertions(+), 39 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 38e601c236..5b4a05c3ae 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -226,6 +226,9 @@ API Changes
>    the crypto/security operation. This field will be used to communicate
>    events such as soft expiry with IPsec in lookaside mode.
>
> +* eventdev: Event vector configuration APIs have been made stable.
> +  Move memory used by timer adapters to hugepage. This will prevent TLB misses
> +  if any and aligns to memory structure of other subsystems.
>
>  ABI Changes
>  -----------
> @@ -277,6 +280,15 @@ ABI Changes
>    were added in structure ``rte_event_eth_rx_adapter_stats`` to get additional
>    status.
>
> +* eventdev: A new structure ``rte_event_fp_ops`` has been added which is now used
> +  by the fastpath inline functions. The structures ``rte_eventdev``,
> +  ``rte_eventdev_data`` have been made internal. ``rte_eventdevs[]`` can't be
> +  accessed directly by user any more. This change is transparent to both
> +  applications and PMDs.
> +
> +* eventdev: Re-arrange fields in ``rte_event_timer`` to remove holes.
> +  ``rte_event_timer_adapter_pmd.h`` has been made internal.

Looks good. Please fix the following. If there are no objections, I
will merge the next version.

1) Please move the doc update to respective patches
2) Following checkpatch issue
[for-main]dell[dpdk-next-eventdev] $ ./devtools/checkpatches.sh -n 14

### eventdev: move inline APIs into separate structure

INFO: symbol event_dev_fp_ops_reset has been added to the INTERNAL
section of the version map
INFO: symbol event_dev_fp_ops_set has been added to the INTERNAL
section of the version map
INFO: symbol event_dev_probing_finish has been added to the INTERNAL
section of the version map
ERROR: symbol rte_event_fp_ops is added in the DPDK_22 section, but is
expected to be added in the EXPERIMENTAL section of the version map

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v6 2/4] mempool: add non-IO flag
  @ 2021-10-16 20:00  3%       ` Dmitry Kozlyuk
    1 sibling, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-16 20:00 UTC (permalink / raw)
  To: dev
  Cc: David Marchand, Matan Azrad, Andrew Rybchenko, Maryam Tahhan,
	Reshma Pattan, Olivier Matz

Mempool is a generic allocator that is not necessarily used
for device IO operations, nor is its memory necessarily used for DMA.
Add MEMPOOL_F_NON_IO flag to mark such mempools automatically
a) if their objects are not contiguous;
b) if IOVA is not available for any object.
Other components can inspect this flag
in order to optimize their memory management.

Discussion: https://mails.dpdk.org/archives/dev/2021-August/216654.html

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 app/proc-info/main.c                   |   6 +-
 app/test/test_mempool.c                | 112 +++++++++++++++++++++++++
 doc/guides/rel_notes/release_21_11.rst |   3 +
 lib/mempool/rte_mempool.c              |  10 +++
 lib/mempool/rte_mempool.h              |   2 +
 5 files changed, 131 insertions(+), 2 deletions(-)

diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index a8e928fa9f..8ec9cadd79 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -1295,7 +1295,8 @@ show_mempool(char *name)
 				"\t  -- No cache align (%c)\n"
 				"\t  -- SP put (%c), SC get (%c)\n"
 				"\t  -- Pool created (%c)\n"
-				"\t  -- No IOVA config (%c)\n",
+				"\t  -- No IOVA config (%c)\n"
+				"\t  -- Not used for IO (%c)\n",
 				ptr->name,
 				ptr->socket_id,
 				(flags & MEMPOOL_F_NO_SPREAD) ? 'y' : 'n',
@@ -1303,7 +1304,8 @@ show_mempool(char *name)
 				(flags & MEMPOOL_F_SP_PUT) ? 'y' : 'n',
 				(flags & MEMPOOL_F_SC_GET) ? 'y' : 'n',
 				(flags & MEMPOOL_F_POOL_CREATED) ? 'y' : 'n',
-				(flags & MEMPOOL_F_NO_IOVA_CONTIG) ? 'y' : 'n');
+				(flags & MEMPOOL_F_NO_IOVA_CONTIG) ? 'y' : 'n',
+				(flags & MEMPOOL_F_NON_IO) ? 'y' : 'n');
 			printf("  - Size %u Cache %u element %u\n"
 				"  - header %u trailer %u\n"
 				"  - private data size %u\n",
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index c39c83256e..caf9c46a29 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -12,6 +12,7 @@
 #include <sys/queue.h>
 
 #include <rte_common.h>
+#include <rte_eal_paging.h>
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_errno.h>
@@ -729,6 +730,109 @@ test_mempool_events_safety(void)
 #pragma pop_macro("RTE_TEST_TRACE_FAILURE")
 }
 
+#pragma push_macro("RTE_TEST_TRACE_FAILURE")
+#undef RTE_TEST_TRACE_FAILURE
+#define RTE_TEST_TRACE_FAILURE(...) do { \
+		ret = TEST_FAILED; \
+		goto exit; \
+	} while (0)
+
+static int
+test_mempool_flag_non_io_set_when_no_iova_contig_set(void)
+{
+	struct rte_mempool *mp = NULL;
+	int ret;
+
+	mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+				      MEMPOOL_ELT_SIZE, 0, 0,
+				      SOCKET_ID_ANY, MEMPOOL_F_NO_IOVA_CONTIG);
+	RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+				 rte_strerror(rte_errno));
+	rte_mempool_set_ops_byname(mp, rte_mbuf_best_mempool_ops(), NULL);
+	ret = rte_mempool_populate_default(mp);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(mp->flags & MEMPOOL_F_NON_IO,
+			"NON_IO flag is not set when NO_IOVA_CONTIG is set");
+	ret = TEST_SUCCESS;
+exit:
+	rte_mempool_free(mp);
+	return ret;
+}
+
+static int
+test_mempool_flag_non_io_unset_when_populated_with_valid_iova(void)
+{
+	const struct rte_memzone *mz;
+	void *virt;
+	rte_iova_t iova;
+	size_t page_size = RTE_PGSIZE_2M;
+	struct rte_mempool *mp;
+	int ret;
+
+	mz = rte_memzone_reserve("test_mempool", 3 * page_size, SOCKET_ID_ANY,
+				 RTE_MEMZONE_IOVA_CONTIG);
+	RTE_TEST_ASSERT_NOT_NULL(mz, "Cannot allocate memory");
+	virt = mz->addr;
+	iova = rte_mem_virt2iova(virt);
+	RTE_TEST_ASSERT_NOT_EQUAL(iova,  RTE_BAD_IOVA, "Cannot get IOVA");
+	mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+				      MEMPOOL_ELT_SIZE, 0, 0,
+				      SOCKET_ID_ANY, 0);
+	RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+				 rte_strerror(rte_errno));
+
+	ret = rte_mempool_populate_iova(mp, RTE_PTR_ADD(virt, 1 * page_size),
+					RTE_BAD_IOVA, page_size, NULL, NULL);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(mp->flags & MEMPOOL_F_NON_IO,
+			"NON_IO flag is not set when mempool is populated with only RTE_BAD_IOVA");
+
+	ret = rte_mempool_populate_iova(mp, virt, iova, page_size, NULL, NULL);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(!(mp->flags & MEMPOOL_F_NON_IO),
+			"NON_IO flag is not unset when mempool is populated with valid IOVA");
+
+	ret = rte_mempool_populate_iova(mp, RTE_PTR_ADD(virt, 2 * page_size),
+					RTE_BAD_IOVA, page_size, NULL, NULL);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(!(mp->flags & MEMPOOL_F_NON_IO),
+			"NON_IO flag is set even when some objects have valid IOVA");
+	ret = TEST_SUCCESS;
+
+exit:
+	rte_mempool_free(mp);
+	rte_memzone_free(mz);
+	return ret;
+}
+
+static int
+test_mempool_flag_non_io_unset_by_default(void)
+{
+	struct rte_mempool *mp;
+	int ret;
+
+	mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+				      MEMPOOL_ELT_SIZE, 0, 0,
+				      SOCKET_ID_ANY, 0);
+	RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+				 rte_strerror(rte_errno));
+	ret = rte_mempool_populate_default(mp);
+	RTE_TEST_ASSERT_EQUAL(ret, (int)mp->size, "Failed to populate mempool: %s",
+			      rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(!(mp->flags & MEMPOOL_F_NON_IO),
+			"NON_IO flag is set by default");
+	ret = TEST_SUCCESS;
+exit:
+	rte_mempool_free(mp);
+	return ret;
+}
+
+#pragma pop_macro("RTE_TEST_TRACE_FAILURE")
+
 static int
 test_mempool(void)
 {
@@ -914,6 +1018,14 @@ test_mempool(void)
 	if (test_mempool_events_safety() < 0)
 		GOTO_ERR(ret, err);
 
+	/* test NON_IO flag inference */
+	if (test_mempool_flag_non_io_unset_by_default() < 0)
+		GOTO_ERR(ret, err);
+	if (test_mempool_flag_non_io_set_when_no_iova_contig_set() < 0)
+		GOTO_ERR(ret, err);
+	if (test_mempool_flag_non_io_unset_when_populated_with_valid_iova() < 0)
+		GOTO_ERR(ret, err);
+
 	rte_mempool_list_dump(stdout);
 
 	ret = 0;
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 4c56cdfeaa..39a8a3d950 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -229,6 +229,9 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* mempool: Added ``MEMPOOL_F_NON_IO`` flag to give a hint to DPDK components
+  that objects from this pool will not be used for device IO (e.g. DMA).
+
 
 ABI Changes
 -----------
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 8810d08ab5..7d7d97d85d 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -372,6 +372,10 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next);
 	mp->nb_mem_chunks++;
 
+	/* At least some objects in the pool can now be used for IO. */
+	if (iova != RTE_BAD_IOVA)
+		mp->flags &= ~MEMPOOL_F_NON_IO;
+
 	/* Report the mempool as ready only when fully populated. */
 	if (mp->populated_size >= mp->size)
 		mempool_event_callback_invoke(RTE_MEMPOOL_EVENT_READY, mp);
@@ -851,6 +855,12 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		return NULL;
 	}
 
+	/*
+	 * No objects in the pool can be used for IO until it's populated
+	 * with at least some objects with valid IOVA.
+	 */
+	flags |= MEMPOOL_F_NON_IO;
+
 	/* "no cache align" imply "no spread" */
 	if (flags & MEMPOOL_F_NO_CACHE_ALIGN)
 		flags |= MEMPOOL_F_NO_SPREAD;
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 3285626712..408d916a9c 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -257,6 +257,8 @@ struct rte_mempool {
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
+/** Internal: no object from the pool can be used for device IO (DMA). */
+#define MEMPOOL_F_NON_IO         0x0040
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v14 11/12] doc: changes for new pcapng and dumpcap utility
    2021-10-15 20:11  1%   ` [dpdk-dev] [PATCH v14 06/12] pdump: support pcapng and filtering Stephen Hemminger
@ 2021-10-15 20:11  1%   ` Stephen Hemminger
  1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-10-15 20:11 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger, Reshma Pattan

Describe the new packet capture library and utility.
Fix the title line on the pdump documentation.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 doc/api/doxy-api-index.md                     |  1 +
 doc/api/doxy-api.conf.in                      |  1 +
 .../howto/img/packet_capture_framework.svg    | 96 +++++++++----------
 doc/guides/howto/packet_capture_framework.rst | 69 ++++++-------
 doc/guides/prog_guide/index.rst               |  1 +
 doc/guides/prog_guide/pcapng_lib.rst          | 46 +++++++++
 doc/guides/prog_guide/pdump_lib.rst           | 28 ++++--
 doc/guides/rel_notes/release_21_11.rst        | 10 ++
 doc/guides/tools/dumpcap.rst                  | 86 +++++++++++++++++
 doc/guides/tools/index.rst                    |  1 +
 10 files changed, 251 insertions(+), 88 deletions(-)
 create mode 100644 doc/guides/prog_guide/pcapng_lib.rst
 create mode 100644 doc/guides/tools/dumpcap.rst

diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 1992107a0356..ee07394d1c78 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -223,3 +223,4 @@ The public API headers are grouped by topics:
   [experimental APIs]  (@ref rte_compat.h),
   [ABI versioning]     (@ref rte_function_versioning.h),
   [version]            (@ref rte_version.h)
+  [pcapng]             (@ref rte_pcapng.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 325a0195c6ab..aba17799a9a1 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -58,6 +58,7 @@ INPUT                   = @TOPDIR@/doc/api/doxy-api-index.md \
                           @TOPDIR@/lib/metrics \
                           @TOPDIR@/lib/node \
                           @TOPDIR@/lib/net \
+                          @TOPDIR@/lib/pcapng \
                           @TOPDIR@/lib/pci \
                           @TOPDIR@/lib/pdump \
                           @TOPDIR@/lib/pipeline \
diff --git a/doc/guides/howto/img/packet_capture_framework.svg b/doc/guides/howto/img/packet_capture_framework.svg
index a76baf71fdee..1c2646a81096 100644
--- a/doc/guides/howto/img/packet_capture_framework.svg
+++ b/doc/guides/howto/img/packet_capture_framework.svg
@@ -1,6 +1,4 @@
 <?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
 <svg
    xmlns:osb="http://www.openswatchbook.org/uri/2009/osb"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
@@ -16,8 +14,8 @@
    viewBox="0 0 425.19685 283.46457"
    id="svg2"
    version="1.1"
-   inkscape:version="0.91 r13725"
-   sodipodi:docname="drawing-pcap.svg">
+   inkscape:version="1.0.2 (e86c870879, 2021-01-15)"
+   sodipodi:docname="packet_capture_framework.svg">
   <defs
      id="defs4">
     <marker
@@ -228,7 +226,7 @@
        x2="487.64606"
        y2="258.38232"
        gradientUnits="userSpaceOnUse"
-       gradientTransform="translate(-84.916417,744.90779)" />
+       gradientTransform="matrix(1.1457977,0,0,0.99944907,-151.97019,745.05014)" />
     <linearGradient
        inkscape:collect="always"
        xlink:href="#linearGradient5784"
@@ -277,17 +275,18 @@
      borderopacity="1.0"
      inkscape:pageopacity="0.0"
      inkscape:pageshadow="2"
-     inkscape:zoom="0.57434918"
-     inkscape:cx="215.17857"
-     inkscape:cy="285.26445"
+     inkscape:zoom="1"
+     inkscape:cx="226.77165"
+     inkscape:cy="78.124511"
      inkscape:document-units="px"
      inkscape:current-layer="layer1"
      showgrid="false"
-     inkscape:window-width="1874"
-     inkscape:window-height="971"
-     inkscape:window-x="2"
-     inkscape:window-y="24"
-     inkscape:window-maximized="0" />
+     inkscape:window-width="2560"
+     inkscape:window-height="1414"
+     inkscape:window-x="0"
+     inkscape:window-y="0"
+     inkscape:window-maximized="1"
+     inkscape:document-rotation="0" />
   <metadata
      id="metadata7">
     <rdf:RDF>
@@ -296,7 +295,7 @@
         <dc:format>image/svg+xml</dc:format>
         <dc:type
            rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title></dc:title>
+        <dc:title />
       </cc:Work>
     </rdf:RDF>
   </metadata>
@@ -321,15 +320,15 @@
        y="790.82452" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="61.050636"
        y="807.3205"
-       id="text4152"
-       sodipodi:linespacing="125%"><tspan
+       id="text4152"><tspan
          sodipodi:role="line"
          id="tspan4154"
          x="61.050636"
-         y="807.3205">DPDK Primary Application</tspan></text>
+         y="807.3205"
+         style="font-size:12.5px;line-height:1.25">DPDK Primary Application</tspan></text>
     <rect
        style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6"
@@ -339,19 +338,20 @@
        y="827.01843" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="350.68585"
        y="841.16058"
-       id="text4189"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189"><tspan
          sodipodi:role="line"
          id="tspan4191"
          x="350.68585"
-         y="841.16058">dpdk-pdump</tspan><tspan
+         y="841.16058"
+         style="font-size:12.5px;line-height:1.25">dpdk-dumpcap</tspan><tspan
          sodipodi:role="line"
          x="350.68585"
          y="856.78558"
-         id="tspan4193">tool</tspan></text>
+         id="tspan4193"
+         style="font-size:12.5px;line-height:1.25">tool</tspan></text>
     <rect
        style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-4"
@@ -361,15 +361,15 @@
        y="891.16315" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="352.70612"
        y="905.3053"
-       id="text4189-1"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-1"><tspan
          sodipodi:role="line"
          x="352.70612"
          y="905.3053"
-         id="tspan4193-3">PCAP PMD</tspan></text>
+         id="tspan4193-3"
+         style="font-size:12.5px;line-height:1.25">librte_pcapng</tspan></text>
     <rect
        style="fill:url(#linearGradient5745);fill-opacity:1;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-6"
@@ -379,15 +379,15 @@
        y="923.9931" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="136.02846"
        y="938.13525"
-       id="text4189-0"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-0"><tspan
          sodipodi:role="line"
          x="136.02846"
          y="938.13525"
-         id="tspan4193-6">dpdk_port0</tspan></text>
+         id="tspan4193-6"
+         style="font-size:12.5px;line-height:1.25">dpdk_port0</tspan></text>
     <rect
        style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-5"
@@ -397,33 +397,33 @@
        y="824.99817" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="137.54369"
        y="839.14026"
-       id="text4189-4"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-4"><tspan
          sodipodi:role="line"
          x="137.54369"
          y="839.14026"
-         id="tspan4193-2">librte_pdump</tspan></text>
+         id="tspan4193-2"
+         style="font-size:12.5px;line-height:1.25">librte_pdump</tspan></text>
     <rect
-       style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+       style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1.07013;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-4-5"
-       width="94.449265"
-       height="35.355339"
-       x="307.7804"
-       y="985.61243" />
+       width="108.21974"
+       height="35.335861"
+       x="297.9809"
+       y="985.62219" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="352.70618"
        y="999.75458"
-       id="text4189-1-8"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-1-8"><tspan
          sodipodi:role="line"
          x="352.70618"
          y="999.75458"
-         id="tspan4193-3-2">capture.pcap</tspan></text>
+         id="tspan4193-3-2"
+         style="font-size:12.5px;line-height:1.25">capture.pcapng</tspan></text>
     <rect
        style="fill:url(#linearGradient5788-1);fill-opacity:1;stroke:#257cdc;stroke-width:1.12555885;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-4-5-1"
@@ -433,15 +433,15 @@
        y="983.14984" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="136.53352"
        y="1002.785"
-       id="text4189-1-8-4"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-1-8-4"><tspan
          sodipodi:role="line"
          x="136.53352"
          y="1002.785"
-         id="tspan4193-3-2-7">Traffic Generator</tspan></text>
+         id="tspan4193-3-2-7"
+         style="font-size:12.5px;line-height:1.25">Traffic Generator</tspan></text>
     <path
        style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker7331)"
        d="m 351.46948,927.02357 c 0,57.5787 0,57.5787 0,57.5787"
diff --git a/doc/guides/howto/packet_capture_framework.rst b/doc/guides/howto/packet_capture_framework.rst
index c31bac52340e..f933cc7e9311 100644
--- a/doc/guides/howto/packet_capture_framework.rst
+++ b/doc/guides/howto/packet_capture_framework.rst
@@ -1,18 +1,19 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright(c) 2017 Intel Corporation.
+    Copyright(c) 2017-2021 Intel Corporation.
 
-DPDK pdump Library and pdump Tool
-=================================
+DPDK packet capture libraries and tools
+=======================================
 
 This document describes how the Data Plane Development Kit (DPDK) Packet
 Capture Framework is used for capturing packets on DPDK ports. It is intended
 for users of DPDK who want to know more about the Packet Capture feature and
 for those who want to monitor traffic on DPDK-controlled devices.
 
-The DPDK packet capture framework was introduced in DPDK v16.07. The DPDK
-packet capture framework consists of the DPDK pdump library and DPDK pdump
-tool.
-
+The DPDK packet capture framework was introduced in DPDK v16.07 and
+enhanced in 21.11. The DPDK packet capture framework consists of two
+libraries: ``librte_pdump`` for collecting packets and ``librte_pcapng``
+for writing packets to a file. There are two sample applications:
+``dpdk-dumpcap`` and the older ``dpdk-pdump``.
 
 Introduction
 ------------
@@ -22,43 +23,46 @@ allow users to initialize the packet capture framework and to enable or
 disable packet capture. The library works on a multi process communication model and its
 usage is recommended for debugging purposes.
 
-The :ref:`dpdk-pdump <pdump_tool>` tool is developed based on the
-``librte_pdump`` library.  It runs as a DPDK secondary process and is capable
-of enabling or disabling packet capture on DPDK ports. The ``dpdk-pdump`` tool
-provides command-line options with which users can request enabling or
-disabling of the packet capture on DPDK ports.
+The :ref:`librte_pcapng <pcapng_library>` library provides the APIs to format
+packets and write them to a file in Pcapng format.
+
+
+The :ref:`dpdk-dumpcap <dumpcap_tool>` is a tool that captures packets,
+much as the Wireshark dumpcap tool does for Linux. It runs as a DPDK
+secondary process, captures packets from one or more interfaces and
+writes them to a file in Pcapng format.  The ``dpdk-dumpcap`` tool is
+designed to take most of the same options as the Wireshark ``dumpcap`` command.
 
-The application which initializes the packet capture framework will be a primary process
-and the application that enables or disables the packet capture will
-be a secondary process. The primary process sends the Rx and Tx packets from the DPDK ports
-to the secondary process.
+Without any options, it will use the packet capture framework to
+capture traffic from the first available DPDK port.
 
 In DPDK the ``testpmd`` application can be used to initialize the packet
-capture framework and acts as a server, and the ``dpdk-pdump`` tool acts as a
+capture framework and acts as a server, and the ``dpdk-dumpcap`` tool acts as a
 client. To view Rx or Tx packets of ``testpmd``, the application should be
-launched first, and then the ``dpdk-pdump`` tool. Packets from ``testpmd``
-will be sent to the tool, which then sends them on to the Pcap PMD device and
-that device writes them to the Pcap file or to an external interface depending
-on the command-line option used.
+launched first, and then the ``dpdk-dumpcap`` tool. Packets from ``testpmd``
+will be sent to the tool, and then to the Pcapng file.
 
 Some things to note:
 
-* The ``dpdk-pdump`` tool can only be used in conjunction with a primary
+* All tools using ``librte_pdump`` can only be used in conjunction with a primary
   application which has the packet capture framework initialized already. In
   dpdk, only ``testpmd`` is modified to initialize packet capture framework,
-  other applications remain untouched. So, if the ``dpdk-pdump`` tool has to
+  other applications remain untouched. So, if the ``dpdk-dumpcap`` tool has to
   be used with any application other than the testpmd, the user needs to
   explicitly modify that application to call the packet capture framework
   initialization code. Refer to the ``app/test-pmd/testpmd.c`` code and look
   for ``pdump`` keyword to see how this is done.
 
-* The ``dpdk-pdump`` tool depends on the libpcap based PMD.
+* The ``dpdk-pdump`` tool is an older tool created as a demonstration of the
+  ``librte_pdump`` library. The ``dpdk-pdump`` tool provides more limited
+  functionality and depends on the Pcap PMD. It is retained only for
+  compatibility reasons; users should use ``dpdk-dumpcap`` instead.
 
 
 Test Environment
 ----------------
 
-The overview of using the Packet Capture Framework and the ``dpdk-pdump`` tool
+The use of the Packet Capture Framework with the ``dpdk-dumpcap`` utility is shown
 for packet capturing on the DPDK port in
 :numref:`figure_packet_capture_framework`.
 
@@ -66,13 +70,13 @@ for packet capturing on the DPDK port in
 
 .. figure:: img/packet_capture_framework.*
 
-   Packet capturing on a DPDK port using the dpdk-pdump tool.
+   Packet capturing on a DPDK port using the dpdk-dumpcap utility.
 
 
 Running the Application
 -----------------------
 
-The following steps demonstrate how to run the ``dpdk-pdump`` tool to capture
+The following steps demonstrate how to run the ``dpdk-dumpcap`` tool to capture
 Rx side packets on dpdk_port0 in :numref:`figure_packet_capture_framework` and
 inspect them using ``tcpdump``.
 
@@ -80,16 +84,15 @@ inspect them using ``tcpdump``.
 
      sudo <build_dir>/app/dpdk-testpmd -c 0xf0 -n 4 -- -i --port-topology=chained
 
-#. Launch the pdump tool as follows::
+#. Launch the dpdk-dumpcap as follows::
 
-     sudo <build_dir>/app/dpdk-pdump -- \
-          --pdump 'port=0,queue=*,rx-dev=/tmp/capture.pcap'
+     sudo <build_dir>/app/dpdk-dumpcap -w /tmp/capture.pcapng
 
 #. Send traffic to dpdk_port0 from traffic generator.
-   Inspect packets captured in the file capture.pcap using a tool
-   that can interpret Pcap files, for example tcpdump::
+   Inspect packets captured in the file capture.pcapng using a tool such as
+   tcpdump or tshark that can interpret Pcapng files::
 
-     $tcpdump -nr /tmp/capture.pcap
+     $ tcpdump -nr /tmp/capture.pcapng
      reading from file /tmp/capture.pcap, link-type EN10MB (Ethernet)
      11:11:36.891404 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
      11:11:36.891442 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 2dce507f46a3..b440c77c2ba1 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -43,6 +43,7 @@ Programmer's Guide
     ip_fragment_reassembly_lib
     generic_receive_offload_lib
     generic_segmentation_offload_lib
+    pcapng_lib
     pdump_lib
     multi_proc_support
     kernel_nic_interface
diff --git a/doc/guides/prog_guide/pcapng_lib.rst b/doc/guides/prog_guide/pcapng_lib.rst
new file mode 100644
index 000000000000..09fa2934a2cc
--- /dev/null
+++ b/doc/guides/prog_guide/pcapng_lib.rst
@@ -0,0 +1,46 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2021 Microsoft Corporation
+
+.. _pcapng_library:
+
+Packet Capture Next Generation Library
+======================================
+
+Exchanging packet traces becomes more and more critical every day.
+The de facto standard for this is the format defined by libpcap;
+but that format is rather old and is lacking in functionality
+for more modern applications. The `Pcapng file format`_
+is the default capture file format for modern network capture
+processing tools such as `wireshark`_ (can also be read by `tcpdump`_).
+
+The Pcapng library provides an API for formatting packet data
+into a Pcapng file.
+The format conforms to the current `Pcapng RFC`_ standard.
+It is designed to be integrated with the packet capture library.
+
+Usage
+-----
+
+Before the library can be used the function ``rte_pcapng_init``
+should be called once to initialize timestamp computation.
+
+The output stream is created with ``rte_pcapng_fdopen``,
+and should be closed with ``rte_pcapng_close``.
+
+The library requires a DPDK mempool to allocate mbufs. The mbufs
+need to be able to accommodate additional space for the pcapng packet
+format header and trailer information; the function ``rte_pcapng_mbuf_size``
+should be used to determine the lower bound based on MTU.
+
+Collecting packets is done in two parts. The function ``rte_pcapng_copy``
+is used to format and copy mbuf data and ``rte_pcapng_write_packets``
+writes a burst of packets to the output file.
+
+The function ``rte_pcapng_write_stats`` can be used to write
+statistics information into the output file. The summary statistics
+information is automatically added by ``rte_pcapng_close``.
+
+.. _Tcpdump: https://tcpdump.org/
+.. _Wireshark: https://wireshark.org/
+.. _Pcapng file format: https://github.com/pcapng/pcapng/
+.. _Pcapng RFC: https://datatracker.ietf.org/doc/html/draft-tuexen-opsawg-pcapng
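
A minimal sketch of the write path described above, assuming a mempool
already sized via ``rte_pcapng_mbuf_size`` and the one-time
``rte_pcapng_init`` call already made; the burst size, file permissions
and RX-only direction below are illustrative choices, not library
requirements:

/*
 * Open a pcapng output file and copy one received burst into it.
 */
#include <fcntl.h>
#include <rte_cycles.h>
#include <rte_mbuf.h>
#include <rte_pcapng.h>

#define CAP_BURST 32

static rte_pcapng_t *
capture_open(const char *path)
{
	int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (fd < 0)
		return NULL;
	/* The OS, hardware, application and comment strings may be NULL. */
	return rte_pcapng_fdopen(fd, NULL, NULL, "pcapng-example", NULL);
}

static void
capture_burst(rte_pcapng_t *out, struct rte_mempool *mp,
	      uint16_t port, uint16_t queue,
	      struct rte_mbuf **pkts, uint16_t nb)
{
	struct rte_mbuf *copies[CAP_BURST];
	uint64_t ts = rte_get_tsc_cycles();
	uint16_t i, n = 0;

	for (i = 0; i < nb && n < CAP_BURST; i++) {
		/* Copy mbuf data and add the pcapng packet block framing. */
		copies[n] = rte_pcapng_copy(port, queue, pkts[i], mp,
					    UINT32_MAX, ts,
					    RTE_PCAPNG_DIRECTION_IN);
		if (copies[n] != NULL)
			n++;
	}
	/* Write the formatted burst, then release the copies. */
	rte_pcapng_write_packets(out, copies, n);
	rte_pktmbuf_free_bulk(copies, n);
}

The file should then be finalized with ``rte_pcapng_close`` so that the
summary statistics block mentioned above is emitted.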
diff --git a/doc/guides/prog_guide/pdump_lib.rst b/doc/guides/prog_guide/pdump_lib.rst
index 62c0b015b2fe..f3ff8fd828dc 100644
--- a/doc/guides/prog_guide/pdump_lib.rst
+++ b/doc/guides/prog_guide/pdump_lib.rst
@@ -3,10 +3,10 @@
 
 .. _pdump_library:
 
-The librte_pdump Library
-========================
+Packet Capture Library
+======================
 
-The ``librte_pdump`` library provides a framework for packet capturing in DPDK.
+The DPDK ``pdump`` library provides a framework for packet capturing in DPDK.
 The library does the complete copy of the Rx and Tx mbufs to a new mempool and
 hence it slows down the performance of the applications, so it is recommended
 to use this library for debugging purposes.
@@ -23,11 +23,19 @@ or disable the packet capture, and to uninitialize it.
 
 * ``rte_pdump_enable()``:
   This API enables the packet capture on a given port and queue.
-  Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf()``:
+  This API enables the packet capture on a given port and queue.
+  It also allows setting an optional filter using the DPDK BPF interpreter
+  and setting the captured packet length.
 
 * ``rte_pdump_enable_by_deviceid()``:
   This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
-  Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf_by_deviceid()``:
+  This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
+  It also allows setting an optional filter using the DPDK BPF interpreter
+  and setting the captured packet length.
 
 * ``rte_pdump_disable()``:
   This API disables the packet capture on a given port and queue.
@@ -61,6 +69,12 @@ and enables the packet capture by registering the Ethernet RX and TX callbacks f
 and queue combinations. Then the primary process will mirror the packets to the new mempool and enqueue them to
 the rte_ring that secondary process have passed to these APIs.
 
+The packet ring supports one of two formats. The default format enqueues copies of the original packets
+into the rte_ring. If ``RTE_PDUMP_FLAG_PCAPNG`` is set, the mbuf data is extended with header and trailer
+to match the format of a Pcapng enhanced packet block. The enhanced packet block has meta-data such as the
+timestamp, port and queue the packet was captured on. It is up to the application consuming the
+packets from the ring to select the desired format.
+
 The library APIs ``rte_pdump_disable()`` and ``rte_pdump_disable_by_deviceid()`` disables the packet capture.
 For the calls to these APIs from secondary process, the library creates the "pdump disable" request and sends
 the request to the primary process over the multi process channel. The primary process takes this request and
@@ -74,5 +88,5 @@ function.
 Use Case: Packet Capturing
 --------------------------
 
-The DPDK ``app/pdump`` tool is developed based on this library to capture packets in DPDK.
-Users can use this as an example to develop their own packet capturing tools.
+The DPDK ``app/dpdk-dumpcap`` utility uses this library
+to capture packets in DPDK.
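
For illustration, a minimal secondary-process client of the APIs listed
above could look as follows; the ring and mempool parameters are
arbitrary assumptions, and passing a NULL program requests unfiltered
capture:

/*
 * Enable pcapng-format capture of both directions on all queues.
 */
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_pdump.h>
#include <rte_ring.h>

static int
start_capture(uint16_t port)
{
	struct rte_ring *ring;
	struct rte_mempool *mp;

	/* Multi-producer ring: the primary enqueues from several lcores. */
	ring = rte_ring_create("pdump_ring", 1024, rte_socket_id(),
			       RING_F_SC_DEQ);
	mp = rte_pktmbuf_pool_create("pdump_pool", 4095, 256, 0,
				     RTE_MBUF_DEFAULT_BUF_SIZE,
				     rte_socket_id());
	if (ring == NULL || mp == NULL)
		return -1;

	/* Full snap length; RTE_PDUMP_ALL_QUEUES is UINT16_MAX. */
	return rte_pdump_enable_bpf(port, RTE_PDUMP_ALL_QUEUES,
				    RTE_PDUMP_FLAG_RXTX |
				    RTE_PDUMP_FLAG_PCAPNG,
				    UINT32_MAX, ring, mp, NULL);
}

The application then dequeues the wrapped mbufs from ``pdump_ring`` and
hands them to a pcapng writer such as the one sketched earlier.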
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 4c56cdfeaaa2..0909f4258cf8 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -159,6 +159,16 @@ New Features
   * Added tests to verify tunnel header verification in IPsec inbound.
   * Added tests to verify inner checksum.
 
+* **Revised packet capture framework.**
+
+  * New dpdk-dumpcap program that has most of the features of the
+    wireshark dumpcap utility including: capture of multiple interfaces,
+    filtering, and stopping after a number of bytes or packets.
+  * New library for writing pcapng packet capture files.
+  * Enhancements to the pdump library to support:
+    * Packet filtering with BPF.
+    * Pcapng format with timestamps and meta-data.
+    * Fixed packet capture with stripped VLAN tags.
 
 Removed Items
 -------------
diff --git a/doc/guides/tools/dumpcap.rst b/doc/guides/tools/dumpcap.rst
new file mode 100644
index 000000000000..664ea0c79802
--- /dev/null
+++ b/doc/guides/tools/dumpcap.rst
@@ -0,0 +1,86 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2020 Microsoft Corporation.
+
+.. _dumpcap_tool:
+
+dpdk-dumpcap Application
+========================
+
+The ``dpdk-dumpcap`` tool is a Data Plane Development Kit (DPDK)
+network traffic dump tool.  The interface is similar to the dumpcap tool
+in Wireshark. It runs as a secondary DPDK process and lets you capture
+packets that are coming into and out of a DPDK primary process.
+The ``dpdk-dumpcap`` tool writes captured packets to a file
+in the Pcapng format.
+
+Without any options set, it will use DPDK to capture traffic from the first
+available DPDK interface and write the received raw packet data, along
+with timestamps, into a pcapng file.
+
+If the ``-w`` option is not specified, ``dpdk-dumpcap`` writes to a newly
+created file with a name chosen based on interface name and timestamp.
+If the ``-w`` option is specified, then that file is used.
+
+   .. Note::
+      * The ``dpdk-dumpcap`` tool can only be used in conjunction with a primary
+        application which has the packet capture framework initialized already.
+        In DPDK, only ``testpmd`` is modified to initialize the packet capture
+        framework; other applications remain untouched. So, if the ``dpdk-dumpcap``
+        tool has to be used with any application other than testpmd, the user
+        needs to explicitly modify that application to call the packet capture
+        framework initialization code. Refer to the ``app/test-pmd/testpmd.c``
+        code to see how this is done.
+
+      * The ``dpdk-dumpcap`` tool runs as a DPDK secondary process. It exits when
+        the primary application exits.
+
+
+Running the Application
+-----------------------
+
+To list interfaces available for capture use ``--list-interfaces``.
+
+To filter packets in style of *tshark* use the ``-f`` flag.
+
+To capture on multiple interfaces at once, use multiple ``-I`` flags.
+
+Example
+-------
+
+.. code-block:: console
+
+   # ./<build_dir>/app/dpdk-dumpcap --list-interfaces
+   0. 000:00:03.0
+   1. 000:00:03.1
+
+   # ./<build_dir>/app/dpdk-dumpcap -I 0000:00:03.0 -c 6 -w /tmp/sample.pcapng
+   Packets captured: 6
+   Packets received/dropped on interface '0000:00:03.0' 6/0
+
+   # ./<build_dir>/app/dpdk-dumpcap -f 'tcp port 80'
+   Packets captured: 6
+   Packets received/dropped on interface '0000:00:03.0' 10/8
+
+
+Limitations
+-----------
+The following option of Wireshark ``dumpcap`` is not yet implemented:
+
+   * ``-b|--ring-buffer`` -- more complex file management.
+
+The following options do not make sense in the context of DPDK.
+
+   * ``-C <byte_limit>`` -- it's a kernel thing
+
+   * ``-t`` -- use a thread per interface
+
+   * Timestamp type.
+
+   * Link data types. Only EN10MB (Ethernet) is supported.
+
+   * Wireless related options:  ``-I|--monitor-mode`` and  ``-k <freq>``
+
+
+.. Note::
+   * The options to ``dpdk-dumpcap`` follow the Wireshark dumpcap program
+     and are not the same as those of ``dpdk-pdump`` and other DPDK applications.
diff --git a/doc/guides/tools/index.rst b/doc/guides/tools/index.rst
index 93dde4148e90..b71c12b8f2dd 100644
--- a/doc/guides/tools/index.rst
+++ b/doc/guides/tools/index.rst
@@ -8,6 +8,7 @@ DPDK Tools User Guides
     :maxdepth: 2
     :numbered:
 
+    dumpcap
     proc_info
     pdump
     pmdinfo
-- 
2.30.2


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v14 06/12] pdump: support pcapng and filtering
  @ 2021-10-15 20:11  1%   ` Stephen Hemminger
  2021-10-15 20:11  1%   ` [dpdk-dev] [PATCH v14 11/12] doc: changes for new pcapng and dumpcap utility Stephen Hemminger
  1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-10-15 20:11 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger, Reshma Pattan, Ray Kinsella, Anatoly Burakov

This enhances the DPDK pdump library to support the new
pcapng format and filtering via BPF.

The internal client/server protocol is changed to support
two versions: the original pdump basic version and a
new pcapng version.

The internal version number (not part of exposed API or ABI)
is intentionally increased to cause any attempt to try
mismatched primary/secondary process to fail.

Add a new API to allow filtering of captured packets with a
DPDK BPF (eBPF) filter program. It keeps statistics
on packets captured, filtered, and missed (because the ring was full).

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
---
 lib/meson.build       |   4 +-
 lib/pdump/meson.build |   2 +-
 lib/pdump/rte_pdump.c | 432 ++++++++++++++++++++++++++++++------------
 lib/pdump/rte_pdump.h | 113 ++++++++++-
 lib/pdump/version.map |   8 +
 5 files changed, 433 insertions(+), 126 deletions(-)

diff --git a/lib/meson.build b/lib/meson.build
index 15150efa19a7..c71c6917dbb7 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -27,6 +27,7 @@ libraries = [
         'acl',
         'bbdev',
         'bitratestats',
+        'bpf',
         'cfgfile',
         'compressdev',
         'cryptodev',
@@ -43,7 +44,6 @@ libraries = [
         'member',
         'pcapng',
         'power',
-        'pdump',
         'rawdev',
         'regexdev',
         'rib',
@@ -55,10 +55,10 @@ libraries = [
         'ipsec', # ipsec lib depends on net, crypto and security
         'fib', #fib lib depends on rib
         'port', # pkt framework libs which use other libs from above
+        'pdump', # pdump lib depends on bpf
         'table',
         'pipeline',
         'flow_classify', # flow_classify lib depends on pkt framework table lib
-        'bpf',
         'graph',
         'node',
 ]
diff --git a/lib/pdump/meson.build b/lib/pdump/meson.build
index 3a95eabde6a6..51ceb2afdec5 100644
--- a/lib/pdump/meson.build
+++ b/lib/pdump/meson.build
@@ -3,4 +3,4 @@
 
 sources = files('rte_pdump.c')
 headers = files('rte_pdump.h')
-deps += ['ethdev']
+deps += ['ethdev', 'bpf', 'pcapng']
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 382217bc1564..2636a216994b 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -7,8 +7,10 @@
 #include <rte_ethdev.h>
 #include <rte_lcore.h>
 #include <rte_log.h>
+#include <rte_memzone.h>
 #include <rte_errno.h>
 #include <rte_string_fns.h>
+#include <rte_pcapng.h>
 
 #include "rte_pdump.h"
 
@@ -27,30 +29,23 @@ enum pdump_operation {
 	ENABLE = 2
 };
 
+/* Internal version number in request */
 enum pdump_version {
-	V1 = 1
+	V1 = 1,		    /* no filtering or snap */
+	V2 = 2,
 };
 
 struct pdump_request {
 	uint16_t ver;
 	uint16_t op;
 	uint32_t flags;
-	union pdump_data {
-		struct enable_v1 {
-			char device[RTE_DEV_NAME_MAX_LEN];
-			uint16_t queue;
-			struct rte_ring *ring;
-			struct rte_mempool *mp;
-			void *filter;
-		} en_v1;
-		struct disable_v1 {
-			char device[RTE_DEV_NAME_MAX_LEN];
-			uint16_t queue;
-			struct rte_ring *ring;
-			struct rte_mempool *mp;
-			void *filter;
-		} dis_v1;
-	} data;
+	char device[RTE_DEV_NAME_MAX_LEN];
+	uint16_t queue;
+	struct rte_ring *ring;
+	struct rte_mempool *mp;
+
+	const struct rte_bpf_prm *prm;
+	uint32_t snaplen;
 };
 
 struct pdump_response {
@@ -63,80 +58,140 @@ static struct pdump_rxtx_cbs {
 	struct rte_ring *ring;
 	struct rte_mempool *mp;
 	const struct rte_eth_rxtx_callback *cb;
-	void *filter;
+	const struct rte_bpf *filter;
+	enum pdump_version ver;
+	uint32_t snaplen;
 } rx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
 tx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
 
 
-static inline void
-pdump_copy(struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
+/*
+ * The packet capture statistics keep track of packets
+ * accepted, filtered and dropped. These are per-queue
+ * and in memory between primary and secondary processes.
+ */
+static const char MZ_RTE_PDUMP_STATS[] = "rte_pdump_stats";
+static struct {
+	struct rte_pdump_stats rx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+	struct rte_pdump_stats tx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+} *pdump_stats;
+
+/* Create a clone of mbuf to be placed into ring. */
+static void
+pdump_copy(uint16_t port_id, uint16_t queue,
+	   enum rte_pcapng_direction direction,
+	   struct rte_mbuf **pkts, uint16_t nb_pkts,
+	   const struct pdump_rxtx_cbs *cbs,
+	   struct rte_pdump_stats *stats)
 {
 	unsigned int i;
 	int ring_enq;
 	uint16_t d_pkts = 0;
 	struct rte_mbuf *dup_bufs[nb_pkts];
-	struct pdump_rxtx_cbs *cbs;
+	uint64_t ts;
 	struct rte_ring *ring;
 	struct rte_mempool *mp;
 	struct rte_mbuf *p;
+	uint64_t rcs[nb_pkts];
+
+	if (cbs->filter)
+		rte_bpf_exec_burst(cbs->filter, (void **)pkts, rcs, nb_pkts);
 
-	cbs  = user_params;
+	ts = rte_get_tsc_cycles();
 	ring = cbs->ring;
 	mp = cbs->mp;
 	for (i = 0; i < nb_pkts; i++) {
-		p = rte_pktmbuf_copy(pkts[i], mp, 0, UINT32_MAX);
-		if (p)
+		/*
+		 * This uses same BPF return value convention as socket filter
+		 * and pcap_offline_filter.
+		 * if program returns zero
+		 * then packet doesn't match the filter (will be ignored).
+		 */
+		if (cbs->filter && rcs[i] == 0) {
+			__atomic_fetch_add(&stats->filtered,
+					   1, __ATOMIC_RELAXED);
+			continue;
+		}
+
+		/*
+		 * If using pcapng then want to wrap packets
+		 * otherwise a simple copy.
+		 */
+		if (cbs->ver == V2)
+			p = rte_pcapng_copy(port_id, queue,
+					    pkts[i], mp, cbs->snaplen,
+					    ts, direction);
+		else
+			p = rte_pktmbuf_copy(pkts[i], mp, 0, cbs->snaplen);
+
+		if (unlikely(p == NULL))
+			__atomic_fetch_add(&stats->nombuf, 1, __ATOMIC_RELAXED);
+		else
 			dup_bufs[d_pkts++] = p;
 	}
 
+	__atomic_fetch_add(&stats->accepted, d_pkts, __ATOMIC_RELAXED);
+
 	ring_enq = rte_ring_enqueue_burst(ring, (void *)dup_bufs, d_pkts, NULL);
 	if (unlikely(ring_enq < d_pkts)) {
 		unsigned int drops = d_pkts - ring_enq;
 
-		PDUMP_LOG(DEBUG,
-			"only %d of packets enqueued to ring\n", ring_enq);
+		__atomic_fetch_add(&stats->ringfull, drops, __ATOMIC_RELAXED);
 		rte_pktmbuf_free_bulk(&dup_bufs[ring_enq], drops);
 	}
 }
 
 static uint16_t
-pdump_rx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_rx(uint16_t port, uint16_t queue,
 	struct rte_mbuf **pkts, uint16_t nb_pkts,
-	uint16_t max_pkts __rte_unused,
-	void *user_params)
+	uint16_t max_pkts __rte_unused, void *user_params)
 {
-	pdump_copy(pkts, nb_pkts, user_params);
+	const struct pdump_rxtx_cbs *cbs = user_params;
+	struct rte_pdump_stats *stats = &pdump_stats->rx[port][queue];
+
+	pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_IN,
+		   pkts, nb_pkts, cbs, stats);
 	return nb_pkts;
 }
 
 static uint16_t
-pdump_tx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_tx(uint16_t port, uint16_t queue,
 		struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
 {
-	pdump_copy(pkts, nb_pkts, user_params);
+	const struct pdump_rxtx_cbs *cbs = user_params;
+	struct rte_pdump_stats *stats = &pdump_stats->tx[port][queue];
+
+	pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_OUT,
+		   pkts, nb_pkts, cbs, stats);
 	return nb_pkts;
 }
 
 static int
-pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
-				struct rte_ring *ring, struct rte_mempool *mp,
-				uint16_t operation)
+pdump_register_rx_callbacks(enum pdump_version ver,
+			    uint16_t end_q, uint16_t port, uint16_t queue,
+			    struct rte_ring *ring, struct rte_mempool *mp,
+			    struct rte_bpf *filter,
+			    uint16_t operation, uint32_t snaplen)
 {
 	uint16_t qid;
-	struct pdump_rxtx_cbs *cbs = NULL;
 
 	qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
 	for (; qid < end_q; qid++) {
-		cbs = &rx_cbs[port][qid];
-		if (cbs && operation == ENABLE) {
+		struct pdump_rxtx_cbs *cbs = &rx_cbs[port][qid];
+
+		if (operation == ENABLE) {
 			if (cbs->cb) {
 				PDUMP_LOG(ERR,
 					"rx callback for port=%d queue=%d, already exists\n",
 					port, qid);
 				return -EEXIST;
 			}
+			cbs->ver = ver;
 			cbs->ring = ring;
 			cbs->mp = mp;
+			cbs->snaplen = snaplen;
+			cbs->filter = filter;
+
 			cbs->cb = rte_eth_add_first_rx_callback(port, qid,
 								pdump_rx, cbs);
 			if (cbs->cb == NULL) {
@@ -145,8 +200,7 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
 					rte_errno);
 				return rte_errno;
 			}
-		}
-		if (cbs && operation == DISABLE) {
+		} else if (operation == DISABLE) {
 			int ret;
 
 			if (cbs->cb == NULL) {
@@ -170,26 +224,32 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
 }
 
 static int
-pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
-				struct rte_ring *ring, struct rte_mempool *mp,
-				uint16_t operation)
+pdump_register_tx_callbacks(enum pdump_version ver,
+			    uint16_t end_q, uint16_t port, uint16_t queue,
+			    struct rte_ring *ring, struct rte_mempool *mp,
+			    struct rte_bpf *filter,
+			    uint16_t operation, uint32_t snaplen)
 {
 
 	uint16_t qid;
-	struct pdump_rxtx_cbs *cbs = NULL;
 
 	qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
 	for (; qid < end_q; qid++) {
-		cbs = &tx_cbs[port][qid];
-		if (cbs && operation == ENABLE) {
+		struct pdump_rxtx_cbs *cbs = &tx_cbs[port][qid];
+
+		if (operation == ENABLE) {
 			if (cbs->cb) {
 				PDUMP_LOG(ERR,
 					"tx callback for port=%d queue=%d, already exists\n",
 					port, qid);
 				return -EEXIST;
 			}
+			cbs->ver = ver;
 			cbs->ring = ring;
 			cbs->mp = mp;
+			cbs->snaplen = snaplen;
+			cbs->filter = filter;
+
 			cbs->cb = rte_eth_add_tx_callback(port, qid, pdump_tx,
 								cbs);
 			if (cbs->cb == NULL) {
@@ -198,8 +258,7 @@ pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
 					rte_errno);
 				return rte_errno;
 			}
-		}
-		if (cbs && operation == DISABLE) {
+		} else if (operation == DISABLE) {
 			int ret;
 
 			if (cbs->cb == NULL) {
@@ -228,37 +287,47 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
 	uint16_t nb_rx_q = 0, nb_tx_q = 0, end_q, queue;
 	uint16_t port;
 	int ret = 0;
+	struct rte_bpf *filter = NULL;
 	uint32_t flags;
 	uint16_t operation;
 	struct rte_ring *ring;
 	struct rte_mempool *mp;
 
-	flags = p->flags;
-	operation = p->op;
-	if (operation == ENABLE) {
-		ret = rte_eth_dev_get_port_by_name(p->data.en_v1.device,
-				&port);
-		if (ret < 0) {
+	/* Check for possible DPDK version mismatch */
+	if (!(p->ver == V1 || p->ver == V2)) {
+		PDUMP_LOG(ERR,
+			  "incorrect client version %u\n", p->ver);
+		return -EINVAL;
+	}
+
+	if (p->prm) {
+		if (p->prm->prog_arg.type != RTE_BPF_ARG_PTR_MBUF) {
 			PDUMP_LOG(ERR,
-				"failed to get port id for device id=%s\n",
-				p->data.en_v1.device);
+				  "invalid BPF program type: %u\n",
+				  p->prm->prog_arg.type);
 			return -EINVAL;
 		}
-		queue = p->data.en_v1.queue;
-		ring = p->data.en_v1.ring;
-		mp = p->data.en_v1.mp;
-	} else {
-		ret = rte_eth_dev_get_port_by_name(p->data.dis_v1.device,
-				&port);
-		if (ret < 0) {
-			PDUMP_LOG(ERR,
-				"failed to get port id for device id=%s\n",
-				p->data.dis_v1.device);
-			return -EINVAL;
+
+		filter = rte_bpf_load(p->prm);
+		if (filter == NULL) {
+			PDUMP_LOG(ERR, "cannot load BPF filter: %s\n",
+				  rte_strerror(rte_errno));
+			return -rte_errno;
 		}
-		queue = p->data.dis_v1.queue;
-		ring = p->data.dis_v1.ring;
-		mp = p->data.dis_v1.mp;
+	}
+
+	flags = p->flags;
+	operation = p->op;
+	queue = p->queue;
+	ring = p->ring;
+	mp = p->mp;
+
+	ret = rte_eth_dev_get_port_by_name(p->device, &port);
+	if (ret < 0) {
+		PDUMP_LOG(ERR,
+			  "failed to get port id for device id=%s\n",
+			  p->device);
+		return -EINVAL;
 	}
 
 	/* validation if packet capture is for all queues */
@@ -296,8 +365,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
 	/* register RX callback */
 	if (flags & RTE_PDUMP_FLAG_RX) {
 		end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_rx_q : queue + 1;
-		ret = pdump_register_rx_callbacks(end_q, port, queue, ring, mp,
-							operation);
+		ret = pdump_register_rx_callbacks(p->ver, end_q, port, queue,
+						  ring, mp, filter,
+						  operation, p->snaplen);
 		if (ret < 0)
 			return ret;
 	}
@@ -305,8 +375,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
 	/* register TX callback */
 	if (flags & RTE_PDUMP_FLAG_TX) {
 		end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_tx_q : queue + 1;
-		ret = pdump_register_tx_callbacks(end_q, port, queue, ring, mp,
-							operation);
+		ret = pdump_register_tx_callbacks(p->ver, end_q, port, queue,
+						  ring, mp, filter,
+						  operation, p->snaplen);
 		if (ret < 0)
 			return ret;
 	}
@@ -332,7 +403,7 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
 		resp->err_value = set_pdump_rxtx_cbs(cli_req);
 	}
 
-	strlcpy(mp_resp.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
+	rte_strscpy(mp_resp.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
 	mp_resp.len_param = sizeof(*resp);
 	mp_resp.num_fds = 0;
 	if (rte_mp_reply(&mp_resp, peer) < 0) {
@@ -347,8 +418,18 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
 int
 rte_pdump_init(void)
 {
+	const struct rte_memzone *mz;
 	int ret;
 
+	mz = rte_memzone_reserve(MZ_RTE_PDUMP_STATS, sizeof(*pdump_stats),
+				 rte_socket_id(), 0);
+	if (mz == NULL) {
+		PDUMP_LOG(ERR, "cannot allocate pdump statistics\n");
+		rte_errno = ENOMEM;
+		return -1;
+	}
+	pdump_stats = mz->addr;
+
 	ret = rte_mp_action_register(PDUMP_MP, pdump_server);
 	if (ret && rte_errno != ENOTSUP)
 		return -1;
@@ -392,14 +473,21 @@ pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp)
 static int
 pdump_validate_flags(uint32_t flags)
 {
-	if (flags != RTE_PDUMP_FLAG_RX && flags != RTE_PDUMP_FLAG_TX &&
-		flags != RTE_PDUMP_FLAG_RXTX) {
+	if ((flags & RTE_PDUMP_FLAG_RXTX) == 0) {
 		PDUMP_LOG(ERR,
 			"invalid flags, should be either rx/tx/rxtx\n");
 		rte_errno = EINVAL;
 		return -1;
 	}
 
+	/* mask off the flags we know about */
+	if (flags & ~(RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG)) {
+		PDUMP_LOG(ERR,
+			  "unknown flags: %#x\n", flags);
+		rte_errno = ENOTSUP;
+		return -1;
+	}
+
 	return 0;
 }
 
@@ -426,12 +514,12 @@ pdump_validate_port(uint16_t port, char *name)
 }
 
 static int
-pdump_prepare_client_request(char *device, uint16_t queue,
-				uint32_t flags,
-				uint16_t operation,
-				struct rte_ring *ring,
-				struct rte_mempool *mp,
-				void *filter)
+pdump_prepare_client_request(const char *device, uint16_t queue,
+			     uint32_t flags, uint32_t snaplen,
+			     uint16_t operation,
+			     struct rte_ring *ring,
+			     struct rte_mempool *mp,
+			     const struct rte_bpf_prm *prm)
 {
 	int ret = -1;
 	struct rte_mp_msg mp_req, *mp_rep;
@@ -440,26 +528,22 @@ pdump_prepare_client_request(char *device, uint16_t queue,
 	struct pdump_request *req = (struct pdump_request *)mp_req.param;
 	struct pdump_response *resp;
 
-	req->ver = 1;
-	req->flags = flags;
+	memset(req, 0, sizeof(*req));
+
+	req->ver = (flags & RTE_PDUMP_FLAG_PCAPNG) ? V2 : V1;
+	req->flags = flags & RTE_PDUMP_FLAG_RXTX;
 	req->op = operation;
+	req->queue = queue;
+	rte_strscpy(req->device, device, sizeof(req->device));
+
 	if ((operation & ENABLE) != 0) {
-		strlcpy(req->data.en_v1.device, device,
-			sizeof(req->data.en_v1.device));
-		req->data.en_v1.queue = queue;
-		req->data.en_v1.ring = ring;
-		req->data.en_v1.mp = mp;
-		req->data.en_v1.filter = filter;
-	} else {
-		strlcpy(req->data.dis_v1.device, device,
-			sizeof(req->data.dis_v1.device));
-		req->data.dis_v1.queue = queue;
-		req->data.dis_v1.ring = NULL;
-		req->data.dis_v1.mp = NULL;
-		req->data.dis_v1.filter = NULL;
+		req->ring = ring;
+		req->mp = mp;
+		req->prm = prm;
+		req->snaplen = snaplen;
 	}
 
-	strlcpy(mp_req.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
+	rte_strscpy(mp_req.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
 	mp_req.len_param = sizeof(*req);
 	mp_req.num_fds = 0;
 	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0) {
@@ -477,11 +561,17 @@ pdump_prepare_client_request(char *device, uint16_t queue,
 	return ret;
 }
 
-int
-rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
-			struct rte_ring *ring,
-			struct rte_mempool *mp,
-			void *filter)
+/*
+ * There are two versions of this function because, although the original
+ * API left a place holder for a future filter, it never checked the value.
+ * Therefore the API can't depend on the application passing a
+ * non-bogus value.
+ */
+static int
+pdump_enable(uint16_t port, uint16_t queue,
+	     uint32_t flags, uint32_t snaplen,
+	     struct rte_ring *ring, struct rte_mempool *mp,
+	     const struct rte_bpf_prm *prm)
 {
 	int ret;
 	char name[RTE_DEV_NAME_MAX_LEN];
@@ -496,20 +586,42 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(name, queue, flags,
-						ENABLE, ring, mp, filter);
+	if (snaplen == 0)
+		snaplen = UINT32_MAX;
 
-	return ret;
+	return pdump_prepare_client_request(name, queue, flags, snaplen,
+					    ENABLE, ring, mp, prm);
 }
 
 int
-rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
-				uint32_t flags,
-				struct rte_ring *ring,
-				struct rte_mempool *mp,
-				void *filter)
+rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
+		 struct rte_ring *ring,
+		 struct rte_mempool *mp,
+		 void *filter __rte_unused)
 {
-	int ret = 0;
+	return pdump_enable(port, queue, flags, 0,
+			    ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf(uint16_t port, uint16_t queue,
+		     uint32_t flags, uint32_t snaplen,
+		     struct rte_ring *ring,
+		     struct rte_mempool *mp,
+		     const struct rte_bpf_prm *prm)
+{
+	return pdump_enable(port, queue, flags, snaplen,
+			    ring, mp, prm);
+}
+
+static int
+pdump_enable_by_deviceid(const char *device_id, uint16_t queue,
+			 uint32_t flags, uint32_t snaplen,
+			 struct rte_ring *ring,
+			 struct rte_mempool *mp,
+			 const struct rte_bpf_prm *prm)
+{
+	int ret;
 
 	ret = pdump_validate_ring_mp(ring, mp);
 	if (ret < 0)
@@ -518,10 +630,30 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(device_id, queue, flags,
-						ENABLE, ring, mp, filter);
+	return pdump_prepare_client_request(device_id, queue, flags, snaplen,
+					    ENABLE, ring, mp, prm);
+}
 
-	return ret;
+int
+rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
+			     uint32_t flags,
+			     struct rte_ring *ring,
+			     struct rte_mempool *mp,
+			     void *filter __rte_unused)
+{
+	return pdump_enable_by_deviceid(device_id, queue, flags, 0,
+					ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+				 uint32_t flags, uint32_t snaplen,
+				 struct rte_ring *ring,
+				 struct rte_mempool *mp,
+				 const struct rte_bpf_prm *prm)
+{
+	return pdump_enable_by_deviceid(device_id, queue, flags, snaplen,
+					ring, mp, prm);
 }
 
 int
@@ -537,8 +669,8 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags)
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(name, queue, flags,
-						DISABLE, NULL, NULL, NULL);
+	ret = pdump_prepare_client_request(name, queue, flags, 0,
+					   DISABLE, NULL, NULL, NULL);
 
 	return ret;
 }
@@ -553,8 +685,68 @@ rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(device_id, queue, flags,
-						DISABLE, NULL, NULL, NULL);
+	ret = pdump_prepare_client_request(device_id, queue, flags, 0,
+					   DISABLE, NULL, NULL, NULL);
 
 	return ret;
 }
+
+static void
+pdump_sum_stats(uint16_t port, uint16_t nq,
+		struct rte_pdump_stats stats[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
+		struct rte_pdump_stats *total)
+{
+	uint64_t *sum = (uint64_t *)total;
+	unsigned int i;
+	uint64_t val;
+	uint16_t qid;
+
+	for (qid = 0; qid < nq; qid++) {
+		const uint64_t *perq = (const uint64_t *)&stats[port][qid];
+
+		for (i = 0; i < sizeof(*total) / sizeof(uint64_t); i++) {
+			val = __atomic_load_n(&perq[i], __ATOMIC_RELAXED);
+			sum[i] += val;
+		}
+	}
+}
+
+int
+rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats)
+{
+	struct rte_eth_dev_info dev_info;
+	const struct rte_memzone *mz;
+	int ret;
+
+	memset(stats, 0, sizeof(*stats));
+	ret = rte_eth_dev_info_get(port, &dev_info);
+	if (ret != 0) {
+		PDUMP_LOG(ERR,
+			  "Error during getting device (port %u) info: %s\n",
+			  port, strerror(-ret));
+		return ret;
+	}
+
+	if (pdump_stats == NULL) {
+		if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+			/* rte_pdump_init was not called */
+			PDUMP_LOG(ERR, "pdump stats not initialized\n");
+			rte_errno = EINVAL;
+			return -1;
+		}
+
+		/* secondary process looks up the memzone */
+		mz = rte_memzone_lookup(MZ_RTE_PDUMP_STATS);
+		if (mz == NULL) {
+			/* rte_pdump_init was not called in primary process?? */
+			PDUMP_LOG(ERR, "can not find pdump stats\n");
+			rte_errno = EINVAL;
+			return -1;
+		}
+		pdump_stats = mz->addr;
+	}
+
+	pdump_sum_stats(port, dev_info.nb_rx_queues, pdump_stats->rx, stats);
+	pdump_sum_stats(port, dev_info.nb_tx_queues, pdump_stats->tx, stats);
+	return 0;
+}
diff --git a/lib/pdump/rte_pdump.h b/lib/pdump/rte_pdump.h
index 6b00fc17aeb2..6efa0274f2ce 100644
--- a/lib/pdump/rte_pdump.h
+++ b/lib/pdump/rte_pdump.h
@@ -15,6 +15,7 @@
 #include <stdint.h>
 #include <rte_mempool.h>
 #include <rte_ring.h>
+#include <rte_bpf.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -26,7 +27,9 @@ enum {
 	RTE_PDUMP_FLAG_RX = 1,  /* receive direction */
 	RTE_PDUMP_FLAG_TX = 2,  /* transmit direction */
 	/* both receive and transmit directions */
-	RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX)
+	RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX),
+
+	RTE_PDUMP_FLAG_PCAPNG = 4, /* format for pcapng */
 };
 
 /**
@@ -68,7 +71,7 @@ rte_pdump_uninit(void);
  * @param mp
  *  mempool on to which original packets will be mirrored or duplicated.
  * @param filter
- *  place holder for packet filtering.
+ *  Unused, should be NULL.
  *
  * @return
  *    0 on success, -1 on error, rte_errno is set accordingly.
@@ -80,6 +83,41 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
 		struct rte_mempool *mp,
 		void *filter);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given port and queue with filtering.
+ *
+ * @param port_id
+ *  The Ethernet port on which packet capturing should be enabled.
+ * @param queue
+ *  The queue on the Ethernet port for which packet capturing
+ *  should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ *  queues of a given port.
+ * @param flags
+ *  Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ *  The upper limit on bytes to copy.
+ *  Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ *  The ring on which captured packets will be enqueued for user.
+ * @param mp
+ *  The mempool on to which original packets will be mirrored or duplicated.
+ * @param prm
+ *  BPF program used to filter packets (can be NULL).
+ *
+ * @return
+ *    0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf(uint16_t port_id, uint16_t queue,
+		     uint32_t flags, uint32_t snaplen,
+		     struct rte_ring *ring,
+		     struct rte_mempool *mp,
+		     const struct rte_bpf_prm *prm);
+
 /**
  * Disables packet capturing on given port and queue.
  *
@@ -118,7 +156,7 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags);
  * @param mp
  *  mempool on to which original packets will be mirrored or duplicated.
  * @param filter
- *  place holder for packet filtering.
+ *  unused, should be NULL
  *
  * @return
  *    0 on success, -1 on error, rte_errno is set accordingly.
@@ -131,6 +169,43 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
 				struct rte_mempool *mp,
 				void *filter);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given device id and queue with filtering.
+ * The device_id can be the name or PCI address of the device.
+ *
+ * @param device_id
+ *  device id on which packet capturing should be enabled.
+ * @param queue
+ *  The queue on the Ethernet port for which packet capturing
+ *  should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ *  queues of a given port.
+ * @param flags
+ *  Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ *  The upper limit on bytes to copy.
+ *  Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ *  The ring on which captured packets will be enqueued for user.
+ * @param mp
+ *  The mempool on to which original packets will be mirrored or duplicated.
+ * @param filter
+ *  BPF program used to filter packets (can be NULL).
+ *
+ * @return
+ *    0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+				 uint32_t flags, uint32_t snaplen,
+				 struct rte_ring *ring,
+				 struct rte_mempool *mp,
+				 const struct rte_bpf_prm *filter);
+
+
 /**
  * Disables packet capturing on given device_id and queue.
  * device_id can be name or pci address of device.
@@ -153,6 +228,38 @@ int
 rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
 				uint32_t flags);
 
+
+/**
+ * A structure used to retrieve statistics from packet capture.
+ * The statistics are sum of both receive and transmit queues.
+ */
+struct rte_pdump_stats {
+	uint64_t accepted; /**< Number of packets accepted by filter. */
+	uint64_t filtered; /**< Number of packets rejected by filter. */
+	uint64_t nombuf;   /**< Number of mbuf allocation failures. */
+	uint64_t ringfull; /**< Number of missed packets due to ring full. */
+
+	uint64_t reserved[4]; /**< Reserved and pad to cache line */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Retrieve the packet capture statistics for a queue.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param stats
+ *   A pointer to structure of type *rte_pdump_stats* to be filled in.
+ * @return
+ *   Zero if successful. -1 on error and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_pdump_stats(uint16_t port_id, struct rte_pdump_stats *stats);
+
+
 #ifdef __cplusplus
 }
 #endif
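
A small sketch of reading the counters declared above, from either the
primary or a secondary process:

/* Print the per-port capture counters. */
#include <inttypes.h>
#include <stdio.h>
#include <rte_pdump.h>

static void
show_capture_stats(uint16_t port_id)
{
	struct rte_pdump_stats st;

	if (rte_pdump_stats(port_id, &st) == 0)
		printf("port %u: accepted %" PRIu64 ", filtered %" PRIu64
		       ", missed %" PRIu64 "\n",
		       port_id, st.accepted, st.filtered,
		       st.nombuf + st.ringfull);
}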
diff --git a/lib/pdump/version.map b/lib/pdump/version.map
index f0a9d12c9a9e..ce5502d9cdf4 100644
--- a/lib/pdump/version.map
+++ b/lib/pdump/version.map
@@ -10,3 +10,11 @@ DPDK_22 {
 
 	local: *;
 };
+
+EXPERIMENTAL {
+	global:
+
+	rte_pdump_enable_bpf;
+	rte_pdump_enable_bpf_by_deviceid;
+	rte_pdump_stats;
+};
-- 
2.30.2


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v2 4/4] mbuf: add rte prefix to offload flags
  @ 2021-10-15 19:24  1%   ` Olivier Matz
  0 siblings, 0 replies; 200+ results
From: Olivier Matz @ 2021-10-15 19:24 UTC (permalink / raw)
  To: dev; +Cc: David Marchand

Fix the mbuf offload flags namespace by adding an RTE_ prefix to the
name. The old flags remain usable, but a deprecation warning is issued
at compilation.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
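For illustration, the rename as seen from application code (``mb`` is a
``struct rte_mbuf *``; ``handle_rss()`` is a hypothetical helper, and the
flag names follow the 21.11 convention):

/* Before: old names still compile after this patch but warn as deprecated. */
mb->ol_flags |= PKT_TX_IP_CKSUM | PKT_TX_IPV4;

/* After: the same flags with the RTE_ prefix. */
mb->ol_flags |= RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_IPV4;
if (mb->ol_flags & RTE_MBUF_F_RX_RSS_HASH)
	handle_rss(mb->hash.rss);
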
 app/test-pmd/csumonly.c                       |  62 +--
 app/test-pmd/flowgen.c                        |   8 +-
 app/test-pmd/ieee1588fwd.c                    |   6 +-
 app/test-pmd/macfwd.c                         |   8 +-
 app/test-pmd/macswap_common.h                 |  12 +-
 app/test-pmd/txonly.c                         |   8 +-
 app/test-pmd/util.c                           |  18 +-
 app/test/test_cryptodev_security_ipsec.c      |   4 +-
 app/test/test_ipsec.c                         |   4 +-
 app/test/test_mbuf.c                          | 144 ++++---
 doc/guides/nics/bnxt.rst                      |   8 +-
 doc/guides/nics/enic.rst                      |   8 +-
 doc/guides/nics/features.rst                  |  70 +--
 doc/guides/nics/ixgbe.rst                     |   2 +-
 doc/guides/nics/mlx5.rst                      |   6 +-
 .../generic_segmentation_offload_lib.rst      |   4 +-
 doc/guides/prog_guide/mbuf_lib.rst            |  18 +-
 doc/guides/prog_guide/metrics_lib.rst         |   2 +-
 doc/guides/prog_guide/rte_flow.rst            |  14 +-
 doc/guides/rel_notes/deprecation.rst          |   5 -
 doc/guides/rel_notes/release_21_11.rst        |   3 +
 drivers/compress/mlx5/mlx5_compress.c         |   2 +-
 drivers/crypto/cnxk/cn10k_cryptodev_ops.c     |  12 +-
 drivers/crypto/cnxk/cn10k_ipsec_la_ops.h      |   4 +-
 drivers/crypto/mlx5/mlx5_crypto.c             |   2 +-
 drivers/event/cnxk/cn9k_worker.h              |   2 +-
 drivers/event/octeontx/ssovf_worker.c         |  36 +-
 drivers/event/octeontx/ssovf_worker.h         |   2 +-
 drivers/event/octeontx2/otx2_worker.h         |   2 +-
 drivers/net/af_packet/rte_eth_af_packet.c     |   4 +-
 drivers/net/atlantic/atl_rxtx.c               |  46 +-
 drivers/net/avp/avp_ethdev.c                  |   8 +-
 drivers/net/axgbe/axgbe_rxtx.c                |  64 +--
 drivers/net/axgbe/axgbe_rxtx_vec_sse.c        |   2 +-
 drivers/net/bnx2x/bnx2x.c                     |   2 +-
 drivers/net/bnx2x/bnx2x_rxtx.c                |   2 +-
 drivers/net/bnxt/bnxt_rxr.c                   |  50 +--
 drivers/net/bnxt/bnxt_rxr.h                   |  32 +-
 drivers/net/bnxt/bnxt_txr.c                   |  40 +-
 drivers/net/bnxt/bnxt_txr.h                   |  38 +-
 drivers/net/bonding/rte_eth_bond_pmd.c        |   2 +-
 drivers/net/cnxk/cn10k_ethdev.c               |  18 +-
 drivers/net/cnxk/cn10k_rx.h                   |  38 +-
 drivers/net/cnxk/cn10k_tx.h                   | 178 ++++----
 drivers/net/cnxk/cn9k_ethdev.c                |  18 +-
 drivers/net/cnxk/cn9k_rx.h                    |  32 +-
 drivers/net/cnxk/cn9k_tx.h                    | 170 ++++----
 drivers/net/cnxk/cnxk_ethdev.h                |  10 +-
 drivers/net/cnxk/cnxk_lookup.c                |  40 +-
 drivers/net/cxgbe/sge.c                       |  46 +-
 drivers/net/dpaa/dpaa_ethdev.h                |   7 +-
 drivers/net/dpaa/dpaa_rxtx.c                  |  10 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |  30 +-
 drivers/net/e1000/em_rxtx.c                   |  39 +-
 drivers/net/e1000/igb_rxtx.c                  |  81 ++--
 drivers/net/ena/ena_ethdev.c                  |  53 ++-
 drivers/net/enetc/enetc_rxtx.c                |  44 +-
 drivers/net/enic/enic_main.c                  |  10 +-
 drivers/net/enic/enic_res.c                   |  12 +-
 drivers/net/enic/enic_rxtx.c                  |  24 +-
 drivers/net/enic/enic_rxtx_common.h           |  18 +-
 drivers/net/enic/enic_rxtx_vec_avx2.c         |  80 ++--
 drivers/net/fm10k/fm10k_rxtx.c                |  43 +-
 drivers/net/fm10k/fm10k_rxtx_vec.c            |  25 +-
 drivers/net/hinic/hinic_pmd_rx.c              |  22 +-
 drivers/net/hinic/hinic_pmd_tx.c              |  56 +--
 drivers/net/hinic/hinic_pmd_tx.h              |  13 +-
 drivers/net/hns3/hns3_ethdev.h                |   2 +-
 drivers/net/hns3/hns3_rxtx.c                  | 108 ++---
 drivers/net/hns3/hns3_rxtx.h                  |  25 +-
 drivers/net/hns3/hns3_rxtx_vec_neon.h         |   2 +-
 drivers/net/hns3/hns3_rxtx_vec_sve.c          |   2 +-
 drivers/net/i40e/i40e_rxtx.c                  | 157 ++++---
 drivers/net/i40e/i40e_rxtx_vec_altivec.c      |  22 +-
 drivers/net/i40e/i40e_rxtx_vec_avx2.c         |  70 +--
 drivers/net/i40e/i40e_rxtx_vec_avx512.c       |  62 +--
 drivers/net/i40e/i40e_rxtx_vec_neon.c         |  50 +--
 drivers/net/i40e/i40e_rxtx_vec_sse.c          |  60 +--
 drivers/net/iavf/iavf_rxtx.c                  |  90 ++--
 drivers/net/iavf/iavf_rxtx.h                  |  28 +-
 drivers/net/iavf/iavf_rxtx_vec_avx2.c         | 140 +++---
 drivers/net/iavf/iavf_rxtx_vec_avx512.c       | 140 +++---
 drivers/net/iavf/iavf_rxtx_vec_common.h       |  16 +-
 drivers/net/iavf/iavf_rxtx_vec_sse.c          | 112 ++---
 drivers/net/ice/ice_rxtx.c                    | 117 +++--
 drivers/net/ice/ice_rxtx_vec_avx2.c           | 158 +++----
 drivers/net/ice/ice_rxtx_vec_avx512.c         | 158 +++----
 drivers/net/ice/ice_rxtx_vec_common.h         |  16 +-
 drivers/net/ice/ice_rxtx_vec_sse.c            | 112 ++---
 drivers/net/igc/igc_txrx.c                    |  67 +--
 drivers/net/ionic/ionic_rxtx.c                |  59 ++-
 drivers/net/ixgbe/ixgbe_ethdev.c              |   4 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                | 113 +++--
 drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c       |  38 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c        |  44 +-
 drivers/net/liquidio/lio_rxtx.c               |  16 +-
 drivers/net/mlx4/mlx4_rxtx.c                  |  22 +-
 drivers/net/mlx5/mlx5_flow.c                  |   2 +-
 drivers/net/mlx5/mlx5_rx.c                    |  18 +-
 drivers/net/mlx5/mlx5_rx.h                    |   4 +-
 drivers/net/mlx5/mlx5_rxq.c                   |   2 +-
 drivers/net/mlx5/mlx5_rxtx.c                  |  18 +-
 drivers/net/mlx5/mlx5_rxtx_vec_altivec.h      |  76 ++--
 drivers/net/mlx5/mlx5_rxtx_vec_neon.h         |  36 +-
 drivers/net/mlx5/mlx5_rxtx_vec_sse.h          |  38 +-
 drivers/net/mlx5/mlx5_tx.h                    | 104 ++---
 drivers/net/mvneta/mvneta_ethdev.h            |   6 +-
 drivers/net/mvneta/mvneta_rxtx.c              |  16 +-
 drivers/net/mvpp2/mrvl_ethdev.c               |  22 +-
 drivers/net/netvsc/hn_rxtx.c                  |  28 +-
 drivers/net/nfp/nfp_rxtx.c                    |  26 +-
 drivers/net/octeontx/octeontx_rxtx.h          |  38 +-
 drivers/net/octeontx2/otx2_ethdev.c           |  18 +-
 drivers/net/octeontx2/otx2_lookup.c           |  40 +-
 drivers/net/octeontx2/otx2_rx.c               |  12 +-
 drivers/net/octeontx2/otx2_rx.h               |  22 +-
 drivers/net/octeontx2/otx2_tx.c               |  86 ++--
 drivers/net/octeontx2/otx2_tx.h               |  70 +--
 drivers/net/qede/qede_rxtx.c                  | 104 ++---
 drivers/net/qede/qede_rxtx.h                  |  20 +-
 drivers/net/sfc/sfc_dp_tx.h                   |  14 +-
 drivers/net/sfc/sfc_ef100_rx.c                |  18 +-
 drivers/net/sfc/sfc_ef100_tx.c                |  52 +--
 drivers/net/sfc/sfc_ef10_essb_rx.c            |   6 +-
 drivers/net/sfc/sfc_ef10_rx.c                 |   6 +-
 drivers/net/sfc/sfc_ef10_rx_ev.h              |  16 +-
 drivers/net/sfc/sfc_ef10_tx.c                 |  18 +-
 drivers/net/sfc/sfc_rx.c                      |  22 +-
 drivers/net/sfc/sfc_tso.c                     |   2 +-
 drivers/net/sfc/sfc_tso.h                     |   2 +-
 drivers/net/sfc/sfc_tx.c                      |   4 +-
 drivers/net/tap/rte_eth_tap.c                 |  28 +-
 drivers/net/thunderx/nicvf_rxtx.c             |  24 +-
 drivers/net/thunderx/nicvf_rxtx.h             |   2 +-
 drivers/net/txgbe/txgbe_ethdev.c              |   4 +-
 drivers/net/txgbe/txgbe_rxtx.c                | 172 ++++----
 drivers/net/vhost/rte_eth_vhost.c             |   2 +-
 drivers/net/virtio/virtio_rxtx.c              |  14 +-
 drivers/net/virtio/virtio_rxtx_packed.h       |   6 +-
 drivers/net/virtio/virtqueue.h                |  14 +-
 drivers/net/vmxnet3/vmxnet3_rxtx.c            |  59 ++-
 drivers/regex/mlx5/mlx5_regex_fastpath.c      |   2 +-
 examples/bpf/t2.c                             |   4 +-
 examples/ip_fragmentation/main.c              |   2 +-
 examples/ip_reassembly/main.c                 |   2 +-
 examples/ipsec-secgw/esp.c                    |   6 +-
 examples/ipsec-secgw/ipsec-secgw.c            |  20 +-
 examples/ipsec-secgw/ipsec_worker.c           |  12 +-
 examples/ipsec-secgw/sa.c                     |   2 +-
 examples/ptpclient/ptpclient.c                |   4 +-
 examples/qos_meter/main.c                     |  12 +-
 examples/vhost/main.c                         |  12 +-
 lib/ethdev/rte_ethdev.h                       |   4 +-
 lib/ethdev/rte_flow.h                         |  33 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   4 +-
 lib/gso/gso_common.h                          |  40 +-
 lib/gso/gso_tunnel_tcp4.c                     |   2 +-
 lib/gso/rte_gso.c                             |  10 +-
 lib/gso/rte_gso.h                             |   4 +-
 lib/ipsec/esp_inb.c                           |  10 +-
 lib/ipsec/esp_outb.c                          |   4 +-
 lib/ipsec/misc.h                              |   2 +-
 lib/ipsec/rte_ipsec_group.h                   |   6 +-
 lib/ipsec/sa.c                                |   2 +-
 lib/mbuf/rte_mbuf.c                           | 220 +++++-----
 lib/mbuf/rte_mbuf.h                           |  30 +-
 lib/mbuf/rte_mbuf_core.h                      | 404 +++++++++++-------
 lib/mbuf/rte_mbuf_dyn.c                       |   2 +-
 lib/net/rte_ether.h                           |   6 +-
 lib/net/rte_ip.h                              |   4 +-
 lib/net/rte_net.h                             |  22 +-
 lib/pipeline/rte_table_action.c               |  10 +-
 lib/security/rte_security.h                   |  10 +-
 lib/vhost/virtio_net.c                        |  42 +-
 174 files changed, 3120 insertions(+), 3013 deletions(-)

diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 090797318a..1faa508f83 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -481,12 +481,12 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 	if (info->ethertype == _htons(RTE_ETHER_TYPE_IPV4)) {
 		ipv4_hdr = l3_hdr;
 
-		ol_flags |= PKT_TX_IPV4;
+		ol_flags |= RTE_MBUF_F_TX_IPV4;
 		if (info->l4_proto == IPPROTO_TCP && tso_segsz) {
-			ol_flags |= PKT_TX_IP_CKSUM;
+			ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
 		} else {
 			if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
-				ol_flags |= PKT_TX_IP_CKSUM;
+				ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
 			} else {
 				ipv4_hdr->hdr_checksum = 0;
 				ipv4_hdr->hdr_checksum =
@@ -494,7 +494,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 			}
 		}
 	} else if (info->ethertype == _htons(RTE_ETHER_TYPE_IPV6))
-		ol_flags |= PKT_TX_IPV6;
+		ol_flags |= RTE_MBUF_F_TX_IPV6;
 	else
 		return 0; /* packet type not supported, nothing to do */
 
@@ -503,7 +503,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		/* do not recalculate udp cksum if it was 0 */
 		if (udp_hdr->dgram_cksum != 0) {
 			if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
-				ol_flags |= PKT_TX_UDP_CKSUM;
+				ol_flags |= RTE_MBUF_F_TX_UDP_CKSUM;
 			} else {
 				udp_hdr->dgram_cksum = 0;
 				udp_hdr->dgram_cksum =
@@ -512,13 +512,13 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 			}
 		}
 		if (info->gso_enable)
-			ol_flags |= PKT_TX_UDP_SEG;
+			ol_flags |= RTE_MBUF_F_TX_UDP_SEG;
 	} else if (info->l4_proto == IPPROTO_TCP) {
 		tcp_hdr = (struct rte_tcp_hdr *)((char *)l3_hdr + info->l3_len);
 		if (tso_segsz)
-			ol_flags |= PKT_TX_TCP_SEG;
+			ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 		else if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
-			ol_flags |= PKT_TX_TCP_CKSUM;
+			ol_flags |= RTE_MBUF_F_TX_TCP_CKSUM;
 		} else {
 			tcp_hdr->cksum = 0;
 			tcp_hdr->cksum =
@@ -526,7 +526,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 					info->ethertype);
 		}
 		if (info->gso_enable)
-			ol_flags |= PKT_TX_TCP_SEG;
+			ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	} else if (info->l4_proto == IPPROTO_SCTP) {
 		sctp_hdr = (struct rte_sctp_hdr *)
 			((char *)l3_hdr + info->l3_len);
@@ -534,7 +534,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		 * offloaded */
 		if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
 			((ipv4_hdr->total_length & 0x3) == 0)) {
-			ol_flags |= PKT_TX_SCTP_CKSUM;
+			ol_flags |= RTE_MBUF_F_TX_SCTP_CKSUM;
 		} else {
 			sctp_hdr->cksum = 0;
 			/* XXX implement CRC32c, example available in
@@ -557,14 +557,14 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 
 	if (info->outer_ethertype == _htons(RTE_ETHER_TYPE_IPV4)) {
 		ipv4_hdr->hdr_checksum = 0;
-		ol_flags |= PKT_TX_OUTER_IPV4;
+		ol_flags |= RTE_MBUF_F_TX_OUTER_IPV4;
 
 		if (tx_offloads	& DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
-			ol_flags |= PKT_TX_OUTER_IP_CKSUM;
+			ol_flags |= RTE_MBUF_F_TX_OUTER_IP_CKSUM;
 		else
 			ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
 	} else
-		ol_flags |= PKT_TX_OUTER_IPV6;
+		ol_flags |= RTE_MBUF_F_TX_OUTER_IPV6;
 
 	if (info->outer_l4_proto != IPPROTO_UDP)
 		return ol_flags;
@@ -573,7 +573,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 		((char *)outer_l3_hdr + info->outer_l3_len);
 
 	if (tso_enabled)
-		ol_flags |= PKT_TX_TCP_SEG;
+		ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 
 	/* Skip SW outer UDP checksum generation if HW supports it */
 	if (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) {
@@ -584,7 +584,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 			udp_hdr->dgram_cksum
 				= rte_ipv6_phdr_cksum(ipv6_hdr, ol_flags);
 
-		ol_flags |= PKT_TX_OUTER_UDP_CKSUM;
+		ol_flags |= RTE_MBUF_F_TX_OUTER_UDP_CKSUM;
 		return ol_flags;
 	}
 
@@ -855,17 +855,17 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		info.is_tunnel = 0;
 		info.pkt_len = rte_pktmbuf_pkt_len(m);
 		tx_ol_flags = m->ol_flags &
-			      (IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF);
+			      (RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL);
 		rx_ol_flags = m->ol_flags;
 
 		/* Update the L3/L4 checksum error packet statistics */
-		if ((rx_ol_flags & PKT_RX_IP_CKSUM_MASK) == PKT_RX_IP_CKSUM_BAD)
+		if ((rx_ol_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) == RTE_MBUF_F_RX_IP_CKSUM_BAD)
 			rx_bad_ip_csum += 1;
-		if ((rx_ol_flags & PKT_RX_L4_CKSUM_MASK) == PKT_RX_L4_CKSUM_BAD)
+		if ((rx_ol_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) == RTE_MBUF_F_RX_L4_CKSUM_BAD)
 			rx_bad_l4_csum += 1;
-		if (rx_ol_flags & PKT_RX_OUTER_L4_CKSUM_BAD)
+		if (rx_ol_flags & RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD)
 			rx_bad_outer_l4_csum += 1;
-		if (rx_ol_flags & PKT_RX_OUTER_IP_CKSUM_BAD)
+		if (rx_ol_flags & RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD)
 			rx_bad_outer_ip_csum += 1;
 
 		/* step 1: dissect packet, parsing optional vlan, ip4/ip6, vxlan
@@ -888,26 +888,26 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 					((char *)l3_hdr + info.l3_len);
 				parse_gtp(udp_hdr, &info);
 				if (info.is_tunnel) {
-					tx_ol_flags |= PKT_TX_TUNNEL_GTP;
+					tx_ol_flags |= RTE_MBUF_F_TX_TUNNEL_GTP;
 					goto tunnel_update;
 				}
 				parse_vxlan_gpe(udp_hdr, &info);
 				if (info.is_tunnel) {
 					tx_ol_flags |=
-						PKT_TX_TUNNEL_VXLAN_GPE;
+						RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE;
 					goto tunnel_update;
 				}
 				parse_vxlan(udp_hdr, &info,
 					    m->packet_type);
 				if (info.is_tunnel) {
 					tx_ol_flags |=
-						PKT_TX_TUNNEL_VXLAN;
+						RTE_MBUF_F_TX_TUNNEL_VXLAN;
 					goto tunnel_update;
 				}
 				parse_geneve(udp_hdr, &info);
 				if (info.is_tunnel) {
 					tx_ol_flags |=
-						PKT_TX_TUNNEL_GENEVE;
+						RTE_MBUF_F_TX_TUNNEL_GENEVE;
 					goto tunnel_update;
 				}
 			} else if (info.l4_proto == IPPROTO_GRE) {
@@ -917,14 +917,14 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 					((char *)l3_hdr + info.l3_len);
 				parse_gre(gre_hdr, &info);
 				if (info.is_tunnel)
-					tx_ol_flags |= PKT_TX_TUNNEL_GRE;
+					tx_ol_flags |= RTE_MBUF_F_TX_TUNNEL_GRE;
 			} else if (info.l4_proto == IPPROTO_IPIP) {
 				void *encap_ip_hdr;
 
 				encap_ip_hdr = (char *)l3_hdr + info.l3_len;
 				parse_encap_ip(encap_ip_hdr, &info);
 				if (info.is_tunnel)
-					tx_ol_flags |= PKT_TX_TUNNEL_IPIP;
+					tx_ol_flags |= RTE_MBUF_F_TX_TUNNEL_IPIP;
 			}
 		}
 
@@ -950,7 +950,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		if (info.is_tunnel == 1) {
 			tx_ol_flags |= process_outer_cksums(outer_l3_hdr, &info,
 					tx_offloads,
-					!!(tx_ol_flags & PKT_TX_TCP_SEG));
+					!!(tx_ol_flags & RTE_MBUF_F_TX_TCP_SEG));
 		}
 
 		/* step 3: fill the mbuf meta data (flags and header lengths) */
@@ -1014,7 +1014,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 				"l4_proto=%d l4_len=%d flags=%s\n",
 				info.l2_len, rte_be_to_cpu_16(info.ethertype),
 				info.l3_len, info.l4_proto, info.l4_len, buf);
-			if (rx_ol_flags & PKT_RX_LRO)
+			if (rx_ol_flags & RTE_MBUF_F_RX_LRO)
 				printf("rx: m->lro_segsz=%u\n", m->tso_segsz);
 			if (info.is_tunnel == 1)
 				printf("rx: outer_l2_len=%d outer_ethertype=%x "
@@ -1035,17 +1035,17 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 				    DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
 				    (tx_offloads &
 				    DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
-				    (tx_ol_flags & PKT_TX_OUTER_IPV6))
+				    (tx_ol_flags & RTE_MBUF_F_TX_OUTER_IPV6))
 					printf("tx: m->outer_l2_len=%d "
 						"m->outer_l3_len=%d\n",
 						m->outer_l2_len,
 						m->outer_l3_len);
 				if (info.tunnel_tso_segsz != 0 &&
-						(m->ol_flags & PKT_TX_TCP_SEG))
+						(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 					printf("tx: m->tso_segsz=%d\n",
 						m->tso_segsz);
 			} else if (info.tso_segsz != 0 &&
-					(m->ol_flags & PKT_TX_TCP_SEG))
+					(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 				printf("tx: m->tso_segsz=%d\n", m->tso_segsz);
 			rte_get_tx_ol_flag_list(m->ol_flags, buf, sizeof(buf));
 			printf("tx: flags=%s", buf);
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 0ce2afbea5..2da20d5309 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -100,11 +100,11 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 
 	tx_offloads = ports[fs->tx_port].dev_conf.txmode.offloads;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags |= PKT_TX_VLAN;
+		ol_flags |= RTE_MBUF_F_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ;
+		ol_flags |= RTE_MBUF_F_TX_QINQ;
 	if (tx_offloads	& DEV_TX_OFFLOAD_MACSEC_INSERT)
-		ol_flags |= PKT_TX_MACSEC;
+		ol_flags |= RTE_MBUF_F_TX_MACSEC;
 
 	for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
 		if (!nb_pkt || !nb_clones) {
@@ -152,7 +152,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 								   sizeof(*ip_hdr));
 			pkt->nb_segs		= 1;
 			pkt->pkt_len		= pkt_size;
-			pkt->ol_flags		&= EXT_ATTACHED_MBUF;
+			pkt->ol_flags		&= RTE_MBUF_F_EXTERNAL;
 			pkt->ol_flags		|= ol_flags;
 			pkt->vlan_tci		= vlan_tci;
 			pkt->vlan_tci_outer	= vlan_tci_outer;
diff --git a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c
index 9cf10c1c50..3ff98c3455 100644
--- a/app/test-pmd/ieee1588fwd.c
+++ b/app/test-pmd/ieee1588fwd.c
@@ -114,7 +114,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
 	eth_hdr = rte_pktmbuf_mtod(mb, struct rte_ether_hdr *);
 	eth_type = rte_be_to_cpu_16(eth_hdr->ether_type);
 
-	if (! (mb->ol_flags & PKT_RX_IEEE1588_PTP)) {
+	if (! (mb->ol_flags & RTE_MBUF_F_RX_IEEE1588_PTP)) {
 		if (eth_type == RTE_ETHER_TYPE_1588) {
 			printf("Port %u Received PTP packet not filtered"
 			       " by hardware\n",
@@ -163,7 +163,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
 	 * Check that the received PTP packet has been timestamped by the
 	 * hardware.
 	 */
-	if (! (mb->ol_flags & PKT_RX_IEEE1588_TMST)) {
+	if (! (mb->ol_flags & RTE_MBUF_F_RX_IEEE1588_TMST)) {
 		printf("Port %u Received PTP packet not timestamped"
 		       " by hardware\n",
 		       fs->rx_port);
@@ -183,7 +183,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
 	rte_ether_addr_copy(&addr, &eth_hdr->src_addr);
 
 	/* Forward PTP packet with hardware TX timestamp */
-	mb->ol_flags |= PKT_TX_IEEE1588_TMST;
+	mb->ol_flags |= RTE_MBUF_F_TX_IEEE1588_TMST;
 	fs->tx_packets += 1;
 	if (rte_eth_tx_burst(fs->rx_port, fs->tx_queue, &mb, 1) == 0) {
 		printf("Port %u sent PTP packet dropped\n", fs->rx_port);
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index a67907b449..333998580b 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -73,11 +73,11 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 	txp = &ports[fs->tx_port];
 	tx_offloads = txp->dev_conf.txmode.offloads;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags = PKT_TX_VLAN;
+		ol_flags = RTE_MBUF_F_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ;
+		ol_flags |= RTE_MBUF_F_TX_QINQ;
 	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
-		ol_flags |= PKT_TX_MACSEC;
+		ol_flags |= RTE_MBUF_F_TX_MACSEC;
 	for (i = 0; i < nb_rx; i++) {
 		if (likely(i < nb_rx - 1))
 			rte_prefetch0(rte_pktmbuf_mtod(pkts_burst[i + 1],
@@ -88,7 +88,7 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 				&eth_hdr->dst_addr);
 		rte_ether_addr_copy(&ports[fs->tx_port].eth_addr,
 				&eth_hdr->src_addr);
-		mb->ol_flags &= IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF;
+		mb->ol_flags &= RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL;
 		mb->ol_flags |= ol_flags;
 		mb->l2_len = sizeof(struct rte_ether_hdr);
 		mb->l3_len = sizeof(struct rte_ipv4_hdr);
diff --git a/app/test-pmd/macswap_common.h b/app/test-pmd/macswap_common.h
index 7e9a3590a4..0d43d5cceb 100644
--- a/app/test-pmd/macswap_common.h
+++ b/app/test-pmd/macswap_common.h
@@ -11,11 +11,11 @@ ol_flags_init(uint64_t tx_offload)
 	uint64_t ol_flags = 0;
 
 	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_VLAN_INSERT) ?
-			PKT_TX_VLAN : 0;
+			RTE_MBUF_F_TX_VLAN : 0;
 	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_QINQ_INSERT) ?
-			PKT_TX_QINQ : 0;
+			RTE_MBUF_F_TX_QINQ : 0;
 	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_MACSEC_INSERT) ?
-			PKT_TX_MACSEC : 0;
+			RTE_MBUF_F_TX_MACSEC : 0;
 
 	return ol_flags;
 }
@@ -26,10 +26,10 @@ vlan_qinq_set(struct rte_mbuf *pkts[], uint16_t nb,
 {
 	int i;
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		for (i = 0; i < nb; i++)
 			pkts[i]->vlan_tci = vlan;
-	if (ol_flags & PKT_TX_QINQ)
+	if (ol_flags & RTE_MBUF_F_TX_QINQ)
 		for (i = 0; i < nb; i++)
 			pkts[i]->vlan_tci_outer = outer_vlan;
 }
@@ -37,7 +37,7 @@ vlan_qinq_set(struct rte_mbuf *pkts[], uint16_t nb,
 static inline void
 mbuf_field_set(struct rte_mbuf *mb, uint64_t ol_flags)
 {
-	mb->ol_flags &= IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF;
+	mb->ol_flags &= RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL;
 	mb->ol_flags |= ol_flags;
 	mb->l2_len = sizeof(struct rte_ether_hdr);
 	mb->l3_len = sizeof(struct rte_ipv4_hdr);
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 0e44bc4d3b..7c34ef4541 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -207,7 +207,7 @@ pkt_burst_prepare(struct rte_mbuf *pkt, struct rte_mempool *mbp,
 
 	rte_pktmbuf_reset_headroom(pkt);
 	pkt->data_len = tx_pkt_seg_lengths[0];
-	pkt->ol_flags &= EXT_ATTACHED_MBUF;
+	pkt->ol_flags &= RTE_MBUF_F_EXTERNAL;
 	pkt->ol_flags |= ol_flags;
 	pkt->vlan_tci = vlan_tci;
 	pkt->vlan_tci_outer = vlan_tci_outer;
@@ -353,11 +353,11 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	vlan_tci = txp->tx_vlan_id;
 	vlan_tci_outer = txp->tx_vlan_id_outer;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags = PKT_TX_VLAN;
+		ol_flags = RTE_MBUF_F_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ;
+		ol_flags |= RTE_MBUF_F_TX_QINQ;
 	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
-		ol_flags |= PKT_TX_MACSEC;
+		ol_flags |= RTE_MBUF_F_TX_MACSEC;
 
 	/*
 	 * Initialize Ethernet header.
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index 51506e4940..34ad0a09ca 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -151,20 +151,20 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 			  eth_type, (unsigned int) mb->pkt_len,
 			  (int)mb->nb_segs);
 		ol_flags = mb->ol_flags;
-		if (ol_flags & PKT_RX_RSS_HASH) {
+		if (ol_flags & RTE_MBUF_F_RX_RSS_HASH) {
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - RSS hash=0x%x",
 				  (unsigned int) mb->hash.rss);
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - RSS queue=0x%x", (unsigned int) queue);
 		}
-		if (ol_flags & PKT_RX_FDIR) {
+		if (ol_flags & RTE_MBUF_F_RX_FDIR) {
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - FDIR matched ");
-			if (ol_flags & PKT_RX_FDIR_ID)
+			if (ol_flags & RTE_MBUF_F_RX_FDIR_ID)
 				MKDUMPSTR(print_buf, buf_size, cur_len,
 					  "ID=0x%x", mb->hash.fdir.hi);
-			else if (ol_flags & PKT_RX_FDIR_FLX)
+			else if (ol_flags & RTE_MBUF_F_RX_FDIR_FLX)
 				MKDUMPSTR(print_buf, buf_size, cur_len,
 					  "flex bytes=0x%08x %08x",
 					  mb->hash.fdir.hi, mb->hash.fdir.lo);
@@ -176,18 +176,18 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 		if (is_timestamp_enabled(mb))
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - timestamp %"PRIu64" ", get_timestamp(mb));
-		if (ol_flags & PKT_RX_QINQ)
+		if (ol_flags & RTE_MBUF_F_RX_QINQ)
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - QinQ VLAN tci=0x%x, VLAN tci outer=0x%x",
 				  mb->vlan_tci, mb->vlan_tci_outer);
-		else if (ol_flags & PKT_RX_VLAN)
+		else if (ol_flags & RTE_MBUF_F_RX_VLAN)
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - VLAN tci=0x%x", mb->vlan_tci);
-		if (!is_rx && (ol_flags & PKT_TX_DYNF_METADATA))
+		if (!is_rx && (ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA))
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - Tx metadata: 0x%x",
 				  *RTE_FLOW_DYNF_METADATA(mb));
-		if (is_rx && (ol_flags & PKT_RX_DYNF_METADATA))
+		if (is_rx && (ol_flags & RTE_MBUF_DYNFLAG_RX_METADATA))
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - Rx metadata: 0x%x",
 				  *RTE_FLOW_DYNF_METADATA(mb));
@@ -325,7 +325,7 @@ tx_pkt_set_md(uint16_t port_id, __rte_unused uint16_t queue,
 		for (i = 0; i < nb_pkts; i++) {
 			*RTE_FLOW_DYNF_METADATA(pkts[i]) =
 						ports[port_id].tx_metadata;
-			pkts[i]->ol_flags |= PKT_TX_DYNF_METADATA;
+			pkts[i]->ol_flags |= RTE_MBUF_DYNFLAG_TX_METADATA;
 		}
 	return nb_pkts;
 }
diff --git a/app/test/test_cryptodev_security_ipsec.c b/app/test/test_cryptodev_security_ipsec.c
index bcd9746c98..4708803bd2 100644
--- a/app/test/test_cryptodev_security_ipsec.c
+++ b/app/test/test_cryptodev_security_ipsec.c
@@ -524,7 +524,7 @@ test_ipsec_td_verify(struct rte_mbuf *m, const struct ipsec_test_data *td,
 
 	if ((td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) &&
 				flags->ip_csum) {
-		if (m->ol_flags & PKT_RX_IP_CKSUM_GOOD)
+		if (m->ol_flags & RTE_MBUF_F_RX_IP_CKSUM_GOOD)
 			ret = test_ipsec_l3_csum_verify(m);
 		else
 			ret = TEST_FAILED;
@@ -537,7 +537,7 @@ test_ipsec_td_verify(struct rte_mbuf *m, const struct ipsec_test_data *td,
 
 	if ((td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) &&
 				flags->l4_csum) {
-		if (m->ol_flags & PKT_RX_L4_CKSUM_GOOD)
+		if (m->ol_flags & RTE_MBUF_F_RX_L4_CKSUM_GOOD)
 			ret = test_ipsec_l4_csum_verify(m);
 		else
 			ret = TEST_FAILED;
diff --git a/app/test/test_ipsec.c b/app/test/test_ipsec.c
index c6d6b88d6d..1bec63b0e8 100644
--- a/app/test/test_ipsec.c
+++ b/app/test/test_ipsec.c
@@ -1622,8 +1622,8 @@ inline_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
 			"ibuf pkt_len is not equal to obuf pkt_len");
 
 		/* check mbuf ol_flags */
-		TEST_ASSERT(ut_params->ibuf[j]->ol_flags & PKT_TX_SEC_OFFLOAD,
-			"ibuf PKT_TX_SEC_OFFLOAD is not set");
+		TEST_ASSERT(ut_params->ibuf[j]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD,
+			    "ibuf RTE_MBUF_F_TX_SEC_OFFLOAD is not set");
 	}
 	return 0;
 }
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index 82777109dc..05e72ad123 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -1495,7 +1495,7 @@ test_get_rx_ol_flag_list(void)
 		GOTO_FAIL("%s expected: -1, received = %d\n", __func__, ret);
 
 	/* Test case to check with zero buffer len */
-	ret = rte_get_rx_ol_flag_list(PKT_RX_L4_CKSUM_MASK, buf, 0);
+	ret = rte_get_rx_ol_flag_list(RTE_MBUF_F_RX_L4_CKSUM_MASK, buf, 0);
 	if (ret != -1)
 		GOTO_FAIL("%s expected: -1, received = %d\n", __func__, ret);
 
@@ -1526,7 +1526,8 @@ test_get_rx_ol_flag_list(void)
 				"non-zero, buffer should not be empty");
 
 	/* Test case to check with valid mask value */
-	ret = rte_get_rx_ol_flag_list(PKT_RX_SEC_OFFLOAD, buf, sizeof(buf));
+	ret = rte_get_rx_ol_flag_list(RTE_MBUF_F_RX_SEC_OFFLOAD, buf,
+				      sizeof(buf));
 	if (ret != 0)
 		GOTO_FAIL("%s expected: 0, received = %d\n", __func__, ret);
 
@@ -1553,7 +1554,7 @@ test_get_tx_ol_flag_list(void)
 		GOTO_FAIL("%s expected: -1, received = %d\n", __func__, ret);
 
 	/* Test case to check with zero buffer len */
-	ret = rte_get_tx_ol_flag_list(PKT_TX_IP_CKSUM, buf, 0);
+	ret = rte_get_tx_ol_flag_list(RTE_MBUF_F_TX_IP_CKSUM, buf, 0);
 	if (ret != -1)
 		GOTO_FAIL("%s expected: -1, received = %d\n", __func__, ret);
 
@@ -1585,7 +1586,8 @@ test_get_tx_ol_flag_list(void)
 				"non-zero, buffer should not be empty");
 
 	/* Test case to check with valid mask value */
-	ret = rte_get_tx_ol_flag_list(PKT_TX_UDP_CKSUM, buf, sizeof(buf));
+	ret = rte_get_tx_ol_flag_list(RTE_MBUF_F_TX_UDP_CKSUM, buf,
+				      sizeof(buf));
 	if (ret != 0)
 		GOTO_FAIL("%s expected: 0, received = %d\n", __func__, ret);
 
@@ -1611,28 +1613,28 @@ test_get_rx_ol_flag_name(void)
 	uint16_t i;
 	const char *flag_str = NULL;
 	const struct flag_name rx_flags[] = {
-		VAL_NAME(PKT_RX_VLAN),
-		VAL_NAME(PKT_RX_RSS_HASH),
-		VAL_NAME(PKT_RX_FDIR),
-		VAL_NAME(PKT_RX_L4_CKSUM_BAD),
-		VAL_NAME(PKT_RX_L4_CKSUM_GOOD),
-		VAL_NAME(PKT_RX_L4_CKSUM_NONE),
-		VAL_NAME(PKT_RX_IP_CKSUM_BAD),
-		VAL_NAME(PKT_RX_IP_CKSUM_GOOD),
-		VAL_NAME(PKT_RX_IP_CKSUM_NONE),
-		VAL_NAME(PKT_RX_OUTER_IP_CKSUM_BAD),
-		VAL_NAME(PKT_RX_VLAN_STRIPPED),
-		VAL_NAME(PKT_RX_IEEE1588_PTP),
-		VAL_NAME(PKT_RX_IEEE1588_TMST),
-		VAL_NAME(PKT_RX_FDIR_ID),
-		VAL_NAME(PKT_RX_FDIR_FLX),
-		VAL_NAME(PKT_RX_QINQ_STRIPPED),
-		VAL_NAME(PKT_RX_LRO),
-		VAL_NAME(PKT_RX_SEC_OFFLOAD),
-		VAL_NAME(PKT_RX_SEC_OFFLOAD_FAILED),
-		VAL_NAME(PKT_RX_OUTER_L4_CKSUM_BAD),
-		VAL_NAME(PKT_RX_OUTER_L4_CKSUM_GOOD),
-		VAL_NAME(PKT_RX_OUTER_L4_CKSUM_INVALID),
+		VAL_NAME(RTE_MBUF_F_RX_VLAN),
+		VAL_NAME(RTE_MBUF_F_RX_RSS_HASH),
+		VAL_NAME(RTE_MBUF_F_RX_FDIR),
+		VAL_NAME(RTE_MBUF_F_RX_L4_CKSUM_BAD),
+		VAL_NAME(RTE_MBUF_F_RX_L4_CKSUM_GOOD),
+		VAL_NAME(RTE_MBUF_F_RX_L4_CKSUM_NONE),
+		VAL_NAME(RTE_MBUF_F_RX_IP_CKSUM_BAD),
+		VAL_NAME(RTE_MBUF_F_RX_IP_CKSUM_GOOD),
+		VAL_NAME(RTE_MBUF_F_RX_IP_CKSUM_NONE),
+		VAL_NAME(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD),
+		VAL_NAME(RTE_MBUF_F_RX_VLAN_STRIPPED),
+		VAL_NAME(RTE_MBUF_F_RX_IEEE1588_PTP),
+		VAL_NAME(RTE_MBUF_F_RX_IEEE1588_TMST),
+		VAL_NAME(RTE_MBUF_F_RX_FDIR_ID),
+		VAL_NAME(RTE_MBUF_F_RX_FDIR_FLX),
+		VAL_NAME(RTE_MBUF_F_RX_QINQ_STRIPPED),
+		VAL_NAME(RTE_MBUF_F_RX_LRO),
+		VAL_NAME(RTE_MBUF_F_RX_SEC_OFFLOAD),
+		VAL_NAME(RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED),
+		VAL_NAME(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD),
+		VAL_NAME(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD),
+		VAL_NAME(RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID),
 	};
 
 	/* Test case to check with valid flag */
@@ -1663,31 +1665,31 @@ test_get_tx_ol_flag_name(void)
 	uint16_t i;
 	const char *flag_str = NULL;
 	const struct flag_name tx_flags[] = {
-		VAL_NAME(PKT_TX_VLAN),
-		VAL_NAME(PKT_TX_IP_CKSUM),
-		VAL_NAME(PKT_TX_TCP_CKSUM),
-		VAL_NAME(PKT_TX_SCTP_CKSUM),
-		VAL_NAME(PKT_TX_UDP_CKSUM),
-		VAL_NAME(PKT_TX_IEEE1588_TMST),
-		VAL_NAME(PKT_TX_TCP_SEG),
-		VAL_NAME(PKT_TX_IPV4),
-		VAL_NAME(PKT_TX_IPV6),
-		VAL_NAME(PKT_TX_OUTER_IP_CKSUM),
-		VAL_NAME(PKT_TX_OUTER_IPV4),
-		VAL_NAME(PKT_TX_OUTER_IPV6),
-		VAL_NAME(PKT_TX_TUNNEL_VXLAN),
-		VAL_NAME(PKT_TX_TUNNEL_GRE),
-		VAL_NAME(PKT_TX_TUNNEL_IPIP),
-		VAL_NAME(PKT_TX_TUNNEL_GENEVE),
-		VAL_NAME(PKT_TX_TUNNEL_MPLSINUDP),
-		VAL_NAME(PKT_TX_TUNNEL_VXLAN_GPE),
-		VAL_NAME(PKT_TX_TUNNEL_IP),
-		VAL_NAME(PKT_TX_TUNNEL_UDP),
-		VAL_NAME(PKT_TX_QINQ),
-		VAL_NAME(PKT_TX_MACSEC),
-		VAL_NAME(PKT_TX_SEC_OFFLOAD),
-		VAL_NAME(PKT_TX_UDP_SEG),
-		VAL_NAME(PKT_TX_OUTER_UDP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_VLAN),
+		VAL_NAME(RTE_MBUF_F_TX_IP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_TCP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_SCTP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_UDP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_IEEE1588_TMST),
+		VAL_NAME(RTE_MBUF_F_TX_TCP_SEG),
+		VAL_NAME(RTE_MBUF_F_TX_IPV4),
+		VAL_NAME(RTE_MBUF_F_TX_IPV6),
+		VAL_NAME(RTE_MBUF_F_TX_OUTER_IP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_OUTER_IPV4),
+		VAL_NAME(RTE_MBUF_F_TX_OUTER_IPV6),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_VXLAN),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_GRE),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_IPIP),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_GENEVE),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_MPLSINUDP),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_IP),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_UDP),
+		VAL_NAME(RTE_MBUF_F_TX_QINQ),
+		VAL_NAME(RTE_MBUF_F_TX_MACSEC),
+		VAL_NAME(RTE_MBUF_F_TX_SEC_OFFLOAD),
+		VAL_NAME(RTE_MBUF_F_TX_UDP_SEG),
+		VAL_NAME(RTE_MBUF_F_TX_OUTER_UDP_CKSUM),
 	};
 
 	/* Test case to check with valid flag */
@@ -1755,8 +1757,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	/* test to validate if IP checksum is counted only for IPV4 packet */
 	/* set both IP checksum and IPV6 flags */
-	ol_flags |= PKT_TX_IP_CKSUM;
-	ol_flags |= PKT_TX_IPV6;
+	ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
+	ol_flags |= RTE_MBUF_F_TX_IPV6;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_IP_CKSUM_IPV6_SET",
 				pktmbuf_pool,
 				ol_flags, 0, -EINVAL) < 0)
@@ -1765,14 +1767,14 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 	ol_flags = 0;
 
 	/* test to validate if IP type is set when required */
-	ol_flags |= PKT_TX_L4_MASK;
+	ol_flags |= RTE_MBUF_F_TX_L4_MASK;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_IP_TYPE_NOT_SET",
 				pktmbuf_pool,
 				ol_flags, 0, -EINVAL) < 0)
 		GOTO_FAIL("%s failed: IP type is not set.\n", __func__);
 
 	/* test if IP type is set when TCP SEG is on */
-	ol_flags |= PKT_TX_TCP_SEG;
+	ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_IP_TYPE_NOT_SET",
 				pktmbuf_pool,
 				ol_flags, 0, -EINVAL) < 0)
@@ -1780,8 +1782,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	ol_flags = 0;
 	/* test to confirm IP type (IPV4/IPV6) is set */
-	ol_flags = PKT_TX_L4_MASK;
-	ol_flags |= PKT_TX_IPV6;
+	ol_flags = RTE_MBUF_F_TX_L4_MASK;
+	ol_flags |= RTE_MBUF_F_TX_IPV6;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_IP_TYPE_SET",
 				pktmbuf_pool,
 				ol_flags, 0, 0) < 0)
@@ -1789,15 +1791,15 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	ol_flags = 0;
 	/* test to check TSO segment size is non-zero */
-	ol_flags |= PKT_TX_IPV4;
-	ol_flags |= PKT_TX_TCP_SEG;
+	ol_flags |= RTE_MBUF_F_TX_IPV4;
+	ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	/* set 0 tso segment size */
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_NULL_TSO_SEGSZ",
 				pktmbuf_pool,
 				ol_flags, 0, -EINVAL) < 0)
 		GOTO_FAIL("%s failed: tso segment size is null.\n", __func__);
 
-	/* retain IPV4 and PKT_TX_TCP_SEG mask */
+	/* retain IPV4 and RTE_MBUF_F_TX_TCP_SEG mask */
 	/* set valid tso segment size but IP CKSUM not set */
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_TSO_IP_CKSUM_NOT_SET",
 				pktmbuf_pool,
@@ -1806,7 +1808,7 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	/* test to validate if IP checksum is set for TSO capability */
 	/* retain IPV4, TCP_SEG, tso_seg size */
-	ol_flags |= PKT_TX_IP_CKSUM;
+	ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_TSO_IP_CKSUM_SET",
 				pktmbuf_pool,
 				ol_flags, 512, 0) < 0)
@@ -1814,8 +1816,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	/* test to confirm TSO for IPV6 type */
 	ol_flags = 0;
-	ol_flags |= PKT_TX_IPV6;
-	ol_flags |= PKT_TX_TCP_SEG;
+	ol_flags |= RTE_MBUF_F_TX_IPV6;
+	ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_TSO_IPV6_SET",
 				pktmbuf_pool,
 				ol_flags, 512, 0) < 0)
@@ -1823,8 +1825,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	ol_flags = 0;
 	/* test if outer IP checksum set for non outer IPv4 packet */
-	ol_flags |= PKT_TX_IPV6;
-	ol_flags |= PKT_TX_OUTER_IP_CKSUM;
+	ol_flags |= RTE_MBUF_F_TX_IPV6;
+	ol_flags |= RTE_MBUF_F_TX_OUTER_IP_CKSUM;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_OUTER_IPV4_NOT_SET",
 				pktmbuf_pool,
 				ol_flags, 512, -EINVAL) < 0)
@@ -1832,8 +1834,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	ol_flags = 0;
 	/* test to confirm outer IP checksum is set for outer IPV4 packet */
-	ol_flags |= PKT_TX_OUTER_IP_CKSUM;
-	ol_flags |= PKT_TX_OUTER_IPV4;
+	ol_flags |= RTE_MBUF_F_TX_OUTER_IP_CKSUM;
+	ol_flags |= RTE_MBUF_F_TX_OUTER_IPV4;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_OUTER_IPV4_SET",
 				pktmbuf_pool,
 				ol_flags, 512, 0) < 0)
@@ -2366,7 +2368,7 @@ test_pktmbuf_ext_shinfo_init_helper(struct rte_mempool *pktmbuf_pool)
 	buf_iova = rte_mem_virt2iova(ext_buf_addr);
 	rte_pktmbuf_attach_extbuf(m, ext_buf_addr, buf_iova, buf_len,
 		ret_shinfo);
-	if (m->ol_flags != EXT_ATTACHED_MBUF)
+	if (m->ol_flags != RTE_MBUF_F_EXTERNAL)
 		GOTO_FAIL("%s: External buffer is not attached to mbuf\n",
 				__func__);
 
@@ -2380,7 +2382,7 @@ test_pktmbuf_ext_shinfo_init_helper(struct rte_mempool *pktmbuf_pool)
 	/* attach the same external buffer to the cloned mbuf */
 	rte_pktmbuf_attach_extbuf(clone, ext_buf_addr, buf_iova, buf_len,
 			ret_shinfo);
-	if (clone->ol_flags != EXT_ATTACHED_MBUF)
+	if (clone->ol_flags != RTE_MBUF_F_EXTERNAL)
 		GOTO_FAIL("%s: External buffer is not attached to mbuf\n",
 				__func__);
 
@@ -2672,8 +2674,8 @@ test_mbuf_dyn(struct rte_mempool *pktmbuf_pool)
 			flag2, strerror(errno));
 
 	flag3 = rte_mbuf_dynflag_register_bitnum(&dynflag3,
-						rte_bsf64(PKT_LAST_FREE));
-	if (flag3 != rte_bsf64(PKT_LAST_FREE))
+						rte_bsf64(RTE_MBUF_F_LAST_FREE));
+	if (flag3 != rte_bsf64(RTE_MBUF_F_LAST_FREE))
 		GOTO_FAIL("failed to register dynamic flag 3, flag3=%d: %s",
 			flag3, strerror(errno));
 
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index f2f5eff48d..72f4b53109 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -512,9 +512,9 @@ configured TPID.
     // enable VLAN insert offload
     testpmd> port config (port_id) rx_offload vlan_insert|qinq_insert (on|off)
 
-    if (mbuf->ol_flags && PKT_TX_QINQ)       // case-1: insert VLAN to single-tagged packet
+    if (mbuf->ol_flags && RTE_MBUF_F_TX_QINQ)       // case-1: insert VLAN to single-tagged packet
         tci_value = mbuf->vlan_tci_outer
-    else if (mbuf->ol_flags && PKT_TX_VLAN)  // case-2: insert VLAN to untagged packet
+    else if (mbuf->ol_flags && RTE_MBUF_F_TX_VLAN)  // case-2: insert VLAN to untagged packet
         tci_value = mbuf->vlan_tci
 
 VLAN Strip
@@ -528,7 +528,7 @@ The application configures the per-port VLAN strip offload.
     testpmd> port config (port_id) tx_offload vlan_strip (on|off)
 
     // notify application VLAN strip via mbuf
-    mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_STRIPPED // outer VLAN is found and stripped
+    mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_STRIPPED // outer VLAN is found and stripped
     mbuf->vlan_tci = tci_value                      // TCI of the stripped VLAN
 
 Time Synchronization
@@ -552,7 +552,7 @@ packets to application via mbuf.
 .. code-block:: console
 
     // RX packet completion will indicate whether the packet is PTP
-    mbuf->ol_flags |= PKT_RX_IEEE1588_PTP
+    mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP
 
 Statistics Collection
 ~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index 91bdcd065a..d5ffd51dea 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -279,9 +279,9 @@ inner and outer packets can be IPv4 or IPv6.
 - Rx checksum offloads.
 
   The NIC validates IPv4/UDP/TCP checksums of both inner and outer packets.
-  Good checksum flags (e.g. ``PKT_RX_L4_CKSUM_GOOD``) indicate that the inner
+  Good checksum flags (e.g. ``RTE_MBUF_F_RX_L4_CKSUM_GOOD``) indicate that the inner
   packet has the correct checksum, and if applicable, the outer packet also
-  has the correct checksum. Bad checksum flags (e.g. ``PKT_RX_L4_CKSUM_BAD``)
+  has the correct checksum. Bad checksum flags (e.g. ``RTE_MBUF_F_RX_L4_CKSUM_BAD``)
   indicate that the inner and/or outer packets have invalid checksum values.
 
 - Inner Rx packet type classification
@@ -437,8 +437,8 @@ Limitations
 
 Another alternative is modify the adapter's ingress VLAN rewrite mode so that
 packets with the default VLAN tag are stripped by the adapter and presented to
-DPDK as untagged packets. In this case mbuf->vlan_tci and the PKT_RX_VLAN and
-PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
+DPDK as untagged packets. In this case mbuf->vlan_tci and the RTE_MBUF_F_RX_VLAN and
+RTE_MBUF_F_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
 ``devargs`` parameter ``ig-vlan-rewrite=untag``. For example::
 
     -a 12:00.0,ig-vlan-rewrite=untag
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index e346018e4b..fe830338ec 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -210,7 +210,7 @@ Supports Large Receive Offload.
   ``dev_conf.rxmode.max_lro_pkt_size``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
-* **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
+* **[provides]   mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_LRO``, ``mbuf.tso_segsz``.
 * **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
 * **[provides]   rte_eth_dev_info**: ``max_lro_pkt_size``.
 
@@ -224,7 +224,7 @@ Supports TCP Segmentation Offloading.
 
 * **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
 * **[uses]       rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
-* **[uses]       mbuf**: ``mbuf.ol_flags:`` ``PKT_TX_TCP_SEG``, ``PKT_TX_IPV4``, ``PKT_TX_IPV6``, ``PKT_TX_IP_CKSUM``.
+* **[uses]       mbuf**: ``mbuf.ol_flags:`` ``RTE_MBUF_F_TX_TCP_SEG``, ``RTE_MBUF_F_TX_IPV4``, ``RTE_MBUF_F_TX_IPV6``, ``RTE_MBUF_F_TX_IP_CKSUM``.
 * **[uses]       mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
 * **[implements] datapath**: ``TSO functionality``.
 * **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
@@ -292,7 +292,7 @@ Supports RSS hashing on RX.
 * **[uses]     user config**: ``dev_conf.rx_adv_conf.rss_conf``.
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
 * **[provides] rte_eth_dev_info**: ``flow_type_rss_offloads``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_RSS_HASH``, ``mbuf.rss``.
 
 
 .. _nic_features_inner_rss:
@@ -304,7 +304,7 @@ Supports RX RSS hashing on Inner headers.
 
 * **[uses]    rte_flow_action_rss**: ``level``.
 * **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_RSS_HASH``, ``mbuf.rss``.
 
 
 .. _nic_features_rss_key_update:
@@ -424,8 +424,8 @@ of protocol operations. See Security library and PMD documentation for more deta
   ``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``capabilities_get``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
-* **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
-  ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
+* **[provides]   mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_SEC_OFFLOAD``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO``
 
 
@@ -447,8 +447,8 @@ protocol operations. See security library and PMD documentation for more details
   ``capabilities_get``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
-* **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
-  ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
+* **[provides]   mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_SEC_OFFLOAD``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL``
 
 
@@ -472,9 +472,9 @@ Supports VLAN offload to hardware.
 
 * **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
 * **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
-* **[uses]       mbuf**: ``mbuf.ol_flags:PKT_TX_VLAN``, ``mbuf.vlan_tci``.
+* **[uses]       mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_VLAN``, ``mbuf.vlan_tci``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
-* **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN`` ``mbuf.vlan_tci``.
+* **[provides]   mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:RTE_MBUF_F_RX_VLAN`` ``mbuf.vlan_tci``.
 * **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
@@ -490,9 +490,9 @@ Supports QinQ (queue in queue) offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ``, ``mbuf.vlan_tci_outer``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.ol_flags:PKT_RX_QINQ``,
-  ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN``
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_QINQ``, ``mbuf.vlan_tci_outer``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_QINQ_STRIPPED``, ``mbuf.ol_flags:RTE_MBUF_F_RX_QINQ``,
+  ``mbuf.ol_flags:RTE_MBUF_F_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:RTE_MBUF_F_RX_VLAN``
   ``mbuf.vlan_tci``, ``mbuf.vlan_tci_outer``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
@@ -522,12 +522,12 @@ Supports L3 checksum offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
-  ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_IP_CKSUM``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_IPV4`` | ``RTE_MBUF_F_TX_IPV6``.
 * **[uses]     mbuf**: ``mbuf.l2_len``, ``mbuf.l3_len``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
-  ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
-  ``PKT_RX_IP_CKSUM_NONE``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN`` |
+  ``RTE_MBUF_F_RX_IP_CKSUM_BAD`` | ``RTE_MBUF_F_RX_IP_CKSUM_GOOD`` |
+  ``RTE_MBUF_F_RX_IP_CKSUM_NONE``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 
@@ -541,13 +541,13 @@ Supports L4 checksum offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
-  ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
-  ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_IPV4`` | ``RTE_MBUF_F_TX_IPV6``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_L4_NO_CKSUM`` | ``RTE_MBUF_F_TX_TCP_CKSUM`` |
+  ``RTE_MBUF_F_TX_SCTP_CKSUM`` | ``RTE_MBUF_F_TX_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.l2_len``, ``mbuf.l3_len``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
-  ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
-  ``PKT_RX_L4_CKSUM_NONE``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN`` |
+  ``RTE_MBUF_F_RX_L4_CKSUM_BAD`` | ``RTE_MBUF_F_RX_L4_CKSUM_GOOD`` |
+  ``RTE_MBUF_F_RX_L4_CKSUM_NONE``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 
@@ -559,7 +559,7 @@ Timestamp offload
 Supports Timestamp.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TIMESTAMP``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_TIMESTAMP``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_TIMESTAMP``.
 * **[provides] mbuf**: ``mbuf.timestamp``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: DEV_RX_OFFLOAD_TIMESTAMP``.
 * **[related] eth_dev_ops**: ``read_clock``.
@@ -573,7 +573,7 @@ Supports MACsec.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_MACSEC``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 
@@ -587,12 +587,12 @@ Supports inner packet L3 checksum.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
-  ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
-  ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
-  ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_IP_CKSUM``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_IPV4`` | ``RTE_MBUF_F_TX_IPV6``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_OUTER_IP_CKSUM``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_OUTER_IPV4`` | ``RTE_MBUF_F_TX_OUTER_IPV6``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_IP_CKSUM_BAD``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
@@ -605,11 +605,11 @@ Inner L4 checksum
 Supports inner packet L4 checksum.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_L4_CKSUM_UNKNOWN`` |
-  ``PKT_RX_OUTER_L4_CKSUM_BAD`` | ``PKT_RX_OUTER_L4_CKSUM_GOOD`` | ``PKT_RX_OUTER_L4_CKSUM_INVALID``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN`` |
+  ``RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD`` | ``RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD`` | ``RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
-  ``mbuf.ol_flags:PKT_TX_OUTER_UDP_CKSUM``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_OUTER_IPV4`` | ``RTE_MBUF_F_TX_OUTER_IPV6``.
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_OUTER_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index 20a74b9b5b..437662aa05 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -284,7 +284,7 @@ Intel 82599 10 Gigabit Ethernet Controller Specification Update (Revision 2.87)
 Errata: 44 Integrity Error Reported for IPv4/UDP Packets With Zero Checksum
 
 To support UDP zero checksum, the zero and bad UDP checksum packet is marked as
-PKT_RX_L4_CKSUM_UNKNOWN, so the application needs to recompute the checksum to
+RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN, so the application needs to recompute the checksum to
 validate it.
 
 Inline crypto processing support
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index bae73f42d8..9324ce7818 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -255,7 +255,7 @@ Limitations
   no MPRQ feature or vectorized code can be engaged.
 
 - When Multi-Packet Rx queue is configured (``mprq_en``), a Rx packet can be
-  externally attached to a user-provided mbuf with having EXT_ATTACHED_MBUF in
+  externally attached to a user-provided mbuf with having RTE_MBUF_F_EXTERNAL in
   ol_flags. As the mempool for the external buffer is managed by PMD, all the
   Rx mbufs must be freed before the device is closed. Otherwise, the mempool of
   the external buffers will be freed by PMD and the application which still
@@ -263,7 +263,7 @@ Limitations
 
 - If Multi-Packet Rx queue is configured (``mprq_en``) and Rx CQE compression is
   enabled (``rxq_cqe_comp_en``) at the same time, RSS hash result is not fully
-  supported. Some Rx packets may not have PKT_RX_RSS_HASH.
+  supported. Some Rx packets may not have RTE_MBUF_F_RX_RSS_HASH.
 
 - IPv6 Multicast messages are not supported on VM, while promiscuous mode
   and allmulticast mode are both set to off.
@@ -644,7 +644,7 @@ Driver options
   the mbuf by external buffer attachment - ``rte_pktmbuf_attach_extbuf()``.
   A mempool for external buffers will be allocated and managed by PMD. If Rx
   packet is externally attached, ol_flags field of the mbuf will have
-  EXT_ATTACHED_MBUF and this flag must be preserved. ``RTE_MBUF_HAS_EXTBUF()``
+  RTE_MBUF_F_EXTERNAL and this flag must be preserved. ``RTE_MBUF_HAS_EXTBUF()``
   checks the flag. The default value is 128, valid only if ``mprq_en`` is set.
 
 - ``rxqs_min_mprq`` parameter [int]
diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
index 7bff0aef0b..6537f3d5d6 100644
--- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
@@ -211,11 +211,11 @@ To segment an outgoing packet, an application must:
      responsibility to ensure that these flags are set.
 
    - For example, in order to segment TCP/IPv4 packets, the application should
-     add the ``PKT_TX_IPV4`` and ``PKT_TX_TCP_SEG`` flags to the mbuf's
+     add the ``RTE_MBUF_F_TX_IPV4`` and ``RTE_MBUF_F_TX_TCP_SEG`` flags to the mbuf's
      ol_flags.
 
    - If checksum calculation in hardware is required, the application should
-     also add the ``PKT_TX_TCP_CKSUM`` and ``PKT_TX_IP_CKSUM`` flags.
+     also add the ``RTE_MBUF_F_TX_TCP_CKSUM`` and ``RTE_MBUF_F_TX_IP_CKSUM`` flags.
 
 #. Check if the packet should be processed. Packets with one of the
    following properties are not processed and are returned immediately:
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 2f190b40e4..15b266c295 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -123,7 +123,7 @@ timestamp mechanism, the VLAN tagging and the IP checksum computation.
 
 On TX side, it is also possible for an application to delegate some
 processing to the hardware if it supports it. For instance, the
-PKT_TX_IP_CKSUM flag allows to offload the computation of the IPv4
+RTE_MBUF_F_TX_IP_CKSUM flag allows to offload the computation of the IPv4
 checksum.
 
 The following examples explain how to configure different TX offloads on
@@ -134,7 +134,7 @@ a vxlan-encapsulated tcp packet:
 
     mb->l2_len = len(out_eth)
     mb->l3_len = len(out_ip)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CSUM
     set out_ip checksum to 0 in the packet
 
   This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
@@ -143,7 +143,7 @@ a vxlan-encapsulated tcp packet:
 
     mb->l2_len = len(out_eth)
     mb->l3_len = len(out_ip)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM | PKT_TX_UDP_CKSUM
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CSUM | RTE_MBUF_F_TX_UDP_CKSUM
     set out_ip checksum to 0 in the packet
     set out_udp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
@@ -154,7 +154,7 @@ a vxlan-encapsulated tcp packet:
 
     mb->l2_len = len(out_eth + out_ip + out_udp + vxlan + in_eth)
     mb->l3_len = len(in_ip)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CSUM
     set in_ip checksum to 0 in the packet
 
   This is similar to case 1), but l2_len is different. It is supported
@@ -165,7 +165,7 @@ a vxlan-encapsulated tcp packet:
 
     mb->l2_len = len(out_eth + out_ip + out_udp + vxlan + in_eth)
     mb->l3_len = len(in_ip)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM | PKT_TX_TCP_CKSUM
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CSUM | RTE_MBUF_F_TX_TCP_CKSUM
     set in_ip checksum to 0 in the packet
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
@@ -179,8 +179,8 @@ a vxlan-encapsulated tcp packet:
     mb->l2_len = len(out_eth + out_ip + out_udp + vxlan + in_eth)
     mb->l3_len = len(in_ip)
     mb->l4_len = len(in_tcp)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM |
-      PKT_TX_TCP_SEG;
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM |
+      RTE_MBUF_F_TX_TCP_SEG;
     set in_ip checksum to 0 in the packet
     set in_tcp checksum to pseudo header without including the IP
       payload length using rte_ipv4_phdr_cksum()
@@ -194,8 +194,8 @@ a vxlan-encapsulated tcp packet:
     mb->outer_l3_len = len(out_ip)
     mb->l2_len = len(out_udp + vxlan + in_eth)
     mb->l3_len = len(in_ip)
-    mb->ol_flags |= PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM  | \
-      PKT_TX_IP_CKSUM |  PKT_TX_TCP_CKSUM;
+    mb->ol_flags |= RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM  | \
+      RTE_MBUF_F_TX_IP_CKSUM |  RTE_MBUF_F_TX_TCP_CKSUM;
     set out_ip checksum to 0 in the packet
     set in_ip checksum to 0 in the packet
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
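
Case 1 above, written out in C (a sketch; assumes a contiguous header
in the first segment and a port advertising DEV_TX_OFFLOAD_IPV4_CKSUM):

    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_mbuf.h>

    /* Request hardware IPv4 header checksum for a plain eth/ipv4 frame. */
    static void
    request_ipv4_cksum(struct rte_mbuf *mb)
    {
        struct rte_ipv4_hdr *ip;

        mb->l2_len = sizeof(struct rte_ether_hdr);
        mb->l3_len = sizeof(struct rte_ipv4_hdr);
        mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM;

        /* The checksum field must be zeroed before handing off to HW. */
        ip = rte_pktmbuf_mtod_offset(mb, struct rte_ipv4_hdr *, mb->l2_len);
        ip->hdr_checksum = 0;
    }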
diff --git a/doc/guides/prog_guide/metrics_lib.rst b/doc/guides/prog_guide/metrics_lib.rst
index eca855d601..f8416eaa02 100644
--- a/doc/guides/prog_guide/metrics_lib.rst
+++ b/doc/guides/prog_guide/metrics_lib.rst
@@ -290,7 +290,7 @@ Timestamp and latency calculation
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The Latency stats library marks the time in the timestamp field of the
-mbuf for the ingress packets and sets the ``PKT_RX_TIMESTAMP`` flag of
+mbuf for the ingress packets and sets the ``RTE_MBUF_F_RX_TIMESTAMP`` flag of
 ``ol_flags`` for the mbuf to indicate the marked time as a valid one.
 At the egress, the mbufs with the flag set are considered having valid
 timestamp and are used for the latency calculation.
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c..8f9251953d 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -687,9 +687,9 @@ Item: ``META``
 Matches 32 bit metadata item set.
 
 On egress, metadata can be set either by mbuf metadata field with
-PKT_TX_DYNF_METADATA flag or ``SET_META`` action. On ingress, ``SET_META``
+RTE_MBUF_DYNFLAG_TX_METADATA flag or ``SET_META`` action. On ingress, ``SET_META``
 action sets metadata for a packet and the metadata will be reported via
-``metadata`` dynamic field of ``rte_mbuf`` with PKT_RX_DYNF_METADATA flag.
+``metadata`` dynamic field of ``rte_mbuf`` with RTE_MBUF_DYNFLAG_RX_METADATA flag.
 
 - Default ``mask`` matches the specified Rx metadata value.
 
@@ -1656,8 +1656,8 @@ flows to loop between groups.
 Action: ``MARK``
 ^^^^^^^^^^^^^^^^
 
-Attaches an integer value to packets and sets ``PKT_RX_FDIR`` and
-``PKT_RX_FDIR_ID`` mbuf flags.
+Attaches an integer value to packets and sets ``RTE_MBUF_F_RX_FDIR`` and
+``RTE_MBUF_F_RX_FDIR_ID`` mbuf flags.
 
 This value is arbitrary and application-defined. Maximum allowed value
 depends on the underlying implementation. It is returned in the
@@ -1677,7 +1677,7 @@ Action: ``FLAG``
 ^^^^^^^^^^^^^^^^
 
 Flags packets. Similar to `Action: MARK`_ without a specific value; only
-sets the ``PKT_RX_FDIR`` mbuf flag.
+sets the ``RTE_MBUF_F_RX_FDIR`` mbuf flag.
 
 - No configurable properties.
 
@@ -2635,10 +2635,10 @@ Action: ``SET_META``
 
 Set metadata. Item ``META`` matches metadata.
 
-Metadata set by mbuf metadata field with PKT_TX_DYNF_METADATA flag on egress
+Metadata set by mbuf metadata field with RTE_MBUF_DYNFLAG_TX_METADATA flag on egress
 will be overridden by this action. On ingress, the metadata will be carried by
 ``metadata`` dynamic field of ``rte_mbuf`` which can be accessed by
-``RTE_FLOW_DYNF_METADATA()``. PKT_RX_DYNF_METADATA flag will be set along
+``RTE_FLOW_DYNF_METADATA()``. RTE_MBUF_DYNFLAG_RX_METADATA flag will be set along
 with the data.
 
 The mbuf dynamic field must be registered by calling
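
On the receive side, the MARK and FLAG results described above can be
read back as follows (a sketch; assumes a matching flow rule was
installed beforehand):

    #include <stdio.h>
    #include <rte_mbuf.h>

    /* MARK sets both FDIR flags plus a value; FLAG sets only RX_FDIR. */
    static void
    handle_mark(const struct rte_mbuf *m)
    {
        if (!(m->ol_flags & RTE_MBUF_F_RX_FDIR))
            return;
        if (m->ol_flags & RTE_MBUF_F_RX_FDIR_ID)
            printf("MARK action, id=%u\n", m->hash.fdir.hi);
        else
            printf("FLAG action\n");
    }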
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 45239ca56e..07bed606a6 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -39,11 +39,6 @@ Deprecation Notices
   ``__atomic_thread_fence`` must be used for patches that need to be merged in
   20.08 onwards. This change will not introduce any performance degradation.
 
-* mbuf: The mbuf offload flags ``PKT_*`` will be renamed as ``RTE_MBUF_F_*``.
-  A compatibility layer will be kept until DPDK 22.11, except for the flags
-  that are already deprecated (``PKT_RX_L4_CKSUM_BAD``, ``PKT_RX_IP_CKSUM_BAD``,
-  ``PKT_RX_EIP_CKSUM_BAD``, ``PKT_TX_QINQ_PKT``) which will be removed.
-
 * pci: To reduce unnecessary ABIs exposed by DPDK bus driver, "rte_bus_pci.h"
   will be made internal in 21.11 and macros/data structures/functions defined
   in the header will not be considered as ABI anymore. This change is inspired
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 4c56cdfeaa..df1a3053a2 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -229,6 +229,9 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* mbuf: The mbuf offload flags ``PKT_*`` are renamed as ``RTE_MBUF_F_*``. A
+  compatibility layer will be kept until DPDK 22.11.
+
 
 ABI Changes
 -----------
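
The compatibility layer mentioned here keeps the old names available as
deprecated aliases, along these lines (illustrative of the pattern, not
the exact definitions):

    /* Old flag name still compiles; RTE_DEPRECATED emits a build
     * warning pointing applications at the new name.
     */
    #define PKT_RX_VLAN RTE_DEPRECATED(PKT_RX_VLAN) RTE_MBUF_F_RX_VLAN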
diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c
index 5c5aa87a18..ed822412ec 100644
--- a/drivers/compress/mlx5/mlx5_compress.c
+++ b/drivers/compress/mlx5/mlx5_compress.c
@@ -470,7 +470,7 @@ mlx5_compress_addr2mr(struct mlx5_compress_priv *priv, uintptr_t addr,
 		return lkey;
 	/* Take slower bottom-half on miss. */
 	return mlx5_mr_addr2mr_bh(priv->pd, 0, &priv->mr_scache, mr_ctrl, addr,
-				  !!(ol_flags & EXT_ATTACHED_MBUF));
+				  !!(ol_flags & RTE_MBUF_F_EXTERNAL));
 }
 
 static __rte_always_inline uint32_t
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index c25c8e67b2..a16f75337b 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -368,20 +368,20 @@ cn10k_cpt_sec_ucc_process(struct rte_crypto_op *cop,
 	switch (uc_compcode) {
 	case ROC_IE_OT_UCC_SUCCESS:
 		if (sa->ip_csum_enable)
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		break;
 	case ROC_IE_OT_UCC_SUCCESS_PKT_IP_BADCSUM:
-		mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		break;
 	case ROC_IE_OT_UCC_SUCCESS_PKT_L4_GOODCSUM:
-		mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		if (sa->ip_csum_enable)
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		break;
 	case ROC_IE_OT_UCC_SUCCESS_PKT_L4_BADCSUM:
-		mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		if (sa->ip_csum_enable)
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		break;
 	default:
 		break;
diff --git a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
index df1b0a3678..881fbd19b3 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
@@ -69,10 +69,10 @@ process_outb_sa(struct rte_crypto_op *cop, struct cn10k_ipsec_sa *sess,
 	}
 #endif
 
-	if (m_src->ol_flags & PKT_TX_IP_CKSUM)
+	if (m_src->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 		inst_w4_u64 &= ~BIT_ULL(33);
 
-	if (m_src->ol_flags & PKT_TX_L4_MASK)
+	if (m_src->ol_flags & RTE_MBUF_F_TX_L4_MASK)
 		inst_w4_u64 &= ~BIT_ULL(32);
 
 	/* Prepare CPT instruction */
diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index 6a2f8b6ac6..714ff539ca 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -334,7 +334,7 @@ mlx5_crypto_addr2mr(struct mlx5_crypto_priv *priv, uintptr_t addr,
 		return lkey;
 	/* Take slower bottom-half on miss. */
 	return mlx5_mr_addr2mr_bh(priv->pd, 0, &priv->mr_scache, mr_ctrl, addr,
-				  !!(ol_flags & EXT_ATTACHED_MBUF));
+				  !!(ol_flags & RTE_MBUF_F_EXTERNAL));
 }
 
 static __rte_always_inline uint32_t
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index 6be9be0b47..d536c0a8ca 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -642,7 +642,7 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
 	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
 		uint64_t ol_flags = m->ol_flags;
 
-		if (ol_flags & PKT_TX_SEC_OFFLOAD) {
+		if (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
 			uintptr_t ssow_base = base;
 
 			if (ev->sched_type)
diff --git a/drivers/event/octeontx/ssovf_worker.c b/drivers/event/octeontx/ssovf_worker.c
index 8b056ddc5a..1300c4f155 100644
--- a/drivers/event/octeontx/ssovf_worker.c
+++ b/drivers/event/octeontx/ssovf_worker.c
@@ -428,53 +428,53 @@ octeontx_create_rx_ol_flags_array(void *mem)
 		errcode = idx & 0xff;
 		errlev = (idx & 0x700) >> 8;
 
-		val = PKT_RX_IP_CKSUM_UNKNOWN;
-		val |= PKT_RX_L4_CKSUM_UNKNOWN;
-		val |= PKT_RX_OUTER_L4_CKSUM_UNKNOWN;
+		val = RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN;
 
 		switch (errlev) {
 		case OCCTX_ERRLEV_RE:
 			if (errcode) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		case OCCTX_ERRLEV_LC:
 			if (errcode == OCCTX_EC_IP4_CSUM) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 			break;
 		case OCCTX_ERRLEV_LD:
 			/* Check if parsed packet is neither IPv4 nor IPv6 */
 			if (errcode == OCCTX_EC_IP4_NOT)
 				break;
-			val |= PKT_RX_IP_CKSUM_GOOD;
+			val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			if (errcode == OCCTX_EC_L4_CSUM)
-				val |= PKT_RX_OUTER_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 			else
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			break;
 		case OCCTX_ERRLEV_LE:
 			if (errcode == OCCTX_EC_IP4_CSUM)
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			else
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case OCCTX_ERRLEV_LF:
 			/* Check if parsed packet is neither IPv4 nor IPv6 */
 			if (errcode == OCCTX_EC_IP4_NOT)
 				break;
-			val |= PKT_RX_IP_CKSUM_GOOD;
+			val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			if (errcode == OCCTX_EC_L4_CSUM)
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			else
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			break;
 		}
 
diff --git a/drivers/event/octeontx/ssovf_worker.h b/drivers/event/octeontx/ssovf_worker.h
index f609b296ed..ccc6de588e 100644
--- a/drivers/event/octeontx/ssovf_worker.h
+++ b/drivers/event/octeontx/ssovf_worker.h
@@ -126,7 +126,7 @@ ssovf_octeontx_wqe_to_pkt(uint64_t work, uint16_t port_info,
 
 	if (!!(flag & OCCTX_RX_VLAN_FLTR_F)) {
 		if (likely(wqe->s.w2.vv)) {
-			mbuf->ol_flags |= PKT_RX_VLAN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 			mbuf->vlan_tci =
 				ntohs(*((uint16_t *)((char *)mbuf->buf_addr +
 					mbuf->data_off + wqe->s.w4.vlptr + 2)));
diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h
index 3e36dcece1..aa766c6602 100644
--- a/drivers/event/octeontx2/otx2_worker.h
+++ b/drivers/event/octeontx2/otx2_worker.h
@@ -277,7 +277,7 @@ otx2_ssogws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
 	uint16_t ref_cnt = m->refcnt;
 
 	if ((flags & NIX_TX_OFFLOAD_SECURITY_F) &&
-	    (m->ol_flags & PKT_TX_SEC_OFFLOAD)) {
+	    (m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)) {
 		txq = otx2_ssogws_xtract_meta(m, txq_data);
 		return otx2_sec_event_tx(base, ev, m, txq, flags);
 	}
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index 931fc230e5..ad8506344a 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -149,7 +149,7 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		/* check for vlan info */
 		if (ppd->tp_status & TP_STATUS_VLAN_VALID) {
 			mbuf->vlan_tci = ppd->tp_vlan_tci;
-			mbuf->ol_flags |= (PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+			mbuf->ol_flags |= (RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 
 			if (!pkt_q->vlan_strip && rte_vlan_insert(&mbuf))
 				PMD_LOG(ERR, "Failed to reinsert VLAN tag");
@@ -229,7 +229,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		}
 
 		/* insert vlan info if necessary */
-		if (mbuf->ol_flags & PKT_TX_VLAN) {
+		if (mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			if (rte_vlan_insert(&mbuf)) {
 				rte_pktmbuf_free(mbuf);
 				continue;
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index fca682d8b0..e7805ac2b2 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -15,20 +15,20 @@
 #include "hw_atl/hw_atl_b0_internal.h"
 
 #define ATL_TX_CKSUM_OFFLOAD_MASK (			 \
-	PKT_TX_IP_CKSUM |				 \
-	PKT_TX_L4_MASK |				 \
-	PKT_TX_TCP_SEG)
+	RTE_MBUF_F_TX_IP_CKSUM |				 \
+	RTE_MBUF_F_TX_L4_MASK |				 \
+	RTE_MBUF_F_TX_TCP_SEG)
 
 #define ATL_TX_OFFLOAD_MASK (				 \
-	PKT_TX_VLAN |					 \
-	PKT_TX_IPV6 |					 \
-	PKT_TX_IPV4 |					 \
-	PKT_TX_IP_CKSUM |				 \
-	PKT_TX_L4_MASK |				 \
-	PKT_TX_TCP_SEG)
+	RTE_MBUF_F_TX_VLAN |					 \
+	RTE_MBUF_F_TX_IPV6 |					 \
+	RTE_MBUF_F_TX_IPV4 |					 \
+	RTE_MBUF_F_TX_IP_CKSUM |				 \
+	RTE_MBUF_F_TX_L4_MASK |				 \
+	RTE_MBUF_F_TX_TCP_SEG)
 
 #define ATL_TX_OFFLOAD_NOTSUP_MASK \
-	(PKT_TX_OFFLOAD_MASK ^ ATL_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ ATL_TX_OFFLOAD_MASK)
 
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
@@ -850,21 +850,21 @@ atl_desc_to_offload_flags(struct atl_rx_queue *rxq,
 	if (rxq->l3_csum_enabled && ((rxd_wb->pkt_type & 0x3) == 0)) {
 		/* IPv4 csum error ? */
 		if (rxd_wb->rx_stat & BIT(1))
-			mbuf_flags |= PKT_RX_IP_CKSUM_BAD;
+			mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else
-			mbuf_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	} else {
-		mbuf_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+		mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 	}
 
 	/* CSUM calculated ? */
 	if (rxq->l4_csum_enabled && (rxd_wb->rx_stat & BIT(3))) {
 		if (rxd_wb->rx_stat & BIT(2))
-			mbuf_flags |= PKT_RX_L4_CKSUM_BAD;
+			mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else
-			mbuf_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	} else {
-		mbuf_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+		mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 	}
 
 	return mbuf_flags;
@@ -1044,12 +1044,12 @@ atl_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			rx_mbuf->packet_type = atl_desc_to_pkt_type(&rxd_wb);
 
 			if (rx_mbuf->packet_type & RTE_PTYPE_L2_ETHER_VLAN) {
-				rx_mbuf->ol_flags |= PKT_RX_VLAN;
+				rx_mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 				rx_mbuf->vlan_tci = rxd_wb.vlan;
 
 				if (cfg->vlan_strip)
 					rx_mbuf->ol_flags |=
-						PKT_RX_VLAN_STRIPPED;
+						RTE_MBUF_F_RX_VLAN_STRIPPED;
 			}
 
 			if (!rx_mbuf_first)
@@ -1179,12 +1179,12 @@ atl_tso_setup(struct rte_mbuf *tx_pkt, union hw_atl_txc_s *txc)
 	uint32_t tx_cmd = 0;
 	uint64_t ol_flags = tx_pkt->ol_flags;
 
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		tx_cmd |= tx_desc_cmd_lso | tx_desc_cmd_l4cs;
 
 		txc->cmd = 0x4;
 
-		if (ol_flags & PKT_TX_IPV6)
+		if (ol_flags & RTE_MBUF_F_TX_IPV6)
 			txc->cmd |= 0x2;
 
 		txc->l2_len = tx_pkt->l2_len;
@@ -1194,7 +1194,7 @@ atl_tso_setup(struct rte_mbuf *tx_pkt, union hw_atl_txc_s *txc)
 		txc->mss_len = tx_pkt->tso_segsz;
 	}
 
-	if (ol_flags & PKT_TX_VLAN) {
+	if (ol_flags & RTE_MBUF_F_TX_VLAN) {
 		tx_cmd |= tx_desc_cmd_vlan;
 		txc->vlan_tag = tx_pkt->vlan_tci;
 	}
@@ -1212,9 +1212,9 @@ atl_setup_csum_offload(struct rte_mbuf *mbuf, struct hw_atl_txd_s *txd,
 		       uint32_t tx_cmd)
 {
 	txd->cmd |= tx_desc_cmd_fcs;
-	txd->cmd |= (mbuf->ol_flags & PKT_TX_IP_CKSUM) ? tx_desc_cmd_ipv4 : 0;
+	txd->cmd |= (mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) ? tx_desc_cmd_ipv4 : 0;
 	/* L4 csum requested */
-	txd->cmd |= (mbuf->ol_flags & PKT_TX_L4_MASK) ? tx_desc_cmd_l4cs : 0;
+	txd->cmd |= (mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) ? tx_desc_cmd_l4cs : 0;
 	txd->cmd |= tx_cmd;
 }
 
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 6f0dafc287..3898e8299d 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1310,7 +1310,7 @@ avp_dev_copy_from_buffers(struct avp_dev *avp,
 	src_offset = 0;
 
 	if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
-		ol_flags = PKT_RX_VLAN;
+		ol_flags = RTE_MBUF_F_RX_VLAN;
 		vlan_tci = pkt_buf->vlan_tci;
 	} else {
 		ol_flags = 0;
@@ -1568,7 +1568,7 @@ avp_recv_pkts(void *rx_queue,
 		m->port = avp->port_id;
 
 		if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
-			m->ol_flags = PKT_RX_VLAN;
+			m->ol_flags = RTE_MBUF_F_RX_VLAN;
 			m->vlan_tci = pkt_buf->vlan_tci;
 		}
 
@@ -1674,7 +1674,7 @@ avp_dev_copy_to_buffers(struct avp_dev *avp,
 	first_buf->nb_segs = count;
 	first_buf->pkt_len = total_length;
 
-	if (mbuf->ol_flags & PKT_TX_VLAN) {
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		first_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
 		first_buf->vlan_tci = mbuf->vlan_tci;
 	}
@@ -1905,7 +1905,7 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		pkt_buf->nb_segs = 1;
 		pkt_buf->next = NULL;
 
-		if (m->ol_flags & PKT_TX_VLAN) {
+		if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			pkt_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
 			pkt_buf->vlan_tci = m->vlan_tci;
 		}
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index c9d5800b01..f0fd3c6eb8 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -260,17 +260,17 @@ axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		}
 		if (rxq->pdata->rx_csum_enable) {
 			mbuf->ol_flags = 0;
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			if (unlikely(error_status == AXGBE_L3_CSUM_ERR)) {
-				mbuf->ol_flags &= ~PKT_RX_IP_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
-				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 			} else if (
 				unlikely(error_status == AXGBE_L4_CSUM_ERR)) {
-				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			}
 		}
 		rte_prefetch1(rte_pktmbuf_mtod(mbuf, void *));
@@ -282,25 +282,25 @@ axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		offloads = rxq->pdata->eth_dev->data->dev_conf.rxmode.offloads;
 		if (!err || !etlt) {
 			if (etlt == RX_CVLAN_TAG_PRESENT) {
-				mbuf->ol_flags |= PKT_RX_VLAN;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
 				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
+					mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 				else
-					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
+					mbuf->ol_flags &= ~RTE_MBUF_F_RX_VLAN_STRIPPED;
 				} else {
 					mbuf->ol_flags &=
-						~(PKT_RX_VLAN
-							| PKT_RX_VLAN_STRIPPED);
+						~(RTE_MBUF_F_RX_VLAN
+							| RTE_MBUF_F_RX_VLAN_STRIPPED);
 					mbuf->vlan_tci = 0;
 				}
 		}
 		/* Indicate if a Context Descriptor is next */
 		if (AXGMAC_GET_BITS_LE(desc->write.desc3, RX_NORMAL_DESC3, CDA))
-			mbuf->ol_flags |= PKT_RX_IEEE1588_PTP
-					| PKT_RX_IEEE1588_TMST;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP
+					| RTE_MBUF_F_RX_IEEE1588_TMST;
 		pkt_len = AXGMAC_GET_BITS_LE(desc->write.desc3, RX_NORMAL_DESC3,
 					     PL) - rxq->crc_len;
 		/* Mbuf populate */
@@ -426,17 +426,17 @@ uint16_t eth_axgbe_recv_scattered_pkts(void *rx_queue,
 		offloads = rxq->pdata->eth_dev->data->dev_conf.rxmode.offloads;
 		if (!err || !etlt) {
 			if (etlt == RX_CVLAN_TAG_PRESENT) {
-				mbuf->ol_flags |= PKT_RX_VLAN;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
 				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
+					mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 				else
-					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
+					mbuf->ol_flags &= ~RTE_MBUF_F_RX_VLAN_STRIPPED;
 			} else {
 				mbuf->ol_flags &=
-					~(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+					~(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 				mbuf->vlan_tci = 0;
 			}
 		}
@@ -465,17 +465,17 @@ uint16_t eth_axgbe_recv_scattered_pkts(void *rx_queue,
 		first_seg->port = rxq->port_id;
 		if (rxq->pdata->rx_csum_enable) {
 			mbuf->ol_flags = 0;
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			if (unlikely(error_status == AXGBE_L3_CSUM_ERR)) {
-				mbuf->ol_flags &= ~PKT_RX_IP_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
-				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 			} else if (unlikely(error_status
 						== AXGBE_L4_CSUM_ERR)) {
-				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			}
 		}
 
@@ -795,7 +795,7 @@ static int axgbe_xmit_hw(struct axgbe_tx_queue *txq,
 	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, FL,
 			   mbuf->pkt_len);
 	/* Timestamp enablement check */
-	if (mbuf->ol_flags & PKT_TX_IEEE1588_TMST)
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 		AXGMAC_SET_BITS_LE(desc->desc2, TX_NORMAL_DESC2, TTSE, 1);
 	rte_wmb();
 	/* Mark it as First and Last Descriptor */
@@ -804,14 +804,14 @@ static int axgbe_xmit_hw(struct axgbe_tx_queue *txq,
 	/* Mark it as a NORMAL descriptor */
 	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CTXT, 0);
 	/* configure h/w Offload */
-	mask = mbuf->ol_flags & PKT_TX_L4_MASK;
-	if ((mask == PKT_TX_TCP_CKSUM) || (mask == PKT_TX_UDP_CKSUM))
+	mask = mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK;
+	if ((mask == RTE_MBUF_F_TX_TCP_CKSUM) || (mask == RTE_MBUF_F_TX_UDP_CKSUM))
 		AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CIC, 0x3);
-	else if (mbuf->ol_flags & PKT_TX_IP_CKSUM)
+	else if (mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 		AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CIC, 0x1);
 	rte_wmb();
 
-	if (mbuf->ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+	if (mbuf->ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 		/* Mark it as a CONTEXT descriptor */
 		AXGMAC_SET_BITS_LE(desc->desc3, TX_CONTEXT_DESC3,
 				  CTXT, 1);
diff --git a/drivers/net/axgbe/axgbe_rxtx_vec_sse.c b/drivers/net/axgbe/axgbe_rxtx_vec_sse.c
index 1c962b9333..816371cd79 100644
--- a/drivers/net/axgbe/axgbe_rxtx_vec_sse.c
+++ b/drivers/net/axgbe/axgbe_rxtx_vec_sse.c
@@ -23,7 +23,7 @@ axgbe_vec_tx(volatile struct axgbe_tx_desc *desc,
 {
 	uint64_t tmst_en = 0;
 	/* Timestamp enablement check */
-	if (mbuf->ol_flags & PKT_TX_IEEE1588_TMST)
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 		tmst_en = TX_DESC_CTRL_FLAG_TMST;
 	__m128i descriptor = _mm_set_epi64x((uint64_t)mbuf->pkt_len << 32 |
 					    TX_DESC_CTRL_FLAGS | mbuf->data_len
diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 819e54044b..f67db015b5 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -2189,7 +2189,7 @@ int bnx2x_tx_encap(struct bnx2x_tx_queue *txq, struct rte_mbuf *m0)
 
 	tx_start_bd->nbd = rte_cpu_to_le_16(2);
 
-	if (m0->ol_flags & PKT_TX_VLAN) {
+	if (m0->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		tx_start_bd->vlan_or_ethertype =
 		    rte_cpu_to_le_16(m0->vlan_tci);
 		tx_start_bd->bd_flags.as_bitfield |=
diff --git a/drivers/net/bnx2x/bnx2x_rxtx.c b/drivers/net/bnx2x/bnx2x_rxtx.c
index fea7a34e7d..66b0512c86 100644
--- a/drivers/net/bnx2x/bnx2x_rxtx.c
+++ b/drivers/net/bnx2x/bnx2x_rxtx.c
@@ -435,7 +435,7 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		 */
 		if (cqe_fp->pars_flags.flags & PARSING_FLAGS_VLAN) {
 			rx_mb->vlan_tci = cqe_fp->vlan_tag;
-			rx_mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			rx_mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		}
 
 		rx_pkts[nb_rx] = rx_mb;
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 4c1ee4294e..18eda482ef 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -260,25 +260,25 @@ static void bnxt_tpa_start(struct bnxt_rx_queue *rxq,
 	mbuf->pkt_len = rte_le_to_cpu_32(tpa_start->len);
 	mbuf->data_len = mbuf->pkt_len;
 	mbuf->port = rxq->port_id;
-	mbuf->ol_flags = PKT_RX_LRO;
+	mbuf->ol_flags = RTE_MBUF_F_RX_LRO;
 
 	bnxt_tpa_get_metadata(rxq->bp, tpa_info, tpa_start, tpa_start1);
 
 	if (likely(tpa_info->hash_valid)) {
 		mbuf->hash.rss = tpa_info->rss_hash;
-		mbuf->ol_flags |= PKT_RX_RSS_HASH;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	} else if (tpa_info->cfa_code_valid) {
 		mbuf->hash.fdir.id = tpa_info->cfa_code;
-		mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 	}
 
 	if (tpa_info->vlan_valid && BNXT_RX_VLAN_STRIP_EN(rxq->bp)) {
 		mbuf->vlan_tci = tpa_info->vlan;
-		mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 	}
 
 	if (likely(tpa_info->l4_csum_valid))
-		mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	/* recycle next mbuf */
 	data_cons = RING_NEXT(data_cons);
@@ -576,34 +576,34 @@ bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
 
 		if (BNXT_RX_VLAN_STRIP_EN(rxq->bp)) {
 			if (i & RX_PKT_CMPL_FLAGS2_META_FORMAT_VLAN)
-				pt[i] |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+				pt[i] |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		}
 
 		if (i & (RX_PKT_CMPL_FLAGS2_T_IP_CS_CALC << 3)) {
 			/* Tunnel case. */
 			if (outer_cksum_enabled) {
 				if (i & RX_PKT_CMPL_FLAGS2_IP_CS_CALC)
-					pt[i] |= PKT_RX_IP_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 				if (i & RX_PKT_CMPL_FLAGS2_L4_CS_CALC)
-					pt[i] |= PKT_RX_L4_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 				if (i & RX_PKT_CMPL_FLAGS2_T_L4_CS_CALC)
-					pt[i] |= PKT_RX_OUTER_L4_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
 			} else {
 				if (i & RX_PKT_CMPL_FLAGS2_T_IP_CS_CALC)
-					pt[i] |= PKT_RX_IP_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 				if (i & RX_PKT_CMPL_FLAGS2_T_L4_CS_CALC)
-					pt[i] |= PKT_RX_L4_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 		} else {
 			/* Non-tunnel case. */
 			if (i & RX_PKT_CMPL_FLAGS2_IP_CS_CALC)
-				pt[i] |= PKT_RX_IP_CKSUM_GOOD;
+				pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 			if (i & RX_PKT_CMPL_FLAGS2_L4_CS_CALC)
-				pt[i] |= PKT_RX_L4_CKSUM_GOOD;
+				pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		}
 	}
 
@@ -616,30 +616,30 @@ bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
 			/* Tunnel case. */
 			if (outer_cksum_enabled) {
 				if (i & (RX_PKT_CMPL_ERRORS_IP_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_IP_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 				if (i & (RX_PKT_CMPL_ERRORS_T_IP_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_OUTER_IP_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 
 				if (i & (RX_PKT_CMPL_ERRORS_L4_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_L4_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 
 				if (i & (RX_PKT_CMPL_ERRORS_T_L4_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_OUTER_L4_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 			} else {
 				if (i & (RX_PKT_CMPL_ERRORS_T_IP_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_IP_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 				if (i & (RX_PKT_CMPL_ERRORS_T_L4_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_L4_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			}
 		} else {
 			/* Non-tunnel case. */
 			if (i & (RX_PKT_CMPL_ERRORS_IP_CS_ERROR >> 4))
-				pt[i] |= PKT_RX_IP_CKSUM_BAD;
+				pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 			if (i & (RX_PKT_CMPL_ERRORS_L4_CS_ERROR >> 4))
-				pt[i] |= PKT_RX_L4_CKSUM_BAD;
+				pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		}
 	}
 }
@@ -677,13 +677,13 @@ bnxt_set_ol_flags(struct bnxt_rx_ring_info *rxr, struct rx_pkt_cmpl *rxcmp,
 
 	if (flags_type & RX_PKT_CMPL_FLAGS_RSS_VALID) {
 		mbuf->hash.rss = rte_le_to_cpu_32(rxcmp->rss_hash);
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 #ifdef RTE_LIBRTE_IEEE1588
 	if (unlikely((flags_type & RX_PKT_CMPL_FLAGS_MASK) ==
 		     RX_PKT_CMPL_FLAGS_ITYPE_PTP_W_TIMESTAMP))
-		ol_flags |= PKT_RX_IEEE1588_PTP | PKT_RX_IEEE1588_TMST;
+		ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP | RTE_MBUF_F_RX_IEEE1588_TMST;
 #endif
 
 	mbuf->ol_flags = ol_flags;
@@ -807,7 +807,7 @@ bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
 		mbuf->hash.fdir.hi = mark_id;
 		*bnxt_cfa_code_dynfield(mbuf) = cfa_code & 0xffffffffull;
 		mbuf->hash.fdir.id = rxcmp1->cfa_code;
-		mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		return mark_id;
 	}
 
@@ -854,7 +854,7 @@ void bnxt_set_mark_in_mbuf(struct bnxt *bp,
 	}
 
 	mbuf->hash.fdir.hi = bp->mark_table[cfa_code].mark_id;
-	mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	mbuf->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 }
 
 static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h
index 59adb7242c..a84f016609 100644
--- a/drivers/net/bnxt/bnxt_rxr.h
+++ b/drivers/net/bnxt/bnxt_rxr.h
@@ -212,7 +212,7 @@ static inline void bnxt_rx_vlan_v2(struct rte_mbuf *mbuf,
 {
 	if (RX_CMP_VLAN_VALID(rxcmp)) {
 		mbuf->vlan_tci = RX_CMP_METADATA0_VID(rxcmp1);
-		mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 	}
 }
 
@@ -276,47 +276,47 @@ static inline void bnxt_parse_csum_v2(struct rte_mbuf *mbuf,
 			t_pkt = 1;
 
 		if (unlikely(RX_CMP_V2_L4_CS_ERR(error_v2)))
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else if (flags2 & RX_CMP_FLAGS2_L4_CSUM_ALL_OK_MASK)
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		else
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 
 		if (unlikely(RX_CMP_V2_L3_CS_ERR(error_v2)))
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else if (flags2 & RX_CMP_FLAGS2_IP_CSUM_ALL_OK_MASK)
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		else
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 	} else {
 		hdr_cnt = RX_CMP_V2_L4_CS_OK(flags2);
 		if (hdr_cnt > 1)
 			t_pkt = 1;
 
 		if (RX_CMP_V2_L4_CS_OK(flags2))
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		else if (RX_CMP_V2_L4_CS_ERR(error_v2))
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 
 		if (RX_CMP_V2_L3_CS_OK(flags2))
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		else if (RX_CMP_V2_L3_CS_ERR(error_v2))
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 	}
 
 	if (t_pkt) {
 		if (unlikely(RX_CMP_V2_OT_L4_CS_ERR(error_v2) ||
 					RX_CMP_V2_T_L4_CS_ERR(error_v2)))
-			mbuf->ol_flags |= PKT_RX_OUTER_L4_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 		else
-			mbuf->ol_flags |= PKT_RX_OUTER_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
 
 		if (unlikely(RX_CMP_V2_T_IP_CS_ERR(error_v2)))
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	}
 }
 
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index e36da59fce..31f5e856fb 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -111,12 +111,12 @@ int bnxt_init_tx_ring_struct(struct bnxt_tx_queue *txq, unsigned int socket_id)
 static bool
 bnxt_xmit_need_long_bd(struct rte_mbuf *tx_pkt, struct bnxt_tx_queue *txq)
 {
-	if (tx_pkt->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM |
-				PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM |
-				PKT_TX_VLAN | PKT_TX_OUTER_IP_CKSUM |
-				PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN |
-				PKT_TX_TUNNEL_GENEVE | PKT_TX_IEEE1588_TMST |
-				PKT_TX_QINQ) ||
+	if (tx_pkt->ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_TCP_CKSUM |
+				RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_IP_CKSUM |
+				RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+				RTE_MBUF_F_TX_TUNNEL_GRE | RTE_MBUF_F_TX_TUNNEL_VXLAN |
+				RTE_MBUF_F_TX_TUNNEL_GENEVE | RTE_MBUF_F_TX_IEEE1588_TMST |
+				RTE_MBUF_F_TX_QINQ) ||
 	     (BNXT_TRUFLOW_EN(txq->bp) &&
 	      (txq->bp->tx_cfa_action || txq->vfr_tx_cfa_action)))
 		return true;
@@ -203,13 +203,13 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		vlan_tag_flags = 0;
 
 		/* HW can accelerate only outer vlan in QinQ mode */
-		if (tx_pkt->ol_flags & PKT_TX_QINQ) {
+		if (tx_pkt->ol_flags & RTE_MBUF_F_TX_QINQ) {
 			vlan_tag_flags = TX_BD_LONG_CFA_META_KEY_VLAN_TAG |
 				tx_pkt->vlan_tci_outer;
 			outer_tpid_bd = txq->bp->outer_tpid_bd &
 				BNXT_OUTER_TPID_BD_MASK;
 			vlan_tag_flags |= outer_tpid_bd;
-		} else if (tx_pkt->ol_flags & PKT_TX_VLAN) {
+		} else if (tx_pkt->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			/* shurd: Should this mask at
 			 * TX_BD_LONG_CFA_META_VLAN_VID_MASK?
 			 */
@@ -239,7 +239,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		else
 			txbd1->cfa_action = txq->bp->tx_cfa_action;
 
-		if (tx_pkt->ol_flags & PKT_TX_TCP_SEG) {
+		if (tx_pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			uint16_t hdr_size;
 
 			/* TSO */
@@ -247,7 +247,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 					 TX_BD_LONG_LFLAGS_T_IPID;
 			hdr_size = tx_pkt->l2_len + tx_pkt->l3_len +
 					tx_pkt->l4_len;
-			hdr_size += (tx_pkt->ol_flags & PKT_TX_TUNNEL_MASK) ?
+			hdr_size += (tx_pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 				    tx_pkt->outer_l2_len +
 				    tx_pkt->outer_l3_len : 0;
 			/* The hdr_size is multiple of 16bit units not 8bit.
@@ -302,24 +302,24 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 			   PKT_TX_TCP_UDP_CKSUM) {
 			/* TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_TCP_CKSUM) ==
-			   PKT_TX_TCP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) ==
+			   RTE_MBUF_F_TX_TCP_CKSUM) {
 			/* TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_UDP_CKSUM) ==
-			   PKT_TX_UDP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM) ==
+			   RTE_MBUF_F_TX_UDP_CKSUM) {
 			/* TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_IP_CKSUM) ==
-			   PKT_TX_IP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) ==
+			   RTE_MBUF_F_TX_IP_CKSUM) {
 			/* IP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_IP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_OUTER_IP_CKSUM) ==
-			   PKT_TX_OUTER_IP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) ==
+			   RTE_MBUF_F_TX_OUTER_IP_CKSUM) {
 			/* IP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_T_IP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_IEEE1588_TMST) ==
-			   PKT_TX_IEEE1588_TMST) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST) ==
+			   RTE_MBUF_F_TX_IEEE1588_TMST) {
 			/* PTP */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_STAMP;
 		}
diff --git a/drivers/net/bnxt/bnxt_txr.h b/drivers/net/bnxt/bnxt_txr.h
index 6bfdc6d01a..e11343c082 100644
--- a/drivers/net/bnxt/bnxt_txr.h
+++ b/drivers/net/bnxt/bnxt_txr.h
@@ -60,25 +60,25 @@ int bnxt_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int bnxt_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int bnxt_flush_tx_cmp(struct bnxt_cp_ring_info *cpr);
 
-#define PKT_TX_OIP_IIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
-					PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_IIP_UDP_CKSUM	(PKT_TX_UDP_CKSUM | \
-					PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_IIP_TCP_CKSUM	(PKT_TX_TCP_CKSUM | \
-					PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_IIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
-					PKT_TX_IP_CKSUM)
-#define PKT_TX_IIP_TCP_CKSUM		(PKT_TX_TCP_CKSUM | PKT_TX_IP_CKSUM)
-#define PKT_TX_IIP_UDP_CKSUM		(PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM)
-#define PKT_TX_OIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
-					PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_UDP_CKSUM		(PKT_TX_UDP_CKSUM | \
-					PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_TCP_CKSUM		(PKT_TX_TCP_CKSUM | \
-					PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_IIP_CKSUM		(PKT_TX_IP_CKSUM |	\
-					 PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_TCP_UDP_CKSUM		(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)
+#define PKT_TX_OIP_IIP_TCP_UDP_CKSUM	(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_IIP_UDP_CKSUM	(RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_IIP_TCP_CKSUM	(RTE_MBUF_F_TX_TCP_CKSUM | \
+					RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_IIP_TCP_UDP_CKSUM	(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_IP_CKSUM)
+#define PKT_TX_IIP_TCP_CKSUM		(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_IP_CKSUM)
+#define PKT_TX_IIP_UDP_CKSUM		(RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_IP_CKSUM)
+#define PKT_TX_OIP_TCP_UDP_CKSUM	(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_UDP_CKSUM		(RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_TCP_CKSUM		(RTE_MBUF_F_TX_TCP_CKSUM | \
+					RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_IIP_CKSUM		(RTE_MBUF_F_TX_IP_CKSUM |	\
+					 RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_TCP_UDP_CKSUM		(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM)
 
 
 #define TX_BD_FLG_TIP_IP_TCP_UDP_CHKSUM	(TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM | \
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 542c6633b5..ce40eef28a 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -112,7 +112,7 @@ is_lacp_packets(uint16_t ethertype, uint8_t subtype, struct rte_mbuf *mbuf)
 	const uint16_t ether_type_slow_be =
 		rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
 
-	return !((mbuf->ol_flags & PKT_RX_VLAN) ? mbuf->vlan_tci : 0) &&
+	return !((mbuf->ol_flags & RTE_MBUF_F_RX_VLAN) ? mbuf->vlan_tci : 0) &&
 		(ethertype == ether_type_slow_be &&
 		(subtype == SLOW_SUBTYPE_MARKER || subtype == SLOW_SUBTYPE_LACP));
 }
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 9dfea99db9..6a86998c88 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -50,15 +50,15 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	uint16_t flags = 0;
 
 	/* Fastpath is dependent on these enums */
-	RTE_BUILD_BUG_ON(PKT_TX_TCP_CKSUM != (1ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_SCTP_CKSUM != (2ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_UDP_CKSUM != (3ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_IP_CKSUM != (1ULL << 54));
-	RTE_BUILD_BUG_ON(PKT_TX_IPV4 != (1ULL << 55));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IP_CKSUM != (1ULL << 58));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV4 != (1ULL << 59));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV6 != (1ULL << 60));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_UDP_CKSUM != (1ULL << 41));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_CKSUM != (1ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_SCTP_CKSUM != (2ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_UDP_CKSUM != (3ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IP_CKSUM != (1ULL << 54));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IPV4 != (1ULL << 55));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IP_CKSUM != (1ULL << 58));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV4 != (1ULL << 59));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV6 != (1ULL << 60));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_UDP_CKSUM != (1ULL << 41));
 	RTE_BUILD_BUG_ON(RTE_MBUF_L2_LEN_BITS != 7);
 	RTE_BUILD_BUG_ON(RTE_MBUF_L3_LEN_BITS != 9);
 	RTE_BUILD_BUG_ON(RTE_MBUF_OUTL2_LEN_BITS != 7);
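
These compile-time assertions exist because the cn10k fast path derives
descriptor fields straight from ol_flags bit positions instead of
testing individual flags, so the flag values must never move. A
simplified extract of the pattern (see the cn10k_tx.h hunks below):

    #include <rte_mbuf.h>

    /* The 2-bit L4 checksum type occupies ol_flags bits 52-53, so it is
     * moved into the send descriptor with a single mask and shift.
     */
    static inline uint64_t
    tx_l4_type(const struct rte_mbuf *m)
    {
        return (m->ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
    }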
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index fcc451aa36..5afc188e96 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -163,10 +163,10 @@ nix_sec_meta_to_mbuf(uint64_t cq_w1, uintptr_t sa_base, uintptr_t laddr,
 		res_w1 = sg[10];
 
 		/* Clear checksum flags and update security flag */
-		*ol_flags &= ~(PKT_RX_L4_CKSUM_MASK | PKT_RX_IP_CKSUM_MASK);
+		*ol_flags &= ~(RTE_MBUF_F_RX_L4_CKSUM_MASK | RTE_MBUF_F_RX_IP_CKSUM_MASK);
 		*ol_flags |= (((res_w1 & 0xFF) == CPT_COMP_WARN) ?
-			      PKT_RX_SEC_OFFLOAD :
-			      (PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED));
+			      RTE_MBUF_F_RX_SEC_OFFLOAD :
+			      (RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED));
 		/* Calculate inner packet length */
 		len = ((res_w1 >> 16) & 0xFFFF) + hdr->w2.il3_off -
 			sizeof(struct cpt_parse_hdr_s) - (w0 & 0x7);
@@ -229,9 +229,9 @@ nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
 	 * 0 to CNXK_FLOW_ACTION_FLAG_DEFAULT - 2
 	 */
 	if (likely(match_id)) {
-		ol_flags |= PKT_RX_FDIR;
+		ol_flags |= RTE_MBUF_F_RX_FDIR;
 		if (match_id != CNXK_FLOW_ACTION_FLAG_DEFAULT) {
-			ol_flags |= PKT_RX_FDIR_ID;
+			ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
 			mbuf->hash.fdir.hi = match_id - 1;
 		}
 	}
@@ -315,7 +315,7 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_RSS_F) {
 		mbuf->hash.rss = tag;
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 	/* Process Security packets */
@@ -331,9 +331,9 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 			/* Rlen */
 			len = ((res_w1 >> 16) & 0xFFFF) + mbuf->pkt_len;
 			ol_flags |= ((uc_cc == CPT_COMP_WARN) ?
-						   PKT_RX_SEC_OFFLOAD :
-						   (PKT_RX_SEC_OFFLOAD |
-					      PKT_RX_SEC_OFFLOAD_FAILED));
+						   RTE_MBUF_F_RX_SEC_OFFLOAD :
+						   (RTE_MBUF_F_RX_SEC_OFFLOAD |
+					      RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED));
 		} else {
 			if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
 				ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
@@ -345,11 +345,11 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 		if (rx->vtag0_gone) {
-			ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 			mbuf->vlan_tci = rx->vtag0_tci;
 		}
 		if (rx->vtag1_gone) {
-			ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 			mbuf->vlan_tci_outer = rx->vtag1_tci;
 		}
 	}
@@ -495,7 +495,7 @@ static __rte_always_inline uint64_t
 nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
 {
 	if (w2 & BIT_ULL(21) /* vtag0_gone */) {
-		ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		*f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
 	}
 
@@ -506,7 +506,7 @@ static __rte_always_inline uint64_t
 nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
 {
 	if (w2 & BIT_ULL(23) /* vtag1_gone */) {
-		ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 		mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
 	}
 
@@ -678,10 +678,10 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 			f1 = vsetq_lane_u32(cq1_w0, f1, 3);
 			f2 = vsetq_lane_u32(cq2_w0, f2, 3);
 			f3 = vsetq_lane_u32(cq3_w0, f3, 3);
-			ol_flags0 = PKT_RX_RSS_HASH;
-			ol_flags1 = PKT_RX_RSS_HASH;
-			ol_flags2 = PKT_RX_RSS_HASH;
-			ol_flags3 = PKT_RX_RSS_HASH;
+			ol_flags0 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags1 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags2 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags3 = RTE_MBUF_F_RX_RSS_HASH;
 		} else {
 			ol_flags0 = 0;
 			ol_flags1 = 0;
@@ -778,8 +778,8 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC};
-			const uint64_t ts_olf = PKT_RX_IEEE1588_PTP |
-						PKT_RX_IEEE1588_TMST |
+			const uint64_t ts_olf = RTE_MBUF_F_RX_IEEE1588_PTP |
+						RTE_MBUF_F_RX_IEEE1588_TMST |
 						tstamp->rx_tstamp_dynflag;
 			const uint32x4_t and_mask = {0x1, 0x2, 0x4, 0x8};
 			uint64x2_t ts01, ts23, mask;
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index c6f349b352..36f6aec35e 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -458,12 +458,12 @@ cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 {
 	uint64_t mask, ol_flags = m->ol_flags;
 
-	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uintptr_t mdata = rte_pktmbuf_mtod(m, uintptr_t);
 		uint16_t *iplen, *oiplen, *oudplen;
 		uint16_t lso_sb, paylen;
 
-		mask = -!!(ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6));
+		mask = -!!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6));
 		lso_sb = (mask & (m->outer_l2_len + m->outer_l3_len)) +
 			 m->l2_len + m->l3_len + m->l4_len;
 
@@ -472,18 +472,18 @@ cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 
 		/* Get iplen position assuming no tunnel hdr */
 		iplen = (uint16_t *)(mdata + m->l2_len +
-				     (2 << !!(ol_flags & PKT_TX_IPV6)));
+				     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun =
 				(CNXK_NIX_UDP_TUN_BITMASK >>
-				 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+				 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 				0x1;
 
 			oiplen = (uint16_t *)(mdata + m->outer_l2_len +
 					      (2 << !!(ol_flags &
-						       PKT_TX_OUTER_IPV6)));
+						       RTE_MBUF_F_TX_OUTER_IPV6)));
 			*oiplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*oiplen) -
 						   paylen);
 
@@ -498,7 +498,7 @@ cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 			/* Update iplen position to inner ip hdr */
 			iplen = (uint16_t *)(mdata + lso_sb - m->l3_len -
 					     m->l4_len +
-					     (2 << !!(ol_flags & PKT_TX_IPV6)));
+					     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		}
 
 		*iplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*iplen) - paylen);
@@ -548,11 +548,11 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
 	    (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t ol3type =
-			((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L3 */
 		w1.ol3type = ol3type;
@@ -564,15 +564,15 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol4type = csum + (csum << 1);
 
 		/* Inner L3 */
-		w1.il3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_IPV6)) << 2);
+		w1.il3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2);
 		w1.il3ptr = w1.ol4ptr + m->l2_len;
 		w1.il4ptr = w1.il3ptr + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.il3type = w1.il3type + !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.il3type = w1.il3type + !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.il4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.il4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 
 		/* In case of no tunnel header use only
 		 * shift IL3/IL4 fields a bit to use
@@ -583,16 +583,16 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		       ((w1.u & 0X00000000FFFFFFFF) >> (mask << 4));
 
 	} else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t outer_l2_len = m->outer_l2_len;
 
 		/* Outer L3 */
 		w1.ol3ptr = outer_l2_len;
 		w1.ol4ptr = outer_l2_len + m->outer_l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			     !!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			     !!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L4 */
 		w1.ol4type = csum + (csum << 1);
@@ -608,27 +608,27 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol3ptr = l2_len;
 		w1.ol4ptr = l2_len + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_IPV6)) << 2) +
-			     !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2) +
+			     !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.ol4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.ol4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 	}
 
 	if (flags & NIX_TX_NEED_EXT_HDR && flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
-		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & PKT_TX_VLAN);
+		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
 		/* HW will update ptr after vlan0 update */
 		send_hdr_ext->w1.vlan1_ins_ptr = 12;
 		send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
 
-		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & PKT_TX_QINQ);
+		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_QINQ);
 		/* 2B before end of l2 header */
 		send_hdr_ext->w1.vlan0_ins_ptr = 12;
 		send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
 	}
 
-	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uint16_t lso_sb;
 		uint64_t mask;
 
@@ -639,20 +639,20 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		send_hdr_ext->w0.lso = 1;
 		send_hdr_ext->w0.lso_mps = m->tso_segsz;
 		send_hdr_ext->w0.lso_format =
-			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 		w1.ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun =
 				(CNXK_NIX_UDP_TUN_BITMASK >>
-				 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+				 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 				0x1;
 			uint8_t shift = is_udp_tun ? 32 : 0;
 
-			shift += (!!(ol_flags & PKT_TX_OUTER_IPV6) << 4);
-			shift += (!!(ol_flags & PKT_TX_IPV6) << 3);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
 
 			w1.il4type = NIX_SENDL4TYPE_TCP_CKSUM;
 			w1.ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
@@ -686,7 +686,7 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 	}
 
 	if (flags & NIX_TX_OFFLOAD_SECURITY_F)
-		*sec = !!(ol_flags & PKT_TX_SEC_OFFLOAD);
+		*sec = !!(ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD);
 }
 
 static __rte_always_inline void
@@ -722,7 +722,7 @@ cn10k_nix_xmit_prepare_tstamp(uintptr_t lmt_addr, const uint64_t *cmd,
 			      const uint16_t flags)
 {
 	if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
-		const uint8_t is_ol_tstamp = !(ol_flags & PKT_TX_IEEE1588_TMST);
+		const uint8_t is_ol_tstamp = !(ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST);
 		struct nix_send_ext_s *send_hdr_ext =
 			(struct nix_send_ext_s *)lmt_addr + 16;
 		uint64_t *lmt = (uint64_t *)lmt_addr;
@@ -742,7 +742,7 @@ cn10k_nix_xmit_prepare_tstamp(uintptr_t lmt_addr, const uint64_t *cmd,
 			rte_compiler_barrier();
 		}
 
-		/* Packets for which PKT_TX_IEEE1588_TMST is not set, tx tstamp
+		/* Packets for which RTE_MBUF_F_TX_IEEE1588_TMST is not set, tx tstamp
 		 * should not be recorded, hence changing the alg type to
 		 * NIX_SENDMEMALG_SET and also changing send mem addr field to
 		 * next 8 bytes as it corrupts the actual tx tstamp registered
@@ -1118,7 +1118,7 @@ cn10k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 	uint16_t lso_sb;
 	uint64_t mask;
 
-	if (!(ol_flags & PKT_TX_TCP_SEG))
+	if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		return;
 
 	mask = -(!w1->il3type);
@@ -1127,20 +1127,20 @@ cn10k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 	w0->u |= BIT(14);
 	w0->lso_sb = lso_sb;
 	w0->lso_mps = m->tso_segsz;
-	w0->lso_format = NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+	w0->lso_format = NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 	w1->ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 	/* Handle tunnel tso */
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-	    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+	    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 		const uint8_t is_udp_tun =
 			(CNXK_NIX_UDP_TUN_BITMASK >>
-			 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+			 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 			0x1;
 		uint8_t shift = is_udp_tun ? 32 : 0;
 
-		shift += (!!(ol_flags & PKT_TX_OUTER_IPV6) << 4);
-		shift += (!!(ol_flags & PKT_TX_IPV6) << 3);
+		shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
+		shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
 
 		w1->il4type = NIX_SENDL4TYPE_TCP_CKSUM;
 		w1->ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
@@ -1784,26 +1784,26 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			const uint8x16_t tbl = {
 				/* [0-15] = il4type:il3type */
 				0x04, /* none (IPv6 assumed) */
-				0x14, /* PKT_TX_TCP_CKSUM (IPv6 assumed) */
-				0x24, /* PKT_TX_SCTP_CKSUM (IPv6 assumed) */
-				0x34, /* PKT_TX_UDP_CKSUM (IPv6 assumed) */
-				0x03, /* PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM */
-				0x23, /* PKT_TX_IP_CKSUM | PKT_TX_SCTP_CKSUM */
-				0x33, /* PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM */
-				0x02, /* PKT_TX_IPV4  */
-				0x12, /* PKT_TX_IPV4 | PKT_TX_TCP_CKSUM */
-				0x22, /* PKT_TX_IPV4 | PKT_TX_SCTP_CKSUM */
-				0x32, /* PKT_TX_IPV4 | PKT_TX_UDP_CKSUM */
-				0x03, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_TCP_CKSUM
+				0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6 assumed) */
+				0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6 assumed) */
+				0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6 assumed) */
+				0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x23, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x33, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x02, /* RTE_MBUF_F_TX_IPV4  */
+				0x12, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x22, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x32, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x03, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_TCP_CKSUM
 				       */
-				0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_SCTP_CKSUM
+				0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_SCTP_CKSUM
 				       */
-				0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_UDP_CKSUM
+				0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_UDP_CKSUM
 				       */
 			};
 
@@ -1988,40 +1988,40 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 				{
 					/* [0-15] = il4type:il3type */
 					0x04, /* none (IPv6) */
-					0x14, /* PKT_TX_TCP_CKSUM (IPv6) */
-					0x24, /* PKT_TX_SCTP_CKSUM (IPv6) */
-					0x34, /* PKT_TX_UDP_CKSUM (IPv6) */
-					0x03, /* PKT_TX_IP_CKSUM */
-					0x13, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6) */
+					0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6) */
+					0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6) */
+					0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+					0x13, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x02, /* PKT_TX_IPV4 */
-					0x12, /* PKT_TX_IPV4 |
-					       * PKT_TX_TCP_CKSUM
+					0x02, /* RTE_MBUF_F_TX_IPV4 */
+					0x12, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x22, /* PKT_TX_IPV4 |
-					       * PKT_TX_SCTP_CKSUM
+					0x22, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x32, /* PKT_TX_IPV4 |
-					       * PKT_TX_UDP_CKSUM
+					0x32, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x03, /* PKT_TX_IPV4 |
-					       * PKT_TX_IP_CKSUM
+					0x03, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_IP_CKSUM
 					       */
-					0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
 				},
 
@@ -2209,11 +2209,11 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		if (flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
 			/* Tx ol_flag for vlan. */
-			const uint64x2_t olv = {PKT_TX_VLAN, PKT_TX_VLAN};
+			const uint64x2_t olv = {RTE_MBUF_F_TX_VLAN, RTE_MBUF_F_TX_VLAN};
 			/* Bit enable for VLAN1 */
 			const uint64x2_t mlv = {BIT_ULL(49), BIT_ULL(49)};
 			/* Tx ol_flag for QinQ. */
-			const uint64x2_t olq = {PKT_TX_QINQ, PKT_TX_QINQ};
+			const uint64x2_t olq = {RTE_MBUF_F_TX_QINQ, RTE_MBUF_F_TX_QINQ};
 			/* Bit enable for VLAN0 */
 			const uint64x2_t mlq = {BIT_ULL(48), BIT_ULL(48)};
 			/* Load vlan values from packet. outer is VLAN 0 */
@@ -2255,8 +2255,8 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
 			/* Tx ol_flag for timestamp. */
-			const uint64x2_t olf = {PKT_TX_IEEE1588_TMST,
-						PKT_TX_IEEE1588_TMST};
+			const uint64x2_t olf = {RTE_MBUF_F_TX_IEEE1588_TMST,
+						RTE_MBUF_F_TX_IEEE1588_TMST};
 			/* Set send mem alg to SUB. */
 			const uint64x2_t alg = {BIT_ULL(59), BIT_ULL(59)};
 			/* Increment send mem address by 8. */
@@ -2425,8 +2425,8 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 		}
 
 		if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
-			const uint64x2_t olf = {PKT_TX_SEC_OFFLOAD,
-						PKT_TX_SEC_OFFLOAD};
+			const uint64x2_t olf = {RTE_MBUF_F_TX_SEC_OFFLOAD,
+						RTE_MBUF_F_TX_SEC_OFFLOAD};
 			uintptr_t next;
 			uint8_t dw;
 
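A side note on the CNXK_NIX_UDP_TUN_BITMASK test that recurs in the hunks
above: the tunnel type occupies a 4-bit field starting at bit 45 of ol_flags,
so shifting a per-type bitmask right by that field answers "is this a
UDP-based tunnel?" without branching. A minimal standalone sketch (flag
values assumed to match the documented mbuf layout; not part of the patch):

	#include <assert.h>
	#include <stdint.h>

	#define F_TX_TUNNEL_VXLAN  (0x1ULL << 45) /* assumed mbuf values */
	#define F_TX_TUNNEL_GRE    (0x2ULL << 45)
	#define F_TX_TUNNEL_GENEVE (0x4ULL << 45)
	#define F_TX_TUNNEL_MASK   (0xFULL << 45)

	/* One bit per tunnel type, set only for the UDP-based tunnels. */
	#define UDP_TUN_BITMASK \
		((1ULL << (F_TX_TUNNEL_VXLAN >> 45)) | \
		 (1ULL << (F_TX_TUNNEL_GENEVE >> 45)))

	static int is_udp_tun(uint64_t ol_flags)
	{
		/* Extract the tunnel type, use it as an index into the mask. */
		return (UDP_TUN_BITMASK >>
			((ol_flags & F_TX_TUNNEL_MASK) >> 45)) & 0x1;
	}

	int main(void)
	{
		assert(is_udp_tun(F_TX_TUNNEL_VXLAN));
		assert(is_udp_tun(F_TX_TUNNEL_GENEVE));
		assert(!is_udp_tun(F_TX_TUNNEL_GRE));
		return 0;
	}
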
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 08c86f9e6b..6cc6044f89 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -50,15 +50,15 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	uint16_t flags = 0;
 
 	/* Fastpath is dependent on these enums */
-	RTE_BUILD_BUG_ON(PKT_TX_TCP_CKSUM != (1ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_SCTP_CKSUM != (2ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_UDP_CKSUM != (3ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_IP_CKSUM != (1ULL << 54));
-	RTE_BUILD_BUG_ON(PKT_TX_IPV4 != (1ULL << 55));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IP_CKSUM != (1ULL << 58));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV4 != (1ULL << 59));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV6 != (1ULL << 60));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_UDP_CKSUM != (1ULL << 41));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_CKSUM != (1ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_SCTP_CKSUM != (2ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_UDP_CKSUM != (3ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IP_CKSUM != (1ULL << 54));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IPV4 != (1ULL << 55));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IP_CKSUM != (1ULL << 58));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV4 != (1ULL << 59));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV6 != (1ULL << 60));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_UDP_CKSUM != (1ULL << 41));
 	RTE_BUILD_BUG_ON(RTE_MBUF_L2_LEN_BITS != 7);
 	RTE_BUILD_BUG_ON(RTE_MBUF_L3_LEN_BITS != 9);
 	RTE_BUILD_BUG_ON(RTE_MBUF_OUTL2_LEN_BITS != 7);
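These RTE_BUILD_BUG_ON() assertions are what make the rename safe here: the
cnxk fast path converts ol_flags into NIX descriptor fields with bare shifts,
so the new RTE_MBUF_F_* names must keep the exact bit positions of the old
PKT_* ones. A small sketch of that dependency, reusing the values the asserts
pin down (illustrative only):

	#include <assert.h>
	#include <stdint.h>

	#define F_TX_TCP_CKSUM  (1ULL << 52) /* values pinned by the asserts */
	#define F_TX_SCTP_CKSUM (2ULL << 52)
	#define F_TX_UDP_CKSUM  (3ULL << 52)
	#define F_TX_L4_MASK    (3ULL << 52)

	/* NIX wants 0=none, 1=TCP, 2=SCTP, 3=UDP in il4type; the flag
	 * encoding is chosen so a single shift produces exactly that. */
	static uint8_t il4type(uint64_t ol_flags)
	{
		return (ol_flags & F_TX_L4_MASK) >> 52;
	}

	int main(void)
	{
		assert(il4type(F_TX_TCP_CKSUM) == 1);
		assert(il4type(F_TX_SCTP_CKSUM) == 2);
		assert(il4type(F_TX_UDP_CKSUM) == 3);
		return 0;
	}
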
diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h
index 7ab415a194..03773c5436 100644
--- a/drivers/net/cnxk/cn9k_rx.h
+++ b/drivers/net/cnxk/cn9k_rx.h
@@ -103,9 +103,9 @@ nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
 	 * 0 to CNXK_FLOW_ACTION_FLAG_DEFAULT - 2
 	 */
 	if (likely(match_id)) {
-		ol_flags |= PKT_RX_FDIR;
+		ol_flags |= RTE_MBUF_F_RX_FDIR;
 		if (match_id != CNXK_FLOW_ACTION_FLAG_DEFAULT) {
-			ol_flags |= PKT_RX_FDIR_ID;
+			ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
 			mbuf->hash.fdir.hi = match_id - 1;
 		}
 	}
@@ -237,7 +237,7 @@ nix_rx_sec_mbuf_update(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
 	rte_prefetch0((void *)data);
 
 	if (unlikely(res != (CPT_COMP_GOOD | ROC_IE_ONF_UCC_SUCCESS << 8)))
-		return PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED;
+		return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 
 	data += lcptr;
 	/* 20 bits of tag would have the SPI */
@@ -258,7 +258,7 @@ nix_rx_sec_mbuf_update(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
 	win_sz = (uint32_t)(dw >> 64);
 	if (win_sz) {
 		if (ipsec_antireplay_check(sa, sa_priv, data, win_sz) < 0)
-			return PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED;
+			return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 	}
 
 	/* Get total length from IPv4 header. We can assume only IPv4 */
@@ -272,7 +272,7 @@ nix_rx_sec_mbuf_update(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
 	*rearm_val |= data_off;
 
 	*len = rte_be_to_cpu_16(ipv4->total_length) + lcptr;
-	return PKT_RX_SEC_OFFLOAD;
+	return RTE_MBUF_F_RX_SEC_OFFLOAD;
 }
 
 static __rte_always_inline void
@@ -319,7 +319,7 @@ cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_RSS_F) {
 		mbuf->hash.rss = tag;
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 	if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
@@ -328,11 +328,11 @@ cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 skip_parse:
 	if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 		if (rx->cn9k.vtag0_gone) {
-			ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 			mbuf->vlan_tci = rx->cn9k.vtag0_tci;
 		}
 		if (rx->cn9k.vtag1_gone) {
-			ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 			mbuf->vlan_tci_outer = rx->cn9k.vtag1_tci;
 		}
 	}
@@ -437,7 +437,7 @@ static __rte_always_inline uint64_t
 nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
 {
 	if (w2 & BIT_ULL(21) /* vtag0_gone */) {
-		ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		*f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
 	}
 
@@ -448,7 +448,7 @@ static __rte_always_inline uint64_t
 nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
 {
 	if (w2 & BIT_ULL(23) /* vtag1_gone */) {
-		ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 		mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
 	}
 
@@ -549,10 +549,10 @@ cn9k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 			f1 = vsetq_lane_u32(cq1_w0, f1, 3);
 			f2 = vsetq_lane_u32(cq2_w0, f2, 3);
 			f3 = vsetq_lane_u32(cq3_w0, f3, 3);
-			ol_flags0 = PKT_RX_RSS_HASH;
-			ol_flags1 = PKT_RX_RSS_HASH;
-			ol_flags2 = PKT_RX_RSS_HASH;
-			ol_flags3 = PKT_RX_RSS_HASH;
+			ol_flags0 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags1 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags2 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags3 = RTE_MBUF_F_RX_RSS_HASH;
 		} else {
 			ol_flags0 = 0;
 			ol_flags1 = 0;
@@ -625,8 +625,8 @@ cn9k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC};
-			const uint64_t ts_olf = PKT_RX_IEEE1588_PTP |
-						PKT_RX_IEEE1588_TMST |
+			const uint64_t ts_olf = RTE_MBUF_F_RX_IEEE1588_PTP |
+						RTE_MBUF_F_RX_IEEE1588_TMST |
 						rxq->tstamp->rx_tstamp_dynflag;
 			const uint32x4_t and_mask = {0x1, 0x2, 0x4, 0x8};
 			uint64x2_t ts01, ts23, mask;
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index 44273eca90..79a70ddcdd 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -62,12 +62,12 @@ cn9k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 {
 	uint64_t mask, ol_flags = m->ol_flags;
 
-	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uintptr_t mdata = rte_pktmbuf_mtod(m, uintptr_t);
 		uint16_t *iplen, *oiplen, *oudplen;
 		uint16_t lso_sb, paylen;
 
-		mask = -!!(ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6));
+		mask = -!!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6));
 		lso_sb = (mask & (m->outer_l2_len + m->outer_l3_len)) +
 			 m->l2_len + m->l3_len + m->l4_len;
 
@@ -76,18 +76,18 @@ cn9k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 
 		/* Get iplen position assuming no tunnel hdr */
 		iplen = (uint16_t *)(mdata + m->l2_len +
-				     (2 << !!(ol_flags & PKT_TX_IPV6)));
+				     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun =
 				(CNXK_NIX_UDP_TUN_BITMASK >>
-				 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+				 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 				0x1;
 
 			oiplen = (uint16_t *)(mdata + m->outer_l2_len +
 					      (2 << !!(ol_flags &
-						       PKT_TX_OUTER_IPV6)));
+						       RTE_MBUF_F_TX_OUTER_IPV6)));
 			*oiplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*oiplen) -
 						   paylen);
 
@@ -102,7 +102,7 @@ cn9k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 			/* Update iplen position to inner ip hdr */
 			iplen = (uint16_t *)(mdata + lso_sb - m->l3_len -
 					     m->l4_len +
-					     (2 << !!(ol_flags & PKT_TX_IPV6)));
+					     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		}
 
 		*iplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*iplen) - paylen);
@@ -152,11 +152,11 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
 	    (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t ol3type =
-			((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L3 */
 		w1.ol3type = ol3type;
@@ -168,15 +168,15 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol4type = csum + (csum << 1);
 
 		/* Inner L3 */
-		w1.il3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_IPV6)) << 2);
+		w1.il3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2);
 		w1.il3ptr = w1.ol4ptr + m->l2_len;
 		w1.il4ptr = w1.il3ptr + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.il3type = w1.il3type + !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.il3type = w1.il3type + !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.il4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.il4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 
 		/* In case of no tunnel header use only
 		 * shift IL3/IL4 fields a bit to use
@@ -187,16 +187,16 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		       ((w1.u & 0X00000000FFFFFFFF) >> (mask << 4));
 
 	} else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t outer_l2_len = m->outer_l2_len;
 
 		/* Outer L3 */
 		w1.ol3ptr = outer_l2_len;
 		w1.ol4ptr = outer_l2_len + m->outer_l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			     !!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			     !!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L4 */
 		w1.ol4type = csum + (csum << 1);
@@ -212,27 +212,27 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol3ptr = l2_len;
 		w1.ol4ptr = l2_len + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_IPV6)) << 2) +
-			     !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2) +
+			     !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.ol4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.ol4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 	}
 
 	if (flags & NIX_TX_NEED_EXT_HDR && flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
-		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & PKT_TX_VLAN);
+		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
 		/* HW will update ptr after vlan0 update */
 		send_hdr_ext->w1.vlan1_ins_ptr = 12;
 		send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
 
-		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & PKT_TX_QINQ);
+		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_QINQ);
 		/* 2B before end of l2 header */
 		send_hdr_ext->w1.vlan0_ins_ptr = 12;
 		send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
 	}
 
-	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uint16_t lso_sb;
 		uint64_t mask;
 
@@ -243,20 +243,20 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		send_hdr_ext->w0.lso = 1;
 		send_hdr_ext->w0.lso_mps = m->tso_segsz;
 		send_hdr_ext->w0.lso_format =
-			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 		w1.ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun =
 				(CNXK_NIX_UDP_TUN_BITMASK >>
-				 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+				 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 				0x1;
 			uint8_t shift = is_udp_tun ? 32 : 0;
 
-			shift += (!!(ol_flags & PKT_TX_OUTER_IPV6) << 4);
-			shift += (!!(ol_flags & PKT_TX_IPV6) << 3);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
 
 			w1.il4type = NIX_SENDL4TYPE_TCP_CKSUM;
 			w1.ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
@@ -297,7 +297,7 @@ cn9k_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc,
 	if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
 		struct nix_send_mem_s *send_mem;
 		uint16_t off = (no_segdw - 1) << 1;
-		const uint8_t is_ol_tstamp = !(ol_flags & PKT_TX_IEEE1588_TMST);
+		const uint8_t is_ol_tstamp = !(ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST);
 
 		send_mem = (struct nix_send_mem_s *)(cmd + off);
 		if (flags & NIX_TX_MULTI_SEG_F) {
@@ -310,7 +310,7 @@ cn9k_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc,
 			rte_compiler_barrier();
 		}
 
-		/* Packets for which PKT_TX_IEEE1588_TMST is not set, tx tstamp
+		/* Packets for which RTE_MBUF_F_TX_IEEE1588_TMST is not set, tx tstamp
 		 * should not be recorded, hence changing the alg type to
 		 * NIX_SENDMEMALG_SET and also changing send mem addr field to
 		 * next 8 bytes as it corrupts the actual tx tstamp registered
@@ -554,7 +554,7 @@ cn9k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 	uint16_t lso_sb;
 	uint64_t mask;
 
-	if (!(ol_flags & PKT_TX_TCP_SEG))
+	if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		return;
 
 	mask = -(!w1->il3type);
@@ -563,15 +563,15 @@ cn9k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 	w0->u |= BIT(14);
 	w0->lso_sb = lso_sb;
 	w0->lso_mps = m->tso_segsz;
-	w0->lso_format = NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+	w0->lso_format = NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 	w1->ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 	/* Handle tunnel tso */
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-	    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+	    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 		const uint8_t is_udp_tun =
 			(CNXK_NIX_UDP_TUN_BITMASK >>
-			 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+			 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 			0x1;
 
 		w1->il4type = NIX_SENDL4TYPE_TCP_CKSUM;
@@ -579,7 +579,7 @@ cn9k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 		/* Update format for UDP tunneled packet */
 		w0->lso_format += is_udp_tun ? 2 : 6;
 
-		w0->lso_format += !!(ol_flags & PKT_TX_OUTER_IPV6) << 1;
+		w0->lso_format += !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 1;
 	}
 }
 
@@ -1061,26 +1061,26 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			const uint8x16_t tbl = {
 				/* [0-15] = il4type:il3type */
 				0x04, /* none (IPv6 assumed) */
-				0x14, /* PKT_TX_TCP_CKSUM (IPv6 assumed) */
-				0x24, /* PKT_TX_SCTP_CKSUM (IPv6 assumed) */
-				0x34, /* PKT_TX_UDP_CKSUM (IPv6 assumed) */
-				0x03, /* PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM */
-				0x23, /* PKT_TX_IP_CKSUM | PKT_TX_SCTP_CKSUM */
-				0x33, /* PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM */
-				0x02, /* PKT_TX_IPV4  */
-				0x12, /* PKT_TX_IPV4 | PKT_TX_TCP_CKSUM */
-				0x22, /* PKT_TX_IPV4 | PKT_TX_SCTP_CKSUM */
-				0x32, /* PKT_TX_IPV4 | PKT_TX_UDP_CKSUM */
-				0x03, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_TCP_CKSUM
+				0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6 assumed) */
+				0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6 assumed) */
+				0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6 assumed) */
+				0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x23, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x33, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x02, /* RTE_MBUF_F_TX_IPV4  */
+				0x12, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x22, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x32, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x03, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_TCP_CKSUM
 				       */
-				0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_SCTP_CKSUM
+				0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_SCTP_CKSUM
 				       */
-				0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_UDP_CKSUM
+				0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_UDP_CKSUM
 				       */
 			};
 
@@ -1265,40 +1265,40 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 				{
 					/* [0-15] = il4type:il3type */
 					0x04, /* none (IPv6) */
-					0x14, /* PKT_TX_TCP_CKSUM (IPv6) */
-					0x24, /* PKT_TX_SCTP_CKSUM (IPv6) */
-					0x34, /* PKT_TX_UDP_CKSUM (IPv6) */
-					0x03, /* PKT_TX_IP_CKSUM */
-					0x13, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6) */
+					0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6) */
+					0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6) */
+					0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+					0x13, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x02, /* PKT_TX_IPV4 */
-					0x12, /* PKT_TX_IPV4 |
-					       * PKT_TX_TCP_CKSUM
+					0x02, /* RTE_MBUF_F_TX_IPV4 */
+					0x12, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x22, /* PKT_TX_IPV4 |
-					       * PKT_TX_SCTP_CKSUM
+					0x22, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x32, /* PKT_TX_IPV4 |
-					       * PKT_TX_UDP_CKSUM
+					0x32, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x03, /* PKT_TX_IPV4 |
-					       * PKT_TX_IP_CKSUM
+					0x03, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_IP_CKSUM
 					       */
-					0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
 				},
 
@@ -1486,11 +1486,11 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		if (flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
 			/* Tx ol_flag for vlan. */
-			const uint64x2_t olv = {PKT_TX_VLAN, PKT_TX_VLAN};
+			const uint64x2_t olv = {RTE_MBUF_F_TX_VLAN, RTE_MBUF_F_TX_VLAN};
 			/* Bit enable for VLAN1 */
 			const uint64x2_t mlv = {BIT_ULL(49), BIT_ULL(49)};
 			/* Tx ol_flag for QinQ. */
-			const uint64x2_t olq = {PKT_TX_QINQ, PKT_TX_QINQ};
+			const uint64x2_t olq = {RTE_MBUF_F_TX_QINQ, RTE_MBUF_F_TX_QINQ};
 			/* Bit enable for VLAN0 */
 			const uint64x2_t mlq = {BIT_ULL(48), BIT_ULL(48)};
 			/* Load vlan values from packet. outer is VLAN 0 */
@@ -1532,8 +1532,8 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
 			/* Tx ol_flag for timestamp. */
-			const uint64x2_t olf = {PKT_TX_IEEE1588_TMST,
-						PKT_TX_IEEE1588_TMST};
+			const uint64x2_t olf = {RTE_MBUF_F_TX_IEEE1588_TMST,
+						RTE_MBUF_F_TX_IEEE1588_TMST};
 			/* Set send mem alg to SUB. */
 			const uint64x2_t alg = {BIT_ULL(59), BIT_ULL(59)};
 			/* Increment send mem address by 8. */
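The 16-entry il4type:il3type tables in both vector paths rely on the same
pinned layout: bits 52-55 of ol_flags (L4 type, IP_CKSUM, IPV4) form the
table index on their own. A scalar sketch of that indexing (the NEON code
performs the equivalent extraction with vector shifts before the table
lookup; bit positions assumed per the build-time asserts):

	#include <assert.h>
	#include <stdint.h>

	#define F_TX_TCP_CKSUM (1ULL << 52) /* assumed, per the asserts */
	#define F_TX_IP_CKSUM  (1ULL << 54)
	#define F_TX_IPV4      (1ULL << 55)

	/* Same layout as the tables in the patch: il4type in the high
	 * nibble of each entry, il3type in the low nibble. */
	static const uint8_t tbl[16] = {
		0x04, 0x14, 0x24, 0x34, 0x03, 0x13, 0x23, 0x33,
		0x02, 0x12, 0x22, 0x32, 0x03, 0x13, 0x23, 0x33,
	};

	int main(void)
	{
		uint64_t ol = F_TX_IPV4 | F_TX_IP_CKSUM | F_TX_TCP_CKSUM;
		uint8_t idx = (ol >> 52) & 0xF; /* == 13 */

		assert(tbl[idx] == 0x13);       /* il4type=1 (TCP), il3type=3 */
		return 0;
	}
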
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index ff21b977b7..0667c2f115 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -124,8 +124,8 @@
 #define CNXK_NIX_FASTPATH_LOOKUP_MEM "cnxk_nix_fastpath_lookup_mem"
 
 #define CNXK_NIX_UDP_TUN_BITMASK                                               \
-	((1ull << (PKT_TX_TUNNEL_VXLAN >> 45)) |                               \
-	 (1ull << (PKT_TX_TUNNEL_GENEVE >> 45)))
+	((1ull << (RTE_MBUF_F_TX_TUNNEL_VXLAN >> 45)) |                               \
+	 (1ull << (RTE_MBUF_F_TX_TUNNEL_GENEVE >> 45)))
 
 /* Subtype from inline outbound error event */
 #define CNXK_ETHDEV_SEC_OUTB_EV_SUB 0xFFUL
@@ -596,15 +596,15 @@ cnxk_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf,
 		 */
 		*cnxk_nix_timestamp_dynfield(mbuf, tstamp) =
 			rte_be_to_cpu_64(*tstamp_ptr);
-		/* PKT_RX_IEEE1588_TMST flag needs to be set only in case
+		/* RTE_MBUF_F_RX_IEEE1588_TMST flag needs to be set only in case
 		 * PTP packets are received.
 		 */
 		if (mbuf->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC) {
 			tstamp->rx_tstamp =
 				*cnxk_nix_timestamp_dynfield(mbuf, tstamp);
 			tstamp->rx_ready = 1;
-			mbuf->ol_flags |= PKT_RX_IEEE1588_PTP |
-					  PKT_RX_IEEE1588_TMST |
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP |
+					  RTE_MBUF_F_RX_IEEE1588_TMST |
 					  tstamp->rx_tstamp_dynflag;
 		}
 	}
diff --git a/drivers/net/cnxk/cnxk_lookup.c b/drivers/net/cnxk/cnxk_lookup.c
index f6ec7689fc..4eb1ecf17d 100644
--- a/drivers/net/cnxk/cnxk_lookup.c
+++ b/drivers/net/cnxk/cnxk_lookup.c
@@ -238,9 +238,9 @@ nix_create_rx_ol_flags_array(void *mem)
 		errlev = idx & 0xf;
 		errcode = (idx & 0xff0) >> 4;
 
-		val = PKT_RX_IP_CKSUM_UNKNOWN;
-		val |= PKT_RX_L4_CKSUM_UNKNOWN;
-		val |= PKT_RX_OUTER_L4_CKSUM_UNKNOWN;
+		val = RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN;
 
 		switch (errlev) {
 		case NPC_ERRLEV_RE:
@@ -248,46 +248,46 @@ nix_create_rx_ol_flags_array(void *mem)
 			 * including Outer L2 length mismatch error
 			 */
 			if (errcode) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		case NPC_ERRLEV_LC:
 			if (errcode == NPC_EC_OIP4_CSUM ||
 			    errcode == NPC_EC_IP_FRAG_OFFSET_1) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 			break;
 		case NPC_ERRLEV_LG:
 			if (errcode == NPC_EC_IIP4_CSUM)
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			else
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case NPC_ERRLEV_NIX:
 			if (errcode == NIX_RX_PERRCODE_OL4_CHK ||
 			    errcode == NIX_RX_PERRCODE_OL4_LEN ||
 			    errcode == NIX_RX_PERRCODE_OL4_PORT) {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_BAD;
-				val |= PKT_RX_OUTER_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 			} else if (errcode == NIX_RX_PERRCODE_IL4_CHK ||
 				   errcode == NIX_RX_PERRCODE_IL4_LEN ||
 				   errcode == NIX_RX_PERRCODE_IL4_PORT) {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else if (errcode == NIX_RX_PERRCODE_IL3_LEN ||
 				   errcode == NIX_RX_PERRCODE_OL3_LEN) {
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		}
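nix_create_rx_ol_flags_array() above is a precompute-once pattern: every
possible (errlev, errcode) pair from the RX descriptor maps to ready-made
checksum flags, so the hot path does one table load per packet instead of
running the switch. A reduced sketch of the same idea (flag values are
placeholders, not the real RTE_MBUF_F_* encodings):

	#include <assert.h>
	#include <stdint.h>

	#define CKSUM_GOOD 0x1u /* placeholder flags */
	#define CKSUM_BAD  0x2u

	static uint32_t ol_flags_tbl[4096];

	static void build_table(void)
	{
		for (uint32_t idx = 0; idx < 4096; idx++) {
			uint32_t errlev = idx & 0xf;           /* low nibble */
			uint32_t errcode = (idx & 0xff0) >> 4; /* next byte  */

			ol_flags_tbl[idx] = (errlev == 0 && errcode == 0) ?
					    CKSUM_GOOD : CKSUM_BAD;
		}
	}

	int main(void)
	{
		build_table();
		/* Hot path: one indexed load per packet. */
		assert(ol_flags_tbl[0] == CKSUM_GOOD);
		assert(ol_flags_tbl[0x21] == CKSUM_BAD);
		return 0;
	}
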
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index 3299d6252e..20aa84b653 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -539,7 +539,7 @@ static inline unsigned int flits_to_desc(unsigned int n)
  */
 static inline int is_eth_imm(const struct rte_mbuf *m)
 {
-	unsigned int hdrlen = (m->ol_flags & PKT_TX_TCP_SEG) ?
+	unsigned int hdrlen = (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) ?
 			      sizeof(struct cpl_tx_pkt_lso_core) : 0;
 
 	hdrlen += sizeof(struct cpl_tx_pkt);
@@ -749,12 +749,12 @@ static u64 hwcsum(enum chip_type chip, const struct rte_mbuf *m)
 {
 	int csum_type;
 
-	if (m->ol_flags & PKT_TX_IP_CKSUM) {
-		switch (m->ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_TCP_CKSUM:
+	if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
+		switch (m->ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			csum_type = TX_CSUM_TCPIP;
 			break;
-		case PKT_TX_UDP_CKSUM:
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			csum_type = TX_CSUM_UDPIP;
 			break;
 		default:
@@ -1029,7 +1029,7 @@ static inline int tx_do_packet_coalesce(struct sge_eth_txq *txq,
 	/* fill the cpl message, same as in t4_eth_xmit, this should be kept
 	 * similar to t4_eth_xmit
 	 */
-	if (mbuf->ol_flags & PKT_TX_IP_CKSUM) {
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		cntrl = hwcsum(adap->params.chip, mbuf) |
 			       F_TXPKT_IPCSUM_DIS;
 		txq->stats.tx_cso++;
@@ -1037,7 +1037,7 @@ static inline int tx_do_packet_coalesce(struct sge_eth_txq *txq,
 		cntrl = F_TXPKT_L4CSUM_DIS | F_TXPKT_IPCSUM_DIS;
 	}
 
-	if (mbuf->ol_flags & PKT_TX_VLAN) {
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		txq->stats.vlan_ins++;
 		cntrl |= F_TXPKT_VLAN_VLD | V_TXPKT_VLAN(mbuf->vlan_tci);
 	}
@@ -1129,7 +1129,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 		return 0;
 	}
 
-	if ((!(m->ol_flags & PKT_TX_TCP_SEG)) &&
+	if ((!(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) &&
 	    (unlikely(m->pkt_len > max_pkt_len)))
 		goto out_free;
 
@@ -1140,7 +1140,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 	/* align the end of coalesce WR to a 512 byte boundary */
 	txq->q.coalesce.max = (8 - (txq->q.pidx & 7)) * 8;
 
-	if (!((m->ol_flags & PKT_TX_TCP_SEG) ||
+	if (!((m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) ||
 			m->pkt_len > RTE_ETHER_MAX_LEN)) {
 		if (should_tx_packet_coalesce(txq, mbuf, &cflits, adap)) {
 			if (unlikely(map_mbuf(mbuf, addr) < 0)) {
@@ -1203,7 +1203,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 	len += sizeof(*cpl);
 
 	/* Coalescing skipped and we send through normal path */
-	if (!(m->ol_flags & PKT_TX_TCP_SEG)) {
+	if (!(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		wr->op_immdlen = htonl(V_FW_WR_OP(is_pf4(adap) ?
 						  FW_ETH_TX_PKT_WR :
 						  FW_ETH_TX_PKT_VM_WR) |
@@ -1212,7 +1212,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 			cpl = (void *)(wr + 1);
 		else
 			cpl = (void *)(vmwr + 1);
-		if (m->ol_flags & PKT_TX_IP_CKSUM) {
+		if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 			cntrl = hwcsum(adap->params.chip, m) |
 				F_TXPKT_IPCSUM_DIS;
 			txq->stats.tx_cso++;
@@ -1222,7 +1222,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 			lso = (void *)(wr + 1);
 		else
 			lso = (void *)(vmwr + 1);
-		v6 = (m->ol_flags & PKT_TX_IPV6) != 0;
+		v6 = (m->ol_flags & RTE_MBUF_F_TX_IPV6) != 0;
 		l3hdr_len = m->l3_len;
 		l4hdr_len = m->l4_len;
 		eth_xtra_len = m->l2_len - RTE_ETHER_HDR_LEN;
@@ -1258,7 +1258,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 		txq->stats.tx_cso += m->tso_segsz;
 	}
 
-	if (m->ol_flags & PKT_TX_VLAN) {
+	if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		txq->stats.vlan_ins++;
 		cntrl |= F_TXPKT_VLAN_VLD | V_TXPKT_VLAN(m->vlan_tci);
 	}
@@ -1528,27 +1528,27 @@ static inline void cxgbe_fill_mbuf_info(struct adapter *adap,
 
 	if (cpl->vlan_ex)
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L2_ETHER_VLAN,
-				    PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+				    RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 	else
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L2_ETHER, 0);
 
 	if (cpl->l2info & htonl(F_RXF_IP))
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L3_IPV4,
-				    csum_ok ? PKT_RX_IP_CKSUM_GOOD :
-					      PKT_RX_IP_CKSUM_BAD);
+				    csum_ok ? RTE_MBUF_F_RX_IP_CKSUM_GOOD :
+				    RTE_MBUF_F_RX_IP_CKSUM_BAD);
 	else if (cpl->l2info & htonl(F_RXF_IP6))
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L3_IPV6,
-				    csum_ok ? PKT_RX_IP_CKSUM_GOOD :
-					      PKT_RX_IP_CKSUM_BAD);
+				    csum_ok ? RTE_MBUF_F_RX_IP_CKSUM_GOOD :
+				    RTE_MBUF_F_RX_IP_CKSUM_BAD);
 
 	if (cpl->l2info & htonl(F_RXF_TCP))
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L4_TCP,
-				    csum_ok ? PKT_RX_L4_CKSUM_GOOD :
-					      PKT_RX_L4_CKSUM_BAD);
+				    csum_ok ? RTE_MBUF_F_RX_L4_CKSUM_GOOD :
+				    RTE_MBUF_F_RX_L4_CKSUM_BAD);
 	else if (cpl->l2info & htonl(F_RXF_UDP))
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L4_UDP,
-				    csum_ok ? PKT_RX_L4_CKSUM_GOOD :
-					      PKT_RX_L4_CKSUM_BAD);
+				    csum_ok ? RTE_MBUF_F_RX_L4_CKSUM_GOOD :
+				    RTE_MBUF_F_RX_L4_CKSUM_BAD);
 }
 
 /**
@@ -1639,7 +1639,7 @@ static int process_responses(struct sge_rspq *q, int budget,
 
 				if (!rss_hdr->filter_tid &&
 				    rss_hdr->hash_type) {
-					pkt->ol_flags |= PKT_RX_RSS_HASH;
+					pkt->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 					pkt->hash.rss =
 						ntohl(rss_hdr->hash_val);
 				}
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b5728e09c2..98edc53359 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -80,10 +80,9 @@
 	ETH_RSS_TCP | \
 	ETH_RSS_SCTP)
 
-#define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
-		PKT_TX_IP_CKSUM |                \
-		PKT_TX_TCP_CKSUM |               \
-		PKT_TX_UDP_CKSUM)
+#define DPAA_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |                \
+		RTE_MBUF_F_TX_TCP_CKSUM |               \
+		RTE_MBUF_F_TX_UDP_CKSUM)
 
 /* DPAA Frame descriptor macros */
 
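DPAA_TX_CKSUM_OFFLOAD_MASK above exists so the PMD can decide "does this
mbuf request any checksum offload we implement?" with a single AND on the
transmit path. A sketch of the test, with the same assumed bit positions as
in the earlier examples (illustrative only):

	#include <assert.h>
	#include <stdint.h>

	#define F_TX_IP_CKSUM  (1ULL << 54) /* assumed mbuf positions */
	#define F_TX_TCP_CKSUM (1ULL << 52)
	#define F_TX_UDP_CKSUM (3ULL << 52)

	#define TX_CKSUM_OFFLOAD_MASK \
		(F_TX_IP_CKSUM | F_TX_TCP_CKSUM | F_TX_UDP_CKSUM)

	/* One AND replaces per-flag checks on the fast path. */
	static int wants_cksum(uint64_t ol_flags)
	{
		return (ol_flags & TX_CKSUM_OFFLOAD_MASK) != 0;
	}

	int main(void)
	{
		assert(wants_cksum(F_TX_IP_CKSUM));
		assert(wants_cksum(F_TX_UDP_CKSUM));
		assert(!wants_cksum(0));
		return 0;
	}
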
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 423de40e95..ffac6ce3e2 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -125,8 +125,8 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
 
 	DPAA_DP_LOG(DEBUG, " Parsing mbuf: %p with annotations: %p", m, annot);
 
-	m->ol_flags = PKT_RX_RSS_HASH | PKT_RX_IP_CKSUM_GOOD |
-		PKT_RX_L4_CKSUM_GOOD;
+	m->ol_flags = RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	switch (prs) {
 	case DPAA_PKT_TYPE_IPV4:
@@ -204,13 +204,13 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
 		break;
 	case DPAA_PKT_TYPE_IPV4_CSUM_ERR:
 	case DPAA_PKT_TYPE_IPV6_CSUM_ERR:
-		m->ol_flags = PKT_RX_RSS_HASH | PKT_RX_IP_CKSUM_BAD;
+		m->ol_flags = RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		break;
 	case DPAA_PKT_TYPE_IPV4_TCP_CSUM_ERR:
 	case DPAA_PKT_TYPE_IPV6_TCP_CSUM_ERR:
 	case DPAA_PKT_TYPE_IPV4_UDP_CSUM_ERR:
 	case DPAA_PKT_TYPE_IPV6_UDP_CSUM_ERR:
-		m->ol_flags = PKT_RX_RSS_HASH | PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags = RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		break;
 	case DPAA_PKT_TYPE_NONE:
 		m->packet_type = 0;
@@ -229,7 +229,7 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
 
 	/* Check if Vlan is present */
 	if (prs & DPAA_PARSE_VLAN_MASK)
-		m->ol_flags |= PKT_RX_VLAN;
+		m->ol_flags |= RTE_MBUF_F_RX_VLAN;
 	/* Packet received without stripping the vlan */
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f491f4d10a..267090c59b 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -114,7 +114,7 @@ dpaa2_dev_rx_parse_new(struct rte_mbuf *m, const struct qbman_fd *fd,
 		m->packet_type = dpaa2_dev_rx_parse_slow(m, annotation);
 	}
 	m->hash.rss = fd->simple.flc_hi;
-	m->ol_flags |= PKT_RX_RSS_HASH;
+	m->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 
 	if (dpaa2_enable_ts[m->port]) {
 		*dpaa2_timestamp_dynfield(m) = annotation->word2;
@@ -141,20 +141,20 @@ dpaa2_dev_rx_parse_slow(struct rte_mbuf *mbuf,
 
 #if defined(RTE_LIBRTE_IEEE1588)
 	if (BIT_ISSET_AT_POS(annotation->word1, DPAA2_ETH_FAS_PTP))
-		mbuf->ol_flags |= PKT_RX_IEEE1588_PTP;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
 #endif
 
 	if (BIT_ISSET_AT_POS(annotation->word3, L2_VLAN_1_PRESENT)) {
 		vlan_tci = rte_pktmbuf_mtod_offset(mbuf, uint16_t *,
 			(VLAN_TCI_OFFSET_1(annotation->word5) >> 16));
 		mbuf->vlan_tci = rte_be_to_cpu_16(*vlan_tci);
-		mbuf->ol_flags |= PKT_RX_VLAN;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 		pkt_type |= RTE_PTYPE_L2_ETHER_VLAN;
 	} else if (BIT_ISSET_AT_POS(annotation->word3, L2_VLAN_N_PRESENT)) {
 		vlan_tci = rte_pktmbuf_mtod_offset(mbuf, uint16_t *,
 			(VLAN_TCI_OFFSET_1(annotation->word5) >> 16));
 		mbuf->vlan_tci = rte_be_to_cpu_16(*vlan_tci);
-		mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_QINQ;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_QINQ;
 		pkt_type |= RTE_PTYPE_L2_ETHER_QINQ;
 	}
 
@@ -189,9 +189,9 @@ dpaa2_dev_rx_parse_slow(struct rte_mbuf *mbuf,
 	}
 
 	if (BIT_ISSET_AT_POS(annotation->word8, DPAA2_ETH_FAS_L3CE))
-		mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else if (BIT_ISSET_AT_POS(annotation->word8, DPAA2_ETH_FAS_L4CE))
-		mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 
 	if (BIT_ISSET_AT_POS(annotation->word4, L3_IP_1_FIRST_FRAGMENT |
 	    L3_IP_1_MORE_FRAGMENT |
@@ -232,9 +232,9 @@ dpaa2_dev_rx_parse(struct rte_mbuf *mbuf, void *hw_annot_addr)
 			   annotation->word4);
 
 	if (BIT_ISSET_AT_POS(annotation->word8, DPAA2_ETH_FAS_L3CE))
-		mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else if (BIT_ISSET_AT_POS(annotation->word8, DPAA2_ETH_FAS_L4CE))
-		mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 
 	if (dpaa2_enable_ts[mbuf->port]) {
 		*dpaa2_timestamp_dynfield(mbuf) = annotation->word2;
@@ -1228,9 +1228,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				    (*bufs)->nb_segs == 1 &&
 				    rte_mbuf_refcnt_read((*bufs)) == 1)) {
 					if (unlikely(((*bufs)->ol_flags
-						& PKT_TX_VLAN) ||
-						(eth_data->dev_conf.txmode.offloads
-						& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+						& RTE_MBUF_F_TX_VLAN) ||
+						     (eth_data->dev_conf.txmode.offloads
+						      & DEV_TX_OFFLOAD_VLAN_INSERT))) {
 						ret = rte_vlan_insert(bufs);
 						if (ret)
 							goto send_n_return;
@@ -1271,9 +1271,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				goto send_n_return;
 			}
 
-			if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN) ||
-				(eth_data->dev_conf.txmode.offloads
-				& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+			if (unlikely(((*bufs)->ol_flags & RTE_MBUF_F_TX_VLAN) ||
+				     (eth_data->dev_conf.txmode.offloads
+				      & DEV_TX_OFFLOAD_VLAN_INSERT))) {
 				int ret = rte_vlan_insert(bufs);
 				if (ret)
 					goto send_n_return;
@@ -1532,7 +1532,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				    (*bufs)->nb_segs == 1 &&
 				    rte_mbuf_refcnt_read((*bufs)) == 1)) {
 					if (unlikely((*bufs)->ol_flags
-						& PKT_TX_VLAN)) {
+						& RTE_MBUF_F_TX_VLAN)) {
 					  ret = rte_vlan_insert(bufs);
 					  if (ret)
 						goto send_n_return;
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 82873c91b0..82516bda8b 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -50,15 +50,14 @@
 
 #define E1000_RXDCTL_GRAN	0x01000000 /* RXDCTL Granularity */
 
-#define E1000_TX_OFFLOAD_MASK ( \
-		PKT_TX_IPV6 |           \
-		PKT_TX_IPV4 |           \
-		PKT_TX_IP_CKSUM |       \
-		PKT_TX_L4_MASK |        \
-		PKT_TX_VLAN)
+#define E1000_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_IPV6 |           \
+		RTE_MBUF_F_TX_IPV4 |           \
+		RTE_MBUF_F_TX_IP_CKSUM |       \
+		RTE_MBUF_F_TX_L4_MASK |        \
+		RTE_MBUF_F_TX_VLAN)
 
 #define E1000_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ E1000_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ E1000_TX_OFFLOAD_MASK)
 
 /* PCI offset for querying configuration status register */
 #define PCI_CFG_STATUS_REG                 0x06
@@ -236,7 +235,7 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
 	 * When doing checksum or TCP segmentation with IPv6 headers,
 	 * IPCSE field should be set to 0.
 	 */
-	if (flags & PKT_TX_IP_CKSUM) {
+	if (flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		ctx.lower_setup.ip_fields.ipcse =
 			(uint16_t)rte_cpu_to_le_16(ipcse - 1);
 		cmd_len |= E1000_TXD_CMD_IP;
@@ -249,13 +248,13 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
 	ctx.upper_setup.tcp_fields.tucss = (uint8_t)ipcse;
 	ctx.upper_setup.tcp_fields.tucse = 0;
 
-	switch (flags & PKT_TX_L4_MASK) {
-	case PKT_TX_UDP_CKSUM:
+	switch (flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		ctx.upper_setup.tcp_fields.tucso = (uint8_t)(ipcse +
 				offsetof(struct rte_udp_hdr, dgram_cksum));
 		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
 		break;
-	case PKT_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		ctx.upper_setup.tcp_fields.tucso = (uint8_t)(ipcse +
 				offsetof(struct rte_tcp_hdr, cksum));
 		cmd_len |= E1000_TXD_CMD_TCP;
@@ -358,8 +357,8 @@ tx_desc_cksum_flags_to_upper(uint64_t ol_flags)
 	static const uint32_t l3_olinfo[2] = {0, E1000_TXD_POPTS_IXSM << 8};
 	uint32_t tmp;
 
-	tmp = l4_olinfo[(ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM];
-	tmp |= l3_olinfo[(ol_flags & PKT_TX_IP_CKSUM) != 0];
+	tmp = l4_olinfo[(ol_flags & RTE_MBUF_F_TX_L4_MASK) != RTE_MBUF_F_TX_L4_NO_CKSUM];
+	tmp |= l3_olinfo[(ol_flags & RTE_MBUF_F_TX_IP_CKSUM) != 0];
 	return tmp;
 }
 
@@ -412,7 +411,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		ol_flags = tx_pkt->ol_flags;
 
 		/* If hardware offload required */
-		tx_ol_req = (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK));
+		tx_ol_req = (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK));
 		if (tx_ol_req) {
 			hdrlen.f.vlan_tci = tx_pkt->vlan_tci;
 			hdrlen.f.l2_len = tx_pkt->l2_len;
@@ -508,7 +507,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		popts_spec = 0;
 
 		/* Set VLAN Tag offload fields. */
-		if (ol_flags & PKT_TX_VLAN) {
+		if (ol_flags & RTE_MBUF_F_TX_VLAN) {
 			cmd_type_len |= E1000_TXD_CMD_VLE;
 			popts_spec = tx_pkt->vlan_tci << E1000_TXD_VLAN_SHIFT;
 		}
@@ -658,7 +657,7 @@ rx_desc_status_to_pkt_flags(uint32_t rx_status)
 
 	/* Check if VLAN present */
 	pkt_flags = ((rx_status & E1000_RXD_STAT_VP) ?
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED : 0);
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED : 0);
 
 	return pkt_flags;
 }
@@ -669,9 +668,9 @@ rx_desc_error_to_pkt_flags(uint32_t rx_error)
 	uint64_t pkt_flags = 0;
 
 	if (rx_error & E1000_RXD_ERR_IPE)
-		pkt_flags |= PKT_RX_IP_CKSUM_BAD;
+		pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	if (rx_error & E1000_RXD_ERR_TCPE)
-		pkt_flags |= PKT_RX_L4_CKSUM_BAD;
+		pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	return pkt_flags;
 }
 
@@ -813,7 +812,7 @@ eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->ol_flags = rxm->ol_flags |
 				rx_desc_error_to_pkt_flags(rxd.errors);
 
-		/* Only valid if PKT_RX_VLAN set in pkt_flags */
+		/* Only valid if RTE_MBUF_F_RX_VLAN set in pkt_flags */
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
 
 		/*
@@ -1039,7 +1038,7 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		first_seg->ol_flags = first_seg->ol_flags |
 					rx_desc_error_to_pkt_flags(rxd.errors);
 
-		/* Only valid if PKT_RX_VLAN set in pkt_flags */
+		/* Only valid if RTE_MBUF_F_RX_VLAN set in pkt_flags */
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
 
 		/* Prefetch data of first segment, if configured to do so. */
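E1000_TX_OFFLOAD_NOTSUP_MASK above is the standard tx_prepare idiom: XOR the
driver's supported-flag mask with the full RTE_MBUF_F_TX_OFFLOAD_MASK to get
exactly the unsupported bits, then reject any mbuf that sets one. A toy
sketch of the idiom (flag values are placeholders):

	#include <assert.h>
	#include <errno.h>
	#include <stdint.h>

	#define F_TX_A 0x1ULL /* placeholder flags */
	#define F_TX_B 0x2ULL
	#define F_TX_C 0x4ULL
	#define F_TX_OFFLOAD_MASK (F_TX_A | F_TX_B | F_TX_C)

	#define SUPPORTED_MASK (F_TX_A | F_TX_B)
	/* XOR with the full mask turns on exactly the unsupported bits. */
	#define NOTSUP_MASK (F_TX_OFFLOAD_MASK ^ SUPPORTED_MASK)

	static int prep_check(uint64_t ol_flags)
	{
		if (ol_flags & NOTSUP_MASK)
			return -ENOTSUP; /* the real code sets rte_errno */
		return 0;
	}

	int main(void)
	{
		assert(prep_check(F_TX_A | F_TX_B) == 0);
		assert(prep_check(F_TX_C) == -ENOTSUP);
		return 0;
	}
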
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index fa2797074f..141c2ba000 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -44,24 +44,23 @@
 #include "e1000_ethdev.h"
 
 #ifdef RTE_LIBRTE_IEEE1588
-#define IGB_TX_IEEE1588_TMST PKT_TX_IEEE1588_TMST
+#define IGB_TX_IEEE1588_TMST RTE_MBUF_F_TX_IEEE1588_TMST
 #else
 #define IGB_TX_IEEE1588_TMST 0
 #endif
 /* Bit Mask to indicate what bits required for building TX context */
-#define IGB_TX_OFFLOAD_MASK (			 \
-		PKT_TX_OUTER_IPV6 |	 \
-		PKT_TX_OUTER_IPV4 |	 \
-		PKT_TX_IPV6 |		 \
-		PKT_TX_IPV4 |		 \
-		PKT_TX_VLAN |		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG |		 \
+#define IGB_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV6 |	 \
+		RTE_MBUF_F_TX_OUTER_IPV4 |	 \
+		RTE_MBUF_F_TX_IPV6 |		 \
+		RTE_MBUF_F_TX_IPV4 |		 \
+		RTE_MBUF_F_TX_VLAN |		 \
+		RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG |		 \
 		IGB_TX_IEEE1588_TMST)
 
 #define IGB_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ IGB_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IGB_TX_OFFLOAD_MASK)
 
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
@@ -226,12 +225,12 @@ struct igb_tx_queue {
 static inline uint64_t
 check_tso_para(uint64_t ol_req, union igb_tx_offload ol_para)
 {
-	if (!(ol_req & PKT_TX_TCP_SEG))
+	if (!(ol_req & RTE_MBUF_F_TX_TCP_SEG))
 		return ol_req;
 	if ((ol_para.tso_segsz > IGB_TSO_MAX_MSS) || (ol_para.l2_len +
 			ol_para.l3_len + ol_para.l4_len > IGB_TSO_MAX_HDRLEN)) {
-		ol_req &= ~PKT_TX_TCP_SEG;
-		ol_req |= PKT_TX_TCP_CKSUM;
+		ol_req &= ~RTE_MBUF_F_TX_TCP_SEG;
+		ol_req |= RTE_MBUF_F_TX_TCP_CKSUM;
 	}
 	return ol_req;
 }
@@ -262,13 +261,13 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx = (ctx_idx << E1000_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		tx_offload_mask.data |= TX_VLAN_CMP_MASK;
 
 	/* check if TCP segmentation required for this packet */
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		/* implies IP cksum in IPv4 */
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = E1000_ADVTXD_TUCMD_IPV4 |
 				E1000_ADVTXD_TUCMD_L4T_TCP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
@@ -281,26 +280,26 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 		mss_l4len_idx |= tx_offload.tso_segsz << E1000_ADVTXD_MSS_SHIFT;
 		mss_l4len_idx |= tx_offload.l4_len << E1000_ADVTXD_L4LEN_SHIFT;
 	} else { /* no TSO, check if hardware checksum is needed */
-		if (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK))
+		if (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK))
 			tx_offload_mask.data |= TX_MACIP_LEN_CMP_MASK;
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = E1000_ADVTXD_TUCMD_IPV4;
 
-		switch (ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_UDP_CKSUM:
+		switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_UDP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_udp_hdr)
 				<< E1000_ADVTXD_L4LEN_SHIFT;
 			break;
-		case PKT_TX_TCP_CKSUM:
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_TCP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_tcp_hdr)
 				<< E1000_ADVTXD_L4LEN_SHIFT;
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_SCTP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_sctp_hdr)
@@ -359,9 +358,9 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 	static const uint32_t l3_olinfo[2] = {0, E1000_ADVTXD_POPTS_IXSM};
 	uint32_t tmp;
 
-	tmp  = l4_olinfo[(ol_flags & PKT_TX_L4_MASK)  != PKT_TX_L4_NO_CKSUM];
-	tmp |= l3_olinfo[(ol_flags & PKT_TX_IP_CKSUM) != 0];
-	tmp |= l4_olinfo[(ol_flags & PKT_TX_TCP_SEG) != 0];
+	tmp  = l4_olinfo[(ol_flags & RTE_MBUF_F_TX_L4_MASK)  != RTE_MBUF_F_TX_L4_NO_CKSUM];
+	tmp |= l3_olinfo[(ol_flags & RTE_MBUF_F_TX_IP_CKSUM) != 0];
+	tmp |= l4_olinfo[(ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0];
 	return tmp;
 }
 
@@ -371,8 +370,8 @@ tx_desc_vlan_flags_to_cmdtype(uint64_t ol_flags)
 	uint32_t cmdtype;
 	static uint32_t vlan_cmd[2] = {0, E1000_ADVTXD_DCMD_VLE};
 	static uint32_t tso_cmd[2] = {0, E1000_ADVTXD_DCMD_TSE};
-	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN) != 0];
-	cmdtype |= tso_cmd[(ol_flags & PKT_TX_TCP_SEG) != 0];
+	cmdtype = vlan_cmd[(ol_flags & RTE_MBUF_F_TX_VLAN) != 0];
+	cmdtype |= tso_cmd[(ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0];
 	return cmdtype;
 }
 
@@ -528,11 +527,11 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 */
 		cmd_type_len = txq->txd_type |
 			E1000_ADVTXD_DCMD_IFCS | E1000_ADVTXD_DCMD_DEXT;
-		if (tx_ol_req & PKT_TX_TCP_SEG)
+		if (tx_ol_req & RTE_MBUF_F_TX_TCP_SEG)
 			pkt_len -= (tx_pkt->l2_len + tx_pkt->l3_len + tx_pkt->l4_len);
 		olinfo_status = (pkt_len << E1000_ADVTXD_PAYLEN_SHIFT);
 #if defined(RTE_LIBRTE_IEEE1588)
-		if (ol_flags & PKT_TX_IEEE1588_TMST)
+		if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 			cmd_type_len |= E1000_ADVTXD_MAC_TSTAMP;
 #endif
 		if (tx_ol_req) {
@@ -630,7 +629,7 @@ eth_igb_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		m = tx_pkts[i];
 
 		/* Check some limitations for TSO in hardware */
-		if (m->ol_flags & PKT_TX_TCP_SEG)
+		if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			if ((m->tso_segsz > IGB_TSO_MAX_MSS) ||
 					(m->l2_len + m->l3_len + m->l4_len >
 					IGB_TSO_MAX_HDRLEN)) {
@@ -745,11 +744,11 @@ igb_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
 static inline uint64_t
 rx_desc_hlen_type_rss_to_pkt_flags(struct igb_rx_queue *rxq, uint32_t hl_tp_rs)
 {
-	uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ?  0 : PKT_RX_RSS_HASH;
+	uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ?  0 : RTE_MBUF_F_RX_RSS_HASH;
 
 #if defined(RTE_LIBRTE_IEEE1588)
 	static uint32_t ip_pkt_etqf_map[8] = {
-		0, 0, 0, PKT_RX_IEEE1588_PTP,
+		0, 0, 0, RTE_MBUF_F_RX_IEEE1588_PTP,
 		0, 0, 0, 0,
 	};
 
@@ -775,11 +774,11 @@ rx_desc_status_to_pkt_flags(uint32_t rx_status)
 
 	/* Check if VLAN present */
 	pkt_flags = ((rx_status & E1000_RXD_STAT_VP) ?
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED : 0);
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED : 0);
 
 #if defined(RTE_LIBRTE_IEEE1588)
 	if (rx_status & E1000_RXD_STAT_TMST)
-		pkt_flags = pkt_flags | PKT_RX_IEEE1588_TMST;
+		pkt_flags = pkt_flags | RTE_MBUF_F_RX_IEEE1588_TMST;
 #endif
 	return pkt_flags;
 }
@@ -793,10 +792,10 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
 	 */
 
 	static uint64_t error_to_pkt_flags_map[4] = {
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD
 	};
 	return error_to_pkt_flags_map[(rx_status >>
 		E1000_RXD_ERR_CKSUM_BIT) & E1000_RXD_ERR_CKSUM_MSK];
@@ -938,7 +937,7 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
 
 		/*
-		 * The vlan_tci field is only valid when PKT_RX_VLAN is
+		 * The vlan_tci field is only valid when RTE_MBUF_F_RX_VLAN is
 		 * set in the pkt_flags field and must be in CPU byte order.
 		 */
 		if ((staterr & rte_cpu_to_le_32(E1000_RXDEXT_STATERR_LB)) &&
@@ -1178,7 +1177,7 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
 
 		/*
-		 * The vlan_tci field is only valid when PKT_RX_VLAN is
+		 * The vlan_tci field is only valid when RTE_MBUF_F_RX_VLAN is
 		 * set in the pkt_flags field and must be in CPU byte order.
 		 */
 		if ((staterr & rte_cpu_to_le_32(E1000_RXDEXT_STATERR_LB)) &&
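rx_desc_error_to_pkt_flags() above shows the matching RX-side trick: the two
checksum error bits from the descriptor index a 4-entry table covering every
good/bad combination, so no per-packet branches are needed. A sketch with
placeholder flag values (error-bit position assumed for the example):

	#include <assert.h>
	#include <stdint.h>

	#define IP_GOOD 0x1u /* placeholder flags */
	#define IP_BAD  0x2u
	#define L4_GOOD 0x4u
	#define L4_BAD  0x8u

	/* Indexed by the two descriptor error bits, as in the patch. */
	static const uint32_t err_map[4] = {
		IP_GOOD | L4_GOOD, /* 00: both checksums fine */
		IP_GOOD | L4_BAD,  /* 01: L4 error            */
		IP_BAD  | L4_GOOD, /* 10: IP error            */
		IP_BAD  | L4_BAD,  /* 11: both bad            */
	};

	static uint32_t error_to_flags(uint32_t rx_status, unsigned int bit)
	{
		return err_map[(rx_status >> bit) & 0x3];
	}

	int main(void)
	{
		/* Error bits assumed at position 5 for the sketch. */
		assert(error_to_flags(0, 5) == (IP_GOOD | L4_GOOD));
		assert(error_to_flags(3u << 5, 5) == (IP_BAD | L4_BAD));
		return 0;
	}
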
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index a82d4b6287..e1e88096c5 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -120,9 +120,9 @@ static const struct ena_stats ena_stats_rx_strings[] = {
 			DEV_TX_OFFLOAD_UDP_CKSUM |\
 			DEV_TX_OFFLOAD_IPV4_CKSUM |\
 			DEV_TX_OFFLOAD_TCP_TSO)
-#define MBUF_OFFLOADS (PKT_TX_L4_MASK |\
-		       PKT_TX_IP_CKSUM |\
-		       PKT_TX_TCP_SEG)
+#define MBUF_OFFLOADS (RTE_MBUF_F_TX_L4_MASK |\
+		       RTE_MBUF_F_TX_IP_CKSUM |\
+		       RTE_MBUF_F_TX_TCP_SEG)
 
 /** Vendor ID used by Amazon devices */
 #define PCI_VENDOR_ID_AMAZON 0x1D0F
@@ -130,15 +130,14 @@ static const struct ena_stats ena_stats_rx_strings[] = {
 #define PCI_DEVICE_ID_ENA_VF		0xEC20
 #define PCI_DEVICE_ID_ENA_VF_RSERV0	0xEC21
 
-#define	ENA_TX_OFFLOAD_MASK	(\
-	PKT_TX_L4_MASK |         \
-	PKT_TX_IPV6 |            \
-	PKT_TX_IPV4 |            \
-	PKT_TX_IP_CKSUM |        \
-	PKT_TX_TCP_SEG)
+#define	ENA_TX_OFFLOAD_MASK	(RTE_MBUF_F_TX_L4_MASK |         \
+	RTE_MBUF_F_TX_IPV6 |            \
+	RTE_MBUF_F_TX_IPV4 |            \
+	RTE_MBUF_F_TX_IP_CKSUM |        \
+	RTE_MBUF_F_TX_TCP_SEG)
 
 #define	ENA_TX_OFFLOAD_NOTSUP_MASK	\
-	(PKT_TX_OFFLOAD_MASK ^ ENA_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ ENA_TX_OFFLOAD_MASK)
 
 static const struct rte_pci_id pci_id_ena_map[] = {
 	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_AMAZON, PCI_DEVICE_ID_ENA_VF) },
@@ -274,24 +273,24 @@ static inline void ena_rx_mbuf_prepare(struct rte_mbuf *mbuf,
 	if (ena_rx_ctx->l3_proto == ENA_ETH_IO_L3_PROTO_IPV4) {
 		packet_type |= RTE_PTYPE_L3_IPV4;
 		if (unlikely(ena_rx_ctx->l3_csum_err))
-			ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else
-			ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	} else if (ena_rx_ctx->l3_proto == ENA_ETH_IO_L3_PROTO_IPV6) {
 		packet_type |= RTE_PTYPE_L3_IPV6;
 	}
 
 	if (!ena_rx_ctx->l4_csum_checked || ena_rx_ctx->frag)
-		ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+		ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 	else
 		if (unlikely(ena_rx_ctx->l4_csum_err))
-			ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else
-			ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (fill_hash &&
 	    likely((packet_type & ENA_PTYPE_HAS_HASH) && !ena_rx_ctx->frag)) {
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mbuf->hash.rss = ena_rx_ctx->hash;
 	}
 
@@ -309,7 +308,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 	if ((mbuf->ol_flags & MBUF_OFFLOADS) &&
 	    (queue_offloads & QUEUE_OFFLOADS)) {
 		/* check if TSO is required */
-		if ((mbuf->ol_flags & PKT_TX_TCP_SEG) &&
+		if ((mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
 		    (queue_offloads & DEV_TX_OFFLOAD_TCP_TSO)) {
 			ena_tx_ctx->tso_enable = true;
 
@@ -317,11 +316,11 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 		}
 
 		/* check if L3 checksum is needed */
-		if ((mbuf->ol_flags & PKT_TX_IP_CKSUM) &&
+		if ((mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) &&
 		    (queue_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM))
 			ena_tx_ctx->l3_csum_enable = true;
 
-		if (mbuf->ol_flags & PKT_TX_IPV6) {
+		if (mbuf->ol_flags & RTE_MBUF_F_TX_IPV6) {
 			ena_tx_ctx->l3_proto = ENA_ETH_IO_L3_PROTO_IPV6;
 		} else {
 			ena_tx_ctx->l3_proto = ENA_ETH_IO_L3_PROTO_IPV4;
@@ -334,12 +333,12 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 		}
 
 		/* check if L4 checksum is needed */
-		if (((mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) &&
+		if (((mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM) &&
 		    (queue_offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_TCP;
 			ena_tx_ctx->l4_csum_enable = true;
-		} else if (((mbuf->ol_flags & PKT_TX_L4_MASK) ==
-				PKT_TX_UDP_CKSUM) &&
+		} else if (((mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
+				RTE_MBUF_F_TX_UDP_CKSUM) &&
 				(queue_offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UDP;
 			ena_tx_ctx->l4_csum_enable = true;
@@ -2149,7 +2148,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		ena_rx_mbuf_prepare(mbuf, &ena_rx_ctx, fill_hash);
 
 		if (unlikely(mbuf->ol_flags &
-				(PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD))) {
+				(RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD))) {
 			rte_atomic64_inc(&rx_ring->adapter->drv_stats->ierrors);
 			++rx_ring->rx_stats.bad_csum;
 		}
@@ -2191,7 +2190,7 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		m = tx_pkts[i];
 		ol_flags = m->ol_flags;
 
-		if (!(ol_flags & PKT_TX_IPV4))
+		if (!(ol_flags & RTE_MBUF_F_TX_IPV4))
 			continue;
 
 		/* If there was not L2 header length specified, assume it is
@@ -2215,8 +2214,8 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		}
 
 		if ((ol_flags & ENA_TX_OFFLOAD_NOTSUP_MASK) != 0 ||
-				(ol_flags & PKT_TX_L4_MASK) ==
-				PKT_TX_SCTP_CKSUM) {
+				(ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
+				RTE_MBUF_F_TX_SCTP_CKSUM) {
 			rte_errno = ENOTSUP;
 			return i;
 		}
@@ -2235,7 +2234,7 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 */
 
 		ret = rte_net_intel_cksum_flags_prepare(m,
-			ol_flags & ~PKT_TX_TCP_SEG);
+			ol_flags & ~RTE_MBUF_F_TX_TCP_SEG);
 		if (ret != 0) {
 			rte_errno = -ret;
 			return i;
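
The NOTSUP mask rewritten above is a common tx_prepare idiom: XOR-ing the
driver's supported set against RTE_MBUF_F_TX_OFFLOAD_MASK yields exactly
the unsupported bits, so one AND rejects a bad packet. A minimal sketch
under an invented DRV_* mask (illustrative, not this driver's actual set):

#include <errno.h>
#include <rte_mbuf.h>

#define DRV_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK)
#define DRV_TX_OFFLOAD_NOTSUP_MASK \
	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ DRV_TX_OFFLOAD_MASK)

/* Reject any packet requesting an offload outside the supported
 * set; a real tx_prepare would store the result in rte_errno. */
static inline int
drv_check_tx_offloads(const struct rte_mbuf *m)
{
	if (m->ol_flags & DRV_TX_OFFLOAD_NOTSUP_MASK)
		return -ENOTSUP;
	return 0;
}
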
diff --git a/drivers/net/enetc/enetc_rxtx.c b/drivers/net/enetc/enetc_rxtx.c
index 412322523d..ea64c9f682 100644
--- a/drivers/net/enetc/enetc_rxtx.c
+++ b/drivers/net/enetc/enetc_rxtx.c
@@ -174,80 +174,80 @@ enetc_refill_rx_ring(struct enetc_bdr *rx_ring, const int buff_cnt)
 static inline void enetc_slow_parsing(struct rte_mbuf *m,
 				     uint64_t parse_results)
 {
-	m->ol_flags &= ~(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+	m->ol_flags &= ~(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 
 	switch (parse_results) {
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4;
-		m->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6;
-		m->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4_TCP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4 |
 				 RTE_PTYPE_L4_TCP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6_TCP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6 |
 				 RTE_PTYPE_L4_TCP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4_UDP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4 |
 				 RTE_PTYPE_L4_UDP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6_UDP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6 |
 				 RTE_PTYPE_L4_UDP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4_SCTP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4 |
 				 RTE_PTYPE_L4_SCTP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6_SCTP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6 |
 				 RTE_PTYPE_L4_SCTP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4_ICMP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4 |
 				 RTE_PTYPE_L4_ICMP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6_ICMP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6 |
 				 RTE_PTYPE_L4_ICMP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	/* More switch cases can be added */
 	default:
 		m->packet_type = RTE_PTYPE_UNKNOWN;
-		m->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN |
-			       PKT_RX_L4_CKSUM_UNKNOWN;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN |
+			       RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 	}
 }
 
@@ -256,7 +256,7 @@ static inline void __rte_hot
 enetc_dev_rx_parse(struct rte_mbuf *m, uint16_t parse_results)
 {
 	ENETC_PMD_DP_DEBUG("parse summary = 0x%x   ", parse_results);
-	m->ol_flags |= PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD;
+	m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	switch (parse_results) {
 	case ENETC_PKT_TYPE_ETHER:
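
The parser above always sets exactly one of the GOOD/BAD/UNKNOWN checksum
states because the RX checksum status is a 2-bit field, not a set of
independent bits. Consumers therefore compare against the mask; a sketch
(hypothetical helper, using only the public mbuf API):

#include <stdbool.h>
#include <rte_mbuf.h>

/* The IP checksum status encodes four states
 * (UNKNOWN/BAD/GOOD/NONE), so read it with the mask rather
 * than testing single bits. */
static inline bool
rx_ip_csum_ok(const struct rte_mbuf *m)
{
	return (m->ol_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) ==
			RTE_MBUF_F_RX_IP_CKSUM_GOOD;
}
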
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6..b312e216ef 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -250,7 +250,7 @@ void enic_init_vnic_resources(struct enic *enic)
 			error_interrupt_offset);
 		/* Compute unsupported ol flags for enic_prep_pkts() */
 		enic->wq[index].tx_offload_notsup_mask =
-			PKT_TX_OFFLOAD_MASK ^ enic->tx_offload_mask;
+			RTE_MBUF_F_TX_OFFLOAD_MASK ^ enic->tx_offload_mask;
 
 		cq_idx = enic_cq_wq(enic, index);
 		vnic_cq_init(&enic->cq[cq_idx],
@@ -1755,10 +1755,10 @@ enic_enable_overlay_offload(struct enic *enic)
 		(enic->geneve ? DEV_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
 		(enic->vxlan ? DEV_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
 	enic->tx_offload_mask |=
-		PKT_TX_OUTER_IPV6 |
-		PKT_TX_OUTER_IPV4 |
-		PKT_TX_OUTER_IP_CKSUM |
-		PKT_TX_TUNNEL_MASK;
+		RTE_MBUF_F_TX_OUTER_IPV6 |
+		RTE_MBUF_F_TX_OUTER_IPV4 |
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+		RTE_MBUF_F_TX_TUNNEL_MASK;
 	enic->overlay_offload = true;
 
 	if (enic->vxlan && enic->geneve)
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index 0493e096d0..e85f9f23fb 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -216,12 +216,12 @@ int enic_get_vnic_config(struct enic *enic)
 		DEV_RX_OFFLOAD_TCP_CKSUM |
 		DEV_RX_OFFLOAD_RSS_HASH;
 	enic->tx_offload_mask =
-		PKT_TX_IPV6 |
-		PKT_TX_IPV4 |
-		PKT_TX_VLAN |
-		PKT_TX_IP_CKSUM |
-		PKT_TX_L4_MASK |
-		PKT_TX_TCP_SEG;
+		RTE_MBUF_F_TX_IPV6 |
+		RTE_MBUF_F_TX_IPV4 |
+		RTE_MBUF_F_TX_VLAN |
+		RTE_MBUF_F_TX_IP_CKSUM |
+		RTE_MBUF_F_TX_L4_MASK |
+		RTE_MBUF_F_TX_TCP_SEG;
 
 	return 0;
 }
diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c
index 3899907d6d..c44715bfd0 100644
--- a/drivers/net/enic/enic_rxtx.c
+++ b/drivers/net/enic/enic_rxtx.c
@@ -424,7 +424,7 @@ uint16_t enic_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	for (i = 0; i != nb_pkts; i++) {
 		m = tx_pkts[i];
 		ol_flags = m->ol_flags;
-		if (!(ol_flags & PKT_TX_TCP_SEG)) {
+		if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			if (unlikely(m->pkt_len > ENIC_TX_MAX_PKT_SIZE)) {
 				rte_errno = EINVAL;
 				return i;
@@ -489,7 +489,7 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	wq_desc_avail = vnic_wq_desc_avail(wq);
 	head_idx = wq->head_idx;
 	desc_count = wq->ring.desc_count;
-	ol_flags_mask = PKT_TX_VLAN | PKT_TX_IP_CKSUM | PKT_TX_L4_MASK;
+	ol_flags_mask = RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK;
 	tx_oversized = &enic->soft_stats.tx_oversized;
 
 	nb_pkts = RTE_MIN(nb_pkts, ENIC_TX_XMIT_MAX);
@@ -500,7 +500,7 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		data_len = tx_pkt->data_len;
 		ol_flags = tx_pkt->ol_flags;
 		nb_segs = tx_pkt->nb_segs;
-		tso = ol_flags & PKT_TX_TCP_SEG;
+		tso = ol_flags & RTE_MBUF_F_TX_TCP_SEG;
 
 		/* drop packet if it's too big to send */
 		if (unlikely(!tso && pkt_len > ENIC_TX_MAX_PKT_SIZE)) {
@@ -517,7 +517,7 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		mss = 0;
 		vlan_id = tx_pkt->vlan_tci;
-		vlan_tag_insert = !!(ol_flags & PKT_TX_VLAN);
+		vlan_tag_insert = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
 		bus_addr = (dma_addr_t)
 			   (tx_pkt->buf_iova + tx_pkt->data_off);
 
@@ -543,20 +543,20 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			offload_mode = WQ_ENET_OFFLOAD_MODE_TSO;
 			mss = tx_pkt->tso_segsz;
 			/* For tunnel, need the size of outer+inner headers */
-			if (ol_flags & PKT_TX_TUNNEL_MASK) {
+			if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 				header_len += tx_pkt->outer_l2_len +
 					tx_pkt->outer_l3_len;
 			}
 		}
 
 		if ((ol_flags & ol_flags_mask) && (header_len == 0)) {
-			if (ol_flags & PKT_TX_IP_CKSUM)
+			if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 				mss |= ENIC_CALC_IP_CKSUM;
 
 			/* Nic uses just 1 bit for UDP and TCP */
-			switch (ol_flags & PKT_TX_L4_MASK) {
-			case PKT_TX_TCP_CKSUM:
-			case PKT_TX_UDP_CKSUM:
+			switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+			case RTE_MBUF_F_TX_TCP_CKSUM:
+			case RTE_MBUF_F_TX_UDP_CKSUM:
 				mss |= ENIC_CALC_TCP_UDP_CKSUM;
 				break;
 			}
@@ -634,7 +634,7 @@ static void enqueue_simple_pkts(struct rte_mbuf **pkts,
 		desc->header_length_flags &=
 			((1 << WQ_ENET_FLAGS_EOP_SHIFT) |
 			 (1 << WQ_ENET_FLAGS_CQ_ENTRY_SHIFT));
-		if (p->ol_flags & PKT_TX_VLAN) {
+		if (p->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			desc->header_length_flags |=
 				1 << WQ_ENET_FLAGS_VLAN_TAG_INSERT_SHIFT;
 		}
@@ -643,9 +643,9 @@ static void enqueue_simple_pkts(struct rte_mbuf **pkts,
 		 * is 0, so no need to set offload_mode.
 		 */
 		mss = 0;
-		if (p->ol_flags & PKT_TX_IP_CKSUM)
+		if (p->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			mss |= ENIC_CALC_IP_CKSUM << WQ_ENET_MSS_SHIFT;
-		if (p->ol_flags & PKT_TX_L4_MASK)
+		if (p->ol_flags & RTE_MBUF_F_TX_L4_MASK)
 			mss |= ENIC_CALC_TCP_UDP_CKSUM << WQ_ENET_MSS_SHIFT;
 		desc->mss_loopback = mss;
 
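
The L4 switch earlier in this file compares (ol_flags &
RTE_MBUF_F_TX_L4_MASK) with "==", not "&": the L4 field names at most one
checksum request. A scalar sketch of the same dispatch (hypothetical
helper):

#include <rte_mbuf.h>

/* Map the 2-bit TX L4 checksum request to a name; exactly one
 * case can match, which is why switch/case is safe here. */
static inline const char *
tx_l4_req_name(uint64_t ol_flags)
{
	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
	case RTE_MBUF_F_TX_TCP_CKSUM:  return "tcp";
	case RTE_MBUF_F_TX_UDP_CKSUM:  return "udp";
	case RTE_MBUF_F_TX_SCTP_CKSUM: return "sctp";
	default:                       return "none";
	}
}
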
diff --git a/drivers/net/enic/enic_rxtx_common.h b/drivers/net/enic/enic_rxtx_common.h
index d8668d1898..9d6d3476b0 100644
--- a/drivers/net/enic/enic_rxtx_common.h
+++ b/drivers/net/enic/enic_rxtx_common.h
@@ -209,11 +209,11 @@ enic_cq_rx_to_pkt_flags(struct cq_desc *cqd, struct rte_mbuf *mbuf)
 
 	/* VLAN STRIPPED flag. The L2 packet type updated here also */
 	if (bwflags & CQ_ENET_RQ_DESC_FLAGS_VLAN_STRIPPED) {
-		pkt_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		pkt_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mbuf->packet_type |= RTE_PTYPE_L2_ETHER;
 	} else {
 		if (vlan_tci != 0) {
-			pkt_flags |= PKT_RX_VLAN;
+			pkt_flags |= RTE_MBUF_F_RX_VLAN;
 			mbuf->packet_type |= RTE_PTYPE_L2_ETHER_VLAN;
 		} else {
 			mbuf->packet_type |= RTE_PTYPE_L2_ETHER;
@@ -227,16 +227,16 @@ enic_cq_rx_to_pkt_flags(struct cq_desc *cqd, struct rte_mbuf *mbuf)
 		clsf_cqd = (struct cq_enet_rq_clsf_desc *)cqd;
 		filter_id = clsf_cqd->filter_id;
 		if (filter_id) {
-			pkt_flags |= PKT_RX_FDIR;
+			pkt_flags |= RTE_MBUF_F_RX_FDIR;
 			if (filter_id != ENIC_MAGIC_FILTER_ID) {
 				/* filter_id = mark id + 1, so subtract 1 */
 				mbuf->hash.fdir.hi = filter_id - 1;
-				pkt_flags |= PKT_RX_FDIR_ID;
+				pkt_flags |= RTE_MBUF_F_RX_FDIR_ID;
 			}
 		}
 	} else if (enic_cq_rx_desc_rss_type(cqrd)) {
 		/* RSS flag */
-		pkt_flags |= PKT_RX_RSS_HASH;
+		pkt_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mbuf->hash.rss = enic_cq_rx_desc_rss_hash(cqrd);
 	}
 
@@ -254,17 +254,17 @@ enic_cq_rx_to_pkt_flags(struct cq_desc *cqd, struct rte_mbuf *mbuf)
 			 */
 			if (mbuf->packet_type & RTE_PTYPE_L3_IPV4) {
 				if (enic_cq_rx_desc_ipv4_csum_ok(cqrd))
-					pkt_flags |= PKT_RX_IP_CKSUM_GOOD;
+					pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 				else
-					pkt_flags |= PKT_RX_IP_CKSUM_BAD;
+					pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			}
 
 			if (l4_flags == RTE_PTYPE_L4_UDP ||
 			    l4_flags == RTE_PTYPE_L4_TCP) {
 				if (enic_cq_rx_desc_tcp_udp_csum_ok(cqrd))
-					pkt_flags |= PKT_RX_L4_CKSUM_GOOD;
+					pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 				else
-					pkt_flags |= PKT_RX_L4_CKSUM_BAD;
+					pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			}
 		}
 	}
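
The "filter_id - 1" above undoes the driver's mark encoding; on the
application side the mark is only defined when RTE_MBUF_F_RX_FDIR_ID
accompanies RTE_MBUF_F_RX_FDIR. A sketch of a hypothetical consumer:

#include <rte_mbuf.h>

/* Fetch the rte_flow MARK carried in hash.fdir.hi, or a
 * caller-supplied sentinel when no mark was matched. */
static inline uint32_t
rx_flow_mark(const struct rte_mbuf *m, uint32_t no_mark)
{
	const uint64_t f = RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;

	return ((m->ol_flags & f) == f) ? m->hash.fdir.hi : no_mark;
}
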
diff --git a/drivers/net/enic/enic_rxtx_vec_avx2.c b/drivers/net/enic/enic_rxtx_vec_avx2.c
index 1848f52717..600efff270 100644
--- a/drivers/net/enic/enic_rxtx_vec_avx2.c
+++ b/drivers/net/enic/enic_rxtx_vec_avx2.c
@@ -167,21 +167,21 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			0x80, 0x80, 11, 10,
 			0x80, 0x80, 11, 10,
 			0x80, 0x80, 11, 10);
-	/* PKT_RX_RSS_HASH is 1<<1 so fits in 8-bit integer */
+	/* RTE_MBUF_F_RX_RSS_HASH is 1<<1 so fits in 8-bit integer */
 	const __m256i rss_shuffle =
 		_mm256_set_epi8(/* second 128 bits */
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
 			0, /* rss_types = 0 */
 			/* first 128 bits */
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
 			0 /* rss_types = 0 */);
 	/*
 	 * VLAN offload flags.
@@ -191,8 +191,8 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	 */
 	const __m256i vlan_shuffle =
 		_mm256_set_epi32(0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, PKT_RX_VLAN);
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, RTE_MBUF_F_RX_VLAN);
 	/* Use the same shuffle index as vlan_shuffle */
 	const __m256i vlan_ptype_shuffle =
 		_mm256_set_epi32(0, 0, 0, 0,
@@ -211,39 +211,39 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	const __m256i csum_shuffle =
 		_mm256_set_epi8(/* second 128 bits */
 			/* 1111 ip4+ip4_ok+l4+l4_ok */
-			((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 			/* 1110 ip4_ok+ip4+l4+!l4_ok */
-			((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1),
-			(PKT_RX_IP_CKSUM_GOOD >> 1), /* 1101 ip4+ip4_ok */
-			(PKT_RX_IP_CKSUM_GOOD >> 1), /* 1100 ip4_ok+ip4 */
-			(PKT_RX_L4_CKSUM_GOOD >> 1), /* 1011 l4+l4_ok */
-			(PKT_RX_L4_CKSUM_BAD >> 1),  /* 1010 l4+!l4_ok */
+			((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1), /* 1101 ip4+ip4_ok */
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1), /* 1100 ip4_ok+ip4 */
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1), /* 1011 l4+l4_ok */
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD >> 1),  /* 1010 l4+!l4_ok */
 			0, /* 1001 */
 			0, /* 1000 */
 			/* 0111 !ip4_ok+ip4+l4+l4_ok */
-			((PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD) >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 			/* 0110 !ip4_ok+ip4+l4+!l4_ok */
-			((PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD) >> 1),
-			(PKT_RX_IP_CKSUM_BAD >> 1),  /* 0101 !ip4_ok+ip4 */
-			(PKT_RX_IP_CKSUM_BAD >> 1),  /* 0100 !ip4_ok+ip4 */
-			(PKT_RX_L4_CKSUM_GOOD >> 1), /* 0011 l4+l4_ok */
-			(PKT_RX_L4_CKSUM_BAD >> 1),  /* 0010 l4+!l4_ok */
+			((RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1),  /* 0101 !ip4_ok+ip4 */
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1),  /* 0100 !ip4_ok+ip4 */
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1), /* 0011 l4+l4_ok */
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD >> 1),  /* 0010 l4+!l4_ok */
 			0, /* 0001 */
 			0, /* 0000 */
 			/* first 128 bits */
-			((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1),
-			((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1),
-			(PKT_RX_IP_CKSUM_GOOD >> 1),
-			(PKT_RX_IP_CKSUM_GOOD >> 1),
-			(PKT_RX_L4_CKSUM_GOOD >> 1),
-			(PKT_RX_L4_CKSUM_BAD >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1),
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1),
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD >> 1),
 			0, 0,
-			((PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD) >> 1),
-			((PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD) >> 1),
-			(PKT_RX_IP_CKSUM_BAD >> 1),
-			(PKT_RX_IP_CKSUM_BAD >> 1),
-			(PKT_RX_L4_CKSUM_GOOD >> 1),
-			(PKT_RX_L4_CKSUM_BAD >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1),
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1),
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD >> 1),
 			0, 0);
 	/*
 	 * Non-fragment PTYPEs.
@@ -471,7 +471,7 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			break;
 
 		/*
-		 * Compute PKT_RX_RSS_HASH.
+		 * Compute RTE_MBUF_F_RX_RSS_HASH.
 		 * Use 2 shifts and 1 shuffle for 8 desc: 0.375 inst/desc
 		 * RSS types in byte 0, 4, 8, 12, 16, 20, 24, 28
 		 * Everything else is zero.
@@ -479,7 +479,7 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		__m256i rss_types =
 			_mm256_srli_epi32(_mm256_slli_epi32(flags0_7, 10), 28);
 		/*
-		 * RSS flags (PKT_RX_RSS_HASH) are in
+		 * RSS flags (RTE_MBUF_F_RX_RSS_HASH) are in
 		 * byte 0, 4, 8, 12, 16, 20, 24, 28
 		 * Everything else is zero.
 		 */
@@ -557,7 +557,7 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		vlan0_7 = _mm256_sub_epi32(zero4, vlan0_7);
 
 		/*
-		 * Compute PKT_RX_VLAN and PKT_RX_VLAN_STRIPPED.
+		 * Compute RTE_MBUF_F_RX_VLAN and RTE_MBUF_F_RX_VLAN_STRIPPED.
 		 * Use 3 shifts, 1 or,  1 shuffle for 8 desc: 0.625 inst/desc
 		 * VLAN offload flags in byte 0, 4, 8, 12, 16, 20, 24, 28
 		 * Everything else is zero.
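
A note on the ">> 1" sprinkled through the table above: shuffle lanes are
8 bits, but RTE_MBUF_F_RX_L4_CKSUM_GOOD is bit 8, one past a byte.
Pre-shifting every entry right by one keeps the table in range; the vector
path then shifts the looked-up byte back left by one before merging it
into ol_flags. A scalar sketch of the unpack step (illustrative only):

#include <rte_mbuf.h>

/* Undo the one-bit packing applied to the 8-bit shuffle-table
 * entries, recovering the real ol_flags bits. */
static inline uint64_t
unpack_csum_flags(uint8_t table_entry)
{
	return (uint64_t)table_entry << 1;
}
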
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 496e72a003..b232d09104 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -37,16 +37,15 @@ static inline void dump_rxd(union fm10k_rx_desc *rxd)
 }
 #endif
 
-#define FM10K_TX_OFFLOAD_MASK (  \
-		PKT_TX_VLAN |        \
-		PKT_TX_IPV6 |            \
-		PKT_TX_IPV4 |            \
-		PKT_TX_IP_CKSUM |        \
-		PKT_TX_L4_MASK |         \
-		PKT_TX_TCP_SEG)
+#define FM10K_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_VLAN |        \
+		RTE_MBUF_F_TX_IPV6 |            \
+		RTE_MBUF_F_TX_IPV4 |            \
+		RTE_MBUF_F_TX_IP_CKSUM |        \
+		RTE_MBUF_F_TX_L4_MASK |         \
+		RTE_MBUF_F_TX_TCP_SEG)
 
 #define FM10K_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ FM10K_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ FM10K_TX_OFFLOAD_MASK)
 
 /* @note: When this function is changed, make corresponding change to
  * fm10k_dev_supported_ptypes_get()
@@ -78,21 +77,21 @@ rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
 						>> FM10K_RXD_PKTTYPE_SHIFT];
 
 	if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK)
-		m->ol_flags |= PKT_RX_RSS_HASH;
+		m->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 
 	if (unlikely((d->d.staterr &
 		(FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE)) ==
 		(FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE)))
-		m->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely((d->d.staterr &
 		(FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E)) ==
 		(FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E)))
-		m->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 }
 
 uint16_t
@@ -131,10 +130,10 @@ fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * Packets in fm10k device always carry at least one VLAN tag.
 		 * For those packets coming in without VLAN tag,
 		 * the port default VLAN tag will be used.
-		 * So, always PKT_RX_VLAN flag is set and vlan_tci
+		 * So, the RTE_MBUF_F_RX_VLAN flag is always set and vlan_tci
 		 * is valid for each RX packet's mbuf.
 		 */
-		mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mbuf->vlan_tci = desc.w.vlan;
 		/**
 		 * mbuf->vlan_tci_outer is an idle field in fm10k driver,
@@ -292,10 +291,10 @@ fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * Packets in fm10k device always carry at least one VLAN tag.
 		 * For those packets coming in without VLAN tag,
 		 * the port default VLAN tag will be used.
-		 * So, always PKT_RX_VLAN flag is set and vlan_tci
+		 * So, the RTE_MBUF_F_RX_VLAN flag is always set and vlan_tci
 		 * is valid for each RX packet's mbuf.
 		 */
-		first_seg->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		first_seg->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		first_seg->vlan_tci = desc.w.vlan;
 		/**
 		 * mbuf->vlan_tci_outer is an idle field in fm10k driver,
@@ -605,11 +604,11 @@ static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb)
 	/* set checksum flags on first descriptor of packet. SCTP checksum
 	 * offload is not supported, but we do not explicitly check for this
 	 * case in favor of greatly simplified processing. */
-	if (mb->ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK | PKT_TX_TCP_SEG))
+	if (mb->ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_TCP_SEG))
 		q->hw_ring[q->next_free].flags |= FM10K_TXD_FLAG_CSUM;
 
 	/* set vlan if requested */
-	if (mb->ol_flags & PKT_TX_VLAN)
+	if (mb->ol_flags & RTE_MBUF_F_TX_VLAN)
 		q->hw_ring[q->next_free].vlan = mb->vlan_tci;
 	else
 		q->hw_ring[q->next_free].vlan = 0;
@@ -620,9 +619,9 @@ static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb)
 	q->hw_ring[q->next_free].buflen =
 			rte_cpu_to_le_16(rte_pktmbuf_data_len(mb));
 
-	if (mb->ol_flags & PKT_TX_TCP_SEG) {
+	if (mb->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		hdrlen = mb->l2_len + mb->l3_len + mb->l4_len;
-		hdrlen += (mb->ol_flags & PKT_TX_TUNNEL_MASK) ?
+		hdrlen += (mb->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			  mb->outer_l2_len + mb->outer_l3_len : 0;
 		if (q->hw_ring[q->next_free].flags & FM10K_TXD_FLAG_FTAG)
 			hdrlen += sizeof(struct fm10k_ftag);
@@ -699,7 +698,7 @@ fm10k_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 	for (i = 0; i < nb_pkts; i++) {
 		m = tx_pkts[i];
 
-		if ((m->ol_flags & PKT_TX_TCP_SEG) &&
+		if ((m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
 				(m->tso_segsz < FM10K_TSO_MINMSS)) {
 			rte_errno = EINVAL;
 			return i;
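
The TSO path above applies the general rule that, for tunnelled TSO, the
outer headers count toward the header bytes the NIC replicates in front of
every segment. A sketch of that computation in isolation (hypothetical
helper):

#include <rte_mbuf.h>

/* Header length the hardware must prepend to each TSO segment;
 * RTE_MBUF_F_TX_TUNNEL_MASK selects whether outer headers apply. */
static inline uint32_t
tso_hdr_len(const struct rte_mbuf *m)
{
	uint32_t len = m->l2_len + m->l3_len + m->l4_len;

	if (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
		len += m->outer_l2_len + m->outer_l3_len;
	return len;
}
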
diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c
index 83af01dc2d..7ecba9fef2 100644
--- a/drivers/net/fm10k/fm10k_rxtx_vec.c
+++ b/drivers/net/fm10k/fm10k_rxtx_vec.c
@@ -38,7 +38,7 @@ fm10k_reset_tx_queue(struct fm10k_tx_queue *txq);
 #define RXEFLAG_SHIFT     (13)
 /* IPE/L4E flag shift */
 #define L3L4EFLAG_SHIFT     (14)
-/* shift PKT_RX_L4_CKSUM_GOOD into one byte by 1 bit */
+/* shift RTE_MBUF_F_RX_L4_CKSUM_GOOD right by 1 bit so it fits in one byte */
 #define CKSUM_SHIFT     (1)
 
 static inline void
@@ -52,10 +52,10 @@ fm10k_desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
 
 	const __m128i pkttype_msk = _mm_set_epi16(
 			0x0000, 0x0000, 0x0000, 0x0000,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 
 	/* mask everything except rss type */
 	const __m128i rsstype_msk = _mm_set_epi16(
@@ -75,10 +75,10 @@ fm10k_desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
 	const __m128i l3l4cksum_flag = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			(PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD) >> CKSUM_SHIFT,
-			(PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD) >> CKSUM_SHIFT,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> CKSUM_SHIFT,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> CKSUM_SHIFT);
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> CKSUM_SHIFT,
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> CKSUM_SHIFT,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> CKSUM_SHIFT,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> CKSUM_SHIFT);
 
 	const __m128i rxe_flag = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
@@ -87,9 +87,10 @@ fm10k_desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
 
 	/* map rss type to rss hash flag */
 	const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
-			0, 0, 0, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH, 0,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, 0);
+			0, 0, 0, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	/* Calculate RSS_hash and Vlan fields */
 	ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
diff --git a/drivers/net/hinic/hinic_pmd_rx.c b/drivers/net/hinic/hinic_pmd_rx.c
index 842399cc4c..311b22ccd1 100644
--- a/drivers/net/hinic/hinic_pmd_rx.c
+++ b/drivers/net/hinic/hinic_pmd_rx.c
@@ -802,7 +802,7 @@ static inline uint64_t hinic_rx_rss_hash(uint32_t offload_type,
 	rss_type = HINIC_GET_RSS_TYPES(offload_type);
 	if (likely(rss_type != 0)) {
 		*rss_hash = cqe_hass_val;
-		return PKT_RX_RSS_HASH;
+		return RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 	return 0;
@@ -815,33 +815,33 @@ static inline uint64_t hinic_rx_csum(uint32_t status, struct hinic_rxq *rxq)
 	struct hinic_nic_dev *nic_dev = rxq->nic_dev;
 
 	if (unlikely(!(nic_dev->rx_csum_en & HINIC_RX_CSUM_OFFLOAD_EN)))
-		return PKT_RX_IP_CKSUM_UNKNOWN;
+		return RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	/* most case checksum is ok */
 	checksum_err = HINIC_GET_RX_CSUM_ERR(status);
 	if (likely(checksum_err == 0))
-		return (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		return (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 
 	/* If BYPASS bit set, all other status indications should be ignored */
 	if (unlikely(HINIC_CSUM_ERR_BYPASSED(checksum_err)))
-		return PKT_RX_IP_CKSUM_UNKNOWN;
+		return RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	flags = 0;
 
 	/* IP checksum error */
 	if (HINIC_CSUM_ERR_IP(checksum_err))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	/* L4 checksum error */
 	if (HINIC_CSUM_ERR_L4(checksum_err))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (unlikely(HINIC_CSUM_ERR_OTHER(checksum_err)))
-		flags = PKT_RX_L4_CKSUM_NONE;
+		flags = RTE_MBUF_F_RX_L4_CKSUM_NONE;
 
 	rxq->rxq_stats.errors++;
 
@@ -861,7 +861,7 @@ static inline uint64_t hinic_rx_vlan(uint32_t offload_type, uint32_t vlan_len,
 
 	*vlan_tci = vlan_tag;
 
-	return PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+	return RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 }
 
 static inline u32 hinic_rx_alloc_mbuf_bulk(struct hinic_rxq *rxq,
@@ -1061,7 +1061,7 @@ u16 hinic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts)
 		/* lro offload */
 		lro_num = HINIC_GET_RX_NUM_LRO(cqe.status);
 		if (unlikely(lro_num != 0)) {
-			rxm->ol_flags |= PKT_RX_LRO;
+			rxm->ol_flags |= RTE_MBUF_F_RX_LRO;
 			rxm->tso_segsz = pkt_len / lro_num;
 		}
 
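
RTE_MBUF_F_RX_LRO, as set above, tells the application that tso_segsz
holds the size of the segments the hardware coalesced. A sketch of a
hypothetical reader:

#include <rte_mbuf.h>

/* Size of the coalesced segments for an LRO-aggregated packet,
 * or 0 when the mbuf was not aggregated. */
static inline uint16_t
rx_coalesced_seg_size(const struct rte_mbuf *m)
{
	return (m->ol_flags & RTE_MBUF_F_RX_LRO) ? m->tso_segsz : 0;
}
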
diff --git a/drivers/net/hinic/hinic_pmd_tx.c b/drivers/net/hinic/hinic_pmd_tx.c
index e14937139d..2688817f37 100644
--- a/drivers/net/hinic/hinic_pmd_tx.c
+++ b/drivers/net/hinic/hinic_pmd_tx.c
@@ -592,7 +592,7 @@ hinic_fill_tx_offload_info(struct rte_mbuf *mbuf,
 	task->pkt_info2 = 0;
 
 	/* Base VLAN */
-	if (unlikely(ol_flags & PKT_TX_VLAN)) {
+	if (unlikely(ol_flags & RTE_MBUF_F_TX_VLAN)) {
 		vlan_tag = mbuf->vlan_tci;
 		hinic_set_vlan_tx_offload(task, queue_info, vlan_tag,
 					  vlan_tag >> VLAN_PRIO_SHIFT);
@@ -602,7 +602,7 @@ hinic_fill_tx_offload_info(struct rte_mbuf *mbuf,
 	if (unlikely(!(ol_flags & HINIC_TX_CKSUM_OFFLOAD_MASK)))
 		return;
 
-	if ((ol_flags & PKT_TX_TCP_SEG))
+	if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		/* set tso info for task and qsf */
 		hinic_set_tso_info(task, queue_info, mbuf, tx_off_info);
 	else /* just support l4 checksum offload */
@@ -718,7 +718,7 @@ hinic_ipv4_phdr_cksum(const struct rte_ipv4_hdr *ipv4_hdr, uint64_t ol_flags)
 	psd_hdr.dst_addr = ipv4_hdr->dst_addr;
 	psd_hdr.zero = 0;
 	psd_hdr.proto = ipv4_hdr->next_proto_id;
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		psd_hdr.len = 0;
 	} else {
 		psd_hdr.len =
@@ -738,7 +738,7 @@ hinic_ipv6_phdr_cksum(const struct rte_ipv6_hdr *ipv6_hdr, uint64_t ol_flags)
 	} psd_hdr;
 
 	psd_hdr.proto = (ipv6_hdr->proto << 24);
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		psd_hdr.len = 0;
 	else
 		psd_hdr.len = ipv6_hdr->payload_len;
@@ -754,10 +754,10 @@ static inline void hinic_get_outer_cs_pld_offset(struct rte_mbuf *m,
 {
 	uint64_t ol_flags = m->ol_flags;
 
-	if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM)
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM)
 		off_info->payload_offset = m->outer_l2_len + m->outer_l3_len +
 					   m->l2_len + m->l3_len;
-	else if ((ol_flags & PKT_TX_TCP_CKSUM) || (ol_flags & PKT_TX_TCP_SEG))
+	else if ((ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) || (ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		off_info->payload_offset = m->outer_l2_len + m->outer_l3_len +
 					   m->l2_len + m->l3_len + m->l4_len;
 }
@@ -767,10 +767,10 @@ static inline void hinic_get_pld_offset(struct rte_mbuf *m,
 {
 	uint64_t ol_flags = m->ol_flags;
 
-	if (((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM) ||
-	    ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_SCTP_CKSUM))
+	if (((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM) ||
+	    ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_SCTP_CKSUM))
 		off_info->payload_offset = m->l2_len + m->l3_len;
-	else if ((ol_flags & PKT_TX_TCP_CKSUM) || (ol_flags & PKT_TX_TCP_SEG))
+	else if ((ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) || (ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		off_info->payload_offset = m->l2_len + m->l3_len +
 					   m->l4_len;
 }
@@ -845,11 +845,11 @@ static inline uint8_t hinic_analyze_l3_type(struct rte_mbuf *mbuf)
 	uint8_t l3_type;
 	uint64_t ol_flags = mbuf->ol_flags;
 
-	if (ol_flags & PKT_TX_IPV4)
-		l3_type = (ol_flags & PKT_TX_IP_CKSUM) ?
+	if (ol_flags & RTE_MBUF_F_TX_IPV4)
+		l3_type = (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) ?
 			  IPV4_PKT_WITH_CHKSUM_OFFLOAD :
 			  IPV4_PKT_NO_CHKSUM_OFFLOAD;
-	else if (ol_flags & PKT_TX_IPV6)
+	else if (ol_flags & RTE_MBUF_F_TX_IPV6)
 		l3_type = IPV6_PKT;
 	else
 		l3_type = UNKNOWN_L3TYPE;
@@ -866,11 +866,11 @@ static inline void hinic_calculate_tcp_checksum(struct rte_mbuf *mbuf,
 	struct rte_tcp_hdr *tcp_hdr;
 	uint64_t ol_flags = mbuf->ol_flags;
 
-	if (ol_flags & PKT_TX_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf, struct rte_ipv4_hdr *,
 						   inner_l3_offset);
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			ipv4_hdr->hdr_checksum = 0;
 
 		tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr +
@@ -898,11 +898,11 @@ static inline void hinic_calculate_udp_checksum(struct rte_mbuf *mbuf,
 	struct rte_udp_hdr *udp_hdr;
 	uint64_t ol_flags = mbuf->ol_flags;
 
-	if (ol_flags & PKT_TX_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf, struct rte_ipv4_hdr *,
 						   inner_l3_offset);
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			ipv4_hdr->hdr_checksum = 0;
 
 		udp_hdr = (struct rte_udp_hdr *)((char *)ipv4_hdr +
@@ -938,21 +938,21 @@ static inline void hinic_calculate_checksum(struct rte_mbuf *mbuf,
 {
 	uint64_t ol_flags = mbuf->ol_flags;
 
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_UDP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		hinic_calculate_udp_checksum(mbuf, off_info, inner_l3_offset);
 		break;
 
-	case PKT_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		hinic_calculate_tcp_checksum(mbuf, off_info, inner_l3_offset);
 		break;
 
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		hinic_calculate_sctp_checksum(off_info);
 		break;
 
 	default:
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			hinic_calculate_tcp_checksum(mbuf, off_info,
 						     inner_l3_offset);
 		break;
@@ -970,8 +970,8 @@ static inline int hinic_tx_offload_pkt_prepare(struct rte_mbuf *m,
 		return 0;
 
 	/* Support only vxlan offload */
-	if (unlikely((ol_flags & PKT_TX_TUNNEL_MASK) &&
-	    !(ol_flags & PKT_TX_TUNNEL_VXLAN)))
+	if (unlikely((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) &&
+		     !(ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN)))
 		return -ENOTSUP;
 
 #ifdef RTE_LIBRTE_ETHDEV_DEBUG
@@ -979,7 +979,7 @@ static inline int hinic_tx_offload_pkt_prepare(struct rte_mbuf *m,
 		return -EINVAL;
 #endif
 
-	if (ol_flags & PKT_TX_TUNNEL_VXLAN) {
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN) {
 		off_info->tunnel_type = TUNNEL_UDP_NO_CSUM;
 
 		/* inner_l4_tcp_udp csum should be set to calculate outer
@@ -987,9 +987,9 @@ static inline int hinic_tx_offload_pkt_prepare(struct rte_mbuf *m,
 		 */
 		off_info->inner_l4_tcp_udp = 1;
 
-		if ((ol_flags & PKT_TX_OUTER_IP_CKSUM) ||
-		    (ol_flags & PKT_TX_OUTER_IPV6) ||
-		    (ol_flags & PKT_TX_TCP_SEG)) {
+		if ((ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) ||
+		    (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) ||
+		    (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			inner_l3_offset = m->l2_len + m->outer_l2_len +
 					  m->outer_l3_len;
 			off_info->outer_l2_len = m->outer_l2_len;
@@ -1057,7 +1057,7 @@ static inline bool hinic_get_sge_txoff_info(struct rte_mbuf *mbuf_pkt,
 	sqe_info->cpy_mbuf_cnt = 0;
 
 	/* non tso mbuf */
-	if (likely(!(mbuf_pkt->ol_flags & PKT_TX_TCP_SEG))) {
+	if (likely(!(mbuf_pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG))) {
 		if (unlikely(mbuf_pkt->pkt_len > MAX_SINGLE_SGE_SIZE)) {
 			/* non tso packet len must less than 64KB */
 			return false;
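
The pseudo-header code touched above mirrors what rte_ip.h already
provides: rte_ipv4_phdr_cksum() takes ol_flags and, when
RTE_MBUF_F_TX_TCP_SEG is set, computes the pseudo-header with a length of
0, because the NIC patches the true length into each TSO segment. A sketch
of seeding a TCP checksum that way (hypothetical wrapper):

#include <rte_ip.h>
#include <rte_tcp.h>

/* Seed tcp->cksum with the pseudo-header sum; with TSO the
 * length contribution is 0 by design. */
static inline void
seed_tcp_cksum(struct rte_ipv4_hdr *ip, struct rte_tcp_hdr *tcp,
	       uint64_t ol_flags)
{
	tcp->cksum = rte_ipv4_phdr_cksum(ip, ol_flags);
}
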
diff --git a/drivers/net/hinic/hinic_pmd_tx.h b/drivers/net/hinic/hinic_pmd_tx.h
index d98abad8da..a3ec6299fb 100644
--- a/drivers/net/hinic/hinic_pmd_tx.h
+++ b/drivers/net/hinic/hinic_pmd_tx.h
@@ -13,13 +13,12 @@
 #define HINIC_GET_WQ_TAIL(txq)		\
 		((txq)->wq->queue_buf_vaddr + (txq)->wq->wq_buf_size)
 
-#define HINIC_TX_CKSUM_OFFLOAD_MASK (	\
-		PKT_TX_IP_CKSUM |	\
-		PKT_TX_TCP_CKSUM |	\
-		PKT_TX_UDP_CKSUM |      \
-		PKT_TX_SCTP_CKSUM |	\
-		PKT_TX_OUTER_IP_CKSUM |	\
-		PKT_TX_TCP_SEG)
+#define HINIC_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |	\
+		RTE_MBUF_F_TX_TCP_CKSUM |	\
+		RTE_MBUF_F_TX_UDP_CKSUM |      \
+		RTE_MBUF_F_TX_SCTP_CKSUM |	\
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |	\
+		RTE_MBUF_F_TX_TCP_SEG)
 
 enum sq_wqe_type {
 	SQ_NORMAL_WQE = 0,
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 243a4046ae..375952ba5a 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -622,7 +622,7 @@ struct hns3_hw {
 	 *  - HNS3_SPECIAL_PORT_SW_CKSUM_MODE
 	 *     In this mode, HW can not do checksum for special UDP port like
 	 *     4789, 4790, 6081 for non-tunnel UDP packets and UDP tunnel
-	 *     packets without the PKT_TX_TUNEL_MASK in the mbuf. So, PMD need
+	 *     packets without the RTE_MBUF_F_TX_TUNNEL_MASK in the mbuf. So, the PMD must
 	 *     do the checksum for these packets to avoid a checksum error.
 	 *
 	 *  - HNS3_SPECIAL_PORT_HW_CKSUM_MODE
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index feac7eb218..49c4bbeff2 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -2341,11 +2341,11 @@ hns3_rxd_to_vlan_tci(struct hns3_rx_queue *rxq, struct rte_mbuf *mb,
 		mb->vlan_tci = 0;
 		return;
 	case HNS3_INNER_STRP_VLAN_VLD:
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci = rte_le_to_cpu_16(rxd->rx.vlan_tag);
 		return;
 	case HNS3_OUTER_STRP_VLAN_VLD:
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci = rte_le_to_cpu_16(rxd->rx.ot_vlan_tag);
 		return;
 	default:
@@ -2395,7 +2395,7 @@ hns3_rx_ptp_timestamp_handle(struct hns3_rx_queue *rxq, struct rte_mbuf *mbuf,
 	struct hns3_pf *pf = HNS3_DEV_PRIVATE_TO_PF(rxq->hns);
 	uint64_t timestamp = rte_le_to_cpu_64(rxd->timestamp);
 
-	mbuf->ol_flags |= PKT_RX_IEEE1588_PTP | PKT_RX_IEEE1588_TMST;
+	mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP | RTE_MBUF_F_RX_IEEE1588_TMST;
 	if (hns3_timestamp_rx_dynflag > 0) {
 		*RTE_MBUF_DYNFIELD(mbuf, hns3_timestamp_dynfield_offset,
 			rte_mbuf_timestamp_t *) = timestamp;
@@ -2481,11 +2481,11 @@ hns3_recv_pkts_simple(void *rx_queue,
 		rxm->data_len = rxm->pkt_len;
 		rxm->port = rxq->port_id;
 		rxm->hash.rss = rte_le_to_cpu_32(rxd.rx.rss_hash);
-		rxm->ol_flags |= PKT_RX_RSS_HASH;
+		rxm->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		if (unlikely(bd_base_info & BIT(HNS3_RXD_LUM_B))) {
 			rxm->hash.fdir.hi =
 				rte_le_to_cpu_16(rxd.rx.fd_id);
-			rxm->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+			rxm->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		}
 		rxm->nb_segs = 1;
 		rxm->next = NULL;
@@ -2500,7 +2500,7 @@ hns3_recv_pkts_simple(void *rx_queue,
 		rxm->packet_type = hns3_rx_calc_ptype(rxq, l234_info, ol_info);
 
 		if (rxm->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC)
-			rxm->ol_flags |= PKT_RX_IEEE1588_PTP;
+			rxm->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
 
 		hns3_rxd_to_vlan_tci(rxq, rxm, l234_info, &rxd);
 
@@ -2699,17 +2699,17 @@ hns3_recv_scattered_pkts(void *rx_queue,
 
 		first_seg->port = rxq->port_id;
 		first_seg->hash.rss = rte_le_to_cpu_32(rxd.rx.rss_hash);
-		first_seg->ol_flags = PKT_RX_RSS_HASH;
+		first_seg->ol_flags = RTE_MBUF_F_RX_RSS_HASH;
 		if (unlikely(bd_base_info & BIT(HNS3_RXD_LUM_B))) {
 			first_seg->hash.fdir.hi =
 				rte_le_to_cpu_16(rxd.rx.fd_id);
-			first_seg->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+			first_seg->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		}
 
 		gro_size = hns3_get_field(bd_base_info, HNS3_RXD_GRO_SIZE_M,
 					  HNS3_RXD_GRO_SIZE_S);
 		if (gro_size != 0) {
-			first_seg->ol_flags |= PKT_RX_LRO;
+			first_seg->ol_flags |= RTE_MBUF_F_RX_LRO;
 			first_seg->tso_segsz = gro_size;
 		}
 
@@ -2724,7 +2724,7 @@ hns3_recv_scattered_pkts(void *rx_queue,
 						l234_info, ol_info);
 
 		if (first_seg->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC)
-			rxm->ol_flags |= PKT_RX_IEEE1588_PTP;
+			rxm->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
 
 		hns3_rxd_to_vlan_tci(rxq, first_seg, l234_info, &rxd);
 
@@ -3151,7 +3151,7 @@ hns3_restore_gro_conf(struct hns3_hw *hw)
 static inline bool
 hns3_pkt_is_tso(struct rte_mbuf *m)
 {
-	return (m->tso_segsz != 0 && m->ol_flags & PKT_TX_TCP_SEG);
+	return (m->tso_segsz != 0 && m->ol_flags & RTE_MBUF_F_TX_TCP_SEG);
 }
 
 static void
@@ -3184,7 +3184,7 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 	uint32_t paylen;
 
 	hdr_len = rxm->l2_len + rxm->l3_len + rxm->l4_len;
-	hdr_len += (ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			   rxm->outer_l2_len + rxm->outer_l3_len : 0;
 	paylen = rxm->pkt_len - hdr_len;
 	desc->tx.paylen_fd_dop_ol4cs |= rte_cpu_to_le_32(paylen);
@@ -3202,11 +3202,11 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 	 * To avoid the VLAN of Tx descriptor is overwritten by PVID, it should
 	 * be added to the position close to the IP header when PVID is enabled.
 	 */
-	if (!txq->pvid_sw_shift_en && ol_flags & (PKT_TX_VLAN |
-				PKT_TX_QINQ)) {
+	if (!txq->pvid_sw_shift_en && ol_flags & (RTE_MBUF_F_TX_VLAN |
+				RTE_MBUF_F_TX_QINQ)) {
 		desc->tx.ol_type_vlan_len_msec |=
 				rte_cpu_to_le_32(BIT(HNS3_TXD_OVLAN_B));
-		if (ol_flags & PKT_TX_QINQ)
+		if (ol_flags & RTE_MBUF_F_TX_QINQ)
 			desc->tx.outer_vlan_tag =
 					rte_cpu_to_le_16(rxm->vlan_tci_outer);
 		else
@@ -3214,14 +3214,14 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 					rte_cpu_to_le_16(rxm->vlan_tci);
 	}
 
-	if (ol_flags & PKT_TX_QINQ ||
-	    ((ol_flags & PKT_TX_VLAN) && txq->pvid_sw_shift_en)) {
+	if (ol_flags & RTE_MBUF_F_TX_QINQ ||
+	    ((ol_flags & RTE_MBUF_F_TX_VLAN) && txq->pvid_sw_shift_en)) {
 		desc->tx.type_cs_vlan_tso_len |=
 					rte_cpu_to_le_32(BIT(HNS3_TXD_VLAN_B));
 		desc->tx.vlan_tag = rte_cpu_to_le_16(rxm->vlan_tci);
 	}
 
-	if (ol_flags & PKT_TX_IEEE1588_TMST)
+	if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 		desc->tx.tp_fe_sc_vld_ra_ri |=
 				rte_cpu_to_le_16(BIT(HNS3_TXD_TSYN_B));
 }
@@ -3343,14 +3343,14 @@ hns3_parse_outer_params(struct rte_mbuf *m, uint32_t *ol_type_vlan_len_msec)
 	uint64_t ol_flags = m->ol_flags;
 
 	/* (outer) IP header type */
-	if (ol_flags & PKT_TX_OUTER_IPV4) {
-		if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV4) {
+		if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 			tmp |= hns3_gen_field_val(HNS3_TXD_OL3T_M,
 					HNS3_TXD_OL3T_S, HNS3_OL3T_IPV4_CSUM);
 		else
 			tmp |= hns3_gen_field_val(HNS3_TXD_OL3T_M,
 				HNS3_TXD_OL3T_S, HNS3_OL3T_IPV4_NO_CSUM);
-	} else if (ol_flags & PKT_TX_OUTER_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) {
 		tmp |= hns3_gen_field_val(HNS3_TXD_OL3T_M, HNS3_TXD_OL3T_S,
 					HNS3_OL3T_IPV6);
 	}
@@ -3370,10 +3370,10 @@ hns3_parse_inner_params(struct rte_mbuf *m, uint32_t *ol_type_vlan_len_msec,
 	uint64_t ol_flags = m->ol_flags;
 	uint16_t inner_l2_len;
 
-	switch (ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_VXLAN_GPE:
-	case PKT_TX_TUNNEL_GENEVE:
-	case PKT_TX_TUNNEL_VXLAN:
+	switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		/* MAC in UDP tunnelling packet, include VxLAN and GENEVE */
 		tmp_outer |= hns3_gen_field_val(HNS3_TXD_TUNTYPE_M,
 				HNS3_TXD_TUNTYPE_S, HNS3_TUN_MAC_IN_UDP);
@@ -3392,7 +3392,7 @@ hns3_parse_inner_params(struct rte_mbuf *m, uint32_t *ol_type_vlan_len_msec,
 
 		inner_l2_len = m->l2_len - RTE_ETHER_VXLAN_HLEN;
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		tmp_outer |= hns3_gen_field_val(HNS3_TXD_TUNTYPE_M,
 					HNS3_TXD_TUNTYPE_S, HNS3_TUN_NVGRE);
 		/*
@@ -3441,7 +3441,7 @@ hns3_parse_tunneling_params(struct hns3_tx_queue *txq, struct rte_mbuf *m,
 	 * calculations, the length of the L2 header include the outer and
 	 * inner, will be filled during the parsing of tunnel packects.
 	 */
-	if (!(ol_flags & PKT_TX_TUNNEL_MASK)) {
+	if (!(ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 		/*
 		 * For non tunnel type the tunnel type id is 0, so no need to
 		 * assign a value to it. Only the inner(normal) L2 header length
@@ -3457,7 +3457,7 @@ hns3_parse_tunneling_params(struct hns3_tx_queue *txq, struct rte_mbuf *m,
 		 * calculate the header length.
 		 */
 		if (unlikely(!(ol_flags &
-			(PKT_TX_OUTER_IP_CKSUM | PKT_TX_OUTER_UDP_CKSUM)) &&
+			(RTE_MBUF_F_TX_OUTER_IP_CKSUM | RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) &&
 					m->outer_l2_len == 0)) {
 			struct rte_net_hdr_lens hdr_len;
 			(void)rte_net_get_ptype(m, &hdr_len,
@@ -3474,7 +3474,7 @@ hns3_parse_tunneling_params(struct hns3_tx_queue *txq, struct rte_mbuf *m,
 
 	desc->tx.ol_type_vlan_len_msec = rte_cpu_to_le_32(tmp_outer);
 	desc->tx.type_cs_vlan_tso_len = rte_cpu_to_le_32(tmp_inner);
-	tmp_ol4cs = ol_flags & PKT_TX_OUTER_UDP_CKSUM ?
+	tmp_ol4cs = ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM ?
 			BIT(HNS3_TXD_OL4CS_B) : 0;
 	desc->tx.paylen_fd_dop_ol4cs = rte_cpu_to_le_32(tmp_ol4cs);
 
@@ -3489,9 +3489,9 @@ hns3_parse_l3_cksum_params(struct rte_mbuf *m, uint32_t *type_cs_vlan_tso_len)
 	uint32_t tmp;
 
 	tmp = *type_cs_vlan_tso_len;
-	if (ol_flags & PKT_TX_IPV4)
+	if (ol_flags & RTE_MBUF_F_TX_IPV4)
 		l3_type = HNS3_L3T_IPV4;
-	else if (ol_flags & PKT_TX_IPV6)
+	else if (ol_flags & RTE_MBUF_F_TX_IPV6)
 		l3_type = HNS3_L3T_IPV6;
 	else
 		l3_type = HNS3_L3T_NONE;
@@ -3503,7 +3503,7 @@ hns3_parse_l3_cksum_params(struct rte_mbuf *m, uint32_t *type_cs_vlan_tso_len)
 	tmp |= hns3_gen_field_val(HNS3_TXD_L3T_M, HNS3_TXD_L3T_S, l3_type);
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 		tmp |= BIT(HNS3_TXD_L3CS_B);
 	*type_cs_vlan_tso_len = tmp;
 }
@@ -3514,20 +3514,20 @@ hns3_parse_l4_cksum_params(struct rte_mbuf *m, uint32_t *type_cs_vlan_tso_len)
 	uint64_t ol_flags = m->ol_flags;
 	uint32_t tmp;
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & (PKT_TX_L4_MASK | PKT_TX_TCP_SEG)) {
-	case PKT_TX_TCP_CKSUM | PKT_TX_TCP_SEG:
-	case PKT_TX_TCP_CKSUM:
-	case PKT_TX_TCP_SEG:
+	switch (ol_flags & (RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_TCP_SEG)) {
+	case RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_TCP_SEG:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_SEG:
 		tmp = *type_cs_vlan_tso_len;
 		tmp |= hns3_gen_field_val(HNS3_TXD_L4T_M, HNS3_TXD_L4T_S,
 					HNS3_L4T_TCP);
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		tmp = *type_cs_vlan_tso_len;
 		tmp |= hns3_gen_field_val(HNS3_TXD_L4T_M, HNS3_TXD_L4T_S,
 					HNS3_L4T_UDP);
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		tmp = *type_cs_vlan_tso_len;
 		tmp |= hns3_gen_field_val(HNS3_TXD_L4T_M, HNS3_TXD_L4T_S,
 					HNS3_L4T_SCTP);
@@ -3584,7 +3584,7 @@ hns3_pkt_need_linearized(struct rte_mbuf *tx_pkts, uint32_t bd_num,
 
 	/* ensure the first 8 frags is greater than mss + header */
 	hdr_len = tx_pkts->l2_len + tx_pkts->l3_len + tx_pkts->l4_len;
-	hdr_len += (tx_pkts->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (tx_pkts->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 		   tx_pkts->outer_l2_len + tx_pkts->outer_l3_len : 0;
 	if (tot_len + m_last->data_len < tx_pkts->tso_segsz + hdr_len)
 		return true;
@@ -3614,15 +3614,15 @@ hns3_outer_ipv4_cksum_prepared(struct rte_mbuf *m, uint64_t ol_flags,
 	struct rte_ipv4_hdr *ipv4_hdr;
 	ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
 					   m->outer_l2_len);
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		ipv4_hdr->hdr_checksum = 0;
-	if (ol_flags & PKT_TX_OUTER_UDP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM) {
 		struct rte_udp_hdr *udp_hdr;
 		/*
 		 * If OUTER_UDP_CKSUM is support, HW can caclulate the pseudo
 		 * header for TSO packets
 		 */
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			return true;
 		udp_hdr = rte_pktmbuf_mtod_offset(m, struct rte_udp_hdr *,
 				m->outer_l2_len + m->outer_l3_len);
@@ -3641,13 +3641,13 @@ hns3_outer_ipv6_cksum_prepared(struct rte_mbuf *m, uint64_t ol_flags,
 	struct rte_ipv6_hdr *ipv6_hdr;
 	ipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,
 					   m->outer_l2_len);
-	if (ol_flags & PKT_TX_OUTER_UDP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM) {
 		struct rte_udp_hdr *udp_hdr;
 		/*
 		 * If OUTER_UDP_CKSUM is support, HW can caclulate the pseudo
 		 * header for TSO packets
 		 */
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			return true;
 		udp_hdr = rte_pktmbuf_mtod_offset(m, struct rte_udp_hdr *,
 				m->outer_l2_len + m->outer_l3_len);
@@ -3666,10 +3666,10 @@ hns3_outer_header_cksum_prepare(struct rte_mbuf *m)
 	uint32_t paylen, hdr_len, l4_proto;
 	struct rte_udp_hdr *udp_hdr;
 
-	if (!(ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6)))
+	if (!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6)))
 		return;
 
-	if (ol_flags & PKT_TX_OUTER_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV4) {
 		if (hns3_outer_ipv4_cksum_prepared(m, ol_flags, &l4_proto))
 			return;
 	} else {
@@ -3678,7 +3678,7 @@ hns3_outer_header_cksum_prepare(struct rte_mbuf *m)
 	}
 
 	/* driver should ensure the outer udp cksum is 0 for TUNNEL TSO */
-	if (l4_proto == IPPROTO_UDP && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (l4_proto == IPPROTO_UDP && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		hdr_len = m->l2_len + m->l3_len + m->l4_len;
 		hdr_len += m->outer_l2_len + m->outer_l3_len;
 		paylen = m->pkt_len - hdr_len;
@@ -3704,7 +3704,7 @@ hns3_check_tso_pkt_valid(struct rte_mbuf *m)
 		return -EINVAL;
 
 	hdr_len = m->l2_len + m->l3_len + m->l4_len;
-	hdr_len += (m->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			m->outer_l2_len + m->outer_l3_len : 0;
 	if (hdr_len > HNS3_MAX_TSO_HDR_SIZE)
 		return -EINVAL;
@@ -3754,12 +3754,12 @@ hns3_vld_vlan_chk(struct hns3_tx_queue *txq, struct rte_mbuf *m)
 	 * implementation function named hns3_prep_pkts to inform users that
 	 * these packets will be discarded.
 	 */
-	if (m->ol_flags & PKT_TX_QINQ)
+	if (m->ol_flags & RTE_MBUF_F_TX_QINQ)
 		return -EINVAL;
 
 	eh = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 	if (eh->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN)) {
-		if (m->ol_flags & PKT_TX_VLAN)
+		if (m->ol_flags & RTE_MBUF_F_TX_VLAN)
 			return -EINVAL;
 
 		/* Ensure the incoming packet is not a QinQ packet */
@@ -3779,7 +3779,7 @@ hns3_udp_cksum_help(struct rte_mbuf *m)
 	uint16_t cksum = 0;
 	uint32_t l4_len;
 
-	if (ol_flags & PKT_TX_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		struct rte_ipv4_hdr *ipv4_hdr = rte_pktmbuf_mtod_offset(m,
 				struct rte_ipv4_hdr *, m->l2_len);
 		l4_len = rte_be_to_cpu_16(ipv4_hdr->total_length) - m->l3_len;
@@ -3810,8 +3810,8 @@ hns3_validate_tunnel_cksum(struct hns3_tx_queue *tx_queue, struct rte_mbuf *m)
 	uint16_t dst_port;
 
 	if (tx_queue->udp_cksum_mode == HNS3_SPECIAL_PORT_HW_CKSUM_MODE ||
-	    ol_flags & PKT_TX_TUNNEL_MASK ||
-	    (ol_flags & PKT_TX_L4_MASK) != PKT_TX_UDP_CKSUM)
+	    ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK ||
+	    (ol_flags & RTE_MBUF_F_TX_L4_MASK) != RTE_MBUF_F_TX_UDP_CKSUM)
 		return true;
 	/*
 	 * A UDP packet with the same dst_port as VXLAN\VXLAN_GPE\GENEVE will
@@ -3828,7 +3828,7 @@ hns3_validate_tunnel_cksum(struct hns3_tx_queue *tx_queue, struct rte_mbuf *m)
 	case RTE_VXLAN_GPE_DEFAULT_PORT:
 	case RTE_GENEVE_DEFAULT_PORT:
 		udp_hdr->dgram_cksum = hns3_udp_cksum_help(m);
-		m->ol_flags = ol_flags & ~PKT_TX_L4_MASK;
+		m->ol_flags = ol_flags & ~RTE_MBUF_F_TX_L4_MASK;
 		return false;
 	default:
 		return true;
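
The special-port fallback ending this hunk follows a general pattern:
compute the UDP checksum in software, then clear the L4 request bits so
the hardware leaves the packet alone. A sketch with the generic helpers
from rte_ip.h (hypothetical, IPv4 only; a full version would map a result
of 0 to 0xffff per RFC 768):

#include <rte_ip.h>
#include <rte_udp.h>
#include <rte_mbuf.h>

/* Software UDP checksum for an IPv4 packet, then drop the L4
 * checksum request from ol_flags. */
static inline void
sw_udp_cksum_ipv4(struct rte_mbuf *m)
{
	struct rte_ipv4_hdr *ip = rte_pktmbuf_mtod_offset(m,
			struct rte_ipv4_hdr *, m->l2_len);
	struct rte_udp_hdr *udp = (struct rte_udp_hdr *)
			((char *)ip + m->l3_len);

	udp->dgram_cksum = 0;
	udp->dgram_cksum = rte_ipv4_udptcp_cksum(ip, udp);
	m->ol_flags &= ~RTE_MBUF_F_TX_L4_MASK;
}
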
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index bb309d38ed..70fd029aaa 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -471,7 +471,7 @@ struct hns3_tx_queue {
 	 *  - HNS3_SPECIAL_PORT_SW_CKSUM_MODE
 	 *     In this mode, HW can not do checksum for special UDP port like
 	 *     4789, 4790, 6081 for non-tunnel UDP packets and UDP tunnel
-	 *     packets without the PKT_TX_TUNEL_MASK in the mbuf. So, PMD need
+	 *     packets without the RTE_MBUF_F_TX_TUNNEL_MASK in the mbuf. So, the PMD must
 	 *     do the checksum for these packets to avoid a checksum error.
 	 *
 	 *  - HNS3_SPECIAL_PORT_HW_CKSUM_MODE
@@ -545,12 +545,11 @@ struct hns3_queue_info {
 	unsigned int socket_id;
 };
 
-#define HNS3_TX_CKSUM_OFFLOAD_MASK ( \
-	PKT_TX_OUTER_UDP_CKSUM | \
-	PKT_TX_OUTER_IP_CKSUM | \
-	PKT_TX_IP_CKSUM | \
-	PKT_TX_TCP_SEG | \
-	PKT_TX_L4_MASK)
+#define HNS3_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \
+	RTE_MBUF_F_TX_OUTER_IP_CKSUM | \
+	RTE_MBUF_F_TX_IP_CKSUM | \
+	RTE_MBUF_F_TX_TCP_SEG | \
+	RTE_MBUF_F_TX_L4_MASK)
 
 enum hns3_cksum_status {
 	HNS3_CKSUM_NONE = 0,
@@ -574,29 +573,29 @@ hns3_rx_set_cksum_flag(struct hns3_rx_queue *rxq,
 				 BIT(HNS3_RXD_OL4E_B))
 
 	if (likely((l234_info & HNS3_RXD_CKSUM_ERR_MASK) == 0)) {
-		rxm->ol_flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		rxm->ol_flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return;
 	}
 
 	if (unlikely(l234_info & BIT(HNS3_RXD_L3E_B))) {
-		rxm->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		rxq->dfx_stats.l3_csum_errors++;
 	} else {
-		rxm->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	}
 
 	if (unlikely(l234_info & BIT(HNS3_RXD_L4E_B))) {
-		rxm->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		rxq->dfx_stats.l4_csum_errors++;
 	} else {
-		rxm->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	if (unlikely(l234_info & BIT(HNS3_RXD_OL3E_B)))
 		rxq->dfx_stats.ol3_csum_errors++;
 
 	if (unlikely(l234_info & BIT(HNS3_RXD_OL4E_B))) {
-		rxm->ol_flags |= PKT_RX_OUTER_L4_CKSUM_BAD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 		rxq->dfx_stats.ol4_csum_errors++;
 	}
 }
diff --git a/drivers/net/hns3/hns3_rxtx_vec_neon.h b/drivers/net/hns3/hns3_rxtx_vec_neon.h
index 74c848d5ef..0edd4756f1 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_neon.h
+++ b/drivers/net/hns3/hns3_rxtx_vec_neon.h
@@ -105,7 +105,7 @@ hns3_desc_parse_field(struct hns3_rx_queue *rxq,
 		pkt = sw_ring[i].mbuf;
 
 		/* init rte_mbuf.rearm_data last 64-bit */
-		pkt->ol_flags = PKT_RX_RSS_HASH;
+		pkt->ol_flags = RTE_MBUF_F_RX_RSS_HASH;
 
 		l234_info = rxdp[i].rx.l234_info;
 		ol_info = rxdp[i].rx.ol_info;
diff --git a/drivers/net/hns3/hns3_rxtx_vec_sve.c b/drivers/net/hns3/hns3_rxtx_vec_sve.c
index d5c49333b2..be1fdbcdf0 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_sve.c
+++ b/drivers/net/hns3/hns3_rxtx_vec_sve.c
@@ -43,7 +43,7 @@ hns3_desc_parse_field_sve(struct hns3_rx_queue *rxq,
 
 	for (i = 0; i < (int)bd_vld_num; i++) {
 		/* init rte_mbuf.rearm_data last 64-bit */
-		rx_pkts[i]->ol_flags = PKT_RX_RSS_HASH;
+		rx_pkts[i]->ol_flags = RTE_MBUF_F_RX_RSS_HASH;
 
 		ret = hns3_handle_bdinfo(rxq, rx_pkts[i], key->bd_base_info[i],
 					 key->l234_info[i]);
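
Both vector RX paths above assign ol_flags ("=", not "|="): the mbuf comes
straight from the bulk rearm, so the first store doubles as initialization
and the RSS hash is always reported. The scalar equivalent of that rearm
write (illustrative):

#include <rte_mbuf.h>

/* First write to a freshly rearmed RX mbuf: overwrite stale
 * flags and set the always-present RSS bit in one store. */
static inline void
rearm_rx_mbuf(struct rte_mbuf *m, uint32_t rss)
{
	m->ol_flags = RTE_MBUF_F_RX_RSS_HASH;
	m->hash.rss = rss;
}
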
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index c858354b73..1ce0f5e472 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -44,42 +44,39 @@
 #define I40E_TXD_CMD (I40E_TX_DESC_CMD_EOP | I40E_TX_DESC_CMD_RS)
 
 #ifdef RTE_LIBRTE_IEEE1588
-#define I40E_TX_IEEE1588_TMST PKT_TX_IEEE1588_TMST
+#define I40E_TX_IEEE1588_TMST RTE_MBUF_F_TX_IEEE1588_TMST
 #else
 #define I40E_TX_IEEE1588_TMST 0
 #endif
 
-#define I40E_TX_CKSUM_OFFLOAD_MASK (		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG |		 \
-		PKT_TX_OUTER_IP_CKSUM)
-
-#define I40E_TX_OFFLOAD_MASK (  \
-		PKT_TX_OUTER_IPV4 |	\
-		PKT_TX_OUTER_IPV6 |	\
-		PKT_TX_IPV4 |		\
-		PKT_TX_IPV6 |		\
-		PKT_TX_IP_CKSUM |       \
-		PKT_TX_L4_MASK |        \
-		PKT_TX_OUTER_IP_CKSUM | \
-		PKT_TX_TCP_SEG |        \
-		PKT_TX_QINQ |       \
-		PKT_TX_VLAN |	\
-		PKT_TX_TUNNEL_MASK |	\
+#define I40E_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG |		 \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+
+#define I40E_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV4 |	\
+		RTE_MBUF_F_TX_OUTER_IPV6 |	\
+		RTE_MBUF_F_TX_IPV4 |		\
+		RTE_MBUF_F_TX_IPV6 |		\
+		RTE_MBUF_F_TX_IP_CKSUM |       \
+		RTE_MBUF_F_TX_L4_MASK |        \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM | \
+		RTE_MBUF_F_TX_TCP_SEG |        \
+		RTE_MBUF_F_TX_QINQ |       \
+		RTE_MBUF_F_TX_VLAN |	\
+		RTE_MBUF_F_TX_TUNNEL_MASK |	\
 		I40E_TX_IEEE1588_TMST)
 
 #define I40E_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ I40E_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ I40E_TX_OFFLOAD_MASK)
 
-#define I40E_TX_OFFLOAD_SIMPLE_SUP_MASK ( \
-		PKT_TX_IPV4 | \
-		PKT_TX_IPV6 | \
-		PKT_TX_OUTER_IPV4 | \
-		PKT_TX_OUTER_IPV6)
+#define I40E_TX_OFFLOAD_SIMPLE_SUP_MASK (RTE_MBUF_F_TX_IPV4 | \
+		RTE_MBUF_F_TX_IPV6 | \
+		RTE_MBUF_F_TX_OUTER_IPV4 | \
+		RTE_MBUF_F_TX_OUTER_IPV6)
 
 #define I40E_TX_OFFLOAD_SIMPLE_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ I40E_TX_OFFLOAD_SIMPLE_SUP_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ I40E_TX_OFFLOAD_SIMPLE_SUP_MASK)
 
 static int
 i40e_monitor_callback(const uint64_t value,
@@ -119,7 +116,7 @@ i40e_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union i40e_rx_desc *rxdp)
 {
 	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
 		(1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci =
 			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
 		PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
@@ -130,8 +127,8 @@ i40e_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union i40e_rx_desc *rxdp)
 #ifndef RTE_LIBRTE_I40E_16BYTE_RX_DESC
 	if (rte_le_to_cpu_16(rxdp->wb.qword2.ext_status) &
 		(1 << I40E_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT)) {
-		mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
-			PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		mb->ol_flags |= RTE_MBUF_F_RX_QINQ_STRIPPED | RTE_MBUF_F_RX_QINQ |
+			RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_RX_VLAN;
 		mb->vlan_tci_outer = mb->vlan_tci;
 		mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2);
 		PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
@@ -154,11 +151,11 @@ i40e_rxd_status_to_pkt_flags(uint64_t qword)
 	/* Check if RSS_HASH */
 	flags = (((qword >> I40E_RX_DESC_STATUS_FLTSTAT_SHIFT) &
 					I40E_RX_DESC_FLTSTAT_RSS_HASH) ==
-			I40E_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+			I40E_RX_DESC_FLTSTAT_RSS_HASH) ? RTE_MBUF_F_RX_RSS_HASH : 0;
 
 	/* Check if FDIR Match */
 	flags |= (qword & (1 << I40E_RX_DESC_STATUS_FLM_SHIFT) ?
-							PKT_RX_FDIR : 0);
+							RTE_MBUF_F_RX_FDIR : 0);
 
 	return flags;
 }
@@ -171,22 +168,22 @@ i40e_rxd_error_to_pkt_flags(uint64_t qword)
 
 #define I40E_RX_ERR_BITS 0x3f
 	if (likely((error_bits & I40E_RX_ERR_BITS) == 0)) {
-		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return flags;
 	}
 
 	if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_IPE_SHIFT)))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_L4E_SHIFT)))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_EIPE_SHIFT)))
-		flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 
 	return flags;
 }
@@ -205,9 +202,9 @@ i40e_get_iee15888_flags(struct rte_mbuf *mb, uint64_t qword)
 
 	if ((mb->packet_type & RTE_PTYPE_L2_MASK)
 			== RTE_PTYPE_L2_ETHER_TIMESYNC)
-		pkt_flags = PKT_RX_IEEE1588_PTP;
+		pkt_flags = RTE_MBUF_F_RX_IEEE1588_PTP;
 	if (tsyn & 0x04) {
-		pkt_flags |= PKT_RX_IEEE1588_TMST;
+		pkt_flags |= RTE_MBUF_F_RX_IEEE1588_TMST;
 		mb->timesync = tsyn & 0x03;
 	}
 
@@ -233,21 +230,21 @@ i40e_rxd_build_fdir(volatile union i40e_rx_desc *rxdp, struct rte_mbuf *mb)
 	if (flexbh == I40E_RX_DESC_EXT_STATUS_FLEXBH_FD_ID) {
 		mb->hash.fdir.hi =
 			rte_le_to_cpu_32(rxdp->wb.qword3.hi_dword.fd_id);
-		flags |= PKT_RX_FDIR_ID;
+		flags |= RTE_MBUF_F_RX_FDIR_ID;
 	} else if (flexbh == I40E_RX_DESC_EXT_STATUS_FLEXBH_FLEX) {
 		mb->hash.fdir.hi =
 			rte_le_to_cpu_32(rxdp->wb.qword3.hi_dword.flex_bytes_hi);
-		flags |= PKT_RX_FDIR_FLX;
+		flags |= RTE_MBUF_F_RX_FDIR_FLX;
 	}
 	if (flexbl == I40E_RX_DESC_EXT_STATUS_FLEXBL_FLEX) {
 		mb->hash.fdir.lo =
 			rte_le_to_cpu_32(rxdp->wb.qword3.lo_dword.flex_bytes_lo);
-		flags |= PKT_RX_FDIR_FLX;
+		flags |= RTE_MBUF_F_RX_FDIR_FLX;
 	}
 #else
 	mb->hash.fdir.hi =
 		rte_le_to_cpu_32(rxdp->wb.qword0.hi_dword.fd_id);
-	flags |= PKT_RX_FDIR_ID;
+	flags |= RTE_MBUF_F_RX_FDIR_ID;
 #endif
 	return flags;
 }
@@ -258,11 +255,11 @@ i40e_parse_tunneling_params(uint64_t ol_flags,
 			    uint32_t *cd_tunneling)
 {
 	/* EIPT: External (outer) IP header type */
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		*cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV4;
-	else if (ol_flags & PKT_TX_OUTER_IPV4)
+	else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)
 		*cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV4_NO_CSUM;
-	else if (ol_flags & PKT_TX_OUTER_IPV6)
+	else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)
 		*cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV6;
 
 	/* EIPLEN: External (outer) IP header length, in DWords */
@@ -270,15 +267,15 @@ i40e_parse_tunneling_params(uint64_t ol_flags,
 		I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT;
 
 	/* L4TUNT: L4 Tunneling Type */
-	switch (ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_IPIP:
+	switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_IPIP:
 		/* for non UDP / GRE tunneling, set to 00b */
 		break;
-	case PKT_TX_TUNNEL_VXLAN:
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		*cd_tunneling |= I40E_TXD_CTX_UDP_TUNNELING;
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		*cd_tunneling |= I40E_TXD_CTX_GRE_TUNNELING;
 		break;
 	default:
@@ -306,7 +303,7 @@ i40e_txd_enable_checksum(uint64_t ol_flags,
 			union i40e_tx_offload tx_offload)
 {
 	/* Set MACLEN */
-	if (ol_flags & PKT_TX_TUNNEL_MASK)
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 		*td_offset |= (tx_offload.outer_l2_len >> 1)
 				<< I40E_TX_DESC_LENGTH_MACLEN_SHIFT;
 	else
@@ -314,21 +311,21 @@ i40e_txd_enable_checksum(uint64_t ol_flags,
 			<< I40E_TX_DESC_LENGTH_MACLEN_SHIFT;
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		*td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		*td_offset |= (tx_offload.l3_len >> 2)
 				<< I40E_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		*td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV4;
 		*td_offset |= (tx_offload.l3_len >> 2)
 				<< I40E_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV6;
 		*td_offset |= (tx_offload.l3_len >> 2)
 				<< I40E_TX_DESC_LENGTH_IPLEN_SHIFT;
 	}
 
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		*td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (tx_offload.l4_len >> 2)
 			<< I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -336,18 +333,18 @@ i40e_txd_enable_checksum(uint64_t ol_flags,
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		*td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 				I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		*td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_SCTP;
 		*td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 				I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		*td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_UDP;
 		*td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 				I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -526,10 +523,10 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
 				ptype_tbl[(uint8_t)((qword1 &
 				I40E_RXD_QW1_PTYPE_MASK) >>
 				I40E_RXD_QW1_PTYPE_SHIFT)];
-			if (pkt_flags & PKT_RX_RSS_HASH)
+			if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 				mb->hash.rss = rte_le_to_cpu_32(\
 					rxdp[j].wb.qword0.hi_dword.rss);
-			if (pkt_flags & PKT_RX_FDIR)
+			if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 				pkt_flags |= i40e_rxd_build_fdir(&rxdp[j], mb);
 
 #ifdef RTE_LIBRTE_IEEE1588
@@ -789,10 +786,10 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rxm->packet_type =
 			ptype_tbl[(uint8_t)((qword1 &
 			I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT)];
-		if (pkt_flags & PKT_RX_RSS_HASH)
+		if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 			rxm->hash.rss =
 				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
-		if (pkt_flags & PKT_RX_FDIR)
+		if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 			pkt_flags |= i40e_rxd_build_fdir(&rxd, rxm);
 
 #ifdef RTE_LIBRTE_IEEE1588
@@ -957,10 +954,10 @@ i40e_recv_scattered_pkts(void *rx_queue,
 		first_seg->packet_type =
 			ptype_tbl[(uint8_t)((qword1 &
 			I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT)];
-		if (pkt_flags & PKT_RX_RSS_HASH)
+		if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 			first_seg->hash.rss =
 				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
-		if (pkt_flags & PKT_RX_FDIR)
+		if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 			pkt_flags |= i40e_rxd_build_fdir(&rxd, first_seg);
 
 #ifdef RTE_LIBRTE_IEEE1588
@@ -1004,13 +1001,13 @@ i40e_recv_scattered_pkts(void *rx_queue,
 static inline uint16_t
 i40e_calc_context_desc(uint64_t flags)
 {
-	static uint64_t mask = PKT_TX_OUTER_IP_CKSUM |
-		PKT_TX_TCP_SEG |
-		PKT_TX_QINQ |
-		PKT_TX_TUNNEL_MASK;
+	static uint64_t mask = RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+		RTE_MBUF_F_TX_TCP_SEG |
+		RTE_MBUF_F_TX_QINQ |
+		RTE_MBUF_F_TX_TUNNEL_MASK;
 
 #ifdef RTE_LIBRTE_IEEE1588
-	mask |= PKT_TX_IEEE1588_TMST;
+	mask |= RTE_MBUF_F_TX_IEEE1588_TMST;
 #endif
 
 	return (flags & mask) ? 1 : 0;
@@ -1029,7 +1026,7 @@ i40e_set_tso_ctx(struct rte_mbuf *mbuf, union i40e_tx_offload tx_offload)
 	}
 
 	hdr_len = tx_offload.l2_len + tx_offload.l3_len + tx_offload.l4_len;
-	hdr_len += (mbuf->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 		   tx_offload.outer_l2_len + tx_offload.outer_l3_len : 0;
 
 	cd_cmd = I40E_TX_CTX_DESC_TSO;
@@ -1122,7 +1119,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 * the mbuf data size exceeds max data size that hw allows
 		 * per tx desc.
 		 */
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			nb_used = (uint16_t)(i40e_calc_pkt_desc(tx_pkt) +
 					     nb_ctx);
 		else
@@ -1151,7 +1148,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+		if (ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 			td_cmd |= I40E_TX_DESC_CMD_IL2TAG1;
 			td_tag = tx_pkt->vlan_tci;
 		}
@@ -1161,7 +1158,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		/* Fill in tunneling parameters if necessary */
 		cd_tunneling_params = 0;
-		if (ol_flags & PKT_TX_TUNNEL_MASK)
+		if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 			i40e_parse_tunneling_params(ol_flags, tx_offload,
 						    &cd_tunneling_params);
 		/* Enable checksum offloading */
@@ -1186,12 +1183,12 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			}
 
 			/* TSO enabled means no timestamp */
-			if (ol_flags & PKT_TX_TCP_SEG)
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 				cd_type_cmd_tso_mss |=
 					i40e_set_tso_ctx(tx_pkt, tx_offload);
 			else {
 #ifdef RTE_LIBRTE_IEEE1588
-				if (ol_flags & PKT_TX_IEEE1588_TMST)
+				if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 					cd_type_cmd_tso_mss |=
 						((uint64_t)I40E_TX_CTX_DESC_TSYN <<
 						 I40E_TXD_CTX_QW1_CMD_SHIFT);
@@ -1200,7 +1197,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 			ctx_txd->tunneling_params =
 				rte_cpu_to_le_32(cd_tunneling_params);
-			if (ol_flags & PKT_TX_QINQ) {
+			if (ol_flags & RTE_MBUF_F_TX_QINQ) {
 				cd_l2tag2 = tx_pkt->vlan_tci_outer;
 				cd_type_cmd_tso_mss |=
 					((uint64_t)I40E_TX_CTX_DESC_IL2TAG2 <<
@@ -1239,7 +1236,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			slen = m_seg->data_len;
 			buf_dma_addr = rte_mbuf_data_iova(m_seg);
 
-			while ((ol_flags & PKT_TX_TCP_SEG) &&
+			while ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
 				unlikely(slen > I40E_MAX_DATA_PER_TXD)) {
 				txd->buffer_addr =
 					rte_cpu_to_le_64(buf_dma_addr);
@@ -1580,7 +1577,7 @@ i40e_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		ol_flags = m->ol_flags;
 
 		/* Check for m->nb_segs to not exceed the limits. */
-		if (!(ol_flags & PKT_TX_TCP_SEG)) {
+		if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			if (m->nb_segs > I40E_TX_MAX_MTU_SEG ||
 			    m->pkt_len > I40E_FRAME_SIZE_MAX) {
 				rte_errno = EINVAL;
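
The I40E_TX_OFFLOAD_NOTSUP_MASK idiom at the top of this file is worth spelling out: because the supported set is a subset of RTE_MBUF_F_TX_OFFLOAD_MASK, the XOR yields exactly the flags the driver cannot honour, and i40e_prep_pkts() can then reject a packet with a single AND. A standalone sketch of the pattern, with illustrative mask values rather than the real bit layout:

#include <stdint.h>

#define DEMO_TX_OFFLOAD_MASK	0x00ffULL  /* all TX flags the API defines */
#define DEMO_SUPPORTED_MASK	0x001fULL  /* the subset this driver handles */
/* XOR equals set difference here because DEMO_SUPPORTED_MASK is a
 * subset of DEMO_TX_OFFLOAD_MASK. */
#define DEMO_NOTSUP_MASK	(DEMO_TX_OFFLOAD_MASK ^ DEMO_SUPPORTED_MASK)

/* Mirrors the shape of a tx_prepare check: any unsupported request
 * in ol_flags fails the whole packet. */
static int demo_tx_prep_check(uint64_t ol_flags)
{
	if (ol_flags & DEMO_NOTSUP_MASK)
		return -1;	/* caller would set rte_errno = ENOTSUP */
	return 0;
}
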
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index b99323992f..d0bf86dfba 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -117,26 +117,26 @@ desc_to_olflags_v(vector unsigned long descs[4], struct rte_mbuf **rx_pkts)
 	/* map rss and vlan type to rss hash and vlan flag */
 	const vector unsigned char vlan_flags = (vector unsigned char){
 			0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0, 0, 0,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
 
 	const vector unsigned char rss_flags = (vector unsigned char){
-			0, PKT_RX_FDIR, 0, 0,
-			0, 0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH | PKT_RX_FDIR,
+			0, RTE_MBUF_F_RX_FDIR, 0, 0,
+			0, 0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
 
 	const vector unsigned char l3_l4e_flags = (vector unsigned char){
 			0,
-			PKT_RX_IP_CKSUM_BAD,
-			PKT_RX_L4_CKSUM_BAD,
-			PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD,
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD,
-			PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
-			PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD
-					     | PKT_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_L4_CKSUM_BAD,
+			RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD
+					     | RTE_MBUF_F_RX_IP_CKSUM_BAD,
 			0, 0, 0, 0, 0, 0, 0, 0};
 
 	vlan0 = (vector unsigned int)vec_mergel(descs[0], descs[1]);
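
The vlan_flags, rss_flags and l3_l4e_flags vectors above are all the same trick: a byte shuffle used as a parallel table lookup, with a few descriptor status bits forming the index. A scalar equivalent of the l3_l4e table is an eight-entry array indexed by three error bits; the flag values below are the rte_mbuf_core.h ones, while the index bit order is illustrative:

#include <stdint.h>

#define L4_BAD        (1ULL << 3)   /* RTE_MBUF_F_RX_L4_CKSUM_BAD */
#define IP_BAD        (1ULL << 4)   /* RTE_MBUF_F_RX_IP_CKSUM_BAD */
#define OUTER_IP_BAD  (1ULL << 5)   /* RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD */
#define IP_GOOD       (1ULL << 7)   /* RTE_MBUF_F_RX_IP_CKSUM_GOOD */
#define L4_GOOD       (1ULL << 8)   /* RTE_MBUF_F_RX_L4_CKSUM_GOOD */

/* idx = (eipe << 2) | (l4e << 1) | ipe  -- illustrative ordering */
static const uint64_t demo_l3_l4e[8] = {
	IP_GOOD | L4_GOOD,
	IP_BAD  | L4_GOOD,
	IP_GOOD | L4_BAD,
	IP_BAD  | L4_BAD,
	OUTER_IP_BAD | IP_GOOD | L4_GOOD,
	OUTER_IP_BAD | IP_BAD  | L4_GOOD,
	OUTER_IP_BAD | IP_GOOD | L4_BAD,
	OUTER_IP_BAD | IP_BAD  | L4_BAD,
};

static uint64_t demo_l3_l4e_lookup(unsigned int eipe, unsigned int l4e,
				   unsigned int ipe)
{
	return demo_l3_l4e[(eipe << 2) | (l4e << 1) | ipe];
}

The vector versions do the identical lookup for four or eight descriptors at once, which is why the tables repeat per 128-bit half.
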
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 3b9eef91a9..ca10e0dd15 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -78,7 +78,7 @@ desc_fdir_processing_32b(volatile union i40e_rx_desc *rxdp,
 	 * - Position that bit correctly based on packet number
 	 * - OR in the resulting bit to mbuf_flags
 	 */
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
 	__m256i mbuf_flag_mask = _mm256_set_epi32(0, 0, 0, 1 << 13,
 						  0, 0, 0, 1 << 13);
 	__m256i desc_flag_bit =  _mm256_and_si256(mbuf_flag_mask, fdir_mask);
@@ -208,8 +208,8 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 * destination
 	 */
 	const __m256i vlan_flags_shuf = _mm256_set_epi32(
-			0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-			0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0);
+			0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+			0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0);
 	/*
 	 * data to be shuffled by result of flag mask, shifted down 11.
 	 * If RSS/FDIR bits are set, shuffle moves appropriate flags in
@@ -217,11 +217,11 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 */
 	const __m256i rss_flags_shuf = _mm256_set_epi8(
 			0, 0, 0, 0, 0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-			0, 0, PKT_RX_FDIR, 0, /* end up 128-bits */
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+			0, 0, RTE_MBUF_F_RX_FDIR, 0, /* end up 128-bits */
 			0, 0, 0, 0, 0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-			0, 0, PKT_RX_FDIR, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+			0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	/*
 	 * data to be shuffled by the result of the flags mask shifted by 22
@@ -229,37 +229,37 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 */
 	const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 			/* second 128-bits */
 			0, 0, 0, 0, 0, 0, 0, 0,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 
 	const __m256i cksum_mask = _mm256_set1_epi32(
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD);
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	RTE_SET_USED(avx_aligned); /* for 32B descriptors we don't use this */
 
@@ -442,7 +442,7 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			 * order (hi->lo): [1, 3, 5, 7, 0, 2, 4, 6]
 			 * Then OR FDIR flags to mbuf_flags on FDIR ID hit.
 			 */
-			RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
+			RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
 			const __m256i pkt_fdir_bit = _mm256_set1_epi32(1 << 13);
 			const __m256i fdir_mask = _mm256_cmpeq_epi32(fdir, fdir_id);
 			__m256i fdir_bits = _mm256_and_si256(fdir_mask, pkt_fdir_bit);
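
The RTE_BUILD_BUG_ON lines in both hunks are the safety net for the hard-coded 1 << 13: if the flag value ever changed, the build would fail instead of the Rx path silently setting a wrong bit. DPDK's macro in rte_common.h uses a negative-width bit-field; a simplified standalone equivalent based on a negative array size:

/* Compile-time assertion: the array size goes negative if cond is true. */
#define DEMO_BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

#define DEMO_RX_FDIR_ID (1 << 13)	/* matches RTE_MBUF_F_RX_FDIR_ID */

static inline void demo_flag_layout_check(void)
{
	/* Breaks the build if the flag ever moves off bit 13. */
	DEMO_BUILD_BUG_ON(DEMO_RX_FDIR_ID != (1 << 13));
}
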
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index bd21d64223..2c779fa2a6 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -204,7 +204,7 @@ desc_fdir_processing_32b(volatile union i40e_rx_desc *rxdp,
 	 * - Position that bit correctly based on packet number
 	 * - OR in the resulting bit to mbuf_flags
 	 */
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
 	__m256i mbuf_flag_mask = _mm256_set_epi32(0, 0, 0, 1 << 13,
 						  0, 0, 0, 1 << 13);
 	__m256i desc_flag_bit =  _mm256_and_si256(mbuf_flag_mask, fdir_mask);
@@ -319,8 +319,8 @@ _recv_raw_pkts_vec_avx512(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 * destination
 	 */
 	const __m256i vlan_flags_shuf = _mm256_set_epi32
-		(0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-		0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0);
+		(0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+		0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0);
 
 	/* data to be shuffled by result of flag mask, shifted down 11.
 	 * If RSS/FDIR bits are set, shuffle moves appropriate flags in
@@ -328,11 +328,11 @@ _recv_raw_pkts_vec_avx512(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 */
 	const __m256i rss_flags_shuf = _mm256_set_epi8
 		(0, 0, 0, 0, 0, 0, 0, 0,
-		PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-		0, 0, PKT_RX_FDIR, 0, /* end up 128-bits */
+		RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+		0, 0, RTE_MBUF_F_RX_FDIR, 0, /* end up 128-bits */
 		0, 0, 0, 0, 0, 0, 0, 0,
-		PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-		0, 0, PKT_RX_FDIR, 0);
+		RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+		0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	/* data to be shuffled by the result of the flags mask shifted by 22
 	 * bits.  This gives use the l3_l4 flags.
@@ -340,33 +340,33 @@ _recv_raw_pkts_vec_avx512(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	const __m256i l3_l4_flags_shuf = _mm256_set_epi8
 		(0, 0, 0, 0, 0, 0, 0, 0,
 		/* shift right 1 bit to make sure it not exceed 255 */
-		(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-		PKT_RX_IP_CKSUM_BAD >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1,
 		/* second 128-bits */
 		0, 0, 0, 0, 0, 0, 0, 0,
-		(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-		PKT_RX_IP_CKSUM_BAD >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+		(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1);
 
 	const __m256i cksum_mask = _mm256_set1_epi32
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-		PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-		PKT_RX_OUTER_IP_CKSUM_BAD);
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	uint16_t i, received;
 
@@ -571,7 +571,7 @@ _recv_raw_pkts_vec_avx512(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			 * order (hi->lo): [1, 3, 5, 7, 0, 2, 4, 6]
 			 * Then OR FDIR flags to mbuf_flags on FDIR ID hit.
 			 */
-			RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
+			RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
 			const __m256i pkt_fdir_bit = _mm256_set1_epi32(1 << 13);
 			const __m256i fdir_mask =
 				_mm256_cmpeq_epi32(fdir, fdir_id);
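
The >> 1 sprinkled through these tables deserves a note: each lane of an epi8 shuffle table holds only eight bits, yet RTE_MBUF_F_RX_L4_CKSUM_GOOD is bit 8. Storing every combination pre-shifted right by one makes all the checksum flags fit in a byte (their lowest bit is 3, so nothing is lost), and the Rx path shifts the shuffled result back left by one before merging it into ol_flags. In scalar form:

#include <assert.h>
#include <stdint.h>

#define IP_GOOD (1u << 7)	/* RTE_MBUF_F_RX_IP_CKSUM_GOOD */
#define L4_GOOD (1u << 8)	/* RTE_MBUF_F_RX_L4_CKSUM_GOOD: too wide for a byte */

int main(void)
{
	uint8_t entry = (IP_GOOD | L4_GOOD) >> 1;	/* 0xc0, fits in 8 bits */
	uint32_t flags = (uint32_t)entry << 1;		/* restore after lookup */

	assert(flags == (IP_GOOD | L4_GOOD));
	return 0;
}
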
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index b2683fda60..b9d9dec769 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -93,43 +93,43 @@ desc_to_olflags_v(struct i40e_rx_queue *rxq, uint64x2_t descs[4],
 			0x1c03804, 0x1c03804, 0x1c03804, 0x1c03804};
 
 	const uint32x4_t cksum_mask = {
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD};
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD};
 
 	/* map rss and vlan type to rss hash and vlan flag */
 	const uint8x16_t vlan_flags = {
 			0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0, 0, 0,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
 
 	const uint8x16_t rss_flags = {
-			0, PKT_RX_FDIR, 0, 0,
-			0, 0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH | PKT_RX_FDIR,
+			0, RTE_MBUF_F_RX_FDIR, 0, 0,
+			0, 0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
 
 	const uint8x16_t l3_l4e_flags = {
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1,
-			PKT_RX_IP_CKSUM_BAD >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-			 PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+			 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
 			0, 0, 0, 0, 0, 0, 0, 0};
 
 	vlan0 = vzipq_u32(vreinterpretq_u32_u64(descs[0]),
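
The four identical lanes of cksum_mask exist so that, after the l3_l4e lookup, an AND keeps only the checksum-status bits before the result is OR-ed with the VLAN and RSS flags. The scalar shape of that merge, with flag values from rte_mbuf_core.h:

#include <stdint.h>

#define L4_BAD        (1ULL << 3)
#define IP_BAD        (1ULL << 4)
#define OUTER_IP_BAD  (1ULL << 5)
#define IP_GOOD       (1ULL << 7)
#define L4_GOOD       (1ULL << 8)

static const uint64_t demo_cksum_mask =
	IP_GOOD | IP_BAD | L4_GOOD | L4_BAD | OUTER_IP_BAD;

/* Confine the lookup result to checksum bits, then merge the rest. */
static uint64_t demo_olflags_merge(uint64_t l3_l4e, uint64_t vlan,
				   uint64_t rss)
{
	return (l3_l4e & demo_cksum_mask) | vlan | rss;
}
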
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index b235502db5..497b2404c6 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -143,7 +143,7 @@ descs_to_fdir_32b(volatile union i40e_rx_desc *rxdp, struct rte_mbuf **rx_pkt)
	 * correct location in the mbuf->ol_flags
 	 */
 	const uint32_t FDIR_ID_BIT_SHIFT = 13;
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << FDIR_ID_BIT_SHIFT));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << FDIR_ID_BIT_SHIFT));
 	v_fd_id_mask = _mm_srli_epi32(v_fd_id_mask, 31);
 	v_fd_id_mask = _mm_slli_epi32(v_fd_id_mask, FDIR_ID_BIT_SHIFT);
 
@@ -203,9 +203,9 @@ descs_to_fdir_16b(__m128i fltstat, __m128i descs[4], struct rte_mbuf **rx_pkt)
 	__m128i v_desc0_mask = _mm_and_si128(v_desc_fdir_mask, v_desc0_shift);
 	descs[0] = _mm_blendv_epi8(descs[0], _mm_setzero_si128(), v_desc0_mask);
 
-	/* Shift to 1 or 0 bit per u32 lane, then to PKT_RX_FDIR_ID offset */
+	/* Shift to 1 or 0 bit per u32 lane, then to RTE_MBUF_F_RX_FDIR_ID offset */
 	const uint32_t FDIR_ID_BIT_SHIFT = 13;
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << FDIR_ID_BIT_SHIFT));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << FDIR_ID_BIT_SHIFT));
 	__m128i v_mask_one_bit = _mm_srli_epi32(v_fdir_id_mask, 31);
 	return _mm_slli_epi32(v_mask_one_bit, FDIR_ID_BIT_SHIFT);
 }
@@ -228,44 +228,44 @@ desc_to_olflags_v(struct i40e_rx_queue *rxq, volatile union i40e_rx_desc *rxdp,
 			0x1c03804, 0x1c03804, 0x1c03804, 0x1c03804);
 
 	const __m128i cksum_mask = _mm_set_epi32(
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD);
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	/* map rss and vlan type to rss hash and vlan flag */
 	const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
-			0, 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			0, 0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 			0, 0, 0, 0);
 
 	const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-			0, 0, PKT_RX_FDIR, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+			0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	const __m128i l3_l4e_flags = _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 
 	/* Unpack "status" from quadword 1, bits 0:32 */
 	vlan0 = _mm_unpackhi_epi32(descs[0], descs[1]);
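
The srli(.., 31)/slli(.., FDIR_ID_BIT_SHIFT) pair in descs_to_fdir_32b() is a branch-free way of turning an all-ones/all-zeros comparison mask into the single RTE_MBUF_F_RX_FDIR_ID bit. Per 32-bit lane it is simply:

#include <stdint.h>

#define FDIR_ID_BIT_SHIFT 13	/* RTE_MBUF_F_RX_FDIR_ID == 1 << 13 */

/* lane is 0xFFFFFFFF on an FDIR-ID hit, 0 otherwise. */
static uint32_t demo_fdir_id_bit(uint32_t lane)
{
	uint32_t hit = lane >> 31;		/* collapse the mask to 0 or 1 */
	return hit << FDIR_ID_BIT_SHIFT;	/* park it at bit 13 */
}
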
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 045fd92368..d7ee47610a 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -379,14 +379,14 @@ iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
 #endif
 
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 #endif
@@ -403,13 +403,13 @@ iavf_rxd_to_pkt_fields_by_comms_aux_v1(struct iavf_rx_queue *rxq,
 
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
@@ -445,13 +445,13 @@ iavf_rxd_to_pkt_fields_by_comms_aux_v2(struct iavf_rx_queue *rxq,
 
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
@@ -1044,7 +1044,7 @@ iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp)
 {
 	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
 		(1 << IAVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci =
 			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
 	} else {
@@ -1072,7 +1072,7 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
 #endif
 
 	if (vlan_tci) {
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci = vlan_tci;
 	}
 }
@@ -1089,26 +1089,26 @@ iavf_rxd_to_pkt_flags(uint64_t qword)
 	/* Check if RSS_HASH */
 	flags = (((qword >> IAVF_RX_DESC_STATUS_FLTSTAT_SHIFT) &
 					IAVF_RX_DESC_FLTSTAT_RSS_HASH) ==
-			IAVF_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+			IAVF_RX_DESC_FLTSTAT_RSS_HASH) ? RTE_MBUF_F_RX_RSS_HASH : 0;
 
 	/* Check if FDIR Match */
 	flags |= (qword & (1 << IAVF_RX_DESC_STATUS_FLM_SHIFT) ?
-				PKT_RX_FDIR : 0);
+				RTE_MBUF_F_RX_FDIR : 0);
 
 	if (likely((error_bits & IAVF_RX_ERR_BITS) == 0)) {
-		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return flags;
 	}
 
 	if (unlikely(error_bits & (1 << IAVF_RX_DESC_ERROR_IPE_SHIFT)))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely(error_bits & (1 << IAVF_RX_DESC_ERROR_L4E_SHIFT)))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	/* TODO: Oversize error bit is not processed here */
 
@@ -1129,12 +1129,12 @@ iavf_rxd_build_fdir(volatile union iavf_rx_desc *rxdp, struct rte_mbuf *mb)
 	if (flexbh == IAVF_RX_DESC_EXT_STATUS_FLEXBH_FD_ID) {
 		mb->hash.fdir.hi =
 			rte_le_to_cpu_32(rxdp->wb.qword3.hi_dword.fd_id);
-		flags |= PKT_RX_FDIR_ID;
+		flags |= RTE_MBUF_F_RX_FDIR_ID;
 	}
 #else
 	mb->hash.fdir.hi =
 		rte_le_to_cpu_32(rxdp->wb.qword0.hi_dword.fd_id);
-	flags |= PKT_RX_FDIR_ID;
+	flags |= RTE_MBUF_F_RX_FDIR_ID;
 #endif
 	return flags;
 }
@@ -1158,22 +1158,22 @@ iavf_flex_rxd_error_to_pkt_flags(uint16_t stat_err0)
 		return 0;
 
 	if (likely(!(stat_err0 & IAVF_RX_FLEX_ERR0_BITS))) {
-		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return flags;
 	}
 
 	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))
-		flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 
 	return flags;
 }
@@ -1292,11 +1292,11 @@ iavf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			ptype_tbl[(uint8_t)((qword1 &
 			IAVF_RXD_QW1_PTYPE_MASK) >> IAVF_RXD_QW1_PTYPE_SHIFT)];
 
-		if (pkt_flags & PKT_RX_RSS_HASH)
+		if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 			rxm->hash.rss =
 				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
 
-		if (pkt_flags & PKT_RX_FDIR)
+		if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 			pkt_flags |= iavf_rxd_build_fdir(&rxd, rxm);
 
 		rxm->ol_flags |= pkt_flags;
@@ -1693,11 +1693,11 @@ iavf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			ptype_tbl[(uint8_t)((qword1 &
 			IAVF_RXD_QW1_PTYPE_MASK) >> IAVF_RXD_QW1_PTYPE_SHIFT)];
 
-		if (pkt_flags & PKT_RX_RSS_HASH)
+		if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 			first_seg->hash.rss =
 				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
 
-		if (pkt_flags & PKT_RX_FDIR)
+		if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 			pkt_flags |= iavf_rxd_build_fdir(&rxd, first_seg);
 
 		first_seg->ol_flags |= pkt_flags;
@@ -1862,11 +1862,11 @@ iavf_rx_scan_hw_ring(struct iavf_rx_queue *rxq)
 				IAVF_RXD_QW1_PTYPE_MASK) >>
 				IAVF_RXD_QW1_PTYPE_SHIFT)];
 
-			if (pkt_flags & PKT_RX_RSS_HASH)
+			if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 				mb->hash.rss = rte_le_to_cpu_32(
 					rxdp[j].wb.qword0.hi_dword.rss);
 
-			if (pkt_flags & PKT_RX_FDIR)
+			if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 				pkt_flags |= iavf_rxd_build_fdir(&rxdp[j], mb);
 
 			mb->ol_flags |= pkt_flags;
@@ -2072,9 +2072,9 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
 static inline uint16_t
 iavf_calc_context_desc(uint64_t flags, uint8_t vlan_flag)
 {
-	if (flags & PKT_TX_TCP_SEG)
+	if (flags & RTE_MBUF_F_TX_TCP_SEG)
 		return 1;
-	if (flags & PKT_TX_VLAN &&
+	if (flags & RTE_MBUF_F_TX_VLAN &&
 	    vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2)
 		return 1;
 	return 0;
@@ -2091,21 +2091,21 @@ iavf_txd_enable_checksum(uint64_t ol_flags,
 		      IAVF_TX_DESC_LENGTH_MACLEN_SHIFT;
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		*td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		*td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV4;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV6;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
 	}
 
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		*td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (tx_offload.l4_len >> 2) <<
 			      IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -2113,18 +2113,18 @@ iavf_txd_enable_checksum(uint64_t ol_flags,
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		*td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 			      IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		*td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_SCTP;
 		*td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 			      IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		*td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_UDP;
 		*td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 			      IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -2260,7 +2260,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (ol_flags & PKT_TX_VLAN &&
+		if (ol_flags & RTE_MBUF_F_TX_VLAN &&
 		    txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1) {
 			td_cmd |= IAVF_TX_DESC_CMD_IL2TAG1;
 			td_tag = tx_pkt->vlan_tci;
@@ -2297,12 +2297,12 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			}
 
 			/* TSO enabled */
-			if (ol_flags & PKT_TX_TCP_SEG)
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 				cd_type_cmd_tso_mss |=
 					iavf_set_tso_ctx(tx_pkt, tx_offload);
 
-			if (ol_flags & PKT_TX_VLAN &&
-			   txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) {
+			if (ol_flags & RTE_MBUF_F_TX_VLAN &&
+			    txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) {
 				cd_type_cmd_tso_mss |= IAVF_TX_CTX_DESC_IL2TAG2
 					<< IAVF_TXD_CTX_QW1_CMD_SHIFT;
 				cd_l2tag2 = tx_pkt->vlan_tci;
@@ -2415,7 +2415,7 @@ iavf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		ol_flags = m->ol_flags;
 
 		/* Check condition for nb_segs > IAVF_TX_MAX_MTU_SEG. */
-		if (!(ol_flags & PKT_TX_TCP_SEG)) {
+		if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			if (m->nb_segs > IAVF_TX_MAX_MTU_SEG) {
 				rte_errno = EINVAL;
 				return i;
@@ -2446,7 +2446,7 @@ iavf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		}
 
 		if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS &&
-		    ol_flags & (PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN)) {
+		    ol_flags & (RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_RX_VLAN)) {
 			ret = iavf_check_vlan_up2tc(txq, m);
 			if (ret != 0) {
 				rte_errno = -ret;
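
On the TX side, iavf_calc_context_desc() keeps the rule that a context descriptor is consumed either for TSO or when the VLAN tag must travel in L2TAG2 instead of inline in the data descriptor. In isolation the decision looks like the sketch below; the two RTE_MBUF_F_TX_* values match rte_mbuf_core.h, while the L2TAG2 location bit is an illustrative stand-in for IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2:

#include <stdint.h>

#define DEMO_TX_TCP_SEG     (1ULL << 50)  /* RTE_MBUF_F_TX_TCP_SEG */
#define DEMO_TX_VLAN        (1ULL << 57)  /* RTE_MBUF_F_TX_VLAN */
#define DEMO_VLAN_IN_L2TAG2 (1u << 1)     /* illustrative stand-in */

/* Number of extra context descriptors this packet needs. */
static uint16_t demo_calc_context_desc(uint64_t ol_flags, uint8_t vlan_flag)
{
	if (ol_flags & DEMO_TX_TCP_SEG)
		return 1;	/* TSO always needs a context descriptor */
	if ((ol_flags & DEMO_TX_VLAN) && (vlan_flag & DEMO_VLAN_IN_L2TAG2))
		return 1;	/* tag goes in the context desc's L2TAG2 */
	return 0;
}
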
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 229a2ea4dd..a8df309a55 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -52,23 +52,21 @@
 #define IAVF_TSO_MAX_SEG          UINT8_MAX
 #define IAVF_TX_MAX_MTU_SEG       8
 
-#define IAVF_TX_CKSUM_OFFLOAD_MASK (		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG)
-
-#define IAVF_TX_OFFLOAD_MASK (  \
-		PKT_TX_OUTER_IPV6 |		 \
-		PKT_TX_OUTER_IPV4 |		 \
-		PKT_TX_IPV6 |			 \
-		PKT_TX_IPV4 |			 \
-		PKT_TX_VLAN |		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG)
+#define IAVF_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG)
+
+#define IAVF_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV6 |		 \
+		RTE_MBUF_F_TX_OUTER_IPV4 |		 \
+		RTE_MBUF_F_TX_IPV6 |			 \
+		RTE_MBUF_F_TX_IPV4 |			 \
+		RTE_MBUF_F_TX_VLAN |		 \
+		RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG)
 
 #define IAVF_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
 
 /**
  * Rx Flex Descriptors
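
For context on what IAVF_TX_CKSUM_OFFLOAD_MASK gates: these are per-packet requests an application makes before calling tx_burst. A hedged sketch of the sending side using the public mbuf API; a real application must also seed the L4 checksum field with the pseudo-header checksum, e.g. via rte_ipv4_phdr_cksum():

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>

/* Ask the NIC to fill the IPv4 and TCP checksums for this packet. */
static void demo_request_tx_cksum(struct rte_mbuf *m)
{
	m->l2_len = sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->ol_flags |= RTE_MBUF_F_TX_IPV4 |
		       RTE_MBUF_F_TX_IP_CKSUM |
		       RTE_MBUF_F_TX_TCP_CKSUM;
}
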
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 96c05d9319..9817d2c011 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -127,8 +127,8 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq,
 	 * destination
 	 */
 	const __m256i vlan_flags_shuf =
-		_mm256_set_epi32(0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-				 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0);
+		_mm256_set_epi32(0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+				 0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0);
 	/**
 	 * data to be shuffled by result of flag mask, shifted down 11.
 	 * If RSS/FDIR bits are set, shuffle moves appropriate flags in
@@ -136,11 +136,11 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq,
 	 */
 	const __m256i rss_flags_shuf =
 		_mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
-				PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH,
-				0, 0, 0, 0, PKT_RX_FDIR, 0,/* end up 128-bits */
+				RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH,
+				0, 0, 0, 0, RTE_MBUF_F_RX_FDIR, 0,/* end up 128-bits */
 				0, 0, 0, 0, 0, 0, 0, 0,
-				PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH,
-				0, 0, 0, 0, PKT_RX_FDIR, 0);
+				RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH,
+				0, 0, 0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	/**
 	 * data to be shuffled by the result of the flags mask shifted by 22
@@ -148,33 +148,33 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq,
 	 */
 	const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-			 PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-			PKT_RX_IP_CKSUM_BAD >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+			 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1,
 			/* second 128-bits */
 			0, 0, 0, 0, 0, 0, 0, 0,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-			 PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-			PKT_RX_IP_CKSUM_BAD >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+			 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1);
 
 	const __m256i cksum_mask =
-		 _mm256_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-				   PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-				   PKT_RX_OUTER_IP_CKSUM_BAD);
+		 _mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+				   RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+				   RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	RTE_SET_USED(avx_aligned); /* for 32B descriptors we don't use this */
 
@@ -502,10 +502,10 @@ static inline __m256i
 flex_rxd_to_fdir_flags_vec_avx2(const __m256i fdir_id0_7)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m256i pkt_fdir_bit = _mm256_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m256i pkt_fdir_bit = _mm256_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m256i fdir_mis_mask = _mm256_set1_epi32(FDID_MIS_MAGIC);
 	__m256i fdir_mask = _mm256_cmpeq_epi32(fdir_id0_7,
@@ -626,36 +626,36 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 	 */
 	const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 			/* second 128-bits */
 			0, 0, 0, 0, 0, 0, 0, 0,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 	const __m256i cksum_mask =
-		 _mm256_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-				   PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-				   PKT_RX_OUTER_IP_CKSUM_BAD);
+		 _mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+				   RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+				   RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 	/**
 	 * data to be shuffled by result of flag mask, shifted down 12.
 	 * If RSS(bit12)/VLAN(bit13) are set,
@@ -664,27 +664,27 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 	const __m256i rss_flags_shuf = _mm256_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH, 0,
-			PKT_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, 0,
 			/* end up 128-bits */
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH, 0,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	const __m256i vlan_flags_shuf = _mm256_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 			0, 0,
 			/* end up 128-bits */
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 			0, 0);
 
 	uint16_t i, received;
@@ -1025,8 +1025,8 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 							0, 0, 0, 0,
 							0, 0, 0, 0,
 							0, 0,
-							PKT_RX_VLAN |
-							PKT_RX_VLAN_STRIPPED,
+							RTE_MBUF_F_RX_VLAN |
+							RTE_MBUF_F_RX_VLAN_STRIPPED,
 							0);
 
 				vlan_flags =
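
flex_rxd_to_fdir_flags_vec_avx2() encodes the same contract as the scalar iavf_rxd_to_pkt_fields_*() helpers earlier in this patch: a flow_id of 0xFFFFFFFF (FDID_MIS_MAGIC) means no flow director match, anything else marks both FDIR flags and makes hash.fdir.hi valid. The per-packet scalar form, with the two flag values the patch's RTE_BUILD_BUG_ONs assert:

#include <stdint.h>

#define FDID_MIS_MAGIC	0xFFFFFFFFu
#define DEMO_RX_FDIR	(1ULL << 2)	/* RTE_MBUF_F_RX_FDIR */
#define DEMO_RX_FDIR_ID	(1ULL << 13)	/* RTE_MBUF_F_RX_FDIR_ID */

static uint64_t demo_flex_fdir_flags(uint32_t flow_id)
{
	if (flow_id == FDID_MIS_MAGIC)	/* no flow director match */
		return 0;
	return DEMO_RX_FDIR | DEMO_RX_FDIR_ID;	/* hash.fdir.hi is valid */
}
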
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index cb0b057b0f..c47fd04593 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -431,8 +431,8 @@ _iavf_recv_raw_pkts_vec_avx512(struct iavf_rx_queue *rxq,
 			 * destination
 			 */
 			const __m256i vlan_flags_shuf =
-				_mm256_set_epi32(0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-						 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0);
+				_mm256_set_epi32(0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+						 0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0);
 #endif
 
 #ifdef IAVF_RX_RSS_OFFLOAD
@@ -443,11 +443,11 @@ _iavf_recv_raw_pkts_vec_avx512(struct iavf_rx_queue *rxq,
 			 */
 			const __m256i rss_flags_shuf =
 				_mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
-						PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH,
-						0, 0, 0, 0, PKT_RX_FDIR, 0,/* end up 128-bits */
+						RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH,
+						0, 0, 0, 0, RTE_MBUF_F_RX_FDIR, 0,/* end up 128-bits */
 						0, 0, 0, 0, 0, 0, 0, 0,
-						PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH,
-						0, 0, 0, 0, PKT_RX_FDIR, 0);
+						RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH,
+						0, 0, 0, 0, RTE_MBUF_F_RX_FDIR, 0);
 #endif
 
 #ifdef IAVF_RX_CSUM_OFFLOAD
@@ -457,33 +457,33 @@ _iavf_recv_raw_pkts_vec_avx512(struct iavf_rx_queue *rxq,
 			 */
 			const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 					/* shift right 1 bit to make sure it does not exceed 255 */
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-					 PKT_RX_L4_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-					PKT_RX_IP_CKSUM_BAD >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+					 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+					RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1,
 					/* second 128-bits */
 					0, 0, 0, 0, 0, 0, 0, 0,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-					 PKT_RX_L4_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-					PKT_RX_IP_CKSUM_BAD >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+					 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+					RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1);
 
 			const __m256i cksum_mask =
-				_mm256_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-						  PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-						  PKT_RX_OUTER_IP_CKSUM_BAD);
+				_mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+						  RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+						  RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 #endif
 
 #if defined(IAVF_RX_CSUM_OFFLOAD) || defined(IAVF_RX_VLAN_OFFLOAD) || defined(IAVF_RX_RSS_OFFLOAD)
@@ -688,10 +688,10 @@ static __rte_always_inline __m256i
 flex_rxd_to_fdir_flags_vec_avx512(const __m256i fdir_id0_7)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m256i pkt_fdir_bit = _mm256_set1_epi32(PKT_RX_FDIR |
-						       PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m256i pkt_fdir_bit = _mm256_set1_epi32(RTE_MBUF_F_RX_FDIR |
+						       RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m256i fdir_mis_mask = _mm256_set1_epi32(FDID_MIS_MAGIC);
 	__m256i fdir_mask = _mm256_cmpeq_epi32(fdir_id0_7,
@@ -974,36 +974,36 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 			 */
 			const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 					/* shift right 1 bit to make sure it does not exceed 255 */
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-					 PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+					 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 					/* second 128-bits */
 					0, 0, 0, 0, 0, 0, 0, 0,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-					 PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+					 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 			const __m256i cksum_mask =
-				_mm256_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-						  PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-						  PKT_RX_OUTER_IP_CKSUM_BAD);
+				_mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+						  RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+						  RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 #endif
 #if defined(IAVF_RX_VLAN_OFFLOAD) || defined(IAVF_RX_RSS_OFFLOAD)
 			/**
@@ -1015,28 +1015,28 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 					(0, 0, 0, 0,
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
-					 PKT_RX_RSS_HASH, 0,
-					 PKT_RX_RSS_HASH, 0,
+					 RTE_MBUF_F_RX_RSS_HASH, 0,
+					 RTE_MBUF_F_RX_RSS_HASH, 0,
 					 /* end up 128-bits */
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
-					 PKT_RX_RSS_HASH, 0,
-					 PKT_RX_RSS_HASH, 0);
+					 RTE_MBUF_F_RX_RSS_HASH, 0,
+					 RTE_MBUF_F_RX_RSS_HASH, 0);
 
 			const __m256i vlan_flags_shuf = _mm256_set_epi8
 					(0, 0, 0, 0,
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
-					 PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-					 PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+					 RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+					 RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 					 0, 0,
 					 /* end up 128-bits */
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
-					 PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-					 PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+					 RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+					 RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 					 0, 0);
 #endif
 
@@ -1273,8 +1273,8 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 							 0, 0, 0, 0,
 							 0, 0, 0, 0,
 							 0, 0,
-							 PKT_RX_VLAN |
-							 PKT_RX_VLAN_STRIPPED,
+							 RTE_MBUF_F_RX_VLAN |
+							 RTE_MBUF_F_RX_VLAN_STRIPPED,
 							 0);
 
 					vlan_flags =
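
(The flex_rxd_to_fdir_flags_vec_avx512() hunk above vectorizes the same
decision the scalar RX paths make: a flow_id of 0xFFFFFFFF is the "no flow
director match" sentinel, anything else reports both FDIR flags. A scalar
sketch, taking the bit positions pinned by the RTE_BUILD_BUG_ON checks as
given:

#include <stdint.h>

#define FDID_MIS_MAGIC	0xFFFFFFFFu
#define F_RX_FDIR	(1u << 2)	/* position asserted above */
#define F_RX_FDIR_ID	(1u << 13)	/* position asserted above */

static uint32_t fdir_flags_scalar(uint32_t flow_id)
{
	if (flow_id == FDID_MIS_MAGIC)	/* no filter matched */
		return 0;
	return F_RX_FDIR | F_RX_FDIR_ID;
}

int main(void)
{
	return fdir_flags_scalar(FDID_MIS_MAGIC) == 0 ? 0 : 1;
}
)
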
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 457d6339e1..1fd37b74c1 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -326,33 +326,33 @@ iavf_txd_enable_offload(__rte_unused struct rte_mbuf *tx_pkt,
 		     IAVF_TX_DESC_LENGTH_MACLEN_SHIFT;
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			     IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV4;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			     IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV6;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			     IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_TCP;
 		td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 			     IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_SCTP;
 		td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 			     IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_UDP;
 		td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 			     IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -365,7 +365,7 @@ iavf_txd_enable_offload(__rte_unused struct rte_mbuf *tx_pkt,
 #endif
 
 #ifdef IAVF_TX_VLAN_QINQ_OFFLOAD
-	if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+	if (ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 		td_cmd |= IAVF_TX_DESC_CMD_IL2TAG1;
 		*txd_hi |= ((uint64_t)tx_pkt->vlan_tci <<
 			    IAVF_TXD_QW1_L2TAG1_SHIFT);
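
(From the application side the rename is just as mechanical. A minimal sketch
of requesting IPv4 + TCP checksum offload with the new names, assuming
DPDK >= 21.11 headers; the old PKT_TX_* spelling is kept in the comment:

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>

/* formerly: PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM */
void request_tcp_cksum(struct rte_mbuf *m)
{
	m->l2_len = sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->ol_flags |= RTE_MBUF_F_TX_IPV4 |
		       RTE_MBUF_F_TX_IP_CKSUM |
		       RTE_MBUF_F_TX_TCP_CKSUM;
}
)
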
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index ee1e905525..363d0e62df 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -108,42 +108,42 @@ desc_to_olflags_v(struct iavf_rx_queue *rxq, __m128i descs[4],
 			0x1c03804, 0x1c03804, 0x1c03804, 0x1c03804);
 
 	const __m128i cksum_mask = _mm_set_epi32(
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD);
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	/* map rss and vlan type to rss hash and vlan flag */
 	const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
-			0, 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			0, 0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 			0, 0, 0, 0);
 
 	const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-			0, 0, PKT_RX_FDIR, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+			0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	const __m128i l3_l4e_flags = _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it does not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-			 PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-			PKT_RX_IP_CKSUM_BAD >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+			 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1);
 
 	vlan0 = _mm_unpackhi_epi32(descs[0], descs[1]);
 	vlan1 = _mm_unpackhi_epi32(descs[2], descs[3]);
@@ -193,10 +193,10 @@ static inline __m128i
 flex_rxd_to_fdir_flags_vec(const __m128i fdir_id0_3)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m128i pkt_fdir_bit = _mm_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m128i pkt_fdir_bit = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m128i fdir_mis_mask = _mm_set1_epi32(FDID_MIS_MAGIC);
 	__m128i fdir_mask = _mm_cmpeq_epi32(fdir_id0_3,
@@ -225,43 +225,43 @@ flex_desc_to_olflags_v(struct iavf_rx_queue *rxq, __m128i descs[4],
 	const __m128i desc_mask = _mm_set_epi32(0x3070, 0x3070,
 						0x3070, 0x3070);
 
-	const __m128i cksum_mask = _mm_set_epi32(PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD);
+	const __m128i cksum_mask = _mm_set_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	/* map the checksum, rss and vlan fields to the checksum, rss
 	 * and vlan flag
 	 */
 	const __m128i cksum_flags = _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it does not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 
 	const __m128i rss_vlan_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	/* merge 4 descriptors */
 	flags = _mm_unpackhi_epi32(descs[0], descs[1]);
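
(Worth noting for anyone tracking this series out of tree: 21.11 kept the old
PKT_* spellings as deprecated aliases of the RTE_MBUF_F_* names, so
conversions like the hunks above could land gradually. A simplified sketch of
the alias pattern; this is not the exact rte_mbuf_core.h text, and the
deprecation attribute is omitted:

/* new canonical name (value from the RTE_BUILD_BUG_ON above) */
#define RTE_MBUF_F_RX_FDIR	(1ULL << 2)
/* compatibility alias kept for the transition */
#define PKT_RX_FDIR		RTE_MBUF_F_RX_FDIR

int main(void)
{
	return (PKT_RX_FDIR == RTE_MBUF_F_RX_FDIR) ? 0 : 1;
}
)
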
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 7a2220daa4..edf8d6fcd8 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -10,11 +10,10 @@
 #include "ice_rxtx.h"
 #include "ice_rxtx_vec_common.h"
 
-#define ICE_TX_CKSUM_OFFLOAD_MASK (		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG |		 \
-		PKT_TX_OUTER_IP_CKSUM)
+#define ICE_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG |		 \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 
 /* Offset of mbuf dynamic field for protocol extraction data */
 int rte_net_ice_dynfield_proto_xtr_metadata_offs = -1;
@@ -88,13 +87,13 @@ ice_rxd_to_pkt_fields_by_comms_generic(__rte_unused struct ice_rx_queue *rxq,
 	uint16_t stat_err = rte_le_to_cpu_16(desc->status_error0);
 
 	if (likely(stat_err & (1 << ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 #endif
@@ -112,14 +111,14 @@ ice_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct ice_rx_queue *rxq,
 #endif
 
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 #endif
@@ -136,13 +135,13 @@ ice_rxd_to_pkt_fields_by_comms_aux_v1(struct ice_rx_queue *rxq,
 
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
@@ -178,13 +177,13 @@ ice_rxd_to_pkt_fields_by_comms_aux_v2(struct ice_rx_queue *rxq,
 
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
@@ -1506,27 +1505,27 @@ ice_rxd_error_to_pkt_flags(uint16_t stat_err0)
 		return 0;
 
 	if (likely(!(stat_err0 & ICE_RX_FLEX_ERR0_BITS))) {
-		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return flags;
 	}
 
 	if (unlikely(stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely(stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (unlikely(stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))
-		flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 
 	if (unlikely(stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S)))
-		flags |= PKT_RX_OUTER_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_OUTER_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
 
 	return flags;
 }
@@ -1536,7 +1535,7 @@ ice_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union ice_rx_flex_desc *rxdp)
 {
 	if (rte_le_to_cpu_16(rxdp->wb.status_error0) &
 	    (1 << ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S)) {
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci =
 			rte_le_to_cpu_16(rxdp->wb.l2tag1);
 		PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
@@ -1548,8 +1547,8 @@ ice_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union ice_rx_flex_desc *rxdp)
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	if (rte_le_to_cpu_16(rxdp->wb.status_error1) &
 	    (1 << ICE_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) {
-		mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
-				PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		mb->ol_flags |= RTE_MBUF_F_RX_QINQ_STRIPPED | RTE_MBUF_F_RX_QINQ |
+				RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_RX_VLAN;
 		mb->vlan_tci_outer = mb->vlan_tci;
 		mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd);
 		PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
@@ -1642,7 +1641,7 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq)
 				rxq->time_high =
 				   rte_le_to_cpu_32(rxdp[j].wb.flex_ts.ts_high);
 				mb->timesync = rxq->queue_id;
-				pkt_flags |= PKT_RX_IEEE1588_PTP;
+				pkt_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
 			}
 
 			mb->ol_flags |= pkt_flags;
@@ -1959,7 +1958,7 @@ ice_recv_scattered_pkts(void *rx_queue,
 			rxq->time_high =
 			   rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high);
 			first_seg->timesync = rxq->queue_id;
-			pkt_flags |= PKT_RX_IEEE1588_PTP;
+			pkt_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
 		}
 
 		first_seg->ol_flags |= pkt_flags;
@@ -2389,7 +2388,7 @@ ice_recv_pkts(void *rx_queue,
 			rxq->time_high =
 			   rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high);
 			rxm->timesync = rxq->queue_id;
-			pkt_flags |= PKT_RX_IEEE1588_PTP;
+			pkt_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
 		}
 
 		rxm->ol_flags |= pkt_flags;
@@ -2423,11 +2422,11 @@ ice_parse_tunneling_params(uint64_t ol_flags,
 			    uint32_t *cd_tunneling)
 {
 	/* EIPT: External (outer) IP header type */
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		*cd_tunneling |= ICE_TX_CTX_EIPT_IPV4;
-	else if (ol_flags & PKT_TX_OUTER_IPV4)
+	else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)
 		*cd_tunneling |= ICE_TX_CTX_EIPT_IPV4_NO_CSUM;
-	else if (ol_flags & PKT_TX_OUTER_IPV6)
+	else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)
 		*cd_tunneling |= ICE_TX_CTX_EIPT_IPV6;
 
 	/* EIPLEN: External (outer) IP header length, in DWords */
@@ -2435,16 +2434,16 @@ ice_parse_tunneling_params(uint64_t ol_flags,
 		ICE_TXD_CTX_QW0_EIPLEN_S;
 
 	/* L4TUNT: L4 Tunneling Type */
-	switch (ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_IPIP:
+	switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_IPIP:
 		/* for non UDP / GRE tunneling, set to 00b */
 		break;
-	case PKT_TX_TUNNEL_VXLAN:
-	case PKT_TX_TUNNEL_GTP:
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_GTP:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		*cd_tunneling |= ICE_TXD_CTX_UDP_TUNNELING;
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		*cd_tunneling |= ICE_TXD_CTX_GRE_TUNNELING;
 		break;
 	default:
@@ -2481,7 +2480,7 @@ ice_txd_enable_checksum(uint64_t ol_flags,
 			union ice_tx_offload tx_offload)
 {
 	/* Set MACLEN */
-	if (ol_flags & PKT_TX_TUNNEL_MASK)
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 		*td_offset |= (tx_offload.outer_l2_len >> 1)
 			<< ICE_TX_DESC_LEN_MACLEN_S;
 	else
@@ -2489,21 +2488,21 @@ ice_txd_enable_checksum(uint64_t ol_flags,
 			<< ICE_TX_DESC_LEN_MACLEN_S;
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      ICE_TX_DESC_LEN_IPLEN_S;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      ICE_TX_DESC_LEN_IPLEN_S;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      ICE_TX_DESC_LEN_IPLEN_S;
 	}
 
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (tx_offload.l4_len >> 2) <<
 			      ICE_TX_DESC_LEN_L4_LEN_S;
@@ -2511,18 +2510,18 @@ ice_txd_enable_checksum(uint64_t ol_flags,
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 			      ICE_TX_DESC_LEN_L4_LEN_S;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_SCTP;
 		*td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 			      ICE_TX_DESC_LEN_L4_LEN_S;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_UDP;
 		*td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 			      ICE_TX_DESC_LEN_L4_LEN_S;
@@ -2600,11 +2599,11 @@ ice_build_ctob(uint32_t td_cmd,
 static inline uint16_t
 ice_calc_context_desc(uint64_t flags)
 {
-	static uint64_t mask = PKT_TX_TCP_SEG |
-		PKT_TX_QINQ |
-		PKT_TX_OUTER_IP_CKSUM |
-		PKT_TX_TUNNEL_MASK |
-		PKT_TX_IEEE1588_TMST;
+	static uint64_t mask = RTE_MBUF_F_TX_TCP_SEG |
+		RTE_MBUF_F_TX_QINQ |
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+		RTE_MBUF_F_TX_TUNNEL_MASK |
+		RTE_MBUF_F_TX_IEEE1588_TMST;
 
 	return (flags & mask) ? 1 : 0;
 }
@@ -2622,7 +2621,7 @@ ice_set_tso_ctx(struct rte_mbuf *mbuf, union ice_tx_offload tx_offload)
 	}
 
 	hdr_len = tx_offload.l2_len + tx_offload.l3_len + tx_offload.l4_len;
-	hdr_len += (mbuf->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 		   tx_offload.outer_l2_len + tx_offload.outer_l3_len : 0;
 
 	cd_cmd = ICE_TX_CTX_DESC_TSO;
@@ -2709,7 +2708,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 * the mbuf data size exceeds max data size that hw allows
 		 * per tx desc.
 		 */
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			nb_used = (uint16_t)(ice_calc_pkt_desc(tx_pkt) +
 					     nb_ctx);
 		else
@@ -2738,14 +2737,14 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+		if (ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 			td_cmd |= ICE_TX_DESC_CMD_IL2TAG1;
 			td_tag = tx_pkt->vlan_tci;
 		}
 
 		/* Fill in tunneling parameters if necessary */
 		cd_tunneling_params = 0;
-		if (ol_flags & PKT_TX_TUNNEL_MASK)
+		if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 			ice_parse_tunneling_params(ol_flags, tx_offload,
 						   &cd_tunneling_params);
 
@@ -2769,10 +2768,10 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				txe->mbuf = NULL;
 			}
 
-			if (ol_flags & PKT_TX_TCP_SEG)
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 				cd_type_cmd_tso_mss |=
 					ice_set_tso_ctx(tx_pkt, tx_offload);
-			else if (ol_flags & PKT_TX_IEEE1588_TMST)
+			else if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 				cd_type_cmd_tso_mss |=
 					((uint64_t)ICE_TX_CTX_DESC_TSYN <<
 					ICE_TXD_CTX_QW1_CMD_S);
@@ -2781,7 +2780,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				rte_cpu_to_le_32(cd_tunneling_params);
 
 			/* TX context descriptor based double VLAN insert */
-			if (ol_flags & PKT_TX_QINQ) {
+			if (ol_flags & RTE_MBUF_F_TX_QINQ) {
 				cd_l2tag2 = tx_pkt->vlan_tci_outer;
 				cd_type_cmd_tso_mss |=
 					((uint64_t)ICE_TX_CTX_DESC_IL2TAG2 <<
@@ -2809,7 +2808,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			slen = m_seg->data_len;
 			buf_dma_addr = rte_mbuf_data_iova(m_seg);
 
-			while ((ol_flags & PKT_TX_TCP_SEG) &&
+			while ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
 				unlikely(slen > ICE_MAX_DATA_PER_TXD)) {
 				txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
 				txd->cmd_type_offset_bsz =
@@ -3398,7 +3397,7 @@ ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		m = tx_pkts[i];
 		ol_flags = m->ol_flags;
 
-		if (ol_flags & PKT_TX_TCP_SEG &&
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG &&
 		    (m->tso_segsz < ICE_MIN_TSO_MSS ||
 		     m->tso_segsz > ICE_MAX_TSO_MSS ||
 		     m->pkt_len > ICE_MAX_TSO_FRAME_SIZE)) {
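
(The renames in ice_rxd_error_to_pkt_flags() above leave its decode unchanged:
each status bit independently selects either the good or the bad flag, with
the two outer-checksum bits only ever adding a flag. A condensed sketch with
stand-in values; the real ICE_RX_FLEX_DESC_* shifts and mbuf bits differ:

#include <stdint.h>

#define ERR_IPE		(1u << 0)	/* stand-in: inner IP checksum error */
#define ERR_L4E		(1u << 1)	/* stand-in: inner L4 checksum error */
#define F_IP_BAD	(1ull << 4)
#define F_IP_GOOD	(1ull << 7)
#define F_L4_BAD	(1ull << 3)
#define F_L4_GOOD	(1ull << 8)

static uint64_t decode_cksum_flags(uint32_t stat_err)
{
	uint64_t flags = 0;

	flags |= (stat_err & ERR_IPE) ? F_IP_BAD : F_IP_GOOD;
	flags |= (stat_err & ERR_L4E) ? F_L4_BAD : F_L4_GOOD;
	return flags;
}

int main(void)
{
	return decode_cksum_flags(0) == (F_IP_GOOD | F_L4_GOOD) ? 0 : 1;
}
)
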
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 9725ac0180..c20927dc5c 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -20,10 +20,10 @@ static __rte_always_inline __m256i
 ice_flex_rxd_to_fdir_flags_vec_avx2(const __m256i fdir_id0_7)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m256i pkt_fdir_bit = _mm256_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m256i pkt_fdir_bit = _mm256_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m256i fdir_mis_mask = _mm256_set1_epi32(FDID_MIS_MAGIC);
 	__m256i fdir_mask = _mm256_cmpeq_epi32(fdir_id0_7,
@@ -142,82 +142,82 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 * bits.  This gives us the l3_l4 flags.
 	 */
 	const __m256i l3_l4_flags_shuf =
-		_mm256_set_epi8((PKT_RX_OUTER_L4_CKSUM_BAD >> 20 |
-		 PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		  PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
+		_mm256_set_epi8((RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 |
+		 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		  RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 		/**
 		 * second 128-bits
 		 * shift right 20 bits to use the low two bits to indicate
 		 * outer checksum status
 		 * shift right 1 bit to make sure it does not exceed 255
 		 */
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1);
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 	const __m256i cksum_mask =
-		 _mm256_set1_epi32(PKT_RX_IP_CKSUM_MASK |
-				   PKT_RX_L4_CKSUM_MASK |
-				   PKT_RX_OUTER_IP_CKSUM_BAD |
-				   PKT_RX_OUTER_L4_CKSUM_MASK);
+		 _mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK |
+				   RTE_MBUF_F_RX_L4_CKSUM_MASK |
+				   RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+				   RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK);
 	/**
 	 * data to be shuffled by result of flag mask, shifted down 12.
 	 * If RSS(bit12)/VLAN(bit13) are set,
@@ -226,16 +226,16 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	const __m256i rss_vlan_flags_shuf = _mm256_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0,
 			/* end up 128-bits */
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	RTE_SET_USED(avx_aligned); /* for 32B descriptors we don't use this */
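
(The ">> 20" terms in the l3_l4_flags_shuf table above are the one non-obvious
part of this hunk: the outer-L4 flags live at bits 21/22, far too high for a
byte-wide shuffle entry, so they are packed into the low bits of each entry
and re-expanded after the shuffle. The arithmetic checked scalar-wise, with
the 21.11 bit positions as an assumption:

#include <assert.h>
#include <stdint.h>

#define F_RX_OUTER_L4_CKSUM_GOOD (1ull << 22)	/* assumed position */
#define F_RX_IP_CKSUM_GOOD	 (1ull << 7)	/* assumed position */

int main(void)
{
	/* pack: the whole flag combination fits in one shuffle-table byte */
	uint8_t entry = (uint8_t)
		(((F_RX_OUTER_L4_CKSUM_GOOD >> 20) | F_RX_IP_CKSUM_GOOD) >> 1);

	/* unpack: the low two bits go back up by 21, the rest by 1 */
	uint64_t outer = (uint64_t)(entry & 0x3) << 21;
	uint64_t inner = (uint64_t)(entry & ~0x3u) << 1;

	assert((outer | inner) ==
	       (F_RX_OUTER_L4_CKSUM_GOOD | F_RX_IP_CKSUM_GOOD));
	return 0;
}
)
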
 
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 5bba9887d2..1fe3de5aa2 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -135,10 +135,10 @@ static inline __m256i
 ice_flex_rxd_to_fdir_flags_vec_avx512(const __m256i fdir_id0_7)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m256i pkt_fdir_bit = _mm256_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m256i pkt_fdir_bit = _mm256_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m256i fdir_mis_mask = _mm256_set1_epi32(FDID_MIS_MAGIC);
 	__m256i fdir_mask = _mm256_cmpeq_epi32(fdir_id0_7,
@@ -242,82 +242,82 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
 	 * bits.  This gives use the l3_l4 flags.
 	 */
 	const __m256i l3_l4_flags_shuf =
-		_mm256_set_epi8((PKT_RX_OUTER_L4_CKSUM_BAD >> 20 |
-		 PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		  PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
+		_mm256_set_epi8((RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 |
+		 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		  RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 		/**
 		 * second 128-bits
 		 * shift right 20 bits to use the low two bits to indicate
 		 * outer checksum status
 		 * shift right 1 bit to make sure it does not exceed 255
 		 */
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1);
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 	const __m256i cksum_mask =
-		 _mm256_set1_epi32(PKT_RX_IP_CKSUM_MASK |
-				   PKT_RX_L4_CKSUM_MASK |
-				   PKT_RX_OUTER_IP_CKSUM_BAD |
-				   PKT_RX_OUTER_L4_CKSUM_MASK);
+		 _mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK |
+				   RTE_MBUF_F_RX_L4_CKSUM_MASK |
+				   RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+				   RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK);
 	/**
 	 * data to be shuffled by result of flag mask, shifted down 12.
 	 * If RSS(bit12)/VLAN(bit13) are set,
@@ -326,16 +326,16 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
 	const __m256i rss_vlan_flags_shuf = _mm256_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0,
 			/* 2nd 128-bits */
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	uint16_t i, received;
 
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 5b5250565e..8983b6bf2c 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -568,33 +568,33 @@ ice_txd_enable_offload(struct rte_mbuf *tx_pkt,
 			ICE_TX_DESC_LEN_MACLEN_S;
 
 	/* Enable L3 checksum offload */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			ICE_TX_DESC_LEN_IPLEN_S;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			ICE_TX_DESC_LEN_IPLEN_S;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			ICE_TX_DESC_LEN_IPLEN_S;
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
 		td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 			ICE_TX_DESC_LEN_L4_LEN_S;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_SCTP;
 		td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 			ICE_TX_DESC_LEN_L4_LEN_S;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_UDP;
 		td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 			ICE_TX_DESC_LEN_L4_LEN_S;
@@ -606,7 +606,7 @@ ice_txd_enable_offload(struct rte_mbuf *tx_pkt,
 	*txd_hi |= ((uint64_t)td_offset) << ICE_TXD_QW1_OFFSET_S;
 
 	/* Tx VLAN/QINQ insertion Offload */
-	if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+	if (ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 		td_cmd |= ICE_TX_DESC_CMD_IL2TAG1;
 		*txd_hi |= ((uint64_t)tx_pkt->vlan_tci <<
 				ICE_TXD_QW1_L2TAG1_S);
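
(One detail the L4 switch statements in this header, and the iavf equivalent
above, rely on: RTE_MBUF_F_TX_L4_MASK is a two-bit enumeration rather than a
set of independent bits, so exactly one checksum kind can be requested per
packet and a switch on the masked value is the natural decode. A sketch with
assumed encodings:

#include <stdint.h>

#define L4_SHIFT	52		/* assumed field position */
#define L4_MASK		(3ull << L4_SHIFT)
#define L4_TCP_CKSUM	(1ull << L4_SHIFT)
#define L4_SCTP_CKSUM	(2ull << L4_SHIFT)
#define L4_UDP_CKSUM	(3ull << L4_SHIFT)

static const char *l4_cksum_kind(uint64_t ol_flags)
{
	switch (ol_flags & L4_MASK) {
	case L4_TCP_CKSUM:	return "tcp";
	case L4_SCTP_CKSUM:	return "sctp";
	case L4_UDP_CKSUM:	return "udp";
	default:		return "none";	/* no-cksum encoding is 0 */
	}
}

int main(void)
{
	return l4_cksum_kind(L4_UDP_CKSUM)[0] == 'u' ? 0 : 1;
}
)
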
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 653bd28b41..df1347e64d 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -14,10 +14,10 @@ static inline __m128i
 ice_flex_rxd_to_fdir_flags_vec(const __m128i fdir_id0_3)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m128i pkt_fdir_bit = _mm_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m128i pkt_fdir_bit = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m128i fdir_mis_mask = _mm_set1_epi32(FDID_MIS_MAGIC);
 	__m128i fdir_mask = _mm_cmpeq_epi32(fdir_id0_3,
@@ -116,72 +116,72 @@ ice_rx_desc_to_olflags_v(struct ice_rx_queue *rxq, __m128i descs[4],
 	 */
 	const __m128i desc_mask = _mm_set_epi32(0x30f0, 0x30f0,
 						0x30f0, 0x30f0);
-	const __m128i cksum_mask = _mm_set_epi32(PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD);
+	const __m128i cksum_mask = _mm_set_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	/* map the checksum, rss and vlan fields to the checksum, rss
 	 * and vlan flag
 	 */
 	const __m128i cksum_flags =
-		_mm_set_epi8((PKT_RX_OUTER_L4_CKSUM_BAD >> 20 |
-		 PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		  PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
+		_mm_set_epi8((RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 |
+		 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		  RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 		/**
 		 * shift right 20 bits to use the low two bits to indicate
 		 * outer checksum status
 		 * shift right 1 bit to make sure it not exceed 255
 		 */
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1);
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 
 	const __m128i rss_vlan_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	/* merge 4 descriptors */
 	flags = _mm_unpackhi_epi32(descs[0], descs[1]);
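
A note on the ">> 20" / ">> 1" packing in the cksum_flags table above:
assuming the rte_mbuf_core.h layout (inner IP/L4 status bits below bit 9,
outer L4 status at bits 21-22), shifting the outer flags right by 20 folds
them next to the inner ones, and the final right shift by 1 keeps every
combined value within the 8 bits that _mm_set_epi8() accepts. An
illustrative compile-time check of that layout assumption:

#include <stdint.h>
#include <rte_common.h>
#include <rte_mbuf_core.h>

static inline void
cksum_packing_check(void)
{
	/* Worst case is all-GOOD: the highest bit of each sub-field. */
	RTE_BUILD_BUG_ON(((RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 |
			   RTE_MBUF_F_RX_L4_CKSUM_GOOD |
			   RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1) > UINT8_MAX);
}
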
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 97656b39fd..7a5cb2f371 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -74,17 +74,16 @@
 #define IGC_TSO_MAX_MSS			9216
 
 /* Bit Mask to indicate what bits required for building TX context */
-#define IGC_TX_OFFLOAD_MASK (		\
-		PKT_TX_OUTER_IPV4 |	\
-		PKT_TX_IPV6 |		\
-		PKT_TX_IPV4 |		\
-		PKT_TX_VLAN |	\
-		PKT_TX_IP_CKSUM |	\
-		PKT_TX_L4_MASK |	\
-		PKT_TX_TCP_SEG |	\
-		PKT_TX_UDP_SEG)
-
-#define IGC_TX_OFFLOAD_SEG	(PKT_TX_TCP_SEG | PKT_TX_UDP_SEG)
+#define IGC_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV4 |	\
+		RTE_MBUF_F_TX_IPV6 |		\
+		RTE_MBUF_F_TX_IPV4 |		\
+		RTE_MBUF_F_TX_VLAN |	\
+		RTE_MBUF_F_TX_IP_CKSUM |	\
+		RTE_MBUF_F_TX_L4_MASK |	\
+		RTE_MBUF_F_TX_TCP_SEG |	\
+		RTE_MBUF_F_TX_UDP_SEG)
+
+#define IGC_TX_OFFLOAD_SEG	(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)
 
 #define IGC_ADVTXD_POPTS_TXSM	0x00000200 /* L4 Checksum offload request */
 #define IGC_ADVTXD_POPTS_IXSM	0x00000100 /* IP Checksum offload request */
@@ -92,7 +91,7 @@
 /* L4 Packet TYPE of Reserved */
 #define IGC_ADVTXD_TUCMD_L4T_RSV	0x00001800
 
-#define IGC_TX_OFFLOAD_NOTSUP_MASK (PKT_TX_OFFLOAD_MASK ^ IGC_TX_OFFLOAD_MASK)
+#define IGC_TX_OFFLOAD_NOTSUP_MASK (RTE_MBUF_F_TX_OFFLOAD_MASK ^ IGC_TX_OFFLOAD_MASK)
 
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
@@ -215,16 +214,18 @@ struct igc_tx_queue {
 static inline uint64_t
 rx_desc_statuserr_to_pkt_flags(uint32_t statuserr)
 {
-	static uint64_t l4_chksum_flags[] = {0, 0, PKT_RX_L4_CKSUM_GOOD,
-			PKT_RX_L4_CKSUM_BAD};
+	static uint64_t l4_chksum_flags[] = {0, 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+			RTE_MBUF_F_RX_L4_CKSUM_BAD};
 
-	static uint64_t l3_chksum_flags[] = {0, 0, PKT_RX_IP_CKSUM_GOOD,
-			PKT_RX_IP_CKSUM_BAD};
+	static uint64_t l3_chksum_flags[] = {0, 0,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD};
 	uint64_t pkt_flags = 0;
 	uint32_t tmp;
 
 	if (statuserr & IGC_RXD_STAT_VP)
-		pkt_flags |= PKT_RX_VLAN_STRIPPED;
+		pkt_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 
 	tmp = !!(statuserr & (IGC_RXD_STAT_L4CS | IGC_RXD_STAT_UDPCS));
 	tmp = (tmp << 1) | (uint32_t)!!(statuserr & IGC_RXD_EXT_ERR_L4E);
@@ -332,10 +333,10 @@ rx_desc_get_pkt_info(struct igc_rx_queue *rxq, struct rte_mbuf *rxm,
 	rxm->vlan_tci = rte_le_to_cpu_16(rxd->wb.upper.vlan);
 
 	pkt_flags = (hlen_type_rss & IGC_RXD_RSS_TYPE_MASK) ?
-			PKT_RX_RSS_HASH : 0;
+			RTE_MBUF_F_RX_RSS_HASH : 0;
 
 	if (hlen_type_rss & IGC_RXD_VPKT)
-		pkt_flags |= PKT_RX_VLAN;
+		pkt_flags |= RTE_MBUF_F_RX_VLAN;
 
 	pkt_flags |= rx_desc_statuserr_to_pkt_flags(staterr);
 
@@ -1468,7 +1469,7 @@ check_tso_para(uint64_t ol_req, union igc_tx_offload ol_para)
 	if (ol_para.tso_segsz > IGC_TSO_MAX_MSS || ol_para.l2_len +
 		ol_para.l3_len + ol_para.l4_len > IGC_TSO_MAX_HDRLEN) {
 		ol_req &= ~IGC_TX_OFFLOAD_SEG;
-		ol_req |= PKT_TX_TCP_CKSUM;
+		ol_req |= RTE_MBUF_F_TX_TCP_CKSUM;
 	}
 	return ol_req;
 }
@@ -1530,20 +1531,20 @@ igc_set_xmit_ctx(struct igc_tx_queue *txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx = (ctx_curr << IGC_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		tx_offload_mask.vlan_tci = 0xffff;
 
 	/* check if TCP segmentation required for this packet */
 	if (ol_flags & IGC_TX_OFFLOAD_SEG) {
 		/* implies IP cksum in IPv4 */
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = IGC_ADVTXD_TUCMD_IPV4 |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 		else
 			type_tucmd_mlhl = IGC_ADVTXD_TUCMD_IPV6 |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_TCP;
 		else
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_UDP;
@@ -1554,26 +1555,26 @@ igc_set_xmit_ctx(struct igc_tx_queue *txq,
 		mss_l4len_idx |= (uint32_t)tx_offload.l4_len <<
 				IGC_ADVTXD_L4LEN_SHIFT;
 	} else { /* no TSO, check if hardware checksum is needed */
-		if (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK))
+		if (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK))
 			tx_offload_mask.data |= TX_MACIP_LEN_CMP_MASK;
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = IGC_ADVTXD_TUCMD_IPV4;
 
-		switch (ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_TCP_CKSUM:
+		switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_TCP |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= (uint32_t)sizeof(struct rte_tcp_hdr)
 				<< IGC_ADVTXD_L4LEN_SHIFT;
 			break;
-		case PKT_TX_UDP_CKSUM:
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_UDP |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= (uint32_t)sizeof(struct rte_udp_hdr)
 				<< IGC_ADVTXD_L4LEN_SHIFT;
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_SCTP |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= (uint32_t)sizeof(struct rte_sctp_hdr)
@@ -1604,7 +1605,7 @@ tx_desc_vlan_flags_to_cmdtype(uint64_t ol_flags)
 	uint32_t cmdtype;
 	static uint32_t vlan_cmd[2] = {0, IGC_ADVTXD_DCMD_VLE};
 	static uint32_t tso_cmd[2] = {0, IGC_ADVTXD_DCMD_TSE};
-	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN) != 0];
+	cmdtype = vlan_cmd[(ol_flags & RTE_MBUF_F_TX_VLAN) != 0];
 	cmdtype |= tso_cmd[(ol_flags & IGC_TX_OFFLOAD_SEG) != 0];
 	return cmdtype;
 }
@@ -1616,8 +1617,8 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 	static const uint32_t l3_olinfo[2] = {0, IGC_ADVTXD_POPTS_IXSM};
 	uint32_t tmp;
 
-	tmp  = l4_olinfo[(ol_flags & PKT_TX_L4_MASK)  != PKT_TX_L4_NO_CKSUM];
-	tmp |= l3_olinfo[(ol_flags & PKT_TX_IP_CKSUM) != 0];
+	tmp  = l4_olinfo[(ol_flags & RTE_MBUF_F_TX_L4_MASK)  != RTE_MBUF_F_TX_L4_NO_CKSUM];
+	tmp |= l3_olinfo[(ol_flags & RTE_MBUF_F_TX_IP_CKSUM) != 0];
 	tmp |= l4_olinfo[(ol_flags & IGC_TX_OFFLOAD_SEG) != 0];
 	return tmp;
 }
@@ -1774,7 +1775,7 @@ igc_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 * Timer 0 should be used to for packet timestamping,
 		 * sample the packet timestamp to reg 0
 		 */
-		if (ol_flags & PKT_TX_IEEE1588_TMST)
+		if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 			cmd_type_len |= IGC_ADVTXD_MAC_TSTAMP;
 
 		if (tx_ol_req) {
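
tx_desc_vlan_flags_to_cmdtype() and tx_desc_cksum_flags_to_olinfo() above
rely on a small branch-free idiom: derive a 0/1 predicate from ol_flags and
use it to index a two-entry table. A generic sketch of the pattern, where
HW_L4_CSUM_BIT is a hypothetical descriptor bit, not a real igc define:

#include <rte_mbuf_core.h>

#define HW_L4_CSUM_BIT 0x200	/* hypothetical hardware descriptor bit */

static inline uint32_t
l4_csum_olinfo(uint64_t ol_flags)
{
	static const uint32_t map[2] = {0, HW_L4_CSUM_BIT};

	/* The comparison yields 0 or 1, so the lookup replaces a branch. */
	return map[(ol_flags & RTE_MBUF_F_TX_L4_MASK) !=
		   RTE_MBUF_F_TX_L4_NO_CKSUM];
}
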
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index 001a368856..fa77ca4327 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -257,7 +257,7 @@ ionic_tx_tcp_pseudo_csum(struct rte_mbuf *txm)
 	struct rte_tcp_hdr *tcp_hdr = (struct rte_tcp_hdr *)
 		(l3_hdr + txm->l3_len);
 
-	if (txm->ol_flags & PKT_TX_IP_CKSUM) {
+	if (txm->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		struct rte_ipv4_hdr *ipv4_hdr = (struct rte_ipv4_hdr *)l3_hdr;
 		ipv4_hdr->hdr_checksum = 0;
 		tcp_hdr->cksum = 0;
@@ -278,7 +278,7 @@ ionic_tx_tcp_inner_pseudo_csum(struct rte_mbuf *txm)
 	struct rte_tcp_hdr *tcp_hdr = (struct rte_tcp_hdr *)
 		(l3_hdr + txm->l3_len);
 
-	if (txm->ol_flags & PKT_TX_IPV4) {
+	if (txm->ol_flags & RTE_MBUF_F_TX_IPV4) {
 		struct rte_ipv4_hdr *ipv4_hdr = (struct rte_ipv4_hdr *)l3_hdr;
 		ipv4_hdr->hdr_checksum = 0;
 		tcp_hdr->cksum = 0;
@@ -355,14 +355,14 @@ ionic_tx_tso(struct ionic_tx_qcq *txq, struct rte_mbuf *txm)
 	uint32_t offset = 0;
 	bool start, done;
 	bool encap;
-	bool has_vlan = !!(txm->ol_flags & PKT_TX_VLAN);
+	bool has_vlan = !!(txm->ol_flags & RTE_MBUF_F_TX_VLAN);
 	uint16_t vlan_tci = txm->vlan_tci;
 	uint64_t ol_flags = txm->ol_flags;
 
-	encap = ((ol_flags & PKT_TX_OUTER_IP_CKSUM) ||
-		(ol_flags & PKT_TX_OUTER_UDP_CKSUM)) &&
-		((ol_flags & PKT_TX_OUTER_IPV4) ||
-		(ol_flags & PKT_TX_OUTER_IPV6));
+	encap = ((ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) ||
+		 (ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) &&
+		((ol_flags & RTE_MBUF_F_TX_OUTER_IPV4) ||
+		 (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6));
 
 	/* Preload inner-most TCP csum field with IP pseudo hdr
 	 * calculated with IP length set to zero.  HW will later
@@ -477,15 +477,15 @@ ionic_tx(struct ionic_tx_qcq *txq, struct rte_mbuf *txm)
 	desc = &desc_base[q->head_idx];
 	info = IONIC_INFO_PTR(q, q->head_idx);
 
-	if ((ol_flags & PKT_TX_IP_CKSUM) &&
+	if ((ol_flags & RTE_MBUF_F_TX_IP_CKSUM) &&
 	    (txq->flags & IONIC_QCQ_F_CSUM_L3)) {
 		opcode = IONIC_TXQ_DESC_OPCODE_CSUM_HW;
 		flags |= IONIC_TXQ_DESC_FLAG_CSUM_L3;
 	}
 
-	if (((ol_flags & PKT_TX_TCP_CKSUM) &&
+	if (((ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) &&
 	     (txq->flags & IONIC_QCQ_F_CSUM_TCP)) ||
-	    ((ol_flags & PKT_TX_UDP_CKSUM) &&
+	    ((ol_flags & RTE_MBUF_F_TX_UDP_CKSUM) &&
 	     (txq->flags & IONIC_QCQ_F_CSUM_UDP))) {
 		opcode = IONIC_TXQ_DESC_OPCODE_CSUM_HW;
 		flags |= IONIC_TXQ_DESC_FLAG_CSUM_L4;
@@ -494,11 +494,11 @@ ionic_tx(struct ionic_tx_qcq *txq, struct rte_mbuf *txm)
 	if (opcode == IONIC_TXQ_DESC_OPCODE_CSUM_NONE)
 		stats->no_csum++;
 
-	has_vlan = (ol_flags & PKT_TX_VLAN);
-	encap = ((ol_flags & PKT_TX_OUTER_IP_CKSUM) ||
-			(ol_flags & PKT_TX_OUTER_UDP_CKSUM)) &&
-			((ol_flags & PKT_TX_OUTER_IPV4) ||
-			(ol_flags & PKT_TX_OUTER_IPV6));
+	has_vlan = (ol_flags & RTE_MBUF_F_TX_VLAN);
+	encap = ((ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) ||
+			(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) &&
+			((ol_flags & RTE_MBUF_F_TX_OUTER_IPV4) ||
+			 (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6));
 
 	flags |= has_vlan ? IONIC_TXQ_DESC_FLAG_VLAN : 0;
 	flags |= encap ? IONIC_TXQ_DESC_FLAG_ENCAP : 0;
@@ -555,7 +555,7 @@ ionic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			rte_prefetch0(&q->info[next_q_head_idx]);
 		}
 
-		if (tx_pkts[nb_tx]->ol_flags & PKT_TX_TCP_SEG)
+		if (tx_pkts[nb_tx]->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			err = ionic_tx_tso(txq, tx_pkts[nb_tx]);
 		else
 			err = ionic_tx(txq, tx_pkts[nb_tx]);
@@ -585,16 +585,15 @@ ionic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
  *
  **********************************************************************/
 
-#define IONIC_TX_OFFLOAD_MASK (	\
-	PKT_TX_IPV4 |		\
-	PKT_TX_IPV6 |		\
-	PKT_TX_VLAN |		\
-	PKT_TX_IP_CKSUM |	\
-	PKT_TX_TCP_SEG |	\
-	PKT_TX_L4_MASK)
+#define IONIC_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_IPV4 |		\
+	RTE_MBUF_F_TX_IPV6 |		\
+	RTE_MBUF_F_TX_VLAN |		\
+	RTE_MBUF_F_TX_IP_CKSUM |	\
+	RTE_MBUF_F_TX_TCP_SEG |	\
+	RTE_MBUF_F_TX_L4_MASK)
 
 #define IONIC_TX_OFFLOAD_NOTSUP_MASK \
-	(PKT_TX_OFFLOAD_MASK ^ IONIC_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IONIC_TX_OFFLOAD_MASK)
 
 uint16_t
 ionic_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -840,30 +839,30 @@ ionic_rx_clean(struct ionic_rx_qcq *rxq,
 	}
 
 	/* RSS */
-	pkt_flags |= PKT_RX_RSS_HASH;
+	pkt_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	rxm->hash.rss = cq_desc->rss_hash;
 
 	/* Vlan Strip */
 	if (cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_VLAN) {
-		pkt_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		pkt_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		rxm->vlan_tci = cq_desc->vlan_tci;
 	}
 
 	/* Checksum */
 	if (cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_CALC) {
 		if (cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_IP_OK)
-			pkt_flags |= PKT_RX_IP_CKSUM_GOOD;
+			pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		else if (cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_IP_BAD)
-			pkt_flags |= PKT_RX_IP_CKSUM_BAD;
+			pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 		if ((cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_TCP_OK) ||
 			(cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_UDP_OK))
-			pkt_flags |= PKT_RX_L4_CKSUM_GOOD;
+			pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		else if ((cq_desc->csum_flags &
 				IONIC_RXQ_COMP_CSUM_F_TCP_BAD) ||
 				(cq_desc->csum_flags &
 				IONIC_RXQ_COMP_CSUM_F_UDP_BAD))
-			pkt_flags |= PKT_RX_L4_CKSUM_BAD;
+			pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	}
 
 	rxm->ol_flags = pkt_flags;
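
On the consuming side, the RX checksum status is a small enum encoded in
two bits (UNKNOWN, BAD, GOOD, NONE), so "no flag set" means the hardware
did not check the packet rather than that the checksum is good. A small
illustrative reader, not part of this patch:

#include <stdbool.h>
#include <rte_mbuf.h>

/* Return true when software must verify the L4 checksum itself. */
static bool
l4_needs_sw_verify(const struct rte_mbuf *m)
{
	uint64_t st = m->ol_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK;

	/* Anything but GOOD needs a software pass or a drop decision,
	 * depending on the application's policy. */
	return st != RTE_MBUF_F_RX_L4_CKSUM_GOOD;
}
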
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index a127dc0d86..3a5472a5bd 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1960,10 +1960,10 @@ ixgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 	rxq = dev->data->rx_queues[queue];
 
 	if (on) {
-		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		rxq->vlan_flags = RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
 	} else {
-		rxq->vlan_flags = PKT_RX_VLAN;
+		rxq->vlan_flags = RTE_MBUF_F_RX_VLAN;
 		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 0e3aec9906..1c80cd55d3 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -54,27 +54,26 @@
 #include "ixgbe_rxtx.h"
 
 #ifdef RTE_LIBRTE_IEEE1588
-#define IXGBE_TX_IEEE1588_TMST PKT_TX_IEEE1588_TMST
+#define IXGBE_TX_IEEE1588_TMST RTE_MBUF_F_TX_IEEE1588_TMST
 #else
 #define IXGBE_TX_IEEE1588_TMST 0
 #endif
 /* Bit Mask to indicate what bits required for building TX context */
-#define IXGBE_TX_OFFLOAD_MASK (			 \
-		PKT_TX_OUTER_IPV6 |		 \
-		PKT_TX_OUTER_IPV4 |		 \
-		PKT_TX_IPV6 |			 \
-		PKT_TX_IPV4 |			 \
-		PKT_TX_VLAN |		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG |		 \
-		PKT_TX_MACSEC |			 \
-		PKT_TX_OUTER_IP_CKSUM |		 \
-		PKT_TX_SEC_OFFLOAD |	 \
+#define IXGBE_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV6 |		 \
+		RTE_MBUF_F_TX_OUTER_IPV4 |		 \
+		RTE_MBUF_F_TX_IPV6 |			 \
+		RTE_MBUF_F_TX_IPV4 |			 \
+		RTE_MBUF_F_TX_VLAN |		 \
+		RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG |		 \
+		RTE_MBUF_F_TX_MACSEC |			 \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_SEC_OFFLOAD |	 \
 		IXGBE_TX_IEEE1588_TMST)
 
 #define IXGBE_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ IXGBE_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IXGBE_TX_OFFLOAD_MASK)
 
 #if 1
 #define RTE_PMD_USE_PREFETCH
@@ -384,14 +383,14 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx |= (ctx_idx << IXGBE_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN) {
+	if (ol_flags & RTE_MBUF_F_TX_VLAN) {
 		tx_offload_mask.vlan_tci |= ~0;
 	}
 
 	/* check if TCP segmentation required for this packet */
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		/* implies IP cksum in IPv4 */
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = IXGBE_ADVTXD_TUCMD_IPV4 |
 				IXGBE_ADVTXD_TUCMD_L4T_TCP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
@@ -407,14 +406,14 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 		mss_l4len_idx |= tx_offload.tso_segsz << IXGBE_ADVTXD_MSS_SHIFT;
 		mss_l4len_idx |= tx_offload.l4_len << IXGBE_ADVTXD_L4LEN_SHIFT;
 	} else { /* no TSO, check if hardware checksum is needed */
-		if (ol_flags & PKT_TX_IP_CKSUM) {
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 			type_tucmd_mlhl = IXGBE_ADVTXD_TUCMD_IPV4;
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 		}
 
-		switch (ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_UDP_CKSUM:
+		switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_UDP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_udp_hdr)
@@ -422,7 +421,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 			break;
-		case PKT_TX_TCP_CKSUM:
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_TCP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_tcp_hdr)
@@ -430,7 +429,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_SCTP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_sctp_hdr)
@@ -445,7 +444,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 		}
 	}
 
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) {
 		tx_offload_mask.outer_l2_len |= ~0;
 		tx_offload_mask.outer_l3_len |= ~0;
 		tx_offload_mask.l2_len |= ~0;
@@ -455,7 +454,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 			       << IXGBE_ADVTXD_TUNNEL_LEN;
 	}
 #ifdef RTE_LIB_SECURITY
-	if (ol_flags & PKT_TX_SEC_OFFLOAD) {
+	if (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
 		union ixgbe_crypto_tx_desc_md *md =
 				(union ixgbe_crypto_tx_desc_md *)mdata;
 		seqnum_seed |=
@@ -479,7 +478,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 
 	ctx_txd->type_tucmd_mlhl = rte_cpu_to_le_32(type_tucmd_mlhl);
 	vlan_macip_lens = tx_offload.l3_len;
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		vlan_macip_lens |= (tx_offload.outer_l2_len <<
 				    IXGBE_ADVTXD_MACLEN_SHIFT);
 	else
@@ -529,11 +528,11 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 {
 	uint32_t tmp = 0;
 
-	if ((ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM)
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) != RTE_MBUF_F_TX_L4_NO_CKSUM)
 		tmp |= IXGBE_ADVTXD_POPTS_TXSM;
-	if (ol_flags & PKT_TX_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 		tmp |= IXGBE_ADVTXD_POPTS_IXSM;
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		tmp |= IXGBE_ADVTXD_POPTS_TXSM;
 	return tmp;
 }
@@ -543,13 +542,13 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
 {
 	uint32_t cmdtype = 0;
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		cmdtype |= IXGBE_ADVTXD_DCMD_VLE;
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		cmdtype |= IXGBE_ADVTXD_DCMD_TSE;
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		cmdtype |= (1 << IXGBE_ADVTXD_OUTERIPCS_SHIFT);
-	if (ol_flags & PKT_TX_MACSEC)
+	if (ol_flags & RTE_MBUF_F_TX_MACSEC)
 		cmdtype |= IXGBE_ADVTXD_MAC_LINKSEC;
 	return cmdtype;
 }
@@ -678,7 +677,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 */
 		ol_flags = tx_pkt->ol_flags;
 #ifdef RTE_LIB_SECURITY
-		use_ipsec = txq->using_ipsec && (ol_flags & PKT_TX_SEC_OFFLOAD);
+		use_ipsec = txq->using_ipsec && (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD);
 #endif
 
 		/* If hardware offload required */
@@ -826,14 +825,14 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			IXGBE_ADVTXD_DCMD_IFCS | IXGBE_ADVTXD_DCMD_DEXT;
 
 #ifdef RTE_LIBRTE_IEEE1588
-		if (ol_flags & PKT_TX_IEEE1588_TMST)
+		if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 			cmd_type_len |= IXGBE_ADVTXD_MAC_1588;
 #endif
 
 		olinfo_status = 0;
 		if (tx_ol_req) {
 
-			if (ol_flags & PKT_TX_TCP_SEG) {
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 				/* when TSO is on, paylen in descriptor is the
 				 * not the packet len but the tcp payload len */
 				pkt_len -= (tx_offload.l2_len +
@@ -1433,14 +1432,14 @@ static inline uint64_t
 ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
 {
 	static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
-		0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-		0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
-		PKT_RX_RSS_HASH, 0, 0, 0,
-		0, 0, 0,  PKT_RX_FDIR,
+		0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+		0, RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH,
+		RTE_MBUF_F_RX_RSS_HASH, 0, 0, 0,
+		0, 0, 0,  RTE_MBUF_F_RX_FDIR,
 	};
 #ifdef RTE_LIBRTE_IEEE1588
 	static uint64_t ip_pkt_etqf_map[8] = {
-		0, 0, 0, PKT_RX_IEEE1588_PTP,
+		0, 0, 0, RTE_MBUF_F_RX_IEEE1588_PTP,
 		0, 0, 0, 0,
 	};
 
@@ -1468,7 +1467,7 @@ rx_desc_status_to_pkt_flags(uint32_t rx_status, uint64_t vlan_flags)
 
 #ifdef RTE_LIBRTE_IEEE1588
 	if (rx_status & IXGBE_RXD_STAT_TMST)
-		pkt_flags = pkt_flags | PKT_RX_IEEE1588_TMST;
+		pkt_flags = pkt_flags | RTE_MBUF_F_RX_IEEE1588_TMST;
 #endif
 	return pkt_flags;
 }
@@ -1484,10 +1483,10 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status, uint16_t pkt_info,
 	 * Bit 30: L4I, L4I integrity error
 	 */
 	static uint64_t error_to_pkt_flags_map[4] = {
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD
 	};
 	pkt_flags = error_to_pkt_flags_map[(rx_status >>
 		IXGBE_RXDADV_ERR_CKSUM_BIT) & IXGBE_RXDADV_ERR_CKSUM_MSK];
@@ -1499,18 +1498,18 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status, uint16_t pkt_info,
 	if ((rx_status & IXGBE_RXDADV_ERR_TCPE) &&
 	    (pkt_info & IXGBE_RXDADV_PKTTYPE_UDP) &&
 	    rx_udp_csum_zero_err)
-		pkt_flags &= ~PKT_RX_L4_CKSUM_BAD;
+		pkt_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_BAD;
 
 	if ((rx_status & IXGBE_RXD_STAT_OUTERIPCS) &&
 	    (rx_status & IXGBE_RXDADV_ERR_OUTERIPER)) {
-		pkt_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		pkt_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 	}
 
 #ifdef RTE_LIB_SECURITY
 	if (rx_status & IXGBE_RXD_STAT_SECP) {
-		pkt_flags |= PKT_RX_SEC_OFFLOAD;
+		pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
 		if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
-			pkt_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 	}
 #endif
 
@@ -1597,10 +1596,10 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
 				ixgbe_rxd_pkt_info_to_pkt_type
 					(pkt_info[j], rxq->pkt_type_mask);
 
-			if (likely(pkt_flags & PKT_RX_RSS_HASH))
+			if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH))
 				mb->hash.rss = rte_le_to_cpu_32(
 				    rxdp[j].wb.lower.hi_dword.rss);
-			else if (pkt_flags & PKT_RX_FDIR) {
+			else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 				mb->hash.fdir.hash = rte_le_to_cpu_16(
 				    rxdp[j].wb.lower.hi_dword.csum_ip.csum) &
 				    IXGBE_ATR_HASH_MASK;
@@ -1918,7 +1917,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->port = rxq->port_id;
 
 		pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
-		/* Only valid if PKT_RX_VLAN set in pkt_flags */
+		/* Only valid if RTE_MBUF_F_RX_VLAN set in pkt_flags */
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
 
 		pkt_flags = rx_desc_status_to_pkt_flags(staterr, vlan_flags);
@@ -1932,10 +1931,10 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			ixgbe_rxd_pkt_info_to_pkt_type(pkt_info,
 						       rxq->pkt_type_mask);
 
-		if (likely(pkt_flags & PKT_RX_RSS_HASH))
+		if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH))
 			rxm->hash.rss = rte_le_to_cpu_32(
 						rxd.wb.lower.hi_dword.rss);
-		else if (pkt_flags & PKT_RX_FDIR) {
+		else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 			rxm->hash.fdir.hash = rte_le_to_cpu_16(
 					rxd.wb.lower.hi_dword.csum_ip.csum) &
 					IXGBE_ATR_HASH_MASK;
@@ -2011,7 +2010,7 @@ ixgbe_fill_cluster_head_buf(
 
 	head->port = rxq->port_id;
 
-	/* The vlan_tci field is only valid when PKT_RX_VLAN is
+	/* The vlan_tci field is only valid when RTE_MBUF_F_RX_VLAN is
 	 * set in the pkt_flags field.
 	 */
 	head->vlan_tci = rte_le_to_cpu_16(desc->wb.upper.vlan);
@@ -2024,9 +2023,9 @@ ixgbe_fill_cluster_head_buf(
 	head->packet_type =
 		ixgbe_rxd_pkt_info_to_pkt_type(pkt_info, rxq->pkt_type_mask);
 
-	if (likely(pkt_flags & PKT_RX_RSS_HASH))
+	if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH))
 		head->hash.rss = rte_le_to_cpu_32(desc->wb.lower.hi_dword.rss);
-	else if (pkt_flags & PKT_RX_FDIR) {
+	else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 		head->hash.fdir.hash =
 			rte_le_to_cpu_16(desc->wb.lower.hi_dword.csum_ip.csum)
 							  & IXGBE_ATR_HASH_MASK;
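
The mbuf hash field written above is a union, so which member is valid
depends on the ol_flags the driver set alongside it. An illustrative
reader matching the ixgbe logic (not part of this patch):

#include <rte_mbuf.h>

static uint32_t
flow_hash_of(const struct rte_mbuf *m)
{
	if (m->ol_flags & RTE_MBUF_F_RX_RSS_HASH)
		return m->hash.rss;
	if (m->ol_flags & RTE_MBUF_F_RX_FDIR)
		return m->hash.fdir.hash;	/* valid when FDIR is set */
	return 0;	/* no hardware hash available */
}
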
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index c541f537c7..90b254ea26 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -105,10 +105,10 @@ desc_to_olflags_v(uint8x16x2_t sterr_tmp1, uint8x16x2_t sterr_tmp2,
 			0x00, 0x00, 0x00, 0x00};
 
 	const uint8x16_t rss_flags = {
-			0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, 0, 0, 0,
-			0, 0, 0, PKT_RX_FDIR};
+			0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			0, RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, 0, 0, 0,
+			0, 0, 0, RTE_MBUF_F_RX_FDIR};
 
 	/* mask everything except vlan present and l4/ip csum error */
 	const uint8x16_t vlan_csum_msk = {
@@ -123,23 +123,23 @@ desc_to_olflags_v(uint8x16x2_t sterr_tmp1, uint8x16x2_t sterr_tmp2,
 
 	/* map vlan present (0x8), IPE (0x2), L4E (0x1) to ol_flags */
 	const uint8x16_t vlan_csum_map_lo = {
-			PKT_RX_IP_CKSUM_GOOD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
 			0, 0, 0, 0,
-			vlan_flags | PKT_RX_IP_CKSUM_GOOD,
-			vlan_flags | PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-			vlan_flags | PKT_RX_IP_CKSUM_BAD,
-			vlan_flags | PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
+			vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_GOOD,
+			vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+			vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
 			0, 0, 0, 0};
 
 	const uint8x16_t vlan_csum_map_hi = {
-			PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
-			PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
 			0, 0, 0, 0,
-			PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
-			PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
 			0, 0, 0, 0};
 
 	/* change mask from 0x200(IXGBE_RXDADV_PKTTYPE_UDP) to 0x2 */
@@ -153,7 +153,7 @@ desc_to_olflags_v(uint8x16x2_t sterr_tmp1, uint8x16x2_t sterr_tmp2,
 			0, 0, 0, 0};
 
 	const uint8x16_t udp_csum_bad_shuf = {
-			0xFF, ~(uint8_t)PKT_RX_L4_CKSUM_BAD, 0, 0,
+			0xFF, ~(uint8_t)RTE_MBUF_F_RX_L4_CKSUM_BAD, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
@@ -194,7 +194,7 @@ desc_to_olflags_v(uint8x16x2_t sterr_tmp1, uint8x16x2_t sterr_tmp2,
 	vtag_lo = vorrq_u8(ptype, vtag_lo);
 
 	/* convert the UDP header present 0x2 to 0x1 for aligning with each
-	 * PKT_RX_L4_CKSUM_BAD value in low byte of 8 bits word ol_flag in
+	 * RTE_MBUF_F_RX_L4_CKSUM_BAD value in low byte of 8 bits word ol_flag in
 	 * vtag_lo (4x8). Then mask out the bad checksum value by shuffle and
 	 * bit-mask.
 	 */
@@ -337,7 +337,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	sw_ring = &rxq->sw_ring[rxq->rx_tail];
 
 	/* ensure these 2 flags are in the lower 8 bits */
-	RTE_BUILD_BUG_ON((PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED) > UINT8_MAX);
+	RTE_BUILD_BUG_ON((RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED) > UINT8_MAX);
 	vlan_flags = rxq->vlan_flags & UINT8_MAX;
 
 	/* A. load 4 packet in one loop
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index 1dea95e73b..1eed949495 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -108,9 +108,9 @@ desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
 	const __m128i ipsec_proc_msk  =
 			_mm_set1_epi32(IXGBE_RXDADV_IPSEC_STATUS_SECP);
 	const __m128i ipsec_err_flag  =
-			_mm_set1_epi32(PKT_RX_SEC_OFFLOAD_FAILED |
-				       PKT_RX_SEC_OFFLOAD);
-	const __m128i ipsec_proc_flag = _mm_set1_epi32(PKT_RX_SEC_OFFLOAD);
+			_mm_set1_epi32(RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED |
+				       RTE_MBUF_F_RX_SEC_OFFLOAD);
+	const __m128i ipsec_proc_flag = _mm_set1_epi32(RTE_MBUF_F_RX_SEC_OFFLOAD);
 
 	rearm = _mm_set_epi32(*rearm3, *rearm2, *rearm1, *rearm0);
 	sterr = _mm_set_epi32(_mm_extract_epi32(descs[3], 2),
@@ -148,10 +148,10 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
 			0x00FF, 0x00FF, 0x00FF, 0x00FF);
 
 	/* map rss type to rss hash flag */
-	const __m128i rss_flags = _mm_set_epi8(PKT_RX_FDIR, 0, 0, 0,
-			0, 0, 0, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH, 0,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, 0);
+	const __m128i rss_flags = _mm_set_epi8(RTE_MBUF_F_RX_FDIR, 0, 0, 0,
+			0, 0, 0, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	/* mask everything except vlan present and l4/ip csum error */
 	const __m128i vlan_csum_msk = _mm_set_epi16(
@@ -165,23 +165,23 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
 	/* map vlan present (0x8), IPE (0x2), L4E (0x1) to ol_flags */
 	const __m128i vlan_csum_map_lo = _mm_set_epi8(
 		0, 0, 0, 0,
-		vlan_flags | PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
-		vlan_flags | PKT_RX_IP_CKSUM_BAD,
-		vlan_flags | PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-		vlan_flags | PKT_RX_IP_CKSUM_GOOD,
+		vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_BAD,
+		vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_GOOD,
 		0, 0, 0, 0,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_GOOD);
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD);
 
 	const __m128i vlan_csum_map_hi = _mm_set_epi8(
 		0, 0, 0, 0,
-		0, PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
-		PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t),
+		0, RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t),
 		0, 0, 0, 0,
-		0, PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
-		PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t));
+		0, RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t));
 
 	/* mask everything except UDP header present if specified */
 	const __m128i udp_hdr_p_msk = _mm_set_epi16
@@ -190,7 +190,7 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
 
 	const __m128i udp_csum_bad_shuf = _mm_set_epi8
 		(0, 0, 0, 0, 0, 0, 0, 0,
-		 0, 0, 0, 0, 0, 0, ~(uint8_t)PKT_RX_L4_CKSUM_BAD, 0xFF);
+		 0, 0, 0, 0, 0, 0, ~(uint8_t)RTE_MBUF_F_RX_L4_CKSUM_BAD, 0xFF);
 
 	ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
 	ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
@@ -228,7 +228,7 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
 	vtag1 = _mm_or_si128(ptype0, vtag1);
 
 	/* convert the UDP header present 0x200 to 0x1 for aligning with each
-	 * PKT_RX_L4_CKSUM_BAD value in low byte of 16 bits word ol_flag in
+	 * RTE_MBUF_F_RX_L4_CKSUM_BAD value in low byte of 16 bits word ol_flag in
 	 * vtag1 (4x16). Then mask out the bad checksum value by shuffle and
 	 * bit-mask.
 	 */
@@ -428,7 +428,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	sw_ring = &rxq->sw_ring[rxq->rx_tail];
 
 	/* ensure these 2 flags are in the lower 8 bits */
-	RTE_BUILD_BUG_ON((PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED) > UINT8_MAX);
+	RTE_BUILD_BUG_ON((RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED) > UINT8_MAX);
 	vlan_flags = rxq->vlan_flags & UINT8_MAX;
 
 	/* A. load 4 packet in one loop
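
IXGBE_TX_OFFLOAD_NOTSUP_MASK above, like its igc and ionic counterparts
earlier in this patch, follows one pattern: XOR the PMD's supported set
against RTE_MBUF_F_TX_OFFLOAD_MASK, leaving exactly the unsupported bits,
then reject any packet in tx_prepare that touches them. Illustrative
sketch with a hypothetical supported set:

#include <errno.h>
#include <rte_mbuf.h>

#define MY_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK)
#define MY_TX_NOTSUP_MASK  (RTE_MBUF_F_TX_OFFLOAD_MASK ^ MY_TX_OFFLOAD_MASK)

static int
tx_offload_ok(const struct rte_mbuf *m)
{
	return (m->ol_flags & MY_TX_NOTSUP_MASK) ? -ENOTSUP : 0;
}
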
diff --git a/drivers/net/liquidio/lio_rxtx.c b/drivers/net/liquidio/lio_rxtx.c
index 616abec070..ef127f26c4 100644
--- a/drivers/net/liquidio/lio_rxtx.c
+++ b/drivers/net/liquidio/lio_rxtx.c
@@ -437,7 +437,7 @@ lio_droq_fast_process_packet(struct lio_device *lio_dev,
 				if (rh->r_dh.has_hash) {
 					uint64_t *hash_ptr;
 
-					nicbuf->ol_flags |= PKT_RX_RSS_HASH;
+					nicbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 					hash_ptr = rte_pktmbuf_mtod(nicbuf,
 								    uint64_t *);
 					lio_swap_8B_data(hash_ptr, 1);
@@ -494,7 +494,7 @@ lio_droq_fast_process_packet(struct lio_device *lio_dev,
 						uint64_t *hash_ptr;
 
 						nicbuf->ol_flags |=
-						    PKT_RX_RSS_HASH;
+						    RTE_MBUF_F_RX_RSS_HASH;
 						hash_ptr = rte_pktmbuf_mtod(
 						    nicbuf, uint64_t *);
 						lio_swap_8B_data(hash_ptr, 1);
@@ -547,10 +547,10 @@ lio_droq_fast_process_packet(struct lio_device *lio_dev,
 		struct rte_mbuf *m = rx_pkts[data_pkts - 1];
 
 		if (rh->r_dh.csum_verified & LIO_IP_CSUM_VERIFIED)
-			m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 		if (rh->r_dh.csum_verified & LIO_L4_CSUM_VERIFIED)
-			m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	if (droq->refill_count >= droq->refill_threshold) {
@@ -1675,13 +1675,13 @@ lio_dev_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
 		cmdsetup.s.iq_no = iq_no;
 
 		/* check checksum offload flags to form cmd */
-		if (m->ol_flags & PKT_TX_IP_CKSUM)
+		if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			cmdsetup.s.ip_csum = 1;
 
-		if (m->ol_flags & PKT_TX_OUTER_IP_CKSUM)
+		if (m->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 			cmdsetup.s.tnl_csum = 1;
-		else if ((m->ol_flags & PKT_TX_TCP_CKSUM) ||
-				(m->ol_flags & PKT_TX_UDP_CKSUM))
+		else if ((m->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) ||
+				(m->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM))
 			cmdsetup.s.transport_csum = 1;
 
 		if (m->nb_segs == 1) {
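
One subtlety behind the checks above: the TX L4 flags form a 2-bit enum
inside RTE_MBUF_F_TX_L4_MASK (in the rte_mbuf_core.h layout, the UDP_CKSUM
value is the TCP and SCTP bit patterns combined), so a single-flag AND test
can match more than one type. Type-specific tests should compare the whole
field, as in this sketch:

	uint64_t l4 = m->ol_flags & RTE_MBUF_F_TX_L4_MASK;

	if (l4 == RTE_MBUF_F_TX_UDP_CKSUM) {
		/* UDP checksum requested, and only UDP. */
	}

The ORed TCP/UDP tests above remain correct for their purpose because
every non-zero enum value intersects one of the two patterns, which makes
the disjunction equivalent to "some L4 checksum was requested".
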
diff --git a/drivers/net/mlx4/mlx4_rxtx.c b/drivers/net/mlx4/mlx4_rxtx.c
index ecf08f53cf..ed9e41fcde 100644
--- a/drivers/net/mlx4/mlx4_rxtx.c
+++ b/drivers/net/mlx4/mlx4_rxtx.c
@@ -406,7 +406,7 @@ mlx4_tx_burst_tso_get_params(struct rte_mbuf *buf,
 {
 	struct mlx4_sq *sq = &txq->msq;
 	const uint8_t tunneled = txq->priv->hw_csum_l2tun &&
-				 (buf->ol_flags & PKT_TX_TUNNEL_MASK);
+				 (buf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK);
 
 	tinfo->tso_header_size = buf->l2_len + buf->l3_len + buf->l4_len;
 	if (tunneled)
@@ -915,7 +915,7 @@ mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			uint16_t flags16[2];
 		} srcrb;
 		uint32_t lkey;
-		bool tso = txq->priv->tso && (buf->ol_flags & PKT_TX_TCP_SEG);
+		bool tso = txq->priv->tso && (buf->ol_flags & RTE_MBUF_F_TX_TCP_SEG);
 
 		/* Clean up old buffer. */
 		if (likely(elt->buf != NULL)) {
@@ -991,15 +991,15 @@ mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		/* Enable HW checksum offload if requested */
 		if (txq->csum &&
 		    (buf->ol_flags &
-		     (PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM))) {
+		     (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM))) {
 			const uint64_t is_tunneled = (buf->ol_flags &
-						      (PKT_TX_TUNNEL_GRE |
-						       PKT_TX_TUNNEL_VXLAN));
+						      (RTE_MBUF_F_TX_TUNNEL_GRE |
+						       RTE_MBUF_F_TX_TUNNEL_VXLAN));
 
 			if (is_tunneled && txq->csum_l2tun) {
 				owner_opcode |= MLX4_WQE_CTRL_IIP_HDR_CSUM |
 						MLX4_WQE_CTRL_IL4_HDR_CSUM;
-				if (buf->ol_flags & PKT_TX_OUTER_IP_CKSUM)
+				if (buf->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 					srcrb.flags |=
 					    RTE_BE32(MLX4_WQE_CTRL_IP_HDR_CSUM);
 			} else {
@@ -1112,18 +1112,18 @@ rxq_cq_to_ol_flags(uint32_t flags, int csum, int csum_l2tun)
 		ol_flags |=
 			mlx4_transpose(flags,
 				       MLX4_CQE_STATUS_IP_HDR_CSUM_OK,
-				       PKT_RX_IP_CKSUM_GOOD) |
+				       RTE_MBUF_F_RX_IP_CKSUM_GOOD) |
 			mlx4_transpose(flags,
 				       MLX4_CQE_STATUS_TCP_UDP_CSUM_OK,
-				       PKT_RX_L4_CKSUM_GOOD);
+				       RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 	if ((flags & MLX4_CQE_L2_TUNNEL) && csum_l2tun)
 		ol_flags |=
 			mlx4_transpose(flags,
 				       MLX4_CQE_L2_TUNNEL_IPOK,
-				       PKT_RX_IP_CKSUM_GOOD) |
+				       RTE_MBUF_F_RX_IP_CKSUM_GOOD) |
 			mlx4_transpose(flags,
 				       MLX4_CQE_L2_TUNNEL_L4_CSUM,
-				       PKT_RX_L4_CKSUM_GOOD);
+				       RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 	return ol_flags;
 }
 
@@ -1274,7 +1274,7 @@ mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			/* Update packet information. */
 			pkt->packet_type =
 				rxq_cq_to_pkt_type(cqe, rxq->l2tun_offload);
-			pkt->ol_flags = PKT_RX_RSS_HASH;
+			pkt->ol_flags = RTE_MBUF_F_RX_RSS_HASH;
 			pkt->hash.rss = cqe->immed_rss_invalid;
 			if (rxq->crc_present)
 				len -= RTE_ETHER_CRC_LEN;
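
mlx4_transpose() used above moves a single status bit from its CQE
position to the corresponding ol_flags position without a conditional. An
illustrative stand-alone version of the idiom, assuming 'from' and 'to'
are single-bit masks:

static inline uint64_t
transpose_bit(uint64_t val, uint64_t from, uint64_t to)
{
	/* Scale the isolated bit up or down to the target position. */
	return (from >= to) ? (val & from) / (from / to)
			    : (val & from) * (to / from);
}
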
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index c914a7120c..ffdd50c93d 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -9275,7 +9275,7 @@ mlx5_flow_tunnel_get_restore_info(struct rte_eth_dev *dev,
 {
 	uint64_t ol_flags = m->ol_flags;
 	const struct mlx5_flow_tbl_data_entry *tble;
-	const uint64_t mask = PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	const uint64_t mask = RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 
 	if (!is_tunnel_offload_active(dev)) {
 		info->flags = 0;
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e3b1051ba4..3ae62cb8e0 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -692,10 +692,10 @@ rxq_cq_to_ol_flags(volatile struct mlx5_cqe *cqe)
 	ol_flags =
 		TRANSPOSE(flags,
 			  MLX5_CQE_RX_L3_HDR_VALID,
-			  PKT_RX_IP_CKSUM_GOOD) |
+			  RTE_MBUF_F_RX_IP_CKSUM_GOOD) |
 		TRANSPOSE(flags,
 			  MLX5_CQE_RX_L4_HDR_VALID,
-			  PKT_RX_L4_CKSUM_GOOD);
+			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 	return ol_flags;
 }
 
@@ -731,7 +731,7 @@ rxq_cq_to_mbuf(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt,
 			rss_hash_res = rte_be_to_cpu_32(mcqe->rx_hash_result);
 		if (rss_hash_res) {
 			pkt->hash.rss = rss_hash_res;
-			pkt->ol_flags |= PKT_RX_RSS_HASH;
+			pkt->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		}
 	}
 	if (rxq->mark) {
@@ -745,9 +745,9 @@ rxq_cq_to_mbuf(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt,
 			mark = ((mcqe->byte_cnt_flow & 0xff) << 8) |
 				(mcqe->flow_tag_high << 16);
 		if (MLX5_FLOW_MARK_IS_VALID(mark)) {
-			pkt->ol_flags |= PKT_RX_FDIR;
+			pkt->ol_flags |= RTE_MBUF_F_RX_FDIR;
 			if (mark != RTE_BE32(MLX5_FLOW_MARK_DEFAULT)) {
-				pkt->ol_flags |= PKT_RX_FDIR_ID;
+				pkt->ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
 				pkt->hash.fdir.hi = mlx5_flow_mark_get(mark);
 			}
 		}
@@ -775,7 +775,7 @@ rxq_cq_to_mbuf(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt,
 			vlan_strip = mcqe->hdr_type &
 				     RTE_BE16(MLX5_CQE_VLAN_STRIPPED);
 		if (vlan_strip) {
-			pkt->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			pkt->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 			pkt->vlan_tci = rte_be_to_cpu_16(cqe->vlan_info);
 		}
 	}
@@ -863,7 +863,7 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			}
 			pkt = seg;
 			MLX5_ASSERT(len >= (rxq->crc_present << 2));
-			pkt->ol_flags &= EXT_ATTACHED_MBUF;
+			pkt->ol_flags &= RTE_MBUF_F_EXTERNAL;
 			rxq_cq_to_mbuf(rxq, pkt, cqe, mcqe);
 			if (rxq->crc_present)
 				len -= RTE_ETHER_CRC_LEN;
@@ -872,7 +872,7 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 				mlx5_lro_update_hdr
 					(rte_pktmbuf_mtod(pkt, uint8_t *), cqe,
 					 mcqe, rxq, len);
-				pkt->ol_flags |= PKT_RX_LRO;
+				pkt->ol_flags |= RTE_MBUF_F_RX_LRO;
 				pkt->tso_segsz = len / cqe->lro_num_seg;
 			}
 		}
@@ -1144,7 +1144,7 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		if (cqe->lro_num_seg > 1) {
 			mlx5_lro_update_hdr(rte_pktmbuf_mtod(pkt, uint8_t *),
 					    cqe, mcqe, rxq, len);
-			pkt->ol_flags |= PKT_RX_LRO;
+			pkt->ol_flags |= RTE_MBUF_F_RX_LRO;
 			pkt->tso_segsz = len / cqe->lro_num_seg;
 		}
 		PKT_LEN(pkt) = len;
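
The EXT_ATTACHED_MBUF to RTE_MBUF_F_EXTERNAL rename in these hunks touches
the external-buffer path: mlx5 MPRQ attaches receive strides as external
buffers, and this flag is what rte_pktmbuf_attach_extbuf() sets. An
application can test it through the standard helper; illustrative only:

#include <rte_mbuf.h>

static void
inspect_rx_mbuf(const struct rte_mbuf *m)
{
	if (RTE_MBUF_HAS_EXTBUF(m)) {
		/* Data lives outside the mbuf segment; freeing the mbuf
		 * decrements the shared-info refcount instead. */
	}
}
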
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 2b7ad3e48b..32e9c97b64 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -488,7 +488,7 @@ mprq_buf_to_pkt(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, uint32_t len,
 		shinfo = &buf->shinfos[strd_idx];
 		rte_mbuf_ext_refcnt_set(shinfo, 1);
 		/*
-		 * EXT_ATTACHED_MBUF will be set to pkt->ol_flags when
+		 * RTE_MBUF_F_EXTERNAL will be set to pkt->ol_flags when
 		 * attaching the stride to mbuf and more offload flags
 		 * will be added below by calling rxq_cq_to_mbuf().
 		 * Other fields will be overwritten.
@@ -497,7 +497,7 @@ mprq_buf_to_pkt(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, uint32_t len,
 					  buf_len, shinfo);
 		/* Set mbuf head-room. */
 		SET_DATA_OFF(pkt, RTE_PKTMBUF_HEADROOM);
-		MLX5_ASSERT(pkt->ol_flags == EXT_ATTACHED_MBUF);
+		MLX5_ASSERT(pkt->ol_flags == RTE_MBUF_F_EXTERNAL);
 		MLX5_ASSERT(rte_pktmbuf_tailroom(pkt) >=
 			len - (hdrm_overlap > 0 ? hdrm_overlap : 0));
 		DATA_LEN(pkt) = len;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index b68443bed5..21a455b1b2 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -180,7 +180,7 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 		mbuf_init->nb_segs = 1;
 		mbuf_init->port = rxq->port_id;
 		if (priv->flags & RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF)
-			mbuf_init->ol_flags = EXT_ATTACHED_MBUF;
+			mbuf_init->ol_flags = RTE_MBUF_F_EXTERNAL;
 		/*
 		 * prevent compiler reordering:
 		 * rearm_data covers previous fields.
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 7b984eff35..646d2a31e2 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -255,10 +255,10 @@ mlx5_set_cksum_table(void)
 
 	/*
 	 * The index should have:
-	 * bit[0] = PKT_TX_TCP_SEG
-	 * bit[2:3] = PKT_TX_UDP_CKSUM, PKT_TX_TCP_CKSUM
-	 * bit[4] = PKT_TX_IP_CKSUM
-	 * bit[8] = PKT_TX_OUTER_IP_CKSUM
+	 * bit[0] = RTE_MBUF_F_TX_TCP_SEG
+	 * bit[2:3] = RTE_MBUF_F_TX_UDP_CKSUM, RTE_MBUF_F_TX_TCP_CKSUM
+	 * bit[4] = RTE_MBUF_F_TX_IP_CKSUM
+	 * bit[8] = RTE_MBUF_F_TX_OUTER_IP_CKSUM
 	 * bit[9] = tunnel
 	 */
 	for (i = 0; i < RTE_DIM(mlx5_cksum_table); ++i) {
@@ -293,10 +293,10 @@ mlx5_set_swp_types_table(void)
 
 	/*
 	 * The index should have:
-	 * bit[0:1] = PKT_TX_L4_MASK
-	 * bit[4] = PKT_TX_IPV6
-	 * bit[8] = PKT_TX_OUTER_IPV6
-	 * bit[9] = PKT_TX_OUTER_UDP
+	 * bit[0:1] = RTE_MBUF_F_TX_L4_MASK
+	 * bit[4] = RTE_MBUF_F_TX_IPV6
+	 * bit[8] = RTE_MBUF_F_TX_OUTER_IPV6
+	 * bit[9] = RTE_MBUF_F_TX_OUTER_UDP
 	 */
 	for (i = 0; i < RTE_DIM(mlx5_swp_types_table); ++i) {
 		v = 0;
@@ -306,7 +306,7 @@ mlx5_set_swp_types_table(void)
 			v |= MLX5_ETH_WQE_L4_OUTER_UDP;
 		if (i & (1 << 4))
 			v |= MLX5_ETH_WQE_L3_INNER_IPV6;
-		if ((i & 3) == (PKT_TX_UDP_CKSUM >> 52))
+		if ((i & 3) == (RTE_MBUF_F_TX_UDP_CKSUM >> 52))
 			v |= MLX5_ETH_WQE_L4_INNER_UDP;
 		mlx5_swp_types_table[i] = v;
 	}
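
The "(i & 3) == (RTE_MBUF_F_TX_UDP_CKSUM >> 52)" test above works because
the two L4-request bits sit at positions 52-53, so the shift recovers the
2-bit enum (0 = none, 1 = TCP, 2 = SCTP, 3 = UDP). Illustrative
compile-time checks of that layout assumption:

#include <rte_common.h>
#include <rte_mbuf_core.h>

static inline void
l4_enum_layout_check(void)
{
	RTE_BUILD_BUG_ON((RTE_MBUF_F_TX_TCP_CKSUM >> 52) != 1);
	RTE_BUILD_BUG_ON((RTE_MBUF_F_TX_SCTP_CKSUM >> 52) != 2);
	RTE_BUILD_BUG_ON((RTE_MBUF_F_TX_UDP_CKSUM >> 52) != 3);
}
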
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
index 68cef1a83e..bcf487c34e 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
@@ -283,20 +283,20 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const vector unsigned char fdir_flags =
 					(vector unsigned char)
 					(vector unsigned int){
-					PKT_RX_FDIR, PKT_RX_FDIR,
-					PKT_RX_FDIR, PKT_RX_FDIR};
+					RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR,
+					RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR};
 				const vector unsigned char fdir_all_flags =
 					(vector unsigned char)
 					(vector unsigned int){
-					PKT_RX_FDIR | PKT_RX_FDIR_ID,
-					PKT_RX_FDIR | PKT_RX_FDIR_ID,
-					PKT_RX_FDIR | PKT_RX_FDIR_ID,
-					PKT_RX_FDIR | PKT_RX_FDIR_ID};
+					RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID,
+					RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID,
+					RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID,
+					RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID};
 				vector unsigned char fdir_id_flags =
 					(vector unsigned char)
 					(vector unsigned int){
-					PKT_RX_FDIR_ID, PKT_RX_FDIR_ID,
-					PKT_RX_FDIR_ID, PKT_RX_FDIR_ID};
+					RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID,
+					RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID};
 				/* Extract flow_tag field. */
 				vector unsigned char ftag0 = vec_perm(mcqe1,
 							zero, flow_mark_shuf);
@@ -316,7 +316,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 					ol_flags_mask,
 					(vector unsigned long)fdir_all_flags);
 
-				/* Set PKT_RX_FDIR if flow tag is non-zero. */
+				/* Set RTE_MBUF_F_RX_FDIR if flow tag is non-zero. */
 				invalid_mask = (vector unsigned char)
 					vec_cmpeq((vector unsigned int)ftag,
 					(vector unsigned int)zero);
@@ -376,10 +376,10 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const vector unsigned char vlan_mask =
 					(vector unsigned char)
 					(vector unsigned int) {
-					(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-					(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-					(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-					(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED)};
+					(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+					(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+					(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+					(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED)};
 				const vector unsigned char cv_mask =
 					(vector unsigned char)
 					(vector unsigned int) {
@@ -433,10 +433,10 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 			}
 			const vector unsigned char hash_mask =
 				(vector unsigned char)(vector unsigned int) {
-					PKT_RX_RSS_HASH,
-					PKT_RX_RSS_HASH,
-					PKT_RX_RSS_HASH,
-					PKT_RX_RSS_HASH};
+					RTE_MBUF_F_RX_RSS_HASH,
+					RTE_MBUF_F_RX_RSS_HASH,
+					RTE_MBUF_F_RX_RSS_HASH,
+					RTE_MBUF_F_RX_RSS_HASH};
 			const vector unsigned char rearm_flags =
 				(vector unsigned char)(vector unsigned int) {
 				(uint32_t)t_pkt->ol_flags,
@@ -531,13 +531,13 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 	vector unsigned char pinfo, ptype;
 	vector unsigned char ol_flags = (vector unsigned char)
 		(vector unsigned int){
-			rxq->rss_hash * PKT_RX_RSS_HASH |
+			rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 				rxq->hw_timestamp * rxq->timestamp_rx_flag,
-			rxq->rss_hash * PKT_RX_RSS_HASH |
+			rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 				rxq->hw_timestamp * rxq->timestamp_rx_flag,
-			rxq->rss_hash * PKT_RX_RSS_HASH |
+			rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 				rxq->hw_timestamp * rxq->timestamp_rx_flag,
-			rxq->rss_hash * PKT_RX_RSS_HASH |
+			rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 				rxq->hw_timestamp * rxq->timestamp_rx_flag};
 	vector unsigned char cv_flags;
 	const vector unsigned char zero = (vector unsigned char){0};
@@ -551,21 +551,21 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 		(vector unsigned char)(vector unsigned int){
 		0x00000003, 0x00000003, 0x00000003, 0x00000003};
 	const vector unsigned char cv_flag_sel = (vector unsigned char){
-		0, (uint8_t)(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-		(uint8_t)(PKT_RX_IP_CKSUM_GOOD >> 1), 0,
-		(uint8_t)(PKT_RX_L4_CKSUM_GOOD >> 1), 0,
-		(uint8_t)((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1),
+		0, (uint8_t)(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+		(uint8_t)(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1), 0,
+		(uint8_t)(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1), 0,
+		(uint8_t)((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 		0, 0, 0, 0, 0, 0, 0, 0, 0};
 	const vector unsigned char cv_mask =
 		(vector unsigned char)(vector unsigned int){
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED};
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED};
 	const vector unsigned char mbuf_init =
 		(vector unsigned char)vec_vsx_ld
 			(0, (vector unsigned char *)&rxq->mbuf_initializer);
@@ -602,19 +602,19 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 			0xffffff00, 0xffffff00, 0xffffff00, 0xffffff00};
 		const vector unsigned char fdir_flags =
 			(vector unsigned char)(vector unsigned int){
-			PKT_RX_FDIR, PKT_RX_FDIR,
-			PKT_RX_FDIR, PKT_RX_FDIR};
+			RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR,
+			RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR};
 		vector unsigned char fdir_id_flags =
 			(vector unsigned char)(vector unsigned int){
-			PKT_RX_FDIR_ID, PKT_RX_FDIR_ID,
-			PKT_RX_FDIR_ID, PKT_RX_FDIR_ID};
+			RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID,
+			RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID};
 		vector unsigned char flow_tag, invalid_mask;
 
 		flow_tag = (vector unsigned char)
 			vec_and((vector unsigned long)pinfo,
 			(vector unsigned long)pinfo_ft_mask);
 
-		/* Check if flow tag is non-zero then set PKT_RX_FDIR. */
+		/* Check if flow tag is non-zero then set RTE_MBUF_F_RX_FDIR. */
 		invalid_mask = (vector unsigned char)
 			vec_cmpeq((vector unsigned int)flow_tag,
 			(vector unsigned int)zero);
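
For readers of the vector paths: the altivec/neon/sse mark handling in
these hunks is the SIMD form of the scalar logic shown earlier in this
patch for mlx5_rx.c (rxq_cq_to_mbuf). Restated as a sketch:

	if (MLX5_FLOW_MARK_IS_VALID(mark)) {
		pkt->ol_flags |= RTE_MBUF_F_RX_FDIR;
		if (mark != RTE_BE32(MLX5_FLOW_MARK_DEFAULT)) {
			pkt->ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
			pkt->hash.fdir.hi = mlx5_flow_mark_get(mark);
		}
	}

That is, a non-zero flow tag sets RTE_MBUF_F_RX_FDIR, and a tag other than
the default mark additionally sets RTE_MBUF_F_RX_FDIR_ID with the mark
value stored in hash.fdir.hi.
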
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
index 5ff792f4cb..aa36df29a0 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
@@ -220,12 +220,12 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const uint32x4_t ft_mask =
 					vdupq_n_u32(MLX5_FLOW_MARK_DEFAULT);
 				const uint32x4_t fdir_flags =
-					vdupq_n_u32(PKT_RX_FDIR);
+					vdupq_n_u32(RTE_MBUF_F_RX_FDIR);
 				const uint32x4_t fdir_all_flags =
-					vdupq_n_u32(PKT_RX_FDIR |
-						    PKT_RX_FDIR_ID);
+					vdupq_n_u32(RTE_MBUF_F_RX_FDIR |
+						    RTE_MBUF_F_RX_FDIR_ID);
 				uint32x4_t fdir_id_flags =
-					vdupq_n_u32(PKT_RX_FDIR_ID);
+					vdupq_n_u32(RTE_MBUF_F_RX_FDIR_ID);
 				uint32x4_t invalid_mask, ftag;
 
 				__asm__ volatile
@@ -240,7 +240,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				invalid_mask = vceqzq_u32(ftag);
 				ol_flags_mask = vorrq_u32(ol_flags_mask,
 							  fdir_all_flags);
-				/* Set PKT_RX_FDIR if flow tag is non-zero. */
+				/* Set RTE_MBUF_F_RX_FDIR if flow tag is non-zero. */
 				ol_flags = vorrq_u32(ol_flags,
 					vbicq_u32(fdir_flags, invalid_mask));
 				/* Mask out invalid entries. */
@@ -276,8 +276,8 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const uint8_t pkt_hdr3 =
 					mcq[pos % 8 + 3].hdr_type;
 				const uint32x4_t vlan_mask =
-					vdupq_n_u32(PKT_RX_VLAN |
-						    PKT_RX_VLAN_STRIPPED);
+					vdupq_n_u32(RTE_MBUF_F_RX_VLAN |
+						    RTE_MBUF_F_RX_VLAN_STRIPPED);
 				const uint32x4_t cv_mask =
 					vdupq_n_u32(MLX5_CQE_VLAN_STRIPPED);
 				const uint32x4_t pkt_cv = {
@@ -317,7 +317,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				}
 			}
 			const uint32x4_t hash_flags =
-				vdupq_n_u32(PKT_RX_RSS_HASH);
+				vdupq_n_u32(RTE_MBUF_F_RX_RSS_HASH);
 			const uint32x4_t rearm_flags =
 				vdupq_n_u32((uint32_t)t_pkt->ol_flags);
 
@@ -396,22 +396,22 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 	uint16x4_t ptype;
 	uint32x4_t pinfo, cv_flags;
 	uint32x4_t ol_flags =
-		vdupq_n_u32(rxq->rss_hash * PKT_RX_RSS_HASH |
+		vdupq_n_u32(rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 			    rxq->hw_timestamp * rxq->timestamp_rx_flag);
 	const uint32x4_t ptype_ol_mask = { 0x106, 0x106, 0x106, 0x106 };
 	const uint8x16_t cv_flag_sel = {
 		0,
-		(uint8_t)(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-		(uint8_t)(PKT_RX_IP_CKSUM_GOOD >> 1),
+		(uint8_t)(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+		(uint8_t)(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1),
 		0,
-		(uint8_t)(PKT_RX_L4_CKSUM_GOOD >> 1),
+		(uint8_t)(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1),
 		0,
-		(uint8_t)((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1),
+		(uint8_t)((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 		0, 0, 0, 0, 0, 0, 0, 0, 0
 	};
 	const uint32x4_t cv_mask =
-		vdupq_n_u32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-			    PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+		vdupq_n_u32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			    RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 	const uint64x2_t mbuf_init = vld1q_u64
 				((const uint64_t *)&rxq->mbuf_initializer);
 	uint64x2_t rearm0, rearm1, rearm2, rearm3;
@@ -419,11 +419,11 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 
 	if (rxq->mark) {
 		const uint32x4_t ft_def = vdupq_n_u32(MLX5_FLOW_MARK_DEFAULT);
-		const uint32x4_t fdir_flags = vdupq_n_u32(PKT_RX_FDIR);
-		uint32x4_t fdir_id_flags = vdupq_n_u32(PKT_RX_FDIR_ID);
+		const uint32x4_t fdir_flags = vdupq_n_u32(RTE_MBUF_F_RX_FDIR);
+		uint32x4_t fdir_id_flags = vdupq_n_u32(RTE_MBUF_F_RX_FDIR_ID);
 		uint32x4_t invalid_mask;
 
-		/* Check if flow tag is non-zero then set PKT_RX_FDIR. */
+		/* Check if flow tag is non-zero then set RTE_MBUF_F_RX_FDIR. */
 		invalid_mask = vceqzq_u32(flow_tag);
 		ol_flags = vorrq_u32(ol_flags,
 				     vbicq_u32(fdir_flags, invalid_mask));
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
index adf991f013..b0fc29d7b9 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
@@ -204,12 +204,12 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const __m128i ft_mask =
 					_mm_set1_epi32(0xffffff00);
 				const __m128i fdir_flags =
-					_mm_set1_epi32(PKT_RX_FDIR);
+					_mm_set1_epi32(RTE_MBUF_F_RX_FDIR);
 				const __m128i fdir_all_flags =
-					_mm_set1_epi32(PKT_RX_FDIR |
-						       PKT_RX_FDIR_ID);
+					_mm_set1_epi32(RTE_MBUF_F_RX_FDIR |
+						       RTE_MBUF_F_RX_FDIR_ID);
 				__m128i fdir_id_flags =
-					_mm_set1_epi32(PKT_RX_FDIR_ID);
+					_mm_set1_epi32(RTE_MBUF_F_RX_FDIR_ID);
 
 				/* Extract flow_tag field. */
 				__m128i ftag0 =
@@ -223,7 +223,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 
 				ol_flags_mask = _mm_or_si128(ol_flags_mask,
 							     fdir_all_flags);
-				/* Set PKT_RX_FDIR if flow tag is non-zero. */
+				/* Set RTE_MBUF_F_RX_FDIR if flow tag is non-zero. */
 				ol_flags = _mm_or_si128(ol_flags,
 					_mm_andnot_si128(invalid_mask,
 							 fdir_flags));
@@ -260,8 +260,8 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const uint8_t pkt_hdr3 =
 					_mm_extract_epi8(mcqe2, 8);
 				const __m128i vlan_mask =
-					_mm_set1_epi32(PKT_RX_VLAN |
-						       PKT_RX_VLAN_STRIPPED);
+					_mm_set1_epi32(RTE_MBUF_F_RX_VLAN |
+						       RTE_MBUF_F_RX_VLAN_STRIPPED);
 				const __m128i cv_mask =
 					_mm_set1_epi32(MLX5_CQE_VLAN_STRIPPED);
 				const __m128i pkt_cv =
@@ -303,7 +303,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				}
 			}
 			const __m128i hash_flags =
-				_mm_set1_epi32(PKT_RX_RSS_HASH);
+				_mm_set1_epi32(RTE_MBUF_F_RX_RSS_HASH);
 			const __m128i rearm_flags =
 				_mm_set1_epi32((uint32_t)t_pkt->ol_flags);
 
@@ -381,7 +381,7 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq, __m128i cqes[4],
 {
 	__m128i pinfo0, pinfo1;
 	__m128i pinfo, ptype;
-	__m128i ol_flags = _mm_set1_epi32(rxq->rss_hash * PKT_RX_RSS_HASH |
+	__m128i ol_flags = _mm_set1_epi32(rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 					  rxq->hw_timestamp * rxq->timestamp_rx_flag);
 	__m128i cv_flags;
 	const __m128i zero = _mm_setzero_si128();
@@ -390,17 +390,17 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq, __m128i cqes[4],
 	const __m128i pinfo_mask = _mm_set1_epi32(0x3);
 	const __m128i cv_flag_sel =
 		_mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0, 0,
-			     (uint8_t)((PKT_RX_IP_CKSUM_GOOD |
-					PKT_RX_L4_CKSUM_GOOD) >> 1),
+			     (uint8_t)((RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+					RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 			     0,
-			     (uint8_t)(PKT_RX_L4_CKSUM_GOOD >> 1),
+			     (uint8_t)(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1),
 			     0,
-			     (uint8_t)(PKT_RX_IP_CKSUM_GOOD >> 1),
-			     (uint8_t)(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
+			     (uint8_t)(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1),
+			     (uint8_t)(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
 			     0);
 	const __m128i cv_mask =
-		_mm_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-			      PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+		_mm_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 	const __m128i mbuf_init =
 		_mm_load_si128((__m128i *)&rxq->mbuf_initializer);
 	__m128i rearm0, rearm1, rearm2, rearm3;
@@ -416,12 +416,12 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq, __m128i cqes[4],
 	ptype = _mm_unpacklo_epi64(pinfo0, pinfo1);
 	if (rxq->mark) {
 		const __m128i pinfo_ft_mask = _mm_set1_epi32(0xffffff00);
-		const __m128i fdir_flags = _mm_set1_epi32(PKT_RX_FDIR);
-		__m128i fdir_id_flags = _mm_set1_epi32(PKT_RX_FDIR_ID);
+		const __m128i fdir_flags = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR);
+		__m128i fdir_id_flags = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR_ID);
 		__m128i flow_tag, invalid_mask;
 
 		flow_tag = _mm_and_si128(pinfo, pinfo_ft_mask);
-		/* Check if flow tag is non-zero then set PKT_RX_FDIR. */
+		/* Check if flow tag is non-zero then set RTE_MBUF_F_RX_FDIR. */
 		invalid_mask = _mm_cmpeq_epi32(flow_tag, zero);
 		ol_flags = _mm_or_si128(ol_flags,
 					_mm_andnot_si128(invalid_mask,
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 4f83291cc2..67985d3402 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -78,7 +78,7 @@ uint16_t mlx5_tx_burst_##func(void *txq, \
 
 /* Mbuf dynamic flag offset for inline. */
 extern uint64_t rte_net_mlx5_dynf_inline_mask;
-#define PKT_TX_DYNF_NOINLINE rte_net_mlx5_dynf_inline_mask
+#define RTE_MBUF_F_TX_DYNF_NOINLINE rte_net_mlx5_dynf_inline_mask
 
 extern uint32_t mlx5_ptype_table[] __rte_cache_aligned;
 extern uint8_t mlx5_cksum_table[1 << 10] __rte_cache_aligned;
@@ -513,22 +513,22 @@ txq_mbuf_to_swp(struct mlx5_txq_local *__rte_restrict loc,
 	if (!MLX5_TXOFF_CONFIG(SWP))
 		return 0;
 	ol = loc->mbuf->ol_flags;
-	tunnel = ol & PKT_TX_TUNNEL_MASK;
+	tunnel = ol & RTE_MBUF_F_TX_TUNNEL_MASK;
 	/*
 	 * Check whether Software Parser is required.
 	 * Only customized tunnels may ask for.
 	 */
-	if (likely(tunnel != PKT_TX_TUNNEL_UDP && tunnel != PKT_TX_TUNNEL_IP))
+	if (likely(tunnel != RTE_MBUF_F_TX_TUNNEL_UDP && tunnel != RTE_MBUF_F_TX_TUNNEL_IP))
 		return 0;
 	/*
 	 * The index should have:
-	 * bit[0:1] = PKT_TX_L4_MASK
-	 * bit[4] = PKT_TX_IPV6
-	 * bit[8] = PKT_TX_OUTER_IPV6
-	 * bit[9] = PKT_TX_OUTER_UDP
+	 * bit[0:1] = RTE_MBUF_F_TX_L4_MASK
+	 * bit[4] = RTE_MBUF_F_TX_IPV6
+	 * bit[8] = RTE_MBUF_F_TX_OUTER_IPV6
+	 * bit[9] = RTE_MBUF_F_TX_OUTER_UDP
 	 */
-	idx = (ol & (PKT_TX_L4_MASK | PKT_TX_IPV6 | PKT_TX_OUTER_IPV6)) >> 52;
-	idx |= (tunnel == PKT_TX_TUNNEL_UDP) ? (1 << 9) : 0;
+	idx = (ol & (RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_IPV6 | RTE_MBUF_F_TX_OUTER_IPV6)) >> 52;
+	idx |= (tunnel == RTE_MBUF_F_TX_TUNNEL_UDP) ? (1 << 9) : 0;
 	*swp_flags = mlx5_swp_types_table[idx];
 	/*
 	 * Set offsets for SW parser. Since ConnectX-5, SW parser just
@@ -538,19 +538,19 @@ txq_mbuf_to_swp(struct mlx5_txq_local *__rte_restrict loc,
 	 * should be set regardless of HW offload.
 	 */
 	off = loc->mbuf->outer_l2_len;
-	if (MLX5_TXOFF_CONFIG(VLAN) && ol & PKT_TX_VLAN)
+	if (MLX5_TXOFF_CONFIG(VLAN) && ol & RTE_MBUF_F_TX_VLAN)
 		off += sizeof(struct rte_vlan_hdr);
 	set = (off >> 1) << 8; /* Outer L3 offset. */
 	off += loc->mbuf->outer_l3_len;
-	if (tunnel == PKT_TX_TUNNEL_UDP)
+	if (tunnel == RTE_MBUF_F_TX_TUNNEL_UDP)
 		set |= off >> 1; /* Outer L4 offset. */
-	if (ol & (PKT_TX_IPV4 | PKT_TX_IPV6)) { /* Inner IP. */
-		const uint64_t csum = ol & PKT_TX_L4_MASK;
+	if (ol & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)) { /* Inner IP. */
+		const uint64_t csum = ol & RTE_MBUF_F_TX_L4_MASK;
 			off += loc->mbuf->l2_len;
 		set |= (off >> 1) << 24; /* Inner L3 offset. */
-		if (csum == PKT_TX_TCP_CKSUM ||
-		    csum == PKT_TX_UDP_CKSUM ||
-		    (MLX5_TXOFF_CONFIG(TSO) && ol & PKT_TX_TCP_SEG)) {
+		if (csum == RTE_MBUF_F_TX_TCP_CKSUM ||
+		    csum == RTE_MBUF_F_TX_UDP_CKSUM ||
+		    (MLX5_TXOFF_CONFIG(TSO) && ol & RTE_MBUF_F_TX_TCP_SEG)) {
 			off += loc->mbuf->l3_len;
 			set |= (off >> 1) << 16; /* Inner L4 offset. */
 		}
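
A worked example of the index built just above, assuming a UDP-tunneled packet with an inner IPv6/TCP header requesting TCP checksum offload (flag positions as asserted elsewhere in this series: L4 type in bits 52-53, IPV6 in bit 56, OUTER_IPV6 in bit 60):

#include <stdint.h>
#include <rte_mbuf.h>

int main(void)
{
	uint64_t ol = RTE_MBUF_F_TX_TUNNEL_UDP | RTE_MBUF_F_TX_IPV6 |
		      RTE_MBUF_F_TX_TCP_CKSUM;
	uint32_t idx;

	idx = (ol & (RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_IPV6 |
		     RTE_MBUF_F_TX_OUTER_IPV6)) >> 52;
	idx |= 1 << 9; /* tunnel type is UDP in this example */
	/* bit0 (TCP) | bit4 (inner IPv6) | bit9 (UDP tunnel) */
	return idx != 0x211;
}
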
@@ -572,16 +572,16 @@ static __rte_always_inline uint8_t
 txq_ol_cksum_to_cs(struct rte_mbuf *buf)
 {
 	uint32_t idx;
-	uint8_t is_tunnel = !!(buf->ol_flags & PKT_TX_TUNNEL_MASK);
-	const uint64_t ol_flags_mask = PKT_TX_TCP_SEG | PKT_TX_L4_MASK |
-				       PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM;
+	uint8_t is_tunnel = !!(buf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK);
+	const uint64_t ol_flags_mask = RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_L4_MASK |
+				       RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_OUTER_IP_CKSUM;
 
 	/*
 	 * The index should have:
-	 * bit[0] = PKT_TX_TCP_SEG
-	 * bit[2:3] = PKT_TX_UDP_CKSUM, PKT_TX_TCP_CKSUM
-	 * bit[4] = PKT_TX_IP_CKSUM
-	 * bit[8] = PKT_TX_OUTER_IP_CKSUM
+	 * bit[0] = RTE_MBUF_F_TX_TCP_SEG
+	 * bit[2:3] = RTE_MBUF_F_TX_UDP_CKSUM, RTE_MBUF_F_TX_TCP_CKSUM
+	 * bit[4] = RTE_MBUF_F_TX_IP_CKSUM
+	 * bit[8] = RTE_MBUF_F_TX_OUTER_IP_CKSUM
 	 * bit[9] = tunnel
 	 */
 	idx = ((buf->ol_flags & ol_flags_mask) >> 50) | (!!is_tunnel << 9);
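
To make the bit map above concrete: a plain (non-tunnel) IPv4/TCP packet requesting IP and TCP checksum offload has TCP_CKSUM in bit 52 and IP_CKSUM in bit 54, so the shift by 50 packs them into index bits 2 and 4:

#include <stdint.h>
#include <rte_mbuf.h>

int main(void)
{
	const uint64_t mask = RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_L4_MASK |
			      RTE_MBUF_F_TX_IP_CKSUM |
			      RTE_MBUF_F_TX_OUTER_IP_CKSUM;
	uint64_t ol_flags = RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM;
	uint32_t idx = (ol_flags & mask) >> 50; /* no tunnel: bit9 stays 0 */

	return idx != 0x14; /* bit4 (IP csum) | bit2 (TCP L4 type) */
}
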
@@ -952,12 +952,12 @@ mlx5_tx_eseg_none(struct mlx5_txq_data *__rte_restrict txq __rte_unused,
 	es->swp_offs = txq_mbuf_to_swp(loc, &es->swp_flags, olx);
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
-		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
+		       loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
 		       rte_cpu_to_be_32(*RTE_FLOW_DYNF_METADATA(loc->mbuf)) :
 		       0 : 0;
 	/* Engage VLAN tag insertion feature if requested. */
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    loc->mbuf->ol_flags & PKT_TX_VLAN) {
+	    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		/*
 		 * We should get here only if device support
 		 * this feature correctly.
@@ -1013,7 +1013,7 @@ mlx5_tx_eseg_dmin(struct mlx5_txq_data *__rte_restrict txq __rte_unused,
 	es->swp_offs = txq_mbuf_to_swp(loc, &es->swp_flags, olx);
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
-		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
+		       loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
 		       rte_cpu_to_be_32(*RTE_FLOW_DYNF_METADATA(loc->mbuf)) :
 		       0 : 0;
 	psrc = rte_pktmbuf_mtod(loc->mbuf, uint8_t *);
@@ -1097,7 +1097,7 @@ mlx5_tx_eseg_data(struct mlx5_txq_data *__rte_restrict txq,
 	es->swp_offs = txq_mbuf_to_swp(loc, &es->swp_flags, olx);
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
-		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
+		       loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
 		       rte_cpu_to_be_32(*RTE_FLOW_DYNF_METADATA(loc->mbuf)) :
 		       0 : 0;
 	psrc = rte_pktmbuf_mtod(loc->mbuf, uint8_t *);
@@ -1206,7 +1206,7 @@ mlx5_tx_mseg_memcpy(uint8_t *pdst,
 			MLX5_ASSERT(loc->mbuf_nseg > 1);
 			MLX5_ASSERT(loc->mbuf);
 			--loc->mbuf_nseg;
-			if (loc->mbuf->ol_flags & PKT_TX_DYNF_NOINLINE) {
+			if (loc->mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE) {
 				unsigned int diff;
 
 				if (copy >= must) {
@@ -1310,7 +1310,7 @@ mlx5_tx_eseg_mdat(struct mlx5_txq_data *__rte_restrict txq,
 	es->swp_offs = txq_mbuf_to_swp(loc, &es->swp_flags, olx);
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
-		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
+		       loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
 		       rte_cpu_to_be_32(*RTE_FLOW_DYNF_METADATA(loc->mbuf)) :
 		       0 : 0;
 	MLX5_ASSERT(inlen >= MLX5_ESEG_MIN_INLINE_SIZE);
@@ -1818,13 +1818,13 @@ mlx5_tx_packet_multi_tso(struct mlx5_txq_data *__rte_restrict txq,
 	 * the required space in WQE ring buffer.
 	 */
 	dlen = rte_pktmbuf_pkt_len(loc->mbuf);
-	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN)
+	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN)
 		vlan = sizeof(struct rte_vlan_hdr);
 	inlen = loc->mbuf->l2_len + vlan +
 		loc->mbuf->l3_len + loc->mbuf->l4_len;
 	if (unlikely((!inlen || !loc->mbuf->tso_segsz)))
 		return MLX5_TXCMP_CODE_ERROR;
-	if (loc->mbuf->ol_flags & PKT_TX_TUNNEL_MASK)
+	if (loc->mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 		inlen += loc->mbuf->outer_l2_len + loc->mbuf->outer_l3_len;
 	/* Packet must contain all TSO headers. */
 	if (unlikely(inlen > MLX5_MAX_TSO_HEADER ||
@@ -1933,7 +1933,7 @@ mlx5_tx_packet_multi_send(struct mlx5_txq_data *__rte_restrict txq,
 	/* Update sent data bytes counter. */
 	txq->stats.obytes += rte_pktmbuf_pkt_len(loc->mbuf);
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    loc->mbuf->ol_flags & PKT_TX_VLAN)
+	    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN)
 		txq->stats.obytes += sizeof(struct rte_vlan_hdr);
 #endif
 	/*
@@ -2032,7 +2032,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 	 * to estimate the required space for WQE.
 	 */
 	dlen = rte_pktmbuf_pkt_len(loc->mbuf);
-	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN)
+	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN)
 		vlan = sizeof(struct rte_vlan_hdr);
 	inlen = dlen + vlan;
 	/* Check against minimal length. */
@@ -2040,7 +2040,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 		return MLX5_TXCMP_CODE_ERROR;
 	MLX5_ASSERT(txq->inlen_send >= MLX5_ESEG_MIN_INLINE_SIZE);
 	if (inlen > txq->inlen_send ||
-	    loc->mbuf->ol_flags & PKT_TX_DYNF_NOINLINE) {
+	    loc->mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE) {
 		struct rte_mbuf *mbuf;
 		unsigned int nxlen;
 		uintptr_t start;
@@ -2062,7 +2062,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 			 * support the offload, will do with software inline.
 			 */
 			inlen = MLX5_ESEG_MIN_INLINE_SIZE;
-		} else if (mbuf->ol_flags & PKT_TX_DYNF_NOINLINE ||
+		} else if (mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE ||
 			   nxlen > txq->inlen_send) {
 			return mlx5_tx_packet_multi_send(txq, loc, olx);
 		} else {
@@ -2202,7 +2202,7 @@ mlx5_tx_burst_mseg(struct mlx5_txq_data *__rte_restrict txq,
 		if (loc->elts_free < NB_SEGS(loc->mbuf))
 			return MLX5_TXCMP_CODE_EXIT;
 		if (MLX5_TXOFF_CONFIG(TSO) &&
-		    unlikely(loc->mbuf->ol_flags & PKT_TX_TCP_SEG)) {
+		    unlikely(loc->mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			/* Proceed with multi-segment TSO. */
 			ret = mlx5_tx_packet_multi_tso(txq, loc, olx);
 		} else if (MLX5_TXOFF_CONFIG(INLINE)) {
@@ -2228,7 +2228,7 @@ mlx5_tx_burst_mseg(struct mlx5_txq_data *__rte_restrict txq,
 			continue;
 		/* Here ends the series of multi-segment packets. */
 		if (MLX5_TXOFF_CONFIG(TSO) &&
-		    unlikely(loc->mbuf->ol_flags & PKT_TX_TCP_SEG))
+		    unlikely(loc->mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 			return MLX5_TXCMP_CODE_TSO;
 		return MLX5_TXCMP_CODE_SINGLE;
 	}
@@ -2295,7 +2295,7 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
 		}
 		dlen = rte_pktmbuf_data_len(loc->mbuf);
 		if (MLX5_TXOFF_CONFIG(VLAN) &&
-		    loc->mbuf->ol_flags & PKT_TX_VLAN) {
+		    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			vlan = sizeof(struct rte_vlan_hdr);
 		}
 		/*
@@ -2306,7 +2306,7 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
 		       loc->mbuf->l3_len + loc->mbuf->l4_len;
 		if (unlikely((!hlen || !loc->mbuf->tso_segsz)))
 			return MLX5_TXCMP_CODE_ERROR;
-		if (loc->mbuf->ol_flags & PKT_TX_TUNNEL_MASK)
+		if (loc->mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 			hlen += loc->mbuf->outer_l2_len +
 				loc->mbuf->outer_l3_len;
 		/* Segment must contain all TSO headers. */
@@ -2372,7 +2372,7 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
 		if (MLX5_TXOFF_CONFIG(MULTI) &&
 		    unlikely(NB_SEGS(loc->mbuf) > 1))
 			return MLX5_TXCMP_CODE_MULTI;
-		if (likely(!(loc->mbuf->ol_flags & PKT_TX_TCP_SEG)))
+		if (likely(!(loc->mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG)))
 			return MLX5_TXCMP_CODE_SINGLE;
 		/* Continue with the next TSO packet. */
 	}
@@ -2413,14 +2413,14 @@ mlx5_tx_able_to_empw(struct mlx5_txq_data *__rte_restrict txq,
 	/* Check for TSO packet. */
 	if (newp &&
 	    MLX5_TXOFF_CONFIG(TSO) &&
-	    unlikely(loc->mbuf->ol_flags & PKT_TX_TCP_SEG))
+	    unlikely(loc->mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		return MLX5_TXCMP_CODE_TSO;
 	/* Check if eMPW is enabled at all. */
 	if (!MLX5_TXOFF_CONFIG(EMPW))
 		return MLX5_TXCMP_CODE_SINGLE;
 	/* Check if eMPW can be engaged. */
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    unlikely(loc->mbuf->ol_flags & PKT_TX_VLAN) &&
+	    unlikely(loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) &&
 		(!MLX5_TXOFF_CONFIG(INLINE) ||
 		 unlikely((rte_pktmbuf_data_len(loc->mbuf) +
 			   sizeof(struct rte_vlan_hdr)) > txq->inlen_empw))) {
@@ -2473,8 +2473,8 @@ mlx5_tx_match_empw(struct mlx5_txq_data *__rte_restrict txq,
 		return false;
 	/* Fill metadata field if needed. */
 	if (MLX5_TXOFF_CONFIG(METADATA) &&
-		es->metadata != (loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
-		rte_cpu_to_be_32(*RTE_FLOW_DYNF_METADATA(loc->mbuf)) : 0))
+		es->metadata != (loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
+				 rte_cpu_to_be_32(*RTE_FLOW_DYNF_METADATA(loc->mbuf)) : 0))
 		return false;
 	/* Legacy MPW can send packets with the same length only. */
 	if (MLX5_TXOFF_CONFIG(MPW) &&
@@ -2482,7 +2482,7 @@ mlx5_tx_match_empw(struct mlx5_txq_data *__rte_restrict txq,
 		return false;
 	/* There must be no VLAN packets in eMPW loop. */
 	if (MLX5_TXOFF_CONFIG(VLAN))
-		MLX5_ASSERT(!(loc->mbuf->ol_flags & PKT_TX_VLAN));
+		MLX5_ASSERT(!(loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN));
 	/* Check if the scheduling is requested. */
 	if (MLX5_TXOFF_CONFIG(TXPP) &&
 	    loc->mbuf->ol_flags & txq->ts_mask)
@@ -2918,7 +2918,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			}
 			/* Inline or not inline - that's the Question. */
 			if (dlen > txq->inlen_empw ||
-			    loc->mbuf->ol_flags & PKT_TX_DYNF_NOINLINE)
+			    loc->mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE)
 				goto pointer_empw;
 			if (MLX5_TXOFF_CONFIG(MPW)) {
 				if (dlen > txq->inlen_send)
@@ -2943,7 +2943,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			}
 			/* Inline entire packet, optional VLAN insertion. */
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN) {
+			    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 				/*
 				 * The packet length must be checked in
 				 * mlx5_tx_able_to_empw() and packet
@@ -3008,7 +3008,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			MLX5_ASSERT(room >= MLX5_WQE_DSEG_SIZE);
 			if (MLX5_TXOFF_CONFIG(VLAN))
 				MLX5_ASSERT(!(loc->mbuf->ol_flags &
-					    PKT_TX_VLAN));
+					    RTE_MBUF_F_TX_VLAN));
 			mlx5_tx_dseg_ptr(txq, loc, dseg, dptr, dlen, olx);
 			/* We have to store mbuf in elts.*/
 			txq->elts[txq->elts_head++ & txq->elts_m] = loc->mbuf;
@@ -3153,7 +3153,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 
 			inlen = rte_pktmbuf_data_len(loc->mbuf);
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN) {
+			    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 				vlan = sizeof(struct rte_vlan_hdr);
 				inlen += vlan;
 			}
@@ -3174,7 +3174,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 				if (inlen <= MLX5_ESEG_MIN_INLINE_SIZE)
 					return MLX5_TXCMP_CODE_ERROR;
 				if (loc->mbuf->ol_flags &
-				    PKT_TX_DYNF_NOINLINE) {
+				    RTE_MBUF_F_TX_DYNF_NOINLINE) {
 					/*
 					 * The hint flag not to inline packet
 					 * data is set. Check whether we can
@@ -3384,7 +3384,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 			/* Update sent data bytes counter. */
 			txq->stats.obytes += rte_pktmbuf_data_len(loc->mbuf);
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN)
+			    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN)
 				txq->stats.obytes +=
 					sizeof(struct rte_vlan_hdr);
 #endif
@@ -3580,7 +3580,7 @@ mlx5_tx_burst_tmpl(struct mlx5_txq_data *__rte_restrict txq,
 		}
 		/* Dedicated branch for single-segment TSO packets. */
 		if (MLX5_TXOFF_CONFIG(TSO) &&
-		    unlikely(loc.mbuf->ol_flags & PKT_TX_TCP_SEG)) {
+		    unlikely(loc.mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			/*
 			 * TSO might require special way for inlining
 			 * (dedicated parameters) and is sent with
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index ef8067790f..d4a06d5795 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -64,9 +64,9 @@
 #define MVNETA_TX_OFFLOADS (MVNETA_TX_OFFLOAD_CHECKSUM | \
 			    DEV_TX_OFFLOAD_MULTI_SEGS)
 
-#define MVNETA_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
-				PKT_TX_TCP_CKSUM | \
-				PKT_TX_UDP_CKSUM)
+#define MVNETA_TX_PKT_OFFLOADS (RTE_MBUF_F_TX_IP_CKSUM | \
+				RTE_MBUF_F_TX_TCP_CKSUM | \
+				RTE_MBUF_F_TX_UDP_CKSUM)
 
 struct mvneta_priv {
 	/* Hot fields, used in fast path. */
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index 2d61930382..f5340aa8df 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -304,18 +304,18 @@ mvneta_prepare_proto_info(uint64_t ol_flags,
 	 * default value
 	 */
 	*l3_type = NETA_OUTQ_L3_TYPE_IPV4;
-	*gen_l3_cksum = ol_flags & PKT_TX_IP_CKSUM ? 1 : 0;
+	*gen_l3_cksum = ol_flags & RTE_MBUF_F_TX_IP_CKSUM ? 1 : 0;
 
-	if (ol_flags & PKT_TX_IPV6) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*l3_type = NETA_OUTQ_L3_TYPE_IPV6;
 		/* no checksum for ipv6 header */
 		*gen_l3_cksum = 0;
 	}
 
-	if (ol_flags & PKT_TX_TCP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) {
 		*l4_type = NETA_OUTQ_L4_TYPE_TCP;
 		*gen_l4_cksum = 1;
-	} else if (ol_flags & PKT_TX_UDP_CKSUM) {
+	} else if (ol_flags & RTE_MBUF_F_TX_UDP_CKSUM) {
 		*l4_type = NETA_OUTQ_L4_TYPE_UDP;
 		*gen_l4_cksum = 1;
 	} else {
@@ -342,15 +342,15 @@ mvneta_desc_to_ol_flags(struct neta_ppio_desc *desc)
 
 	status = neta_ppio_inq_desc_get_l3_pkt_error(desc);
 	if (unlikely(status != NETA_DESC_ERR_OK))
-		flags = PKT_RX_IP_CKSUM_BAD;
+		flags = RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags = PKT_RX_IP_CKSUM_GOOD;
+		flags = RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	status = neta_ppio_inq_desc_get_l4_pkt_error(desc);
 	if (unlikely(status != NETA_DESC_ERR_OK))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	return flags;
 }
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 65d011300a..50b991afef 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -69,9 +69,9 @@
 #define MRVL_TX_OFFLOADS (MRVL_TX_OFFLOAD_CHECKSUM | \
 			  DEV_TX_OFFLOAD_MULTI_SEGS)
 
-#define MRVL_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
-			      PKT_TX_TCP_CKSUM | \
-			      PKT_TX_UDP_CKSUM)
+#define MRVL_TX_PKT_OFFLOADS (RTE_MBUF_F_TX_IP_CKSUM | \
+			      RTE_MBUF_F_TX_TCP_CKSUM | \
+			      RTE_MBUF_F_TX_UDP_CKSUM)
 
 static const char * const valid_args[] = {
 	MRVL_IFACE_NAME_ARG,
@@ -2549,18 +2549,18 @@ mrvl_desc_to_ol_flags(struct pp2_ppio_desc *desc, uint64_t packet_type)
 	if (RTE_ETH_IS_IPV4_HDR(packet_type)) {
 		status = pp2_ppio_inq_desc_get_l3_pkt_error(desc);
 		if (unlikely(status != PP2_DESC_ERR_OK))
-			flags |= PKT_RX_IP_CKSUM_BAD;
+			flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else
-			flags |= PKT_RX_IP_CKSUM_GOOD;
+			flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	}
 
 	if (((packet_type & RTE_PTYPE_L4_UDP) == RTE_PTYPE_L4_UDP) ||
 	    ((packet_type & RTE_PTYPE_L4_TCP) == RTE_PTYPE_L4_TCP)) {
 		status = pp2_ppio_inq_desc_get_l4_pkt_error(desc);
 		if (unlikely(status != PP2_DESC_ERR_OK))
-			flags |= PKT_RX_L4_CKSUM_BAD;
+			flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else
-			flags |= PKT_RX_L4_CKSUM_GOOD;
+			flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	return flags;
@@ -2720,18 +2720,18 @@ mrvl_prepare_proto_info(uint64_t ol_flags,
 	 * default value
 	 */
 	*l3_type = PP2_OUTQ_L3_TYPE_IPV4;
-	*gen_l3_cksum = ol_flags & PKT_TX_IP_CKSUM ? 1 : 0;
+	*gen_l3_cksum = ol_flags & RTE_MBUF_F_TX_IP_CKSUM ? 1 : 0;
 
-	if (ol_flags & PKT_TX_IPV6) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*l3_type = PP2_OUTQ_L3_TYPE_IPV6;
 		/* no checksum for ipv6 header */
 		*gen_l3_cksum = 0;
 	}
 
-	if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) {
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM) {
 		*l4_type = PP2_OUTQ_L4_TYPE_TCP;
 		*gen_l4_cksum = 1;
-	} else if ((ol_flags & PKT_TX_L4_MASK) ==  PKT_TX_UDP_CKSUM) {
+	} else if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) ==  RTE_MBUF_F_TX_UDP_CKSUM) {
 		*l4_type = PP2_OUTQ_L4_TYPE_UDP;
 		*gen_l4_cksum = 1;
 	} else {
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index acae68e082..b915bff459 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -615,7 +615,7 @@ static void hn_rxpkt(struct hn_rx_queue *rxq, struct hn_rx_bufinfo *rxb,
 
 	if (info->vlan_info != HN_NDIS_VLAN_INFO_INVALID) {
 		m->vlan_tci = info->vlan_info;
-		m->ol_flags |= PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		m->ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_RX_VLAN;
 
 		/* NDIS always strips tag, put it back if necessary */
 		if (!hv->vlan_strip && rte_vlan_insert(&m)) {
@@ -630,18 +630,18 @@ static void hn_rxpkt(struct hn_rx_queue *rxq, struct hn_rx_bufinfo *rxb,
 
 	if (info->csum_info != HN_NDIS_RXCSUM_INFO_INVALID) {
 		if (info->csum_info & NDIS_RXCSUM_INFO_IPCS_OK)
-			m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 		if (info->csum_info & (NDIS_RXCSUM_INFO_UDPCS_OK
 				       | NDIS_RXCSUM_INFO_TCPCS_OK))
-			m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		else if (info->csum_info & (NDIS_RXCSUM_INFO_TCPCS_FAILED
 					    | NDIS_RXCSUM_INFO_UDPCS_FAILED))
-			m->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	}
 
 	if (info->hash_info != HN_NDIS_HASH_INFO_INVALID) {
-		m->ol_flags |= PKT_RX_RSS_HASH;
+		m->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		m->hash.rss = info->hash_value;
 	}
 
@@ -1331,17 +1331,17 @@ static void hn_encap(struct rndis_packet_msg *pkt,
 					  NDIS_PKTINFO_TYPE_HASHVAL);
 	*pi_data = queue_id;
 
-	if (m->ol_flags & PKT_TX_VLAN) {
+	if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		pi_data = hn_rndis_pktinfo_append(pkt, NDIS_VLAN_INFO_SIZE,
 						  NDIS_PKTINFO_TYPE_VLAN);
 		*pi_data = m->vlan_tci;
 	}
 
-	if (m->ol_flags & PKT_TX_TCP_SEG) {
+	if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		pi_data = hn_rndis_pktinfo_append(pkt, NDIS_LSO2_INFO_SIZE,
 						  NDIS_PKTINFO_TYPE_LSO);
 
-		if (m->ol_flags & PKT_TX_IPV6) {
+		if (m->ol_flags & RTE_MBUF_F_TX_IPV6) {
 			*pi_data = NDIS_LSO2_INFO_MAKEIPV6(hlen,
 							   m->tso_segsz);
 		} else {
@@ -1349,23 +1349,23 @@ static void hn_encap(struct rndis_packet_msg *pkt,
 							   m->tso_segsz);
 		}
 	} else if (m->ol_flags &
-		   (PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM)) {
+		   (RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_IP_CKSUM)) {
 		pi_data = hn_rndis_pktinfo_append(pkt, NDIS_TXCSUM_INFO_SIZE,
 						  NDIS_PKTINFO_TYPE_CSUM);
 		*pi_data = 0;
 
-		if (m->ol_flags & PKT_TX_IPV6)
+		if (m->ol_flags & RTE_MBUF_F_TX_IPV6)
 			*pi_data |= NDIS_TXCSUM_INFO_IPV6;
-		if (m->ol_flags & PKT_TX_IPV4) {
+		if (m->ol_flags & RTE_MBUF_F_TX_IPV4) {
 			*pi_data |= NDIS_TXCSUM_INFO_IPV4;
 
-			if (m->ol_flags & PKT_TX_IP_CKSUM)
+			if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 				*pi_data |= NDIS_TXCSUM_INFO_IPCS;
 		}
 
-		if (m->ol_flags & PKT_TX_TCP_CKSUM)
+		if (m->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM)
 			*pi_data |= NDIS_TXCSUM_INFO_MKTCPCS(hlen);
-		else if (m->ol_flags & PKT_TX_UDP_CKSUM)
+		else if (m->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM)
 			*pi_data |= NDIS_TXCSUM_INFO_MKUDPCS(hlen);
 	}
 
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 0df300fe0d..300382984d 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -203,7 +203,7 @@ nfp_net_set_hash(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
 	}
 
 	mbuf->hash.rss = hash;
-	mbuf->ol_flags |= PKT_RX_RSS_HASH;
+	mbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 
 	switch (hash_type) {
 	case NFP_NET_RSS_IPV4:
@@ -245,9 +245,9 @@ nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
 	/* If IPv4 and IP checksum error, fail */
 	if (unlikely((rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM) &&
 	    !(rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM_OK)))
-		mb->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		mb->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		mb->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+		mb->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	/* If neither UDP nor TCP return */
 	if (!(rxd->rxd.flags & PCIE_DESC_RX_TCP_CSUM) &&
@@ -255,9 +255,9 @@ nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
 		return;
 
 	if (likely(rxd->rxd.flags & PCIE_DESC_RX_L4_CSUM_OK))
-		mb->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		mb->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	else
-		mb->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		mb->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 }
 
 /*
@@ -403,7 +403,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		if ((rxds->rxd.flags & PCIE_DESC_RX_VLAN) &&
 		    (hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN)) {
 			mb->vlan_tci = rte_cpu_to_le_32(rxds->rxd.vlan);
-			mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		}
 
 		/* Adding the mbuf to the mbuf array passed by the app */
@@ -827,7 +827,7 @@ nfp_net_tx_tso(struct nfp_net_txq *txq, struct nfp_net_tx_desc *txd,
 
 	ol_flags = mb->ol_flags;
 
-	if (!(ol_flags & PKT_TX_TCP_SEG))
+	if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		goto clean_txd;
 
 	txd->l3_offset = mb->l2_len;
@@ -859,19 +859,19 @@ nfp_net_tx_cksum(struct nfp_net_txq *txq, struct nfp_net_tx_desc *txd,
 	ol_flags = mb->ol_flags;
 
 	/* IPv6 does not need checksum */
-	if (ol_flags & PKT_TX_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 		txd->flags |= PCIE_DESC_TX_IP4_CSUM;
 
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_UDP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		txd->flags |= PCIE_DESC_TX_UDP_CSUM;
 		break;
-	case PKT_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		txd->flags |= PCIE_DESC_TX_TCP_CSUM;
 		break;
 	}
 
-	if (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK))
+	if (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK))
 		txd->flags |= PCIE_DESC_TX_CSUM;
 }
 
@@ -935,7 +935,7 @@ nfp_net_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		nfp_net_tx_tso(txq, &txd, pkt);
 		nfp_net_tx_cksum(txq, &txd, pkt);
 
-		if ((pkt->ol_flags & PKT_TX_VLAN) &&
+		if ((pkt->ol_flags & RTE_MBUF_F_TX_VLAN) &&
 		    (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)) {
 			txd.flags |= PCIE_DESC_TX_VLAN;
 			txd.vlan = pkt->vlan_tci;
diff --git a/drivers/net/octeontx/octeontx_rxtx.h b/drivers/net/octeontx/octeontx_rxtx.h
index e0723ac26a..eeadd555c7 100644
--- a/drivers/net/octeontx/octeontx_rxtx.h
+++ b/drivers/net/octeontx/octeontx_rxtx.h
@@ -242,20 +242,20 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 	 * 0x2 - TCP L4 checksum
 	 * 0x3 - SCTP L4 checksum
 	 */
-	const uint8_t csum = (!(((ol_flags ^ PKT_TX_UDP_CKSUM) >> 52) & 0x3) +
-		      (!(((ol_flags ^ PKT_TX_TCP_CKSUM) >> 52) & 0x3) * 2) +
-		      (!(((ol_flags ^ PKT_TX_SCTP_CKSUM) >> 52) & 0x3) * 3));
-
-	const uint8_t is_tunnel_parsed = (!!(ol_flags & PKT_TX_TUNNEL_GTP) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_VXLAN_GPE) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_VXLAN) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_GRE) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_GENEVE) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_IP) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_IPIP));
-
-	const uint8_t csum_outer = (!!(ol_flags & PKT_TX_OUTER_UDP_CKSUM) ||
-				    !!(ol_flags & PKT_TX_TUNNEL_UDP));
+	const uint8_t csum = (!(((ol_flags ^ RTE_MBUF_F_TX_UDP_CKSUM) >> 52) & 0x3) +
+		      (!(((ol_flags ^ RTE_MBUF_F_TX_TCP_CKSUM) >> 52) & 0x3) * 2) +
+		      (!(((ol_flags ^ RTE_MBUF_F_TX_SCTP_CKSUM) >> 52) & 0x3) * 3));
+
+	const uint8_t is_tunnel_parsed = (!!(ol_flags & RTE_MBUF_F_TX_TUNNEL_GTP) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_GRE) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_GENEVE) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_IP) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_IPIP));
+
+	const uint8_t csum_outer = (!!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM) ||
+				    !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_UDP));
 	const uint8_t outer_l2_len = m->outer_l2_len;
 	const uint8_t l2_len = m->l2_len;
 
@@ -266,7 +266,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 			send_hdr->w0.l3ptr = outer_l2_len;
 			send_hdr->w0.l4ptr = outer_l2_len + m->outer_l3_len;
 			/* Set clk3 for PKO to calculate IPV4 header checksum */
-			send_hdr->w0.ckl3 = !!(ol_flags & PKT_TX_OUTER_IPV4);
+			send_hdr->w0.ckl3 = !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4);
 
 			/* Outer L4 */
 			send_hdr->w0.ckl4 = csum_outer;
@@ -277,7 +277,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 			/* Set clke for PKO to calculate inner IPV4 header
 			 * checksum.
 			 */
-			send_hdr->w0.ckle = !!(ol_flags & PKT_TX_IPV4);
+			send_hdr->w0.ckle = !!(ol_flags & RTE_MBUF_F_TX_IPV4);
 
 			/* Inner L4 */
 			send_hdr->w0.cklf = csum;
@@ -286,7 +286,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 			send_hdr->w0.l3ptr = l2_len;
 			send_hdr->w0.l4ptr = l2_len + m->l3_len;
 			/* Set clk3 for PKO to calculate IPV4 header checksum */
-			send_hdr->w0.ckl3 = !!(ol_flags & PKT_TX_IPV4);
+			send_hdr->w0.ckl3 = !!(ol_flags & RTE_MBUF_F_TX_IPV4);
 
 			/* Inner L4 */
 			send_hdr->w0.ckl4 = csum;
@@ -296,7 +296,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 		send_hdr->w0.l3ptr = outer_l2_len;
 		send_hdr->w0.l4ptr = outer_l2_len + m->outer_l3_len;
 		/* Set clk3 for PKO to calculate IPV4 header checksum */
-		send_hdr->w0.ckl3 = !!(ol_flags & PKT_TX_OUTER_IPV4);
+		send_hdr->w0.ckl3 = !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4);
 
 		/* Outer L4 */
 		send_hdr->w0.ckl4 = csum_outer;
@@ -305,7 +305,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 		send_hdr->w0.l3ptr = l2_len;
 		send_hdr->w0.l4ptr = l2_len + m->l3_len;
 		/* Set clk3 for PKO to calculate IPV4 header checksum */
-		send_hdr->w0.ckl3 = !!(ol_flags & PKT_TX_IPV4);
+		send_hdr->w0.ckl3 = !!(ol_flags & RTE_MBUF_F_TX_IPV4);
 
 		/* Inner L4 */
 		send_hdr->w0.ckl4 = csum;
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index d576bc6989..8efeb154b4 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -746,15 +746,15 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	uint16_t flags = 0;
 
 	/* Fastpath is dependent on these enums */
-	RTE_BUILD_BUG_ON(PKT_TX_TCP_CKSUM != (1ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_SCTP_CKSUM != (2ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_UDP_CKSUM != (3ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_IP_CKSUM != (1ULL << 54));
-	RTE_BUILD_BUG_ON(PKT_TX_IPV4 != (1ULL << 55));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IP_CKSUM != (1ULL << 58));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV4 != (1ULL << 59));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV6 != (1ULL << 60));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_UDP_CKSUM != (1ULL << 41));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_CKSUM != (1ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_SCTP_CKSUM != (2ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_UDP_CKSUM != (3ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IP_CKSUM != (1ULL << 54));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IPV4 != (1ULL << 55));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IP_CKSUM != (1ULL << 58));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV4 != (1ULL << 59));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV6 != (1ULL << 60));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_UDP_CKSUM != (1ULL << 41));
 	RTE_BUILD_BUG_ON(RTE_MBUF_L2_LEN_BITS != 7);
 	RTE_BUILD_BUG_ON(RTE_MBUF_L3_LEN_BITS != 9);
 	RTE_BUILD_BUG_ON(RTE_MBUF_OUTL2_LEN_BITS != 7);
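
These compile-time asserts exist because the Tx fast path derives hardware descriptor fields from the flag bit positions themselves rather than testing flags one by one. A minimal sketch of that dependency, assuming the layout asserted above:

#include <rte_common.h>
#include <rte_mbuf.h>

static inline uint8_t
nix_l4type(uint64_t ol_flags)
{
	/* Only valid while the L4 type really occupies bits 52-53;
	 * a future relayout of the mbuf flags then fails at build
	 * time instead of silently corrupting descriptors.
	 */
	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_CKSUM != (1ULL << 52));
	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_UDP_CKSUM != (3ULL << 52));
	return (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
}
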
diff --git a/drivers/net/octeontx2/otx2_lookup.c b/drivers/net/octeontx2/otx2_lookup.c
index 4764608c2d..5fa9ae1396 100644
--- a/drivers/net/octeontx2/otx2_lookup.c
+++ b/drivers/net/octeontx2/otx2_lookup.c
@@ -264,9 +264,9 @@ nix_create_rx_ol_flags_array(void *mem)
 		errlev = idx & 0xf;
 		errcode = (idx & 0xff0) >> 4;
 
-		val = PKT_RX_IP_CKSUM_UNKNOWN;
-		val |= PKT_RX_L4_CKSUM_UNKNOWN;
-		val |= PKT_RX_OUTER_L4_CKSUM_UNKNOWN;
+		val = RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN;
 
 		switch (errlev) {
 		case NPC_ERRLEV_RE:
@@ -274,46 +274,46 @@ nix_create_rx_ol_flags_array(void *mem)
 			 * including Outer L2 length mismatch error
 			 */
 			if (errcode) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		case NPC_ERRLEV_LC:
 			if (errcode == NPC_EC_OIP4_CSUM ||
 			    errcode == NPC_EC_IP_FRAG_OFFSET_1) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 			break;
 		case NPC_ERRLEV_LG:
 			if (errcode == NPC_EC_IIP4_CSUM)
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			else
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case NPC_ERRLEV_NIX:
 			if (errcode == NIX_RX_PERRCODE_OL4_CHK ||
 			    errcode == NIX_RX_PERRCODE_OL4_LEN ||
 			    errcode == NIX_RX_PERRCODE_OL4_PORT) {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_BAD;
-				val |= PKT_RX_OUTER_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 			} else if (errcode == NIX_RX_PERRCODE_IL4_CHK ||
 				   errcode == NIX_RX_PERRCODE_IL4_LEN ||
 				   errcode == NIX_RX_PERRCODE_IL4_PORT) {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else if (errcode == NIX_RX_PERRCODE_IL3_LEN ||
 				   errcode == NIX_RX_PERRCODE_OL3_LEN) {
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		}
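
The point of filling this array at init time is that the Rx hot path can translate the hardware error level/code into ol_flags with one load. A sketch of the consumer side, with ol_flags_table standing in for the array built above:

#include <stdint.h>

static inline uint64_t
errcode_to_ol_flags(const uint32_t *ol_flags_table,
		    uint8_t errlev, uint8_t errcode)
{
	/* Index layout mirrors the loop above: errlev in bits 0-3,
	 * errcode in bits 4-11.
	 */
	return ol_flags_table[((uint32_t)errcode << 4) | (errlev & 0xf)];
}
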
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index ffeade5952..5a7d220e22 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -92,7 +92,7 @@ static __rte_always_inline uint64_t
 nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
 {
 	if (w2 & BIT_ULL(21) /* vtag0_gone */) {
-		ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		*f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
 	}
 
@@ -103,7 +103,7 @@ static __rte_always_inline uint64_t
 nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
 {
 	if (w2 & BIT_ULL(23) /* vtag1_gone */) {
-		ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 		mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
 	}
 
@@ -205,10 +205,10 @@ nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 			f1 = vsetq_lane_u32(cq1_w0, f1, 3);
 			f2 = vsetq_lane_u32(cq2_w0, f2, 3);
 			f3 = vsetq_lane_u32(cq3_w0, f3, 3);
-			ol_flags0 = PKT_RX_RSS_HASH;
-			ol_flags1 = PKT_RX_RSS_HASH;
-			ol_flags2 = PKT_RX_RSS_HASH;
-			ol_flags3 = PKT_RX_RSS_HASH;
+			ol_flags0 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags1 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags2 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags3 = RTE_MBUF_F_RX_RSS_HASH;
 		} else {
 			ol_flags0 = 0; ol_flags1 = 0;
 			ol_flags2 = 0; ol_flags3 = 0;
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index ea29aec62f..530bf0082f 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -88,15 +88,15 @@ otx2_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf,
 		 */
 		*otx2_timestamp_dynfield(mbuf, tstamp) =
 				rte_be_to_cpu_64(*tstamp_ptr);
-		/* PKT_RX_IEEE1588_TMST flag needs to be set only in case
+		/* RTE_MBUF_F_RX_IEEE1588_TMST flag needs to be set only in case
 		 * PTP packets are received.
 		 */
 		if (mbuf->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC) {
 			tstamp->rx_tstamp =
 					*otx2_timestamp_dynfield(mbuf, tstamp);
 			tstamp->rx_ready = 1;
-			mbuf->ol_flags |= PKT_RX_IEEE1588_PTP |
-				PKT_RX_IEEE1588_TMST |
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP |
+				RTE_MBUF_F_RX_IEEE1588_TMST |
 				tstamp->rx_tstamp_dynflag;
 		}
 	}
@@ -161,9 +161,9 @@ nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
 	 * 0 to OTX2_FLOW_ACTION_FLAG_DEFAULT - 2
 	 */
 	if (likely(match_id)) {
-		ol_flags |= PKT_RX_FDIR;
+		ol_flags |= RTE_MBUF_F_RX_FDIR;
 		if (match_id != OTX2_FLOW_ACTION_FLAG_DEFAULT) {
-			ol_flags |= PKT_RX_FDIR_ID;
+			ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
 			mbuf->hash.fdir.hi = match_id - 1;
 		}
 	}
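
For context on the match_id handling: the hardware reports the rte_flow MARK value offset by one so that zero can mean "no rule matched", and OTX2_FLOW_ACTION_FLAG_DEFAULT tags mark-less FLAG actions. A sketch of the resulting translation, with simplified parameters standing in for the mbuf fields:

#include <stdint.h>
#include <rte_mbuf.h>

static inline uint64_t
match_id_to_flags(uint16_t match_id, uint16_t flag_default,
		  uint32_t *fdir_hi)
{
	uint64_t ol_flags = 0;

	if (match_id) {
		ol_flags |= RTE_MBUF_F_RX_FDIR;
		if (match_id != flag_default) {
			ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
			*fdir_hi = match_id - 1; /* undo the +1 offset */
		}
	}
	return ol_flags;
}
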
@@ -252,7 +252,7 @@ nix_rx_sec_mbuf_update(const struct nix_rx_parse_s *rx,
 	int i;
 
 	if (unlikely(nix_rx_sec_cptres_get(cq) != OTX2_SEC_COMP_GOOD))
-		return PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED;
+		return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 
 	/* 20 bits of tag would have the SPI */
 	spi = cq->tag & 0xFFFFF;
@@ -266,7 +266,7 @@ nix_rx_sec_mbuf_update(const struct nix_rx_parse_s *rx,
 
 	if (sa->replay_win_sz) {
 		if (cpt_ipsec_ip_antireplay_check(sa, l3_ptr) < 0)
-			return PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED;
+			return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 	}
 
 	l2_ptr_actual = RTE_PTR_ADD(l2_ptr,
@@ -294,7 +294,7 @@ nix_rx_sec_mbuf_update(const struct nix_rx_parse_s *rx,
 	m_len = ip_len + l2_len;
 	m->data_len = m_len;
 	m->pkt_len = m_len;
-	return PKT_RX_SEC_OFFLOAD;
+	return RTE_MBUF_F_RX_SEC_OFFLOAD;
 }
 
 static __rte_always_inline void
@@ -318,7 +318,7 @@ otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_RSS_F) {
 		mbuf->hash.rss = tag;
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 	if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
@@ -326,11 +326,11 @@ otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 		if (rx->vtag0_gone) {
-			ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 			mbuf->vlan_tci = rx->vtag0_tci;
 		}
 		if (rx->vtag1_gone) {
-			ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 			mbuf->vlan_tci_outer = rx->vtag1_tci;
 		}
 	}
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index ff299f00b9..afc47ca888 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -364,26 +364,26 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			const uint8x16_t tbl = {
 				/* [0-15] = il4type:il3type */
 				0x04, /* none (IPv6 assumed) */
-				0x14, /* PKT_TX_TCP_CKSUM (IPv6 assumed) */
-				0x24, /* PKT_TX_SCTP_CKSUM (IPv6 assumed) */
-				0x34, /* PKT_TX_UDP_CKSUM (IPv6 assumed) */
-				0x03, /* PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM */
-				0x23, /* PKT_TX_IP_CKSUM | PKT_TX_SCTP_CKSUM */
-				0x33, /* PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM */
-				0x02, /* PKT_TX_IPV4  */
-				0x12, /* PKT_TX_IPV4 | PKT_TX_TCP_CKSUM */
-				0x22, /* PKT_TX_IPV4 | PKT_TX_SCTP_CKSUM */
-				0x32, /* PKT_TX_IPV4 | PKT_TX_UDP_CKSUM */
-				0x03, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_TCP_CKSUM
+				0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6 assumed) */
+				0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6 assumed) */
+				0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6 assumed) */
+				0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x23, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x33, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x02, /* RTE_MBUF_F_TX_IPV4  */
+				0x12, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x22, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x32, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x03, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_TCP_CKSUM
 				       */
-				0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_SCTP_CKSUM
+				0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_SCTP_CKSUM
 				       */
-				0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_UDP_CKSUM
+				0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_UDP_CKSUM
 				       */
 			};
 
@@ -655,40 +655,40 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 				{
 					/* [0-15] = il4type:il3type */
 					0x04, /* none (IPv6) */
-					0x14, /* PKT_TX_TCP_CKSUM (IPv6) */
-					0x24, /* PKT_TX_SCTP_CKSUM (IPv6) */
-					0x34, /* PKT_TX_UDP_CKSUM (IPv6) */
-					0x03, /* PKT_TX_IP_CKSUM */
-					0x13, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6) */
+					0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6) */
+					0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6) */
+					0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+					0x13, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x02, /* PKT_TX_IPV4 */
-					0x12, /* PKT_TX_IPV4 |
-					       * PKT_TX_TCP_CKSUM
+					0x02, /* RTE_MBUF_F_TX_IPV4 */
+					0x12, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x22, /* PKT_TX_IPV4 |
-					       * PKT_TX_SCTP_CKSUM
+					0x22, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x32, /* PKT_TX_IPV4 |
-					       * PKT_TX_UDP_CKSUM
+					0x32, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x03, /* PKT_TX_IPV4 |
-					       * PKT_TX_IP_CKSUM
+					0x03, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_IP_CKSUM
 					       */
-					0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
 				},
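
The entry comments above also document the indexing scheme: each table byte packs il4type in the high nibble and il3type in the low nibble, and the index is simply bits 52-55 of ol_flags (L4 type, IP_CKSUM, IPV4). For instance:

#include <stdint.h>
#include <rte_mbuf.h>

int main(void)
{
	uint64_t ol = RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
		      RTE_MBUF_F_TX_TCP_CKSUM;
	unsigned int idx = (ol >> 52) & 0xF;

	/* Entry 13 of the table is 0x13: TCP (1) over IPv4+csum (3). */
	return idx != 13;
}
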
 
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
index 486248dff7..c9558e50a7 100644
--- a/drivers/net/octeontx2/otx2_tx.h
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -29,8 +29,8 @@
 	 NIX_TX_OFFLOAD_TSO_F)
 
 #define NIX_UDP_TUN_BITMASK \
-	((1ull << (PKT_TX_TUNNEL_VXLAN >> 45)) | \
-	 (1ull << (PKT_TX_TUNNEL_GENEVE >> 45)))
+	((1ull << (RTE_MBUF_F_TX_TUNNEL_VXLAN >> 45)) | \
+	 (1ull << (RTE_MBUF_F_TX_TUNNEL_GENEVE >> 45)))
 
 #define NIX_LSO_FORMAT_IDX_TSOV4	(0)
 #define NIX_LSO_FORMAT_IDX_TSOV6	(1)
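
NIX_UDP_TUN_BITMASK works because the tunnel type is a 4-bit field starting at bit 45 of ol_flags, so a small bitmap indexed by that field answers "is this a UDP-based tunnel?" in two shifts and a mask. For example:

#include <stdint.h>
#include <rte_mbuf.h>

int main(void)
{
	const uint64_t bitmap =
		(1ull << (RTE_MBUF_F_TX_TUNNEL_VXLAN >> 45)) |
		(1ull << (RTE_MBUF_F_TX_TUNNEL_GENEVE >> 45));
	uint64_t ol_flags = RTE_MBUF_F_TX_TUNNEL_GENEVE;
	uint8_t is_udp_tun = (bitmap >>
		((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) & 0x1;

	return is_udp_tun != 1; /* GENEVE rides on UDP */
}
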
@@ -54,7 +54,7 @@ otx2_nix_xmit_prepare_tstamp(uint64_t *cmd,  const uint64_t *send_mem_desc,
 	if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
 		struct nix_send_mem_s *send_mem;
 		uint16_t off = (no_segdw - 1) << 1;
-		const uint8_t is_ol_tstamp = !(ol_flags & PKT_TX_IEEE1588_TMST);
+		const uint8_t is_ol_tstamp = !(ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST);
 
 		send_mem = (struct nix_send_mem_s *)(cmd + off);
 		if (flags & NIX_TX_MULTI_SEG_F) {
@@ -67,7 +67,7 @@ otx2_nix_xmit_prepare_tstamp(uint64_t *cmd,  const uint64_t *send_mem_desc,
 			rte_compiler_barrier();
 		}
 
-		/* Packets for which PKT_TX_IEEE1588_TMST is not set, tx tstamp
+		/* Packets for which RTE_MBUF_F_TX_IEEE1588_TMST is not set, tx tstamp
 		 * should not be recorded, hence changing the alg type to
 		 * NIX_SENDMEMALG_SET and also changing send mem addr field to
 		 * next 8 bytes as it corrpt the actual tx tstamp registered
@@ -152,12 +152,12 @@ otx2_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 	uint64_t mask, ol_flags = m->ol_flags;
 
 	if (flags & NIX_TX_OFFLOAD_TSO_F &&
-	    (ol_flags & PKT_TX_TCP_SEG)) {
+	    (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uintptr_t mdata = rte_pktmbuf_mtod(m, uintptr_t);
 		uint16_t *iplen, *oiplen, *oudplen;
 		uint16_t lso_sb, paylen;
 
-		mask = -!!(ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6));
+		mask = -!!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6));
 		lso_sb = (mask & (m->outer_l2_len + m->outer_l3_len)) +
 			m->l2_len + m->l3_len + m->l4_len;
 
@@ -166,15 +166,15 @@ otx2_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 
 		/* Get iplen position assuming no tunnel hdr */
 		iplen = (uint16_t *)(mdata + m->l2_len +
-				     (2 << !!(ol_flags & PKT_TX_IPV6)));
+				     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun = (NIX_UDP_TUN_BITMASK >>
-				((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) & 0x1;
+				((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) & 0x1;
 
 			oiplen = (uint16_t *)(mdata + m->outer_l2_len +
-				(2 << !!(ol_flags & PKT_TX_OUTER_IPV6)));
+				(2 << !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)));
 			*oiplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*oiplen) -
 						   paylen);
 
@@ -189,7 +189,7 @@ otx2_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 
 			/* Update iplen position to inner ip hdr */
 			iplen = (uint16_t *)(mdata + lso_sb - m->l3_len -
-				m->l4_len + (2 << !!(ol_flags & PKT_TX_IPV6)));
+				m->l4_len + (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		}
 
 		*iplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*iplen) - paylen);
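
The mask = -!!(...) line above is the usual branchless-conditional idiom: it yields an all-ones mask when any outer-IP flag is set, so the outer header lengths contribute only for tunneled packets. Isolated, with hypothetical parameter names:

#include <stdint.h>

static inline uint16_t
tso_header_len(uint64_t outer_ip_flags, uint16_t outer_hdr_len,
	       uint16_t inner_hdr_len)
{
	uint64_t mask = -(uint64_t)!!outer_ip_flags; /* ~0 if tunneled */

	return (uint16_t)(mask & outer_hdr_len) + inner_hdr_len;
}
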
@@ -239,11 +239,11 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
 	    (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t ol3type =
-			((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L3 */
 		w1.ol3type = ol3type;
@@ -255,15 +255,15 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol4type = csum + (csum << 1);
 
 		/* Inner L3 */
-		w1.il3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_IPV6)) << 2);
+		w1.il3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2);
 		w1.il3ptr = w1.ol4ptr + m->l2_len;
 		w1.il4ptr = w1.il3ptr + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.il3type = w1.il3type + !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.il3type = w1.il3type + !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.il4type =  (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.il4type =  (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 
 		/* In case of no tunnel header use only
 		 * shift IL3/IL4 fields a bit to use
@@ -274,16 +274,16 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 			((w1.u & 0X00000000FFFFFFFF) >> (mask << 4));
 
 	} else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t outer_l2_len = m->outer_l2_len;
 
 		/* Outer L3 */
 		w1.ol3ptr = outer_l2_len;
 		w1.ol4ptr = outer_l2_len + m->outer_l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L4 */
 		w1.ol4type = csum + (csum << 1);
@@ -299,29 +299,29 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol3ptr = l2_len;
 		w1.ol4ptr = l2_len + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.ol4type =  (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.ol4type =  (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 	}
 
 	if (flags & NIX_TX_NEED_EXT_HDR &&
 	    flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
-		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & PKT_TX_VLAN);
+		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
 		/* HW will update ptr after vlan0 update */
 		send_hdr_ext->w1.vlan1_ins_ptr = 12;
 		send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
 
-		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & PKT_TX_QINQ);
+		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_QINQ);
 		/* 2B before end of l2 header */
 		send_hdr_ext->w1.vlan0_ins_ptr = 12;
 		send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
 	}
 
 	if (flags & NIX_TX_OFFLOAD_TSO_F &&
-	    (ol_flags & PKT_TX_TCP_SEG)) {
+	    (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uint16_t lso_sb;
 		uint64_t mask;
 
@@ -332,18 +332,18 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		send_hdr_ext->w0.lso = 1;
 		send_hdr_ext->w0.lso_mps = m->tso_segsz;
 		send_hdr_ext->w0.lso_format =
-			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 		w1.ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun = (NIX_UDP_TUN_BITMASK >>
-				((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) & 0x1;
+				((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) & 0x1;
 			uint8_t shift = is_udp_tun ? 32 : 0;
 
-			shift += (!!(ol_flags & PKT_TX_OUTER_IPV6) << 4);
-			shift += (!!(ol_flags & PKT_TX_IPV6) << 3);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
 
 			w1.il4type = NIX_SENDL4TYPE_TCP_CKSUM;
 			w1.ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 050c6f5c32..1b4dfff3c3 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -1639,9 +1639,9 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					    "L4 csum failed, flags = 0x%x\n",
 					    parse_flag);
 				rxq->rx_hw_errors++;
-				ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+				ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 
 			if (unlikely(qede_check_tunn_csum_l3(parse_flag))) {
@@ -1649,9 +1649,9 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					"Outer L3 csum failed, flags = 0x%x\n",
 					parse_flag);
 				rxq->rx_hw_errors++;
-				ol_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+				ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 
 			flags = fp_cqe->tunnel_pars_flags.flags;
@@ -1684,31 +1684,31 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 				    "L4 csum failed, flags = 0x%x\n",
 				    parse_flag);
 			rxq->rx_hw_errors++;
-			ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		} else {
-			ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		}
 		if (unlikely(qede_check_notunn_csum_l3(rx_mb, parse_flag))) {
 			PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x\n",
 				   parse_flag);
 			rxq->rx_hw_errors++;
-			ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		} else {
-			ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		}
 
 		if (unlikely(CQE_HAS_VLAN(parse_flag) ||
 			     CQE_HAS_OUTER_VLAN(parse_flag))) {
 			/* Note: FW doesn't indicate Q-in-Q packet */
-			ol_flags |= PKT_RX_VLAN;
+			ol_flags |= RTE_MBUF_F_RX_VLAN;
 			if (qdev->vlan_strip_flg) {
-				ol_flags |= PKT_RX_VLAN_STRIPPED;
+				ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 				rx_mb->vlan_tci = vlan_tci;
 			}
 		}
 
 		if (rss_enable) {
-			ol_flags |= PKT_RX_RSS_HASH;
+			ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 			rx_mb->hash.rss = rss_hash;
 		}
 
@@ -1837,7 +1837,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			tpa_info = &rxq->tpa_info[cqe_start_tpa->tpa_agg_index];
 			tpa_start_flg = true;
 			/* Mark it as LRO packet */
-			ol_flags |= PKT_RX_LRO;
+			ol_flags |= RTE_MBUF_F_RX_LRO;
 			/* In split mode,  seg_len is same as len_on_first_bd
 			 * and bw_ext_bd_len_list will be empty since there are
 			 * no additional buffers
@@ -1908,9 +1908,9 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					    "L4 csum failed, flags = 0x%x\n",
 					    parse_flag);
 				rxq->rx_hw_errors++;
-				ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+				ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 
 			if (unlikely(qede_check_tunn_csum_l3(parse_flag))) {
@@ -1918,9 +1918,9 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					"Outer L3 csum failed, flags = 0x%x\n",
 					parse_flag);
 				  rxq->rx_hw_errors++;
-				  ol_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				  ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				  ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+				  ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 
 			if (tpa_start_flg)
@@ -1957,32 +1957,32 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 				    "L4 csum failed, flags = 0x%x\n",
 				    parse_flag);
 			rxq->rx_hw_errors++;
-			ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		} else {
-			ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		}
 		if (unlikely(qede_check_notunn_csum_l3(rx_mb, parse_flag))) {
 			PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x\n",
 				   parse_flag);
 			rxq->rx_hw_errors++;
-			ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		} else {
-			ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		}
 
 		if (CQE_HAS_VLAN(parse_flag) ||
 		    CQE_HAS_OUTER_VLAN(parse_flag)) {
 			/* Note: FW doesn't indicate Q-in-Q packet */
-			ol_flags |= PKT_RX_VLAN;
+			ol_flags |= RTE_MBUF_F_RX_VLAN;
 			if (qdev->vlan_strip_flg) {
-				ol_flags |= PKT_RX_VLAN_STRIPPED;
+				ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 				rx_mb->vlan_tci = vlan_tci;
 			}
 		}
 
 		/* RSS Hash */
 		if (qdev->rss_enable) {
-			ol_flags |= PKT_RX_RSS_HASH;
+			ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 			rx_mb->hash.rss = rss_hash;
 		}
 
@@ -2178,7 +2178,7 @@ qede_xmit_prep_pkts(__rte_unused void *p_txq, struct rte_mbuf **tx_pkts,
 	for (i = 0; i < nb_pkts; i++) {
 		m = tx_pkts[i];
 		ol_flags = m->ol_flags;
-		if (ol_flags & PKT_TX_TCP_SEG) {
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			if (m->nb_segs >= ETH_TX_MAX_BDS_PER_LSO_PACKET) {
 				rte_errno = EINVAL;
 				break;
@@ -2196,14 +2196,14 @@ qede_xmit_prep_pkts(__rte_unused void *p_txq, struct rte_mbuf **tx_pkts,
 		}
 		if (ol_flags & QEDE_TX_OFFLOAD_NOTSUP_MASK) {
 			/* We support only limited tunnel protocols */
-			if (ol_flags & PKT_TX_TUNNEL_MASK) {
+			if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 				uint64_t temp;
 
-				temp = ol_flags & PKT_TX_TUNNEL_MASK;
-				if (temp == PKT_TX_TUNNEL_VXLAN ||
-				    temp == PKT_TX_TUNNEL_GENEVE ||
-				    temp == PKT_TX_TUNNEL_MPLSINUDP ||
-				    temp == PKT_TX_TUNNEL_GRE)
+				temp = ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK;
+				if (temp == RTE_MBUF_F_TX_TUNNEL_VXLAN ||
+				    temp == RTE_MBUF_F_TX_TUNNEL_GENEVE ||
+				    temp == RTE_MBUF_F_TX_TUNNEL_MPLSINUDP ||
+				    temp == RTE_MBUF_F_TX_TUNNEL_GRE)
 					continue;
 			}
 
@@ -2311,13 +2311,13 @@ qede_xmit_pkts_regular(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			<< ETH_TX_DATA_1ST_BD_PKT_LEN_SHIFT;
 
 		/* Offload the IP checksum in the hardware */
-		if (tx_ol_flags & PKT_TX_IP_CKSUM)
+		if (tx_ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			bd1_bd_flags_bf |=
 				1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
 
 		/* L4 checksum offload (tcp or udp) */
-		if ((tx_ol_flags & (PKT_TX_IPV4 | PKT_TX_IPV6)) &&
-		    (tx_ol_flags & (PKT_TX_UDP_CKSUM | PKT_TX_TCP_CKSUM)))
+		if ((tx_ol_flags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)) &&
+		    (tx_ol_flags & (RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM)))
 			bd1_bd_flags_bf |=
 				1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
 
@@ -2456,7 +2456,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 * offloads. Don't rely on pkt_type marked by Rx, instead use
 		 * tx_ol_flags to decide.
 		 */
-		tunn_flg = !!(tx_ol_flags & PKT_TX_TUNNEL_MASK);
+		tunn_flg = !!(tx_ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK);
 
 		if (tunn_flg) {
 			/* Check against max which is Tunnel IPv6 + ext */
@@ -2477,8 +2477,8 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			}
 
 			/* Outer IP checksum offload */
-			if (tx_ol_flags & (PKT_TX_OUTER_IP_CKSUM |
-					   PKT_TX_OUTER_IPV4)) {
+			if (tx_ol_flags & (RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+					   RTE_MBUF_F_TX_OUTER_IPV4)) {
 				bd1_bd_flags_bf |=
 					ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_MASK <<
 					ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_SHIFT;
@@ -2490,8 +2490,8 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			 * and inner layers  lengths need to be provided in
 			 * mbuf.
 			 */
-			if ((tx_ol_flags & PKT_TX_TUNNEL_MASK) ==
-						PKT_TX_TUNNEL_MPLSINUDP) {
+			if ((tx_ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ==
+						RTE_MBUF_F_TX_TUNNEL_MPLSINUDP) {
 				mplsoudp_flg = true;
 #ifdef RTE_LIBRTE_QEDE_DEBUG_TX
 				qede_mpls_tunn_tx_sanity_check(mbuf, txq);
@@ -2524,18 +2524,18 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				    1 << ETH_TX_DATA_2ND_BD_TUNN_IPV6_EXT_SHIFT;
 
 				/* Mark inner IPv6 if present */
-				if (tx_ol_flags & PKT_TX_IPV6)
+				if (tx_ol_flags & RTE_MBUF_F_TX_IPV6)
 					bd2_bf1 |=
 						1 << ETH_TX_DATA_2ND_BD_TUNN_INNER_IPV6_SHIFT;
 
 				/* Inner L4 offsets */
-				if ((tx_ol_flags & (PKT_TX_IPV4 | PKT_TX_IPV6)) &&
-				     (tx_ol_flags & (PKT_TX_UDP_CKSUM |
-							PKT_TX_TCP_CKSUM))) {
+				if ((tx_ol_flags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)) &&
+				     (tx_ol_flags & (RTE_MBUF_F_TX_UDP_CKSUM |
+							RTE_MBUF_F_TX_TCP_CKSUM))) {
 					/* Determines if BD3 is needed */
 					tunn_ipv6_ext_flg = true;
-					if ((tx_ol_flags & PKT_TX_L4_MASK) ==
-							PKT_TX_UDP_CKSUM) {
+					if ((tx_ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
+							RTE_MBUF_F_TX_UDP_CKSUM) {
 						bd2_bf1 |=
 							1 << ETH_TX_DATA_2ND_BD_L4_UDP_SHIFT;
 					}
@@ -2553,7 +2553,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			} /* End MPLSoUDP */
 		} /* End Tunnel handling */
 
-		if (tx_ol_flags & PKT_TX_TCP_SEG) {
+		if (tx_ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			lso_flg = true;
 			if (unlikely(txq->nb_tx_avail <
 						ETH_TX_MIN_BDS_PER_LSO_PKT))
@@ -2570,7 +2570,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			bd1_bd_flags_bf |= 1 << ETH_TX_1ST_BD_FLAGS_LSO_SHIFT;
 			bd1_bd_flags_bf |=
 					1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
-			/* PKT_TX_TCP_SEG implies PKT_TX_TCP_CKSUM */
+			/* RTE_MBUF_F_TX_TCP_SEG implies RTE_MBUF_F_TX_TCP_CKSUM */
 			bd1_bd_flags_bf |=
 					1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
 			mss = rte_cpu_to_le_16(mbuf->tso_segsz);
@@ -2587,14 +2587,14 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (tx_ol_flags & PKT_TX_VLAN) {
+		if (tx_ol_flags & RTE_MBUF_F_TX_VLAN) {
 			vlan = rte_cpu_to_le_16(mbuf->vlan_tci);
 			bd1_bd_flags_bf |=
 			    1 << ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_SHIFT;
 		}
 
 		/* Offload the IP checksum in the hardware */
-		if (tx_ol_flags & PKT_TX_IP_CKSUM) {
+		if (tx_ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 			bd1_bd_flags_bf |=
 				1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
 			/* There's no DPDK flag to request outer-L4 csum
@@ -2602,8 +2602,8 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			 * csum offload is requested then we need to force
 			 * recalculation of L4 tunnel header csum also.
 			 */
-			if (tunn_flg && ((tx_ol_flags & PKT_TX_TUNNEL_MASK) !=
-							PKT_TX_TUNNEL_GRE)) {
+			if (tunn_flg && ((tx_ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) !=
+							RTE_MBUF_F_TX_TUNNEL_GRE)) {
 				bd1_bd_flags_bf |=
 					ETH_TX_1ST_BD_FLAGS_TUNN_L4_CSUM_MASK <<
 					ETH_TX_1ST_BD_FLAGS_TUNN_L4_CSUM_SHIFT;
@@ -2611,8 +2611,8 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* L4 checksum offload (tcp or udp) */
-		if ((tx_ol_flags & (PKT_TX_IPV4 | PKT_TX_IPV6)) &&
-		    (tx_ol_flags & (PKT_TX_UDP_CKSUM | PKT_TX_TCP_CKSUM))) {
+		if ((tx_ol_flags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)) &&
+		    (tx_ol_flags & (RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM))) {
 			bd1_bd_flags_bf |=
 				1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
 			/* There's no DPDK flag to request outer-L4 csum
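
Stepping back from the mechanical rename: the TSO path above keys off a handful of fields the application must fill in. A hedged sender-side sketch (function name illustrative), relying on the documented rule that RTE_MBUF_F_TX_TCP_SEG implies TCP checksum offload:

	#include <rte_ether.h>
	#include <rte_ip.h>
	#include <rte_mbuf.h>
	#include <rte_tcp.h>

	/* Request IPv4 TSO; the PMD segments at tso_segsz and recomputes
	 * IP/TCP checksums per segment (TCP_SEG implies TCP_CKSUM).
	 */
	static void
	request_ipv4_tso(struct rte_mbuf *m, uint16_t mss)
	{
		m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
			       RTE_MBUF_F_TX_TCP_SEG;
		m->tso_segsz = mss;
		m->l2_len = sizeof(struct rte_ether_hdr);
		m->l3_len = sizeof(struct rte_ipv4_hdr);
		m->l4_len = sizeof(struct rte_tcp_hdr); /* assumes no TCP options */
	}
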
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 025ed6fff2..828df1cf99 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -144,20 +144,20 @@
 
 #define QEDE_PKT_TYPE_TUNN_MAX_TYPE			0x20 /* 2^5 */
 
-#define QEDE_TX_CSUM_OFFLOAD_MASK (PKT_TX_IP_CKSUM              | \
-				   PKT_TX_TCP_CKSUM             | \
-				   PKT_TX_UDP_CKSUM             | \
-				   PKT_TX_OUTER_IP_CKSUM        | \
-				   PKT_TX_TCP_SEG		| \
-				   PKT_TX_IPV4			| \
-				   PKT_TX_IPV6)
+#define QEDE_TX_CSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM              | \
+				   RTE_MBUF_F_TX_TCP_CKSUM             | \
+				   RTE_MBUF_F_TX_UDP_CKSUM             | \
+				   RTE_MBUF_F_TX_OUTER_IP_CKSUM        | \
+				   RTE_MBUF_F_TX_TCP_SEG		| \
+				   RTE_MBUF_F_TX_IPV4			| \
+				   RTE_MBUF_F_TX_IPV6)
 
 #define QEDE_TX_OFFLOAD_MASK (QEDE_TX_CSUM_OFFLOAD_MASK | \
-			      PKT_TX_VLAN		| \
-			      PKT_TX_TUNNEL_MASK)
+			      RTE_MBUF_F_TX_VLAN		| \
+			      RTE_MBUF_F_TX_TUNNEL_MASK)
 
 #define QEDE_TX_OFFLOAD_NOTSUP_MASK \
-	(PKT_TX_OFFLOAD_MASK ^ QEDE_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ QEDE_TX_OFFLOAD_MASK)
 
 /* TPA related structures */
 struct qede_agg_info {
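
The NOTSUP mask just above deserves a note: RTE_MBUF_F_TX_OFFLOAD_MASK covers every defined Tx offload flag, so XOR-ing out the PMD's supported set leaves exactly the unsupported bits. A standalone sketch of the check as the tx_prepare path applies it:

	#include <errno.h>
	#include <rte_mbuf.h>

	/* Reject any packet requesting a Tx offload outside 'supported';
	 * mirrors the QEDE_TX_OFFLOAD_NOTSUP_MASK test in qede_xmit_prep_pkts().
	 */
	static int
	tx_offload_check(const struct rte_mbuf *m, uint64_t supported)
	{
		uint64_t notsup = RTE_MBUF_F_TX_OFFLOAD_MASK ^ supported;

		return (m->ol_flags & notsup) ? -ENOTSUP : 0;
	}
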
diff --git a/drivers/net/sfc/sfc_dp_tx.h b/drivers/net/sfc/sfc_dp_tx.h
index 777807985b..20f3b4eaba 100644
--- a/drivers/net/sfc/sfc_dp_tx.h
+++ b/drivers/net/sfc/sfc_dp_tx.h
@@ -241,7 +241,7 @@ sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
 			   unsigned int nb_vlan_descs)
 {
 	unsigned int descs_required = m->nb_segs;
-	unsigned int tcph_off = ((m->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	unsigned int tcph_off = ((m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 				 m->outer_l2_len + m->outer_l3_len : 0) +
 				m->l2_len + m->l3_len;
 	unsigned int header_len = tcph_off + m->l4_len;
@@ -279,21 +279,21 @@ sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
 			 * to proceed with additional checks below.
 			 * Otherwise, throw an error.
 			 */
-			if ((m->ol_flags & PKT_TX_TCP_SEG) == 0 ||
+			if ((m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0 ||
 			    tso_bounce_buffer_len == 0)
 				return EINVAL;
 		}
 	}
 
-	if (m->ol_flags & PKT_TX_TCP_SEG) {
-		switch (m->ol_flags & PKT_TX_TUNNEL_MASK) {
+	if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
+		switch (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 		case 0:
 			break;
-		case PKT_TX_TUNNEL_VXLAN:
+		case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 			/* FALLTHROUGH */
-		case PKT_TX_TUNNEL_GENEVE:
+		case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 			if (!(m->ol_flags &
-			      (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6)))
+			      (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6)))
 				return EINVAL;
 		}
 
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 1bf04f565a..35e1650851 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -203,7 +203,7 @@ sfc_ef100_rx_nt_or_inner_l4_csum(const efx_word_t class)
 	return EFX_WORD_FIELD(class,
 			      ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L4_CSUM) ==
 		ESE_GZ_RH_HCLASS_L4_CSUM_GOOD ?
-		PKT_RX_L4_CKSUM_GOOD : PKT_RX_L4_CKSUM_BAD;
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD : RTE_MBUF_F_RX_L4_CKSUM_BAD;
 }
 
 static inline uint64_t
@@ -212,7 +212,7 @@ sfc_ef100_rx_tun_outer_l4_csum(const efx_word_t class)
 	return EFX_WORD_FIELD(class,
 			      ESF_GZ_RX_PREFIX_HCLASS_TUN_OUTER_L4_CSUM) ==
 		ESE_GZ_RH_HCLASS_L4_CSUM_GOOD ?
-		PKT_RX_OUTER_L4_CKSUM_GOOD : PKT_RX_OUTER_L4_CKSUM_BAD;
+		RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD : RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 }
 
 static uint32_t
@@ -268,11 +268,11 @@ sfc_ef100_rx_class_decode(const efx_word_t class, uint64_t *ol_flags)
 			ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L3_CLASS)) {
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4GOOD:
 			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			*ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4BAD:
 			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			*ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP6:
 			ptype |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
@@ -309,7 +309,7 @@ sfc_ef100_rx_class_decode(const efx_word_t class, uint64_t *ol_flags)
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4BAD:
 			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+			*ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP6:
 			ptype |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
@@ -320,11 +320,11 @@ sfc_ef100_rx_class_decode(const efx_word_t class, uint64_t *ol_flags)
 			ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L3_CLASS)) {
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4GOOD:
 			ptype |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			*ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4BAD:
 			ptype |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			*ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP6:
 			ptype |= RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN;
@@ -401,7 +401,7 @@ sfc_ef100_rx_prefix_to_offloads(const struct sfc_ef100_rxq *rxq,
 	if ((rxq->flags & SFC_EF100_RXQ_RSS_HASH) &&
 	    EFX_TEST_OWORD_BIT(rx_prefix[0],
 			       ESF_GZ_RX_PREFIX_RSS_HASH_VALID_LBN)) {
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		/* EFX_OWORD_FIELD converts little-endian to CPU */
 		m->hash.rss = EFX_OWORD_FIELD(rx_prefix[0],
 					      ESF_GZ_RX_PREFIX_RSS_HASH);
@@ -414,7 +414,7 @@ sfc_ef100_rx_prefix_to_offloads(const struct sfc_ef100_rxq *rxq,
 		user_mark = EFX_OWORD_FIELD(rx_prefix[0],
 					    ESF_GZ_RX_PREFIX_USER_MARK);
 		if (user_mark != SFC_EF100_USER_MARK_INVALID) {
-			ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+			ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 			m->hash.fdir.hi = user_mark;
 		}
 	}
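
A reminder that matters when reviewing renames like the ones above: the Rx checksum status is a 2-bit field, so consumers compare against RTE_MBUF_F_RX_L4_CKSUM_MASK rather than testing single bits. Reader-side sketch, not part of this patch:

	#include <stdbool.h>
	#include <rte_mbuf.h>

	/* True only when hardware positively validated the L4 checksum;
	 * _UNKNOWN, _NONE and _BAD all fail this strict test.
	 */
	static inline bool
	rx_l4_cksum_good(const struct rte_mbuf *m)
	{
		return (m->ol_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) ==
		       RTE_MBUF_F_RX_L4_CKSUM_GOOD;
	}
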
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 53d01612d1..78c16168ed 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -98,7 +98,7 @@ static int
 sfc_ef100_tx_prepare_pkt_tso(struct sfc_ef100_txq * const txq,
 			     struct rte_mbuf *m)
 {
-	size_t header_len = ((m->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	size_t header_len = ((m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			     m->outer_l2_len + m->outer_l3_len : 0) +
 			    m->l2_len + m->l3_len + m->l4_len;
 	size_t payload_len = m->pkt_len - header_len;
@@ -106,12 +106,12 @@ sfc_ef100_tx_prepare_pkt_tso(struct sfc_ef100_txq * const txq,
 	unsigned int nb_payload_descs;
 
 #ifdef RTE_LIBRTE_SFC_EFX_DEBUG
-	switch (m->ol_flags & PKT_TX_TUNNEL_MASK) {
+	switch (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 	case 0:
 		/* FALLTHROUGH */
-	case PKT_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		/* FALLTHROUGH */
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		break;
 	default:
 		return ENOTSUP;
@@ -164,11 +164,11 @@ sfc_ef100_tx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 * pseudo-header checksum which is calculated below,
 		 * but requires contiguous packet headers.
 		 */
-		if ((m->ol_flags & PKT_TX_TUNNEL_MASK) &&
-		    (m->ol_flags & PKT_TX_L4_MASK)) {
+		if ((m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) &&
+		    (m->ol_flags & RTE_MBUF_F_TX_L4_MASK)) {
 			calc_phdr_cksum = true;
 			max_nb_header_segs = 1;
-		} else if (m->ol_flags & PKT_TX_TCP_SEG) {
+		} else if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			max_nb_header_segs = txq->tso_max_nb_header_descs;
 		}
 
@@ -180,7 +180,7 @@ sfc_ef100_tx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			break;
 		}
 
-		if (m->ol_flags & PKT_TX_TCP_SEG) {
+		if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			ret = sfc_ef100_tx_prepare_pkt_tso(txq, m);
 			if (unlikely(ret != 0)) {
 				rte_errno = ret;
@@ -197,7 +197,7 @@ sfc_ef100_tx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			 * and does not require any assistance.
 			 */
 			ret = rte_net_intel_cksum_flags_prepare(m,
-					m->ol_flags & ~PKT_TX_IP_CKSUM);
+					m->ol_flags & ~RTE_MBUF_F_TX_IP_CKSUM);
 			if (unlikely(ret != 0)) {
 				rte_errno = -ret;
 				break;
@@ -315,10 +315,10 @@ sfc_ef100_tx_qdesc_cso_inner_l3(uint64_t tx_tunnel)
 	uint8_t inner_l3;
 
 	switch (tx_tunnel) {
-	case PKT_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		inner_l3 = ESE_GZ_TX_DESC_CS_INNER_L3_VXLAN;
 		break;
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		inner_l3 = ESE_GZ_TX_DESC_CS_INNER_L3_GENEVE;
 		break;
 	default:
@@ -338,25 +338,25 @@ sfc_ef100_tx_qdesc_send_create(const struct rte_mbuf *m, efx_oword_t *tx_desc)
 	uint16_t part_cksum_w;
 	uint16_t l4_offset_w;
 
-	if ((m->ol_flags & PKT_TX_TUNNEL_MASK) == 0) {
-		outer_l3 = (m->ol_flags & PKT_TX_IP_CKSUM);
-		outer_l4 = (m->ol_flags & PKT_TX_L4_MASK);
+	if ((m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) == 0) {
+		outer_l3 = (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
+		outer_l4 = (m->ol_flags & RTE_MBUF_F_TX_L4_MASK);
 		inner_l3 = ESE_GZ_TX_DESC_CS_INNER_L3_OFF;
 		partial_en = ESE_GZ_TX_DESC_CSO_PARTIAL_EN_OFF;
 		part_cksum_w = 0;
 		l4_offset_w = 0;
 	} else {
-		outer_l3 = (m->ol_flags & PKT_TX_OUTER_IP_CKSUM);
-		outer_l4 = (m->ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		outer_l3 = (m->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
+		outer_l4 = (m->ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		inner_l3 = sfc_ef100_tx_qdesc_cso_inner_l3(m->ol_flags &
-							   PKT_TX_TUNNEL_MASK);
+							   RTE_MBUF_F_TX_TUNNEL_MASK);
 
-		switch (m->ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_TCP_CKSUM:
+		switch (m->ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			partial_en = ESE_GZ_TX_DESC_CSO_PARTIAL_EN_TCP;
 			part_cksum_w = offsetof(struct rte_tcp_hdr, cksum) >> 1;
 			break;
-		case PKT_TX_UDP_CKSUM:
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			partial_en = ESE_GZ_TX_DESC_CSO_PARTIAL_EN_UDP;
 			part_cksum_w = offsetof(struct rte_udp_hdr,
 						dgram_cksum) >> 1;
@@ -382,7 +382,7 @@ sfc_ef100_tx_qdesc_send_create(const struct rte_mbuf *m, efx_oword_t *tx_desc)
 			ESF_GZ_TX_SEND_CSO_OUTER_L4, outer_l4,
 			ESF_GZ_TX_DESC_TYPE, ESE_GZ_TX_DESC_TYPE_SEND);
 
-	if (m->ol_flags & PKT_TX_VLAN) {
+	if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		efx_oword_t tx_desc_extra_fields;
 
 		EFX_POPULATE_OWORD_2(tx_desc_extra_fields,
@@ -423,7 +423,7 @@ sfc_ef100_tx_qdesc_tso_create(const struct rte_mbuf *m,
 	 */
 	int ed_inner_ip_id = ESE_GZ_TX_DESC_IP4_ID_INC_MOD16;
 	uint8_t inner_l3 = sfc_ef100_tx_qdesc_cso_inner_l3(
-					m->ol_flags & PKT_TX_TUNNEL_MASK);
+					m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK);
 
 	EFX_POPULATE_OWORD_10(*tx_desc,
 			ESF_GZ_TX_TSO_MSS, m->tso_segsz,
@@ -464,7 +464,7 @@ sfc_ef100_tx_qdesc_tso_create(const struct rte_mbuf *m,
 
 	EFX_OR_OWORD(*tx_desc, tx_desc_extra_fields);
 
-	if (m->ol_flags & PKT_TX_VLAN) {
+	if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		EFX_POPULATE_OWORD_2(tx_desc_extra_fields,
 				ESF_GZ_TX_TSO_VLAN_INSERT_EN, 1,
 				ESF_GZ_TX_TSO_VLAN_INSERT_TCI, m->vlan_tci);
@@ -505,7 +505,7 @@ sfc_ef100_tx_pkt_descs_max(const struct rte_mbuf *m)
 #define SFC_MBUF_SEG_LEN_MAX		UINT16_MAX
 	RTE_BUILD_BUG_ON(sizeof(m->data_len) != 2);
 
-	if (m->ol_flags & PKT_TX_TCP_SEG) {
+	if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		/* Tx TSO descriptor */
 		extra_descs++;
 		/*
@@ -552,7 +552,7 @@ sfc_ef100_xmit_tso_pkt(struct sfc_ef100_txq * const txq,
 	size_t header_len;
 	size_t remaining_hdr_len;
 
-	if (m->ol_flags & PKT_TX_TUNNEL_MASK) {
+	if (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 		outer_iph_off = m->outer_l2_len;
 		outer_udph_off = outer_iph_off + m->outer_l3_len;
 	} else {
@@ -671,7 +671,7 @@ sfc_ef100_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				break;
 		}
 
-		if (m_seg->ol_flags & PKT_TX_TCP_SEG) {
+		if (m_seg->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			m_seg = sfc_ef100_xmit_tso_pkt(txq, m_seg, &added);
 		} else {
 			id = added++ & txq->ptr_mask;
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 991329e86f..eda468df3f 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -374,13 +374,13 @@ sfc_ef10_essb_rx_get_pending(struct sfc_ef10_essb_rxq *rxq,
 			rte_pktmbuf_data_len(m) = pkt_len;
 
 			m->ol_flags |=
-				(PKT_RX_RSS_HASH *
+				(RTE_MBUF_F_RX_RSS_HASH *
 				 !!EFX_TEST_QWORD_BIT(*qwordp,
 					ES_EZ_ESSB_RX_PREFIX_HASH_VALID_LBN)) |
-				(PKT_RX_FDIR_ID *
+				(RTE_MBUF_F_RX_FDIR_ID *
 				 !!EFX_TEST_QWORD_BIT(*qwordp,
 					ES_EZ_ESSB_RX_PREFIX_MARK_VALID_LBN)) |
-				(PKT_RX_FDIR *
+				(RTE_MBUF_F_RX_FDIR *
 				 !!EFX_TEST_QWORD_BIT(*qwordp,
 					ES_EZ_ESSB_RX_PREFIX_MATCH_FLAG_LBN));
 
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 49a7d4fb42..8ddb830642 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -330,7 +330,7 @@ sfc_ef10_rx_process_event(struct sfc_ef10_rxq *rxq, efx_qword_t rx_ev,
 	/* Mask RSS hash offload flag if RSS is not enabled */
 	sfc_ef10_rx_ev_to_offloads(rx_ev, m,
 				   (rxq->flags & SFC_EF10_RXQ_RSS_HASH) ?
-				   ~0ull : ~PKT_RX_RSS_HASH);
+				   ~0ull : ~RTE_MBUF_F_RX_RSS_HASH);
 
 	/* data_off already moved past pseudo header */
 	pseudo_hdr = (uint8_t *)m->buf_addr + RTE_PKTMBUF_HEADROOM;
@@ -338,7 +338,7 @@ sfc_ef10_rx_process_event(struct sfc_ef10_rxq *rxq, efx_qword_t rx_ev,
 	/*
 	 * Always get RSS hash from pseudo header to avoid
 	 * condition/branching. If it is valid or not depends on
-	 * PKT_RX_RSS_HASH in m->ol_flags.
+	 * RTE_MBUF_F_RX_RSS_HASH in m->ol_flags.
 	 */
 	m->hash.rss = sfc_ef10_rx_pseudo_hdr_get_hash(pseudo_hdr);
 
@@ -392,7 +392,7 @@ sfc_ef10_rx_process_event(struct sfc_ef10_rxq *rxq, efx_qword_t rx_ev,
 		/*
 		 * Always get RSS hash from pseudo header to avoid
 		 * condition/branching. If it is valid or not depends on
-		 * PKT_RX_RSS_HASH in m->ol_flags.
+		 * RTE_MBUF_F_RX_RSS_HASH in m->ol_flags.
 		 */
 		m->hash.rss = sfc_ef10_rx_pseudo_hdr_get_hash(pseudo_hdr);
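
The ol_mask trick visible above -- pass ~0ull, or a mask with RTE_MBUF_F_RX_RSS_HASH cleared -- lets the event decoder set flags unconditionally while the caller strips inapplicable ones with a single AND. In sketch form (the decoded flag word is assumed already computed):

	#include <rte_mbuf.h>

	static inline void
	apply_rx_flags(struct rte_mbuf *m, uint64_t decoded, uint64_t queue_mask)
	{
		/* queue_mask is ~0ull, or ~RTE_MBUF_F_RX_RSS_HASH when the
		 * queue has no RSS hash; computed once, not per packet.
		 */
		m->ol_flags = decoded & queue_mask;
	}
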
 
diff --git a/drivers/net/sfc/sfc_ef10_rx_ev.h b/drivers/net/sfc/sfc_ef10_rx_ev.h
index a7f5b9168b..821e2227bb 100644
--- a/drivers/net/sfc/sfc_ef10_rx_ev.h
+++ b/drivers/net/sfc/sfc_ef10_rx_ev.h
@@ -27,9 +27,9 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 			   uint64_t ol_mask)
 {
 	uint32_t tun_ptype = 0;
-	/* Which event bit is mapped to PKT_RX_IP_CKSUM_* */
+	/* Which event bit is mapped to RTE_MBUF_F_RX_IP_CKSUM_* */
 	int8_t ip_csum_err_bit;
-	/* Which event bit is mapped to PKT_RX_L4_CKSUM_* */
+	/* Which event bit is mapped to RTE_MBUF_F_RX_L4_CKSUM_* */
 	int8_t l4_csum_err_bit;
 	uint32_t l2_ptype = 0;
 	uint32_t l3_ptype = 0;
@@ -76,7 +76,7 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 		l4_csum_err_bit = ESF_EZ_RX_TCP_UDP_INNER_CHKSUM_ERR_LBN;
 		if (unlikely(EFX_TEST_QWORD_BIT(rx_ev,
 						ESF_DZ_RX_IPCKSUM_ERR_LBN)))
-			ol_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 	}
 
 	switch (EFX_QWORD_FIELD(rx_ev, ESF_DZ_RX_ETH_TAG_CLASS)) {
@@ -105,9 +105,9 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 	case ESE_DZ_L3_CLASS_IP4:
 		l3_ptype = (tun_ptype == 0) ? RTE_PTYPE_L3_IPV4_EXT_UNKNOWN :
 			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN;
-		ol_flags |= PKT_RX_RSS_HASH |
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH |
 			((EFX_TEST_QWORD_BIT(rx_ev, ip_csum_err_bit)) ?
-			 PKT_RX_IP_CKSUM_BAD : PKT_RX_IP_CKSUM_GOOD);
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD : RTE_MBUF_F_RX_IP_CKSUM_GOOD);
 		break;
 	case ESE_DZ_L3_CLASS_IP6_FRAG:
 		l4_ptype = (tun_ptype == 0) ? RTE_PTYPE_L4_FRAG :
@@ -116,7 +116,7 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 	case ESE_DZ_L3_CLASS_IP6:
 		l3_ptype = (tun_ptype == 0) ? RTE_PTYPE_L3_IPV6_EXT_UNKNOWN :
 			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN;
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		break;
 	case ESE_DZ_L3_CLASS_ARP:
 		/* Override Layer 2 packet type */
@@ -144,7 +144,7 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 			RTE_PTYPE_INNER_L4_TCP;
 		ol_flags |=
 			(EFX_TEST_QWORD_BIT(rx_ev, l4_csum_err_bit)) ?
-			PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD;
+			RTE_MBUF_F_RX_L4_CKSUM_BAD : RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		break;
 	case ESE_FZ_L4_CLASS_UDP:
 		 RTE_BUILD_BUG_ON(ESE_FZ_L4_CLASS_UDP != ESE_DE_L4_CLASS_UDP);
@@ -152,7 +152,7 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 			RTE_PTYPE_INNER_L4_UDP;
 		ol_flags |=
 			(EFX_TEST_QWORD_BIT(rx_ev, l4_csum_err_bit)) ?
-			PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD;
+			RTE_MBUF_F_RX_L4_CKSUM_BAD : RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		break;
 	case ESE_FZ_L4_CLASS_UNKNOWN:
 		 RTE_BUILD_BUG_ON(ESE_FZ_L4_CLASS_UNKNOWN !=
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index 277fe6c6ca..e58f8bbe8c 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -341,7 +341,7 @@ sfc_ef10_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 * the size limit. Perform the check in debug mode since MTU
 		 * more than 9k is not supported, but the limit here is 16k-1.
 		 */
-		if (!(m->ol_flags & PKT_TX_TCP_SEG)) {
+		if (!(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			struct rte_mbuf *m_seg;
 
 			for (m_seg = m; m_seg != NULL; m_seg = m_seg->next) {
@@ -371,7 +371,7 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 		      unsigned int *added, unsigned int *dma_desc_space,
 		      bool *reap_done)
 {
-	size_t iph_off = ((m_seg->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	size_t iph_off = ((m_seg->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			  m_seg->outer_l2_len + m_seg->outer_l3_len : 0) +
 			 m_seg->l2_len;
 	size_t tcph_off = iph_off + m_seg->l3_len;
@@ -489,10 +489,10 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 	 *
 	 * The same concern applies to outer UDP datagram length field.
 	 */
-	switch (m_seg->ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_VXLAN:
+	switch (m_seg->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		/* FALLTHROUGH */
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		sfc_tso_outer_udp_fix_len(first_m_seg, hdr_addr);
 		break;
 	default:
@@ -506,10 +506,10 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 	 * filled in in TSO mbuf. Use zero IPID if there is no IPv4 flag.
 	 * If the packet is still IPv4, HW will simply start from zero IPID.
 	 */
-	if (first_m_seg->ol_flags & PKT_TX_IPV4)
+	if (first_m_seg->ol_flags & RTE_MBUF_F_TX_IPV4)
 		packet_id = sfc_tso_ip4_get_ipid(hdr_addr, iph_off);
 
-	if (first_m_seg->ol_flags & PKT_TX_OUTER_IPV4)
+	if (first_m_seg->ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)
 		outer_packet_id = sfc_tso_ip4_get_ipid(hdr_addr,
 						first_m_seg->outer_l2_len);
 
@@ -648,7 +648,7 @@ sfc_ef10_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		if (likely(pktp + 1 != pktp_end))
 			rte_mbuf_prefetch_part1(pktp[1]);
 
-		if (m_seg->ol_flags & PKT_TX_TCP_SEG) {
+		if (m_seg->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			int rc;
 
 			rc = sfc_ef10_xmit_tso_pkt(txq, m_seg, &added,
@@ -805,7 +805,7 @@ sfc_ef10_simple_prepare_pkts(__rte_unused void *tx_queue,
 
 		/* ef10_simple does not support TSO and VLAN insertion */
 		if (unlikely(m->ol_flags &
-			     (PKT_TX_TCP_SEG | PKT_TX_VLAN))) {
+			     (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_VLAN))) {
 			rte_errno = ENOTSUP;
 			break;
 		}
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 280e8a61f9..66024f3e53 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -148,15 +148,15 @@ sfc_efx_rx_desc_flags_to_offload_flags(const unsigned int desc_flags)
 
 	switch (desc_flags & (EFX_PKT_IPV4 | EFX_CKSUM_IPV4)) {
 	case (EFX_PKT_IPV4 | EFX_CKSUM_IPV4):
-		mbuf_flags |= PKT_RX_IP_CKSUM_GOOD;
+		mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		break;
 	case EFX_PKT_IPV4:
-		mbuf_flags |= PKT_RX_IP_CKSUM_BAD;
+		mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		break;
 	default:
-		RTE_BUILD_BUG_ON(PKT_RX_IP_CKSUM_UNKNOWN != 0);
-		SFC_ASSERT((mbuf_flags & PKT_RX_IP_CKSUM_MASK) ==
-			   PKT_RX_IP_CKSUM_UNKNOWN);
+		RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN != 0);
+		SFC_ASSERT((mbuf_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) ==
+			   RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN);
 		break;
 	}
 
@@ -164,16 +164,16 @@ sfc_efx_rx_desc_flags_to_offload_flags(const unsigned int desc_flags)
 		 (EFX_PKT_TCP | EFX_PKT_UDP | EFX_CKSUM_TCPUDP))) {
 	case (EFX_PKT_TCP | EFX_CKSUM_TCPUDP):
 	case (EFX_PKT_UDP | EFX_CKSUM_TCPUDP):
-		mbuf_flags |= PKT_RX_L4_CKSUM_GOOD;
+		mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		break;
 	case EFX_PKT_TCP:
 	case EFX_PKT_UDP:
-		mbuf_flags |= PKT_RX_L4_CKSUM_BAD;
+		mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		break;
 	default:
-		RTE_BUILD_BUG_ON(PKT_RX_L4_CKSUM_UNKNOWN != 0);
-		SFC_ASSERT((mbuf_flags & PKT_RX_L4_CKSUM_MASK) ==
-			   PKT_RX_L4_CKSUM_UNKNOWN);
+		RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN != 0);
+		SFC_ASSERT((mbuf_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) ==
+			   RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN);
 		break;
 	}
 
@@ -224,7 +224,7 @@ sfc_efx_rx_set_rss_hash(struct sfc_efx_rxq *rxq, unsigned int flags,
 						      EFX_RX_HASHALG_TOEPLITZ,
 						      mbuf_data);
 
-		m->ol_flags |= PKT_RX_RSS_HASH;
+		m->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 }
 
diff --git a/drivers/net/sfc/sfc_tso.c b/drivers/net/sfc/sfc_tso.c
index 29d0836b65..927e351a6e 100644
--- a/drivers/net/sfc/sfc_tso.c
+++ b/drivers/net/sfc/sfc_tso.c
@@ -153,7 +153,7 @@ sfc_efx_tso_do(struct sfc_efx_txq *txq, unsigned int idx,
 	 * IPv4 flag. If the packet is still IPv4, HW will simply start from
 	 * zero IPID.
 	 */
-	if (m->ol_flags & PKT_TX_IPV4)
+	if (m->ol_flags & RTE_MBUF_F_TX_IPV4)
 		packet_id = sfc_tso_ip4_get_ipid(tsoh, nh_off);
 
 	/* Handle TCP header */
diff --git a/drivers/net/sfc/sfc_tso.h b/drivers/net/sfc/sfc_tso.h
index f081e856e1..9029ad1590 100644
--- a/drivers/net/sfc/sfc_tso.h
+++ b/drivers/net/sfc/sfc_tso.h
@@ -59,7 +59,7 @@ sfc_tso_innermost_ip_fix_len(const struct rte_mbuf *m, uint8_t *tsoh,
 	size_t field_ofst;
 	rte_be16_t len;
 
-	if (m->ol_flags & PKT_TX_IPV4) {
+	if (m->ol_flags & RTE_MBUF_F_TX_IPV4) {
 		field_ofst = offsetof(struct rte_ipv4_hdr, total_length);
 		len = rte_cpu_to_be_16(m->l3_len + ip_payload_len);
 	} else {
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 936ae815ea..fd79e67efa 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -766,7 +766,7 @@ static unsigned int
 sfc_efx_tx_maybe_insert_tag(struct sfc_efx_txq *txq, struct rte_mbuf *m,
 			    efx_desc_t **pend)
 {
-	uint16_t this_tag = ((m->ol_flags & PKT_TX_VLAN) ?
+	uint16_t this_tag = ((m->ol_flags & RTE_MBUF_F_TX_VLAN) ?
 			     m->vlan_tci : 0);
 
 	if (this_tag == txq->hw_vlan_tci)
@@ -869,7 +869,7 @@ sfc_efx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 */
 		pkt_descs += sfc_efx_tx_maybe_insert_tag(txq, m_seg, &pend);
 
-		if (m_seg->ol_flags & PKT_TX_TCP_SEG) {
+		if (m_seg->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			/*
 			 * We expect correct 'pkt->l[2, 3, 4]_len' values
 			 * to be set correctly by the caller
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index 046f17669d..19236e574e 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -340,8 +340,8 @@ tap_verify_csum(struct rte_mbuf *mbuf)
 
 		cksum = ~rte_raw_cksum(iph, l3_len);
 		mbuf->ol_flags |= cksum ?
-			PKT_RX_IP_CKSUM_BAD :
-			PKT_RX_IP_CKSUM_GOOD;
+			RTE_MBUF_F_RX_IP_CKSUM_BAD :
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	} else if (l3 == RTE_PTYPE_L3_IPV6) {
 		struct rte_ipv6_hdr *iph = l3_hdr;
 
@@ -376,7 +376,7 @@ tap_verify_csum(struct rte_mbuf *mbuf)
 					 * indicates that the sender did not
 					 * generate one [RFC 768].
 					 */
-					mbuf->ol_flags |= PKT_RX_L4_CKSUM_NONE;
+					mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_NONE;
 					return;
 				}
 			}
@@ -387,7 +387,7 @@ tap_verify_csum(struct rte_mbuf *mbuf)
 								 l4_hdr);
 		}
 		mbuf->ol_flags |= cksum_ok ?
-			PKT_RX_L4_CKSUM_GOOD : PKT_RX_L4_CKSUM_BAD;
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD : RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	}
 }
 
@@ -544,7 +544,7 @@ tap_tx_l3_cksum(char *packet, uint64_t ol_flags, unsigned int l2_len,
 {
 	void *l3_hdr = packet + l2_len;
 
-	if (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_IPV4)) {
+	if (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_IPV4)) {
 		struct rte_ipv4_hdr *iph = l3_hdr;
 		uint16_t cksum;
 
@@ -552,18 +552,18 @@ tap_tx_l3_cksum(char *packet, uint64_t ol_flags, unsigned int l2_len,
 		cksum = rte_raw_cksum(iph, l3_len);
 		iph->hdr_checksum = (cksum == 0xffff) ? cksum : ~cksum;
 	}
-	if (ol_flags & PKT_TX_L4_MASK) {
+	if (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
 		void *l4_hdr;
 
 		l4_hdr = packet + l2_len + l3_len;
-		if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM)
+		if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM)
 			*l4_cksum = &((struct rte_udp_hdr *)l4_hdr)->dgram_cksum;
-		else if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM)
+		else if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM)
 			*l4_cksum = &((struct rte_tcp_hdr *)l4_hdr)->cksum;
 		else
 			return;
 		**l4_cksum = 0;
-		if (ol_flags & PKT_TX_IPV4)
+		if (ol_flags & RTE_MBUF_F_TX_IPV4)
 			*l4_phdr_cksum = rte_ipv4_phdr_cksum(l3_hdr, 0);
 		else
 			*l4_phdr_cksum = rte_ipv6_phdr_cksum(l3_hdr, 0);
@@ -627,9 +627,9 @@ tap_write_mbufs(struct tx_queue *txq, uint16_t num_mbufs,
 
 		nb_segs = mbuf->nb_segs;
 		if (txq->csum &&
-		    ((mbuf->ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_IPV4) ||
-		     (mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM ||
-		     (mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM))) {
+		    ((mbuf->ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_IPV4) ||
+		      (mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM ||
+		      (mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM))) {
 			is_cksum = 1;
 
 			/* Support only packets with at least layer 4
@@ -719,12 +719,12 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		uint16_t hdrs_len;
 		uint64_t tso;
 
-		tso = mbuf_in->ol_flags & PKT_TX_TCP_SEG;
+		tso = mbuf_in->ol_flags & RTE_MBUF_F_TX_TCP_SEG;
 		if (tso) {
 			struct rte_gso_ctx *gso_ctx = &txq->gso_ctx;
 
 			/* TCP segmentation implies TCP checksum offload */
-			mbuf_in->ol_flags |= PKT_TX_TCP_CKSUM;
+			mbuf_in->ol_flags |= RTE_MBUF_F_TX_TCP_CKSUM;
 
 			/* gso size is calculated without RTE_ETHER_CRC_LEN */
 			hdrs_len = mbuf_in->l2_len + mbuf_in->l3_len +
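
The tap checksum path above also shows the pseudo-header convention that the renamed Tx flags keep: with RTE_MBUF_F_TX_TCP_CKSUM or RTE_MBUF_F_TX_UDP_CKSUM set, the L4 checksum field is expected to be seeded with the pseudo-header checksum. Sender-side sketch for IPv4/TCP (function name illustrative):

	#include <rte_ip.h>
	#include <rte_mbuf.h>
	#include <rte_tcp.h>

	static void
	seed_tcp_phdr_cksum(struct rte_mbuf *m)
	{
		struct rte_ipv4_hdr *ip = rte_pktmbuf_mtod_offset(m,
				struct rte_ipv4_hdr *, m->l2_len);
		struct rte_tcp_hdr *tcp = rte_pktmbuf_mtod_offset(m,
				struct rte_tcp_hdr *, m->l2_len + m->l3_len);

		m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_CKSUM;
		/* rte_ipv4_phdr_cksum() also honours RTE_MBUF_F_TX_TCP_SEG */
		tcp->cksum = rte_ipv4_phdr_cksum(ip, m->ol_flags);
	}
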
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 91e09ff8d5..4a433435c6 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -42,10 +42,10 @@ fill_sq_desc_header(union sq_entry_t *entry, struct rte_mbuf *pkt)
 	ol_flags = pkt->ol_flags & NICVF_TX_OFFLOAD_MASK;
 	if (unlikely(ol_flags)) {
 		/* L4 cksum */
-		uint64_t l4_flags = ol_flags & PKT_TX_L4_MASK;
-		if (l4_flags == PKT_TX_TCP_CKSUM)
+		uint64_t l4_flags = ol_flags & RTE_MBUF_F_TX_L4_MASK;
+		if (l4_flags == RTE_MBUF_F_TX_TCP_CKSUM)
 			sqe.hdr.csum_l4 = SEND_L4_CSUM_TCP;
-		else if (l4_flags == PKT_TX_UDP_CKSUM)
+		else if (l4_flags == RTE_MBUF_F_TX_UDP_CKSUM)
 			sqe.hdr.csum_l4 = SEND_L4_CSUM_UDP;
 		else
 			sqe.hdr.csum_l4 = SEND_L4_CSUM_DISABLE;
@@ -54,7 +54,7 @@ fill_sq_desc_header(union sq_entry_t *entry, struct rte_mbuf *pkt)
 		sqe.hdr.l4_offset = pkt->l3_len + pkt->l2_len;
 
 		/* L3 cksum */
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			sqe.hdr.csum_l3 = 1;
 	}
 
@@ -343,9 +343,9 @@ static inline uint64_t __rte_hot
 nicvf_set_olflags(const cqe_rx_word0_t cqe_rx_w0)
 {
 	static const uint64_t flag_table[3] __rte_cache_aligned = {
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_UNKNOWN,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
 	};
 
 	const uint8_t idx = (cqe_rx_w0.err_opcode == CQE_RX_ERR_L4_CHK) << 1 |
@@ -409,7 +409,7 @@ nicvf_rx_offload(cqe_rx_word0_t cqe_rx_w0, cqe_rx_word2_t cqe_rx_w2,
 {
 	if (likely(cqe_rx_w0.rss_alg)) {
 		pkt->hash.rss = cqe_rx_w2.rss_tag;
-		pkt->ol_flags |= PKT_RX_RSS_HASH;
+		pkt->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 
 	}
 }
@@ -454,8 +454,8 @@ nicvf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
 			pkt->ol_flags = nicvf_set_olflags(cqe_rx_w0);
 		if (flag & NICVF_RX_OFFLOAD_VLAN_STRIP) {
 			if (unlikely(cqe_rx_w0.vlan_stripped)) {
-				pkt->ol_flags |= PKT_RX_VLAN
-							| PKT_RX_VLAN_STRIPPED;
+				pkt->ol_flags |= RTE_MBUF_F_RX_VLAN
+							| RTE_MBUF_F_RX_VLAN_STRIPPED;
 				pkt->vlan_tci =
 					rte_cpu_to_be_16(cqe_rx_w2.vlan_tci);
 			}
@@ -549,8 +549,8 @@ nicvf_process_cq_mseg_entry(struct cqe_rx_t *cqe_rx,
 		pkt->ol_flags = nicvf_set_olflags(cqe_rx_w0);
 	if (flag & NICVF_RX_OFFLOAD_VLAN_STRIP) {
 		if (unlikely(cqe_rx_w0.vlan_stripped)) {
-			pkt->ol_flags |= PKT_RX_VLAN
-				| PKT_RX_VLAN_STRIPPED;
+			pkt->ol_flags |= RTE_MBUF_F_RX_VLAN
+				| RTE_MBUF_F_RX_VLAN_STRIPPED;
 			pkt->vlan_tci = rte_cpu_to_be_16(cqe_rx_w2.vlan_tci);
 		}
 	}
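
nicvf_set_olflags() above is a good example of the branch-free pattern several PMDs use: precompute combined flag words and index them with hardware status bits. A generic sketch of the technique (table contents illustrative):

	#include <rte_common.h>
	#include <rte_mbuf.h>

	/* Indexed by a 2-bit hardware error code: 0 = both checksums good,
	 * 1 = IP bad, 2 = L4 bad (values mirror the nicvf table above).
	 */
	static const uint64_t err_to_olflags[4] __rte_cache_aligned = {
		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN,
		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
		0, /* defensive; the driver's 2-bit index never reaches 3 */
	};

	static inline void
	set_rx_cksum_flags(struct rte_mbuf *m, unsigned int hw_err)
	{
		m->ol_flags = err_to_olflags[hw_err & 3];
	}
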
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index d6ed660b4e..3e1d40bbeb 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -12,7 +12,7 @@
 #define NICVF_RX_OFFLOAD_CKSUM          0x2
 #define NICVF_RX_OFFLOAD_VLAN_STRIP     0x4
 
-#define NICVF_TX_OFFLOAD_MASK (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK)
+#define NICVF_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK)
 
 #if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
 static inline uint16_t __attribute__((const))
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index b267da462b..bb300bae40 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -1136,10 +1136,10 @@ txgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 	rxq = dev->data->rx_queues[queue];
 
 	if (on) {
-		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		rxq->vlan_flags = RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
 	} else {
-		rxq->vlan_flags = PKT_RX_VLAN;
+		rxq->vlan_flags = RTE_MBUF_F_RX_VLAN;
 		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index f8c1ad3937..33774bc6fa 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -43,30 +43,30 @@
 #include "txgbe_rxtx.h"
 
 #ifdef RTE_LIBRTE_IEEE1588
-#define TXGBE_TX_IEEE1588_TMST PKT_TX_IEEE1588_TMST
+#define TXGBE_TX_IEEE1588_TMST RTE_MBUF_F_TX_IEEE1588_TMST
 #else
 #define TXGBE_TX_IEEE1588_TMST 0
 #endif
 
 /* Bit Mask to indicate what bits required for building TX context */
-static const u64 TXGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM |
-		PKT_TX_OUTER_IPV6 |
-		PKT_TX_OUTER_IPV4 |
-		PKT_TX_IPV6 |
-		PKT_TX_IPV4 |
-		PKT_TX_VLAN |
-		PKT_TX_L4_MASK |
-		PKT_TX_TCP_SEG |
-		PKT_TX_TUNNEL_MASK |
-		PKT_TX_OUTER_IP_CKSUM |
-		PKT_TX_OUTER_UDP_CKSUM |
+static const u64 TXGBE_TX_OFFLOAD_MASK = (RTE_MBUF_F_TX_IP_CKSUM |
+		RTE_MBUF_F_TX_OUTER_IPV6 |
+		RTE_MBUF_F_TX_OUTER_IPV4 |
+		RTE_MBUF_F_TX_IPV6 |
+		RTE_MBUF_F_TX_IPV4 |
+		RTE_MBUF_F_TX_VLAN |
+		RTE_MBUF_F_TX_L4_MASK |
+		RTE_MBUF_F_TX_TCP_SEG |
+		RTE_MBUF_F_TX_TUNNEL_MASK |
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+		RTE_MBUF_F_TX_OUTER_UDP_CKSUM |
 #ifdef RTE_LIB_SECURITY
-		PKT_TX_SEC_OFFLOAD |
+		RTE_MBUF_F_TX_SEC_OFFLOAD |
 #endif
 		TXGBE_TX_IEEE1588_TMST);
 
 #define TXGBE_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ TXGBE_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ TXGBE_TX_OFFLOAD_MASK)
 
 /*
  * Prefetch a cache line into all cache levels.
@@ -339,7 +339,7 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 	type_tucmd_mlhl |= TXGBE_TXD_PTID(tx_offload.ptid);
 
 	/* check if TCP segmentation required for this packet */
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		tx_offload_mask.l2_len |= ~0;
 		tx_offload_mask.l3_len |= ~0;
 		tx_offload_mask.l4_len |= ~0;
@@ -347,25 +347,25 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 		mss_l4len_idx |= TXGBE_TXD_MSS(tx_offload.tso_segsz);
 		mss_l4len_idx |= TXGBE_TXD_L4LEN(tx_offload.l4_len);
 	} else { /* no TSO, check if hardware checksum is needed */
-		if (ol_flags & PKT_TX_IP_CKSUM) {
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 		}
 
-		switch (ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_UDP_CKSUM:
+		switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			mss_l4len_idx |=
 				TXGBE_TXD_L4LEN(sizeof(struct rte_udp_hdr));
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 			break;
-		case PKT_TX_TCP_CKSUM:
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			mss_l4len_idx |=
 				TXGBE_TXD_L4LEN(sizeof(struct rte_tcp_hdr));
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			mss_l4len_idx |=
 				TXGBE_TXD_L4LEN(sizeof(struct rte_sctp_hdr));
 			tx_offload_mask.l2_len |= ~0;
@@ -378,7 +378,7 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 
 	vlan_macip_lens = TXGBE_TXD_IPLEN(tx_offload.l3_len >> 1);
 
-	if (ol_flags & PKT_TX_TUNNEL_MASK) {
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 		tx_offload_mask.outer_tun_len |= ~0;
 		tx_offload_mask.outer_l2_len |= ~0;
 		tx_offload_mask.outer_l3_len |= ~0;
@@ -386,16 +386,16 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 		tunnel_seed = TXGBE_TXD_ETUNLEN(tx_offload.outer_tun_len >> 1);
 		tunnel_seed |= TXGBE_TXD_EIPLEN(tx_offload.outer_l3_len >> 2);
 
-		switch (ol_flags & PKT_TX_TUNNEL_MASK) {
-		case PKT_TX_TUNNEL_IPIP:
+		switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+		case RTE_MBUF_F_TX_TUNNEL_IPIP:
 			/* for non UDP / GRE tunneling, set to 0b */
 			break;
-		case PKT_TX_TUNNEL_VXLAN:
-		case PKT_TX_TUNNEL_VXLAN_GPE:
-		case PKT_TX_TUNNEL_GENEVE:
+		case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+		case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE:
+		case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 			tunnel_seed |= TXGBE_TXD_ETYPE_UDP;
 			break;
-		case PKT_TX_TUNNEL_GRE:
+		case RTE_MBUF_F_TX_TUNNEL_GRE:
 			tunnel_seed |= TXGBE_TXD_ETYPE_GRE;
 			break;
 		default:
@@ -408,13 +408,13 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 		vlan_macip_lens |= TXGBE_TXD_MACLEN(tx_offload.l2_len);
 	}
 
-	if (ol_flags & PKT_TX_VLAN) {
+	if (ol_flags & RTE_MBUF_F_TX_VLAN) {
 		tx_offload_mask.vlan_tci |= ~0;
 		vlan_macip_lens |= TXGBE_TXD_VLAN(tx_offload.vlan_tci);
 	}
 
 #ifdef RTE_LIB_SECURITY
-	if (ol_flags & PKT_TX_SEC_OFFLOAD) {
+	if (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
 		union txgbe_crypto_tx_desc_md *md =
 				(union txgbe_crypto_tx_desc_md *)mdata;
 		tunnel_seed |= TXGBE_TXD_IPSEC_SAIDX(md->sa_idx);
@@ -477,26 +477,26 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 {
 	uint32_t tmp = 0;
 
-	if ((ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM) {
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) != RTE_MBUF_F_TX_L4_NO_CKSUM) {
 		tmp |= TXGBE_TXD_CC;
 		tmp |= TXGBE_TXD_L4CS;
 	}
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		tmp |= TXGBE_TXD_CC;
 		tmp |= TXGBE_TXD_IPCS;
 	}
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) {
 		tmp |= TXGBE_TXD_CC;
 		tmp |= TXGBE_TXD_EIPCS;
 	}
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		tmp |= TXGBE_TXD_CC;
 		/* implies IPv4 cksum */
-		if (ol_flags & PKT_TX_IPV4)
+		if (ol_flags & RTE_MBUF_F_TX_IPV4)
 			tmp |= TXGBE_TXD_IPCS;
 		tmp |= TXGBE_TXD_L4CS;
 	}
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		tmp |= TXGBE_TXD_CC;
 
 	return tmp;
@@ -507,11 +507,11 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
 {
 	uint32_t cmdtype = 0;
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		cmdtype |= TXGBE_TXD_VLE;
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		cmdtype |= TXGBE_TXD_TSE;
-	if (ol_flags & PKT_TX_MACSEC)
+	if (ol_flags & RTE_MBUF_F_TX_MACSEC)
 		cmdtype |= TXGBE_TXD_LINKSEC;
 	return cmdtype;
 }
@@ -525,67 +525,67 @@ tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
 		return txgbe_encode_ptype(ptype);
 
 	/* Only support flags in TXGBE_TX_OFFLOAD_MASK */
-	tun = !!(oflags & PKT_TX_TUNNEL_MASK);
+	tun = !!(oflags & RTE_MBUF_F_TX_TUNNEL_MASK);
 
 	/* L2 level */
 	ptype = RTE_PTYPE_L2_ETHER;
-	if (oflags & PKT_TX_VLAN)
+	if (oflags & RTE_MBUF_F_TX_VLAN)
 		ptype |= RTE_PTYPE_L2_ETHER_VLAN;
 
 	/* L3 level */
-	if (oflags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM))
+	if (oflags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM))
 		ptype |= RTE_PTYPE_L3_IPV4;
-	else if (oflags & (PKT_TX_OUTER_IPV6))
+	else if (oflags & (RTE_MBUF_F_TX_OUTER_IPV6))
 		ptype |= RTE_PTYPE_L3_IPV6;
 
-	if (oflags & (PKT_TX_IPV4 | PKT_TX_IP_CKSUM))
+	if (oflags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM))
 		ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV4 : RTE_PTYPE_L3_IPV4);
-	else if (oflags & (PKT_TX_IPV6))
+	else if (oflags & (RTE_MBUF_F_TX_IPV6))
 		ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV6 : RTE_PTYPE_L3_IPV6);
 
 	/* L4 level */
-	switch (oflags & (PKT_TX_L4_MASK)) {
-	case PKT_TX_TCP_CKSUM:
+	switch (oflags & (RTE_MBUF_F_TX_L4_MASK)) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		ptype |= (tun ? RTE_PTYPE_INNER_L4_UDP : RTE_PTYPE_L4_UDP);
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		ptype |= (tun ? RTE_PTYPE_INNER_L4_SCTP : RTE_PTYPE_L4_SCTP);
 		break;
 	}
 
-	if (oflags & PKT_TX_TCP_SEG)
+	if (oflags & RTE_MBUF_F_TX_TCP_SEG)
 		ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
 
 	/* Tunnel */
-	switch (oflags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_VXLAN:
+	switch (oflags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_VXLAN;
 		ptype |= RTE_PTYPE_INNER_L2_ETHER;
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_GRE;
 		ptype |= RTE_PTYPE_INNER_L2_ETHER;
 		break;
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_GENEVE;
 		ptype |= RTE_PTYPE_INNER_L2_ETHER;
 		break;
-	case PKT_TX_TUNNEL_VXLAN_GPE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_VXLAN_GPE;
 		break;
-	case PKT_TX_TUNNEL_IPIP:
-	case PKT_TX_TUNNEL_IP:
+	case RTE_MBUF_F_TX_TUNNEL_IPIP:
+	case RTE_MBUF_F_TX_TUNNEL_IP:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_IP;
@@ -669,19 +669,19 @@ txgbe_get_tun_len(struct rte_mbuf *mbuf)
 	const struct txgbe_genevehdr *gh;
 	uint8_t tun_len;
 
-	switch (mbuf->ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_IPIP:
+	switch (mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_IPIP:
 		tun_len = 0;
 		break;
-	case PKT_TX_TUNNEL_VXLAN:
-	case PKT_TX_TUNNEL_VXLAN_GPE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE:
 		tun_len = sizeof(struct txgbe_udphdr)
 			+ sizeof(struct txgbe_vxlanhdr);
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		tun_len = sizeof(struct txgbe_nvgrehdr);
 		break;
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		gh = rte_pktmbuf_read(mbuf,
 			mbuf->outer_l2_len + mbuf->outer_l3_len,
 			sizeof(genevehdr), &genevehdr);
@@ -751,7 +751,7 @@ txgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 */
 		ol_flags = tx_pkt->ol_flags;
 #ifdef RTE_LIB_SECURITY
-		use_ipsec = txq->using_ipsec && (ol_flags & PKT_TX_SEC_OFFLOAD);
+		use_ipsec = txq->using_ipsec && (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD);
 #endif
 
 		/* If hardware offload required */
@@ -895,20 +895,20 @@ txgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		cmd_type_len = TXGBE_TXD_FCS;
 
 #ifdef RTE_LIBRTE_IEEE1588
-		if (ol_flags & PKT_TX_IEEE1588_TMST)
+		if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 			cmd_type_len |= TXGBE_TXD_1588;
 #endif
 
 		olinfo_status = 0;
 		if (tx_ol_req) {
-			if (ol_flags & PKT_TX_TCP_SEG) {
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 				/* when TSO is on, paylen in descriptor is the
 				 * not the packet len but the tcp payload len
 				 */
 				pkt_len -= (tx_offload.l2_len +
 					tx_offload.l3_len + tx_offload.l4_len);
 				pkt_len -=
-					(tx_pkt->ol_flags & PKT_TX_TUNNEL_MASK)
+					(tx_pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 					? tx_offload.outer_l2_len +
 					  tx_offload.outer_l3_len : 0;
 			}
@@ -1076,14 +1076,14 @@ static inline uint64_t
 txgbe_rxd_pkt_info_to_pkt_flags(uint32_t pkt_info)
 {
 	static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
-		0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-		0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
-		PKT_RX_RSS_HASH, 0, 0, 0,
-		0, 0, 0,  PKT_RX_FDIR,
+		0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+		0, RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH,
+		RTE_MBUF_F_RX_RSS_HASH, 0, 0, 0,
+		0, 0, 0,  RTE_MBUF_F_RX_FDIR,
 	};
 #ifdef RTE_LIBRTE_IEEE1588
 	static uint64_t ip_pkt_etqf_map[8] = {
-		0, 0, 0, PKT_RX_IEEE1588_PTP,
+		0, 0, 0, RTE_MBUF_F_RX_IEEE1588_PTP,
 		0, 0, 0, 0,
 	};
 	int etfid = txgbe_etflt_id(TXGBE_RXD_PTID(pkt_info));
@@ -1108,12 +1108,12 @@ rx_desc_status_to_pkt_flags(uint32_t rx_status, uint64_t vlan_flags)
 	 * That can be found from rte_eth_rxmode.offloads flag
 	 */
 	pkt_flags = (rx_status & TXGBE_RXD_STAT_VLAN &&
-		     vlan_flags & PKT_RX_VLAN_STRIPPED)
+		     vlan_flags & RTE_MBUF_F_RX_VLAN_STRIPPED)
 		    ? vlan_flags : 0;
 
 #ifdef RTE_LIBRTE_IEEE1588
 	if (rx_status & TXGBE_RXD_STAT_1588)
-		pkt_flags = pkt_flags | PKT_RX_IEEE1588_TMST;
+		pkt_flags = pkt_flags | RTE_MBUF_F_RX_IEEE1588_TMST;
 #endif
 	return pkt_flags;
 }
@@ -1126,24 +1126,24 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
 	/* checksum offload can't be disabled */
 	if (rx_status & TXGBE_RXD_STAT_IPCS) {
 		pkt_flags |= (rx_status & TXGBE_RXD_ERR_IPCS
-				? PKT_RX_IP_CKSUM_BAD : PKT_RX_IP_CKSUM_GOOD);
+				? RTE_MBUF_F_RX_IP_CKSUM_BAD : RTE_MBUF_F_RX_IP_CKSUM_GOOD);
 	}
 
 	if (rx_status & TXGBE_RXD_STAT_L4CS) {
 		pkt_flags |= (rx_status & TXGBE_RXD_ERR_L4CS
-				? PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD);
+				? RTE_MBUF_F_RX_L4_CKSUM_BAD : RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 	}
 
 	if (rx_status & TXGBE_RXD_STAT_EIPCS &&
 	    rx_status & TXGBE_RXD_ERR_EIPCS) {
-		pkt_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		pkt_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 	}
 
 #ifdef RTE_LIB_SECURITY
 	if (rx_status & TXGBE_RXD_STAT_SECP) {
-		pkt_flags |= PKT_RX_SEC_OFFLOAD;
+		pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
 		if (rx_status & TXGBE_RXD_ERR_SECERR)
-			pkt_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 	}
 #endif
 
@@ -1226,10 +1226,10 @@ txgbe_rx_scan_hw_ring(struct txgbe_rx_queue *rxq)
 				txgbe_rxd_pkt_info_to_pkt_type(pkt_info[j],
 				rxq->pkt_type_mask);
 
-			if (likely(pkt_flags & PKT_RX_RSS_HASH))
+			if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH))
 				mb->hash.rss =
 					rte_le_to_cpu_32(rxdp[j].qw0.dw1);
-			else if (pkt_flags & PKT_RX_FDIR) {
+			else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 				mb->hash.fdir.hash =
 					rte_le_to_cpu_16(rxdp[j].qw0.hi.csum) &
 					TXGBE_ATR_HASH_MASK;
@@ -1541,7 +1541,7 @@ txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->port = rxq->port_id;
 
 		pkt_info = rte_le_to_cpu_32(rxd.qw0.dw0);
-		/* Only valid if PKT_RX_VLAN set in pkt_flags */
+		/* Only valid if RTE_MBUF_F_RX_VLAN set in pkt_flags */
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.qw1.hi.tag);
 
 		pkt_flags = rx_desc_status_to_pkt_flags(staterr,
@@ -1552,9 +1552,9 @@ txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->packet_type = txgbe_rxd_pkt_info_to_pkt_type(pkt_info,
 						       rxq->pkt_type_mask);
 
-		if (likely(pkt_flags & PKT_RX_RSS_HASH)) {
+		if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH)) {
 			rxm->hash.rss = rte_le_to_cpu_32(rxd.qw0.dw1);
-		} else if (pkt_flags & PKT_RX_FDIR) {
+		} else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 			rxm->hash.fdir.hash =
 				rte_le_to_cpu_16(rxd.qw0.hi.csum) &
 				TXGBE_ATR_HASH_MASK;
@@ -1616,7 +1616,7 @@ txgbe_fill_cluster_head_buf(struct rte_mbuf *head, struct txgbe_rx_desc *desc,
 
 	head->port = rxq->port_id;
 
-	/* The vlan_tci field is only valid when PKT_RX_VLAN is
+	/* The vlan_tci field is only valid when RTE_MBUF_F_RX_VLAN is
 	 * set in the pkt_flags field.
 	 */
 	head->vlan_tci = rte_le_to_cpu_16(desc->qw1.hi.tag);
@@ -1628,9 +1628,9 @@ txgbe_fill_cluster_head_buf(struct rte_mbuf *head, struct txgbe_rx_desc *desc,
 	head->packet_type = txgbe_rxd_pkt_info_to_pkt_type(pkt_info,
 						rxq->pkt_type_mask);
 
-	if (likely(pkt_flags & PKT_RX_RSS_HASH)) {
+	if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH)) {
 		head->hash.rss = rte_le_to_cpu_32(desc->qw0.dw1);
-	} else if (pkt_flags & PKT_RX_FDIR) {
+	} else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 		head->hash.fdir.hash = rte_le_to_cpu_16(desc->qw0.hi.csum)
 				& TXGBE_ATR_HASH_MASK;
 		head->hash.fdir.id = rte_le_to_cpu_16(desc->qw0.hi.ipid);
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 2be5edea86..98bdad3e9f 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -444,7 +444,7 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		struct rte_mbuf *m = bufs[i];
 
 		/* Do VLAN tag insertion */
-		if (m->ol_flags & PKT_TX_VLAN) {
+		if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			int error = rte_vlan_insert(&m);
 			if (unlikely(error)) {
 				rte_pktmbuf_free(m);
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 63f70fc13d..b235749840 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -926,7 +926,7 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 	if (hdr->flags == 0 && hdr->gso_type == VIRTIO_NET_HDR_GSO_NONE)
 		return 0;
 
-	m->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+	m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	ptype = rte_net_get_ptype(m, &hdr_lens, RTE_PTYPE_ALL_MASK);
 	m->packet_type = ptype;
@@ -938,7 +938,7 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 	if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
 		hdrlen = hdr_lens.l2_len + hdr_lens.l3_len + hdr_lens.l4_len;
 		if (hdr->csum_start <= hdrlen && l4_supported) {
-			m->ol_flags |= PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_NONE;
 		} else {
 			/* Unknown proto or tunnel, do sw cksum. We can assume
 			 * the cksum field is in the first segment since the
@@ -960,7 +960,7 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 					off) = csum;
 		}
 	} else if (hdr->flags & VIRTIO_NET_HDR_F_DATA_VALID && l4_supported) {
-		m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	/* GSO request, save required information in mbuf */
@@ -976,8 +976,8 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 		switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
 			case VIRTIO_NET_HDR_GSO_TCPV4:
 			case VIRTIO_NET_HDR_GSO_TCPV6:
-				m->ol_flags |= PKT_RX_LRO | \
-					PKT_RX_L4_CKSUM_NONE;
+				m->ol_flags |= RTE_MBUF_F_RX_LRO | \
+					RTE_MBUF_F_RX_L4_CKSUM_NONE;
 				break;
 			default:
 				return -EINVAL;
@@ -1744,7 +1744,7 @@ virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts,
 #endif
 
 		/* Do VLAN tag insertion */
-		if (unlikely(m->ol_flags & PKT_TX_VLAN)) {
+		if (unlikely(m->ol_flags & RTE_MBUF_F_TX_VLAN)) {
 			error = rte_vlan_insert(&m);
 			/* rte_vlan_insert() may change pointer
 			 * even in the case of failure
@@ -1763,7 +1763,7 @@ virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts,
 			break;
 		}
 
-		if (m->ol_flags & PKT_TX_TCP_SEG)
+		if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			virtio_tso_fix_cksum(m);
 	}
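
The prepare path above shows the conventional software fallback for VLAN
insertion: when the application requested Tx VLAN offload via the mbuf flag
but the device will not do it, the tag is spliced into the frame in software.
A minimal sketch of that fallback with the renamed flag (the helper name is
illustrative, not from this patch):

    #include <rte_ether.h>
    #include <rte_mbuf.h>

    /* Insert the 802.1q tag in software when RTE_MBUF_F_TX_VLAN is set.
     * rte_vlan_insert() may replace the mbuf, hence the double pointer. */
    static int
    sw_vlan_fallback(struct rte_mbuf **m)
    {
        if (((*m)->ol_flags & RTE_MBUF_F_TX_VLAN) == 0)
            return 0;
        return rte_vlan_insert(m);
    }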
 
diff --git a/drivers/net/virtio/virtio_rxtx_packed.h b/drivers/net/virtio/virtio_rxtx_packed.h
index 77e5cb37e7..d5c259a1f6 100644
--- a/drivers/net/virtio/virtio_rxtx_packed.h
+++ b/drivers/net/virtio/virtio_rxtx_packed.h
@@ -166,7 +166,7 @@ virtio_vec_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 		return 0;
 
 	/* GSO not support in vec path, skip check */
-	m->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+	m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	ptype = rte_net_get_ptype(m, &hdr_lens, RTE_PTYPE_ALL_MASK);
 	m->packet_type = ptype;
@@ -178,7 +178,7 @@ virtio_vec_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 	if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
 		hdrlen = hdr_lens.l2_len + hdr_lens.l3_len + hdr_lens.l4_len;
 		if (hdr->csum_start <= hdrlen && l4_supported) {
-			m->ol_flags |= PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_NONE;
 		} else {
 			/* Unknown proto or tunnel, do sw cksum. We can assume
 			 * the cksum field is in the first segment since the
@@ -200,7 +200,7 @@ virtio_vec_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 					off) = csum;
 		}
 	} else if (hdr->flags & VIRTIO_NET_HDR_F_DATA_VALID && l4_supported) {
-		m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	return 0;
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 5baac221f7..96cc8e79f0 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -639,19 +639,19 @@ virtqueue_notify(struct virtqueue *vq)
 static inline void
 virtqueue_xmit_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *cookie)
 {
-	uint64_t csum_l4 = cookie->ol_flags & PKT_TX_L4_MASK;
+	uint64_t csum_l4 = cookie->ol_flags & RTE_MBUF_F_TX_L4_MASK;
 
-	if (cookie->ol_flags & PKT_TX_TCP_SEG)
-		csum_l4 |= PKT_TX_TCP_CKSUM;
+	if (cookie->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
+		csum_l4 |= RTE_MBUF_F_TX_TCP_CKSUM;
 
 	switch (csum_l4) {
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		hdr->csum_start = cookie->l2_len + cookie->l3_len;
 		hdr->csum_offset = offsetof(struct rte_udp_hdr, dgram_cksum);
 		hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
 		break;
 
-	case PKT_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		hdr->csum_start = cookie->l2_len + cookie->l3_len;
 		hdr->csum_offset = offsetof(struct rte_tcp_hdr, cksum);
 		hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
@@ -665,8 +665,8 @@ virtqueue_xmit_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *cookie)
 	}
 
 	/* TCP Segmentation Offload */
-	if (cookie->ol_flags & PKT_TX_TCP_SEG) {
-		hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ?
+	if (cookie->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
+		hdr->gso_type = (cookie->ol_flags & RTE_MBUF_F_TX_IPV6) ?
 			VIRTIO_NET_HDR_GSO_TCPV6 :
 			VIRTIO_NET_HDR_GSO_TCPV4;
 		hdr->gso_size = cookie->tso_segsz;
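
The switch above dispatches on the L4 checksum sub-field of ol_flags, and the
TSO branch fills the GSO descriptor fields. For reference, this is roughly
what a sender sets to reach that TSO branch; lengths and MSS below are
illustrative values, not taken from this patch:

    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_mbuf.h>
    #include <rte_tcp.h>

    /* Request TCP/IPv4 segmentation offload on an mbuf. */
    static void
    request_tcp4_tso(struct rte_mbuf *m)
    {
        m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
                       RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_TCP_SEG;
        m->l2_len = RTE_ETHER_HDR_LEN;
        m->l3_len = sizeof(struct rte_ipv4_hdr);  /* no IP options */
        m->l4_len = sizeof(struct rte_tcp_hdr);   /* no TCP options */
        m->tso_segsz = 1448;                      /* example MSS */
    }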
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 0c9f881d8a..b769902393 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -48,15 +48,14 @@
 #include "vmxnet3_logs.h"
 #include "vmxnet3_ethdev.h"
 
-#define	VMXNET3_TX_OFFLOAD_MASK	( \
-		PKT_TX_VLAN | \
-		PKT_TX_IPV6 |     \
-		PKT_TX_IPV4 |     \
-		PKT_TX_L4_MASK |  \
-		PKT_TX_TCP_SEG)
+#define	VMXNET3_TX_OFFLOAD_MASK	(RTE_MBUF_F_TX_VLAN | \
+		RTE_MBUF_F_TX_IPV6 |     \
+		RTE_MBUF_F_TX_IPV4 |     \
+		RTE_MBUF_F_TX_L4_MASK |  \
+		RTE_MBUF_F_TX_TCP_SEG)
 
 #define	VMXNET3_TX_OFFLOAD_NOTSUP_MASK	\
-	(PKT_TX_OFFLOAD_MASK ^ VMXNET3_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ VMXNET3_TX_OFFLOAD_MASK)
 
 static const uint32_t rxprod_reg[2] = {VMXNET3_REG_RXPROD, VMXNET3_REG_RXPROD2};
 
@@ -359,7 +358,7 @@ vmxnet3_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		/* Non-TSO packet cannot occupy more than
 		 * VMXNET3_MAX_TXD_PER_PKT TX descriptors.
 		 */
-		if ((ol_flags & PKT_TX_TCP_SEG) == 0 &&
+		if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0 &&
 				m->nb_segs > VMXNET3_MAX_TXD_PER_PKT) {
 			rte_errno = EINVAL;
 			return i;
@@ -367,8 +366,8 @@ vmxnet3_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* check that only supported TX offloads are requested. */
 		if ((ol_flags & VMXNET3_TX_OFFLOAD_NOTSUP_MASK) != 0 ||
-				(ol_flags & PKT_TX_L4_MASK) ==
-				PKT_TX_SCTP_CKSUM) {
+				(ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
+				RTE_MBUF_F_TX_SCTP_CKSUM) {
 			rte_errno = ENOTSUP;
 			return i;
 		}
@@ -416,7 +415,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		struct rte_mbuf *txm = tx_pkts[nb_tx];
 		struct rte_mbuf *m_seg = txm;
 		int copy_size = 0;
-		bool tso = (txm->ol_flags & PKT_TX_TCP_SEG) != 0;
+		bool tso = (txm->ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0;
 		/* # of descriptors needed for a packet. */
 		unsigned count = txm->nb_segs;
 
@@ -520,7 +519,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* Add VLAN tag if present */
 		gdesc = txq->cmd_ring.base + first2fill;
-		if (txm->ol_flags & PKT_TX_VLAN) {
+		if (txm->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			gdesc->txd.ti = 1;
 			gdesc->txd.tci = txm->vlan_tci;
 		}
@@ -535,23 +534,23 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			gdesc->txd.msscof = mss;
 
 			deferred += (rte_pktmbuf_pkt_len(txm) - gdesc->txd.hlen + mss - 1) / mss;
-		} else if (txm->ol_flags & PKT_TX_L4_MASK) {
+		} else if (txm->ol_flags & RTE_MBUF_F_TX_L4_MASK) {
 			gdesc->txd.om = VMXNET3_OM_CSUM;
 			gdesc->txd.hlen = txm->l2_len + txm->l3_len;
 
-			switch (txm->ol_flags & PKT_TX_L4_MASK) {
-			case PKT_TX_TCP_CKSUM:
+			switch (txm->ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+			case RTE_MBUF_F_TX_TCP_CKSUM:
 				gdesc->txd.msscof = gdesc->txd.hlen +
 					offsetof(struct rte_tcp_hdr, cksum);
 				break;
-			case PKT_TX_UDP_CKSUM:
+			case RTE_MBUF_F_TX_UDP_CKSUM:
 				gdesc->txd.msscof = gdesc->txd.hlen +
 					offsetof(struct rte_udp_hdr,
 						dgram_cksum);
 				break;
 			default:
 				PMD_TX_LOG(WARNING, "requested cksum offload not supported %#llx",
-					   txm->ol_flags & PKT_TX_L4_MASK);
+					   txm->ol_flags & RTE_MBUF_F_TX_L4_MASK);
 				abort();
 			}
 			deferred++;
@@ -739,35 +738,35 @@ vmxnet3_rx_offload(struct vmxnet3_hw *hw, const Vmxnet3_RxCompDesc *rcd,
 
 			rxm->tso_segsz = rcde->mss;
 			*vmxnet3_segs_dynfield(rxm) = rcde->segCnt;
-			ol_flags |= PKT_RX_LRO;
+			ol_flags |= RTE_MBUF_F_RX_LRO;
 		}
 	} else { /* Offloads set in eop */
 		/* Check for RSS */
 		if (rcd->rssType != VMXNET3_RCD_RSS_TYPE_NONE) {
-			ol_flags |= PKT_RX_RSS_HASH;
+			ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 			rxm->hash.rss = rcd->rssHash;
 		}
 
 		/* Check for hardware stripped VLAN tag */
 		if (rcd->ts) {
-			ol_flags |= (PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+			ol_flags |= (RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 			rxm->vlan_tci = rte_le_to_cpu_16((uint16_t)rcd->tci);
 		}
 
 		/* Check packet type, checksum errors, etc. */
 		if (rcd->cnc) {
-			ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 		} else {
 			if (rcd->v4) {
 				packet_type |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
 
 				if (rcd->ipc)
-					ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+					ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 				else
-					ol_flags |= PKT_RX_IP_CKSUM_BAD;
+					ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 				if (rcd->tuc) {
-					ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+					ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 					if (rcd->tcp)
 						packet_type |= RTE_PTYPE_L4_TCP;
 					else
@@ -775,17 +774,17 @@ vmxnet3_rx_offload(struct vmxnet3_hw *hw, const Vmxnet3_RxCompDesc *rcd,
 				} else {
 					if (rcd->tcp) {
 						packet_type |= RTE_PTYPE_L4_TCP;
-						ol_flags |= PKT_RX_L4_CKSUM_BAD;
+						ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 					} else if (rcd->udp) {
 						packet_type |= RTE_PTYPE_L4_UDP;
-						ol_flags |= PKT_RX_L4_CKSUM_BAD;
+						ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 					}
 				}
 			} else if (rcd->v6) {
 				packet_type |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
 
 				if (rcd->tuc) {
-					ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+					ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 					if (rcd->tcp)
 						packet_type |= RTE_PTYPE_L4_TCP;
 					else
@@ -793,10 +792,10 @@ vmxnet3_rx_offload(struct vmxnet3_hw *hw, const Vmxnet3_RxCompDesc *rcd,
 				} else {
 					if (rcd->tcp) {
 						packet_type |= RTE_PTYPE_L4_TCP;
-						ol_flags |= PKT_RX_L4_CKSUM_BAD;
+						ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 					} else if (rcd->udp) {
 						packet_type |= RTE_PTYPE_L4_UDP;
-						ol_flags |= PKT_RX_L4_CKSUM_BAD;
+						ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 					}
 				}
 			} else {
@@ -804,7 +803,7 @@ vmxnet3_rx_offload(struct vmxnet3_hw *hw, const Vmxnet3_RxCompDesc *rcd,
 			}
 
 			/* Old variants of vmxnet3 do not provide MSS */
-			if ((ol_flags & PKT_RX_LRO) && rxm->tso_segsz == 0)
+			if ((ol_flags & RTE_MBUF_F_RX_LRO) && rxm->tso_segsz == 0)
 				rxm->tso_segsz = vmxnet3_guess_mss(hw,
 						rcd, rxm);
 		}
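
The vmxnet3 NOTSUP mask above is a standard PMD idiom worth noting: XOR the
supported subset against RTE_MBUF_F_TX_OFFLOAD_MASK, and any intersection of
the result with a packet's ol_flags marks the request as unsupported. The same
pattern in isolation (the mask contents are illustrative):

    #include <rte_mbuf.h>

    /* Offloads a hypothetical driver supports. */
    #define XX_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_IPV4 | \
                                RTE_MBUF_F_TX_L4_MASK)
    /* Everything else in the Tx offload space is therefore unsupported. */
    #define XX_TX_OFFLOAD_NOTSUP_MASK \
        (RTE_MBUF_F_TX_OFFLOAD_MASK ^ XX_TX_OFFLOAD_MASK)

    static inline int
    xx_tx_offload_ok(const struct rte_mbuf *m)
    {
        return (m->ol_flags & XX_TX_OFFLOAD_NOTSUP_MASK) == 0;
    }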
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index 0833b2817e..32c64dda65 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -139,7 +139,7 @@ mlx5_regex_addr2mr(struct mlx5_regex_priv *priv, struct mlx5_mr_ctrl *mr_ctrl,
 		return lkey;
 	/* Take slower bottom-half on miss. */
 	return mlx5_mr_addr2mr_bh(priv->pd, 0, &priv->mr_scache, mr_ctrl, addr,
-				  !!(mbuf->ol_flags & EXT_ATTACHED_MBUF));
+				  !!(mbuf->ol_flags & RTE_MBUF_F_EXTERNAL));
 }
 
 
diff --git a/examples/bpf/t2.c b/examples/bpf/t2.c
index b9bce746c0..67cd908cd6 100644
--- a/examples/bpf/t2.c
+++ b/examples/bpf/t2.c
@@ -6,7 +6,7 @@
  * eBPF program sample.
  * Accepts pointer to struct rte_mbuf as an input parameter.
  * cleanup mbuf's vlan_tci and all related RX flags
- * (PKT_RX_VLAN_PKT | PKT_RX_VLAN_STRIPPED).
+ * (RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED).
  * Doesn't touch contents of packet data.
  * To compile:
  * clang -O2 -target bpf -Wno-int-to-void-pointer-cast -c t2.c
@@ -27,7 +27,7 @@ entry(void *pkt)
 
 	mb = pkt;
 	mb->vlan_tci = 0;
-	mb->ol_flags &= ~(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+	mb->ol_flags &= ~(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 
 	return 1;
 }
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index a7f40970f2..d850e7b97d 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -299,7 +299,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
 			rte_pktmbuf_free(m);
 
 			/* request HW to regenerate IPv4 cksum */
-			ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+			ol_flags |= (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM);
 
 			/* If we fail to fragment the packet */
 			if (unlikely (len2 < 0))
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index d611c7d016..0ebb4b09e4 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -359,7 +359,7 @@ reassemble(struct rte_mbuf *m, uint16_t portid, uint32_t queue,
 			}
 
 			/* update offloading flags */
-			m->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+			m->ol_flags |= (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM);
 		}
 		ip_dst = rte_be_to_cpu_32(ip_hdr->dst_addr);
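
Both the fragmentation and reassembly paths rewrite the IPv4 header and then
delegate the checksum to hardware through the pair of flags above. The
contract, sketched (assumes DEV_TX_OFFLOAD_IPV4_CKSUM was enabled on the
port):

    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_mbuf.h>

    /* After editing the IPv4 header in place, let the NIC recompute the
     * checksum: the field must be zeroed and l2/l3 lengths must be set. */
    static void
    mark_ipv4_cksum_offload(struct rte_mbuf *m, struct rte_ipv4_hdr *ip)
    {
        ip->hdr_checksum = 0;
        m->l2_len = sizeof(struct rte_ether_hdr);
        m->l3_len = rte_ipv4_hdr_len(ip);
        m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM;
    }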
 
diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c
index bfa7ff7217..bd233752c8 100644
--- a/examples/ipsec-secgw/esp.c
+++ b/examples/ipsec-secgw/esp.c
@@ -159,8 +159,8 @@ esp_inbound_post(struct rte_mbuf *m, struct ipsec_sa *sa,
 
 	if ((ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) ||
 			(ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO)) {
-		if (m->ol_flags & PKT_RX_SEC_OFFLOAD) {
-			if (m->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		if (m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) {
+			if (m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)
 				cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
 			else
 				cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
@@ -464,7 +464,7 @@ esp_outbound_post(struct rte_mbuf *m,
 
 	if ((type == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) ||
 			(type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO)) {
-		m->ol_flags |= PKT_TX_SEC_OFFLOAD;
+		m->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
 	} else {
 		RTE_ASSERT(cop != NULL);
 		if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 7b01872c6f..f6b384a8f4 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -466,7 +466,7 @@ prepare_one_packet(struct rte_mbuf *pkt, struct ipsec_traffic *t)
 	 * with the security session.
 	 */
 
-	if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD &&
+	if (pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD &&
 			rte_security_dynfield_is_registered()) {
 		struct ipsec_sa *sa;
 		struct ipsec_mbuf_metadata *priv;
@@ -533,7 +533,7 @@ prepare_tx_pkt(struct rte_mbuf *pkt, uint16_t port,
 		ip->ip_sum = 0;
 
 		/* calculate IPv4 cksum in SW */
-		if ((pkt->ol_flags & PKT_TX_IP_CKSUM) == 0)
+		if ((pkt->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) == 0)
 			ip->ip_sum = rte_ipv4_cksum((struct rte_ipv4_hdr *)ip);
 
 		ethhdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
@@ -696,7 +696,7 @@ inbound_sp_sa(struct sp_ctx *sp, struct sa_ctx *sa, struct traffic_type *ip,
 		}
 
 		/* Only check SPI match for processed IPSec packets */
-		if (i < lim && ((m->ol_flags & PKT_RX_SEC_OFFLOAD) == 0)) {
+		if (i < lim && ((m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) == 0)) {
 			free_pkts(&m, 1);
 			continue;
 		}
@@ -978,7 +978,7 @@ route4_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 	 */
 
 	for (i = 0; i < nb_pkts; i++) {
-		if (!(pkts[i]->ol_flags & PKT_TX_SEC_OFFLOAD)) {
+		if (!(pkts[i]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)) {
 			/* Security offload not enabled. So an LPM lookup is
 			 * required to get the hop
 			 */
@@ -995,7 +995,7 @@ route4_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 	lpm_pkts = 0;
 
 	for (i = 0; i < nb_pkts; i++) {
-		if (pkts[i]->ol_flags & PKT_TX_SEC_OFFLOAD) {
+		if (pkts[i]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
 			/* Read hop from the SA */
 			pkt_hop = get_hop_for_offload_pkt(pkts[i], 0);
 		} else {
@@ -1029,7 +1029,7 @@ route6_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 	 */
 
 	for (i = 0; i < nb_pkts; i++) {
-		if (!(pkts[i]->ol_flags & PKT_TX_SEC_OFFLOAD)) {
+		if (!(pkts[i]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)) {
 			/* Security offload not enabled. So an LPM lookup is
 			 * required to get the hop
 			 */
@@ -1047,7 +1047,7 @@ route6_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 	lpm_pkts = 0;
 
 	for (i = 0; i < nb_pkts; i++) {
-		if (pkts[i]->ol_flags & PKT_TX_SEC_OFFLOAD) {
+		if (pkts[i]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
 			/* Read hop from the SA */
 			pkt_hop = get_hop_for_offload_pkt(pkts[i], 1);
 		} else {
@@ -2302,10 +2302,10 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 		qconf->tx_queue_id[portid] = tx_queueid;
 
 		/* Pre-populate pkt offloads based on capabilities */
-		qconf->outbound.ipv4_offloads = PKT_TX_IPV4;
-		qconf->outbound.ipv6_offloads = PKT_TX_IPV6;
+		qconf->outbound.ipv4_offloads = RTE_MBUF_F_TX_IPV4;
+		qconf->outbound.ipv6_offloads = RTE_MBUF_F_TX_IPV6;
 		if (local_port_conf.txmode.offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
-			qconf->outbound.ipv4_offloads |= PKT_TX_IP_CKSUM;
+			qconf->outbound.ipv4_offloads |= RTE_MBUF_F_TX_IP_CKSUM;
 
 		tx_queueid++;
 
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index 6f49239c4a..6d3f72a047 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -211,9 +211,9 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt,
 
 	switch (type) {
 	case PKT_TYPE_PLAIN_IPV4:
-		if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) {
+		if (pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) {
 			if (unlikely(pkt->ol_flags &
-				     PKT_RX_SEC_OFFLOAD_FAILED)) {
+				     RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)) {
 				RTE_LOG(ERR, IPSEC,
 					"Inbound security offload failed\n");
 				goto drop_pkt_and_exit;
@@ -229,9 +229,9 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt,
 		break;
 
 	case PKT_TYPE_PLAIN_IPV6:
-		if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) {
+		if (pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) {
 			if (unlikely(pkt->ol_flags &
-				     PKT_RX_SEC_OFFLOAD_FAILED)) {
+				     RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)) {
 				RTE_LOG(ERR, IPSEC,
 					"Inbound security offload failed\n");
 				goto drop_pkt_and_exit;
@@ -370,7 +370,7 @@ process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt,
 				      sess->security.ses, pkt, NULL);
 
 	/* Mark the packet for Tx security offload */
-	pkt->ol_flags |= PKT_TX_SEC_OFFLOAD;
+	pkt->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
 
 	/* Get the port to which this pkt need to be submitted */
 	port_id = sa->portid;
@@ -485,7 +485,7 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links,
 						      NULL);
 
 			/* Mark the packet for Tx security offload */
-			pkt->ol_flags |= PKT_TX_SEC_OFFLOAD;
+			pkt->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
 
 			/* Provide L2 len for Outbound processing */
 			pkt->l2_len = RTE_ETHER_HDR_LEN;
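
Each inbound branch above applies the same three-way test on the security
flags; condensed, the logic is:

    #include <rte_mbuf.h>

    enum sec_rx_verdict { SEC_RX_PLAIN, SEC_RX_OK, SEC_RX_FAILED };

    /* Classify a received packet by its inline-crypto flags. */
    static enum sec_rx_verdict
    classify_inline_rx(const struct rte_mbuf *pkt)
    {
        if (!(pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD))
            return SEC_RX_PLAIN;   /* untouched by inline crypto */
        if (pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)
            return SEC_RX_FAILED;  /* processed, verification failed */
        return SEC_RX_OK;
    }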
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 17a28556c9..7f2199290e 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -32,7 +32,7 @@
 
 #define IP6_FULL_MASK (sizeof(((struct ip_addr *)NULL)->ip.ip6.ip6) * CHAR_BIT)
 
-#define MBUF_NO_SEC_OFFLOAD(m) ((m->ol_flags & PKT_RX_SEC_OFFLOAD) == 0)
+#define MBUF_NO_SEC_OFFLOAD(m) ((m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) == 0)
 
 struct supported_cipher_algo {
 	const char *keyword;
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index d94eca0353..3d8c82bef1 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -468,7 +468,7 @@ parse_fup(struct ptpv2_data_slave_ordinary *ptp_data)
 			   sizeof(struct clock_id));
 
 		/* Enable flag for hardware timestamping. */
-		created_pkt->ol_flags |= PKT_TX_IEEE1588_TMST;
+		created_pkt->ol_flags |= RTE_MBUF_F_TX_IEEE1588_TMST;
 
 		/*Read value from NIC to prevent latching with old value. */
 		rte_eth_timesync_read_tx_timestamp(ptp_data->portid,
@@ -630,7 +630,7 @@ lcore_main(void)
 				continue;
 
 			/* Packet is parsed to determine which type. 8< */
-			if (m->ol_flags & PKT_RX_IEEE1588_PTP)
+			if (m->ol_flags & RTE_MBUF_F_RX_IEEE1588_PTP)
 				parse_ptp_frames(portid, m);
 			/* >8 End of packet is parsed to determine which type. */
 
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index 7ffccc8369..41e8fcdc30 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -77,13 +77,13 @@ static struct rte_eth_conf port_conf = {
  * Packet RX/TX
  *
  ***/
-#define PKT_RX_BURST_MAX                32
-#define PKT_TX_BURST_MAX                32
+#define RTE_MBUF_F_RX_BURST_MAX                32
+#define RTE_MBUF_F_TX_BURST_MAX                32
 #define TIME_TX_DRAIN                   200000ULL
 
 static uint16_t port_rx;
 static uint16_t port_tx;
-static struct rte_mbuf *pkts_rx[PKT_RX_BURST_MAX];
+static struct rte_mbuf *pkts_rx[RTE_MBUF_F_RX_BURST_MAX];
 struct rte_eth_dev_tx_buffer *tx_buffer;
 
 /* Traffic meter parameters are configured in the application. 8< */
@@ -188,7 +188,7 @@ main_loop(__rte_unused void *dummy)
 		}
 
 		/* Read packet burst from NIC RX */
-		nb_rx = rte_eth_rx_burst(port_rx, NIC_RX_QUEUE, pkts_rx, PKT_RX_BURST_MAX);
+		nb_rx = rte_eth_rx_burst(port_rx, NIC_RX_QUEUE, pkts_rx, RTE_MBUF_F_RX_BURST_MAX);
 
 		/* Handle packets */
 		for (i = 0; i < nb_rx; i ++) {
@@ -420,13 +420,13 @@ main(int argc, char **argv)
 		rte_exit(EXIT_FAILURE, "Port %d TX queue setup error (%d)\n", port_tx, ret);
 
 	tx_buffer = rte_zmalloc_socket("tx_buffer",
-			RTE_ETH_TX_BUFFER_SIZE(PKT_TX_BURST_MAX), 0,
+			RTE_ETH_TX_BUFFER_SIZE(RTE_MBUF_F_TX_BURST_MAX), 0,
 			rte_eth_dev_socket_id(port_tx));
 	if (tx_buffer == NULL)
 		rte_exit(EXIT_FAILURE, "Port %d TX buffer allocation error\n",
 				port_tx);
 
-	rte_eth_tx_buffer_init(tx_buffer, PKT_TX_BURST_MAX);
+	rte_eth_tx_buffer_init(tx_buffer, RTE_MBUF_F_TX_BURST_MAX);
 
 	ret = rte_eth_dev_start(port_rx);
 	if (ret < 0)
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index d2254733bc..efda091406 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1037,15 +1037,15 @@ static void virtio_tx_offload(struct rte_mbuf *m)
 	tcp_hdr = rte_pktmbuf_mtod_offset(m, struct rte_tcp_hdr *,
 		m->l2_len + m->l3_len);
 
-	m->ol_flags |= PKT_TX_TCP_SEG;
+	m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	if ((ptype & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4) {
-		m->ol_flags |= PKT_TX_IPV4;
-		m->ol_flags |= PKT_TX_IP_CKSUM;
+		m->ol_flags |= RTE_MBUF_F_TX_IPV4;
+		m->ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
 		ipv4_hdr = l3_hdr;
 		ipv4_hdr->hdr_checksum = 0;
 		tcp_hdr->cksum = rte_ipv4_phdr_cksum(l3_hdr, m->ol_flags);
 	} else { /* assume ethertype == RTE_ETHER_TYPE_IPV6 */
-		m->ol_flags |= PKT_TX_IPV6;
+		m->ol_flags |= RTE_MBUF_F_TX_IPV6;
 		tcp_hdr->cksum = rte_ipv6_phdr_cksum(l3_hdr, m->ol_flags);
 	}
 }
@@ -1116,7 +1116,7 @@ virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m, uint16_t vlan_tag)
 			(vh->vlan_tci != vlan_tag_be))
 			vh->vlan_tci = vlan_tag_be;
 	} else {
-		m->ol_flags |= PKT_TX_VLAN;
+		m->ol_flags |= RTE_MBUF_F_TX_VLAN;
 
 		/*
 		 * Find the right seg to adjust the data len when offset is
@@ -1140,7 +1140,7 @@ virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m, uint16_t vlan_tag)
 		m->vlan_tci = vlan_tag;
 	}
 
-	if (m->ol_flags & PKT_RX_LRO)
+	if (m->ol_flags & RTE_MBUF_F_RX_LRO)
 		virtio_tx_offload(m);
 
 	tx_q->m_table[tx_q->len++] = m;
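
virtio_tx_offload() above also shows the TSO precondition that trips up many
first-time users: the TCP checksum field must be pre-seeded with the
pseudo-header checksum before the packet reaches the device. In isolation
(header pointers assumed valid and contiguous):

    #include <rte_ip.h>
    #include <rte_mbuf.h>
    #include <rte_tcp.h>

    /* Seed tcp->cksum with the IPv4 pseudo-header sum for TSO. */
    static void
    seed_tcp4_phdr_cksum(struct rte_mbuf *m, struct rte_ipv4_hdr *ip,
                         struct rte_tcp_hdr *tcp)
    {
        m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
                       RTE_MBUF_F_TX_TCP_SEG;
        ip->hdr_checksum = 0;
        tcp->cksum = rte_ipv4_phdr_cksum(ip, m->ol_flags);
    }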
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 6d80514ba7..ac531af26c 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1415,13 +1415,13 @@ struct rte_eth_conf {
 #define DEV_TX_OFFLOAD_SECURITY         0x00020000
 /**
  * Device supports generic UDP tunneled packet TSO.
- * Application must set PKT_TX_TUNNEL_UDP and other mbuf fields required
+ * Application must set RTE_MBUF_F_TX_TUNNEL_UDP and other mbuf fields required
  * for tunnel TSO.
  */
 #define DEV_TX_OFFLOAD_UDP_TNL_TSO      0x00040000
 /**
  * Device supports generic IP tunneled packet TSO.
- * Application must set PKT_TX_TUNNEL_IP and other mbuf fields required
+ * Application must set RTE_MBUF_F_TX_TUNNEL_IP and other mbuf fields required
  * for tunnel TSO.
  */
 #define DEV_TX_OFFLOAD_IP_TNL_TSO       0x00080000
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index a89945061a..d6a7fc8f68 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1422,11 +1422,12 @@ rte_flow_item_icmp6_nd_opt_tla_eth_mask = {
  * RTE_FLOW_ITEM_TYPE_META
  *
  * Matches a specified metadata value. On egress, metadata can be set
- * either by mbuf dynamic metadata field with PKT_TX_DYNF_METADATA flag or
- * RTE_FLOW_ACTION_TYPE_SET_META. On ingress, RTE_FLOW_ACTION_TYPE_SET_META
+ * either by mbuf dynamic metadata field with RTE_MBUF_DYNFLAG_TX_METADATA flag
+ * or RTE_FLOW_ACTION_TYPE_SET_META. On ingress, RTE_FLOW_ACTION_TYPE_SET_META
  * sets metadata for a packet and the metadata will be reported via mbuf
- * metadata dynamic field with PKT_RX_DYNF_METADATA flag. The dynamic mbuf
- * field must be registered in advance by rte_flow_dynf_metadata_register().
+ * metadata dynamic field with RTE_MBUF_DYNFLAG_RX_METADATA flag. The dynamic
+ * mbuf field must be registered in advance by
+ * rte_flow_dynf_metadata_register().
  */
 struct rte_flow_item_meta {
 	uint32_t data;
@@ -1900,8 +1901,8 @@ enum rte_flow_action_type {
 	RTE_FLOW_ACTION_TYPE_JUMP,
 
 	/**
-	 * Attaches an integer value to packets and sets PKT_RX_FDIR and
-	 * PKT_RX_FDIR_ID mbuf flags.
+	 * Attaches an integer value to packets and sets RTE_MBUF_F_RX_FDIR and
+	 * RTE_MBUF_F_RX_FDIR_ID mbuf flags.
 	 *
 	 * See struct rte_flow_action_mark.
 	 */
@@ -1909,7 +1910,7 @@ enum rte_flow_action_type {
 
 	/**
 	 * Flags packets. Similar to MARK without a specific value; only
-	 * sets the PKT_RX_FDIR mbuf flag.
+	 * sets the RTE_MBUF_F_RX_FDIR mbuf flag.
 	 *
 	 * No associated configuration structure.
 	 */
@@ -2414,8 +2415,8 @@ enum rte_flow_action_type {
 /**
  * RTE_FLOW_ACTION_TYPE_MARK
  *
- * Attaches an integer value to packets and sets PKT_RX_FDIR and
- * PKT_RX_FDIR_ID mbuf flags.
+ * Attaches an integer value to packets and sets RTE_MBUF_F_RX_FDIR and
+ * RTE_MBUF_F_RX_FDIR_ID mbuf flags.
  *
  * This value is arbitrary and application-defined. Maximum allowed value
  * depends on the underlying implementation. It is returned in the
@@ -2960,10 +2961,10 @@ struct rte_flow_action_set_tag {
  * RTE_FLOW_ACTION_TYPE_SET_META
  *
  * Set metadata. Metadata set by mbuf metadata dynamic field with
- * PKT_TX_DYNF_DATA flag on egress will be overridden by this action. On
+ * RTE_MBUF_DYNFLAG_TX_METADATA flag on egress will be overridden by this action. On
  * ingress, the metadata will be carried by mbuf metadata dynamic field
- * with PKT_RX_DYNF_METADATA flag if set.  The dynamic mbuf field must be
- * registered in advance by rte_flow_dynf_metadata_register().
+ * with RTE_MBUF_DYNFLAG_RX_METADATA flag if set.  The dynamic mbuf field must
+ * be registered in advance by rte_flow_dynf_metadata_register().
  *
  * Altering partial bits is supported with mask. For bits which have never
  * been set, unpredictable value will be seen depending on driver
@@ -3261,8 +3262,12 @@ extern uint64_t rte_flow_dynf_metadata_mask;
 	RTE_MBUF_DYNFIELD((m), rte_flow_dynf_metadata_offs, uint32_t *)
 
 /* Mbuf dynamic flags for metadata. */
-#define PKT_RX_DYNF_METADATA (rte_flow_dynf_metadata_mask)
-#define PKT_TX_DYNF_METADATA (rte_flow_dynf_metadata_mask)
+#define RTE_MBUF_DYNFLAG_RX_METADATA (rte_flow_dynf_metadata_mask)
+#define PKT_RX_DYNF_METADATA RTE_DEPRECATED(PKT_RX_DYNF_METADATA) \
+		RTE_MBUF_DYNFLAG_RX_METADATA
+#define RTE_MBUF_DYNFLAG_TX_METADATA (rte_flow_dynf_metadata_mask)
+#define PKT_TX_DYNF_METADATA RTE_DEPRECATED(PKT_TX_DYNF_METADATA) \
+		RTE_MBUF_DYNFLAG_TX_METADATA
 
 __rte_experimental
 static inline uint32_t
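
The compatibility defines keep PKT_RX_DYNF_METADATA and PKT_TX_DYNF_METADATA
compiling (with a deprecation warning) while pointing at the
RTE_MBUF_DYNFLAG_* names. Usage is otherwise unchanged; a hedged sketch of the
egress side:

    #include <rte_flow.h>
    #include <rte_mbuf.h>

    /* Attach egress metadata to an mbuf via the renamed dynflag. */
    static int
    set_tx_metadata(struct rte_mbuf *m, uint32_t meta)
    {
        /* One-time registration of the dynamic field and flags. */
        if (rte_flow_dynf_metadata_register() < 0)
            return -1;
        *RTE_FLOW_DYNF_METADATA(m) = meta;
        m->ol_flags |= RTE_MBUF_DYNFLAG_TX_METADATA;
        return 0;
    }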
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index 13dfb28401..c67dbdf102 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -869,8 +869,8 @@ rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter,
 	uint16_t dropped;
 
 	if (!eth_rx_queue_info->ena_vector) {
-		/* 0xffff ffff if PKT_RX_RSS_HASH is set, otherwise 0 */
-		rss_mask = ~(((m->ol_flags & PKT_RX_RSS_HASH) != 0) - 1);
+		/* 0xffff ffff if RTE_MBUF_F_RX_RSS_HASH is set, otherwise 0 */
+		rss_mask = ~(((m->ol_flags & RTE_MBUF_F_RX_RSS_HASH) != 0) - 1);
 		do_rss = !rss_mask && !eth_rx_queue_info->flow_id_mask;
 		for (i = 0; i < num; i++) {
 			m = mbufs[i];
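
The rss_mask computation above is a branchless idiom: ((flag set) != 0) - 1
evaluates to 0 when the flag is present and to all-ones when it is not, so
its complement selects the RSS hash only when it is valid. Spelled out:

    #include <stdint.h>

    /* 0xffffffff when ol_flags contains the flag, 0 otherwise. */
    static inline uint32_t
    flag_to_mask32(uint64_t ol_flags, uint64_t flag)
    {
        return ~(uint32_t)(((ol_flags & flag) != 0) - 1);
    }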
diff --git a/lib/gso/gso_common.h b/lib/gso/gso_common.h
index 4d5f303fa6..2c258b22bf 100644
--- a/lib/gso/gso_common.h
+++ b/lib/gso/gso_common.h
@@ -18,26 +18,26 @@
 #define TCP_HDR_PSH_MASK ((uint8_t)0x08)
 #define TCP_HDR_FIN_MASK ((uint8_t)0x01)
 
-#define IS_IPV4_TCP(flag) (((flag) & (PKT_TX_TCP_SEG | PKT_TX_IPV4)) == \
-		(PKT_TX_TCP_SEG | PKT_TX_IPV4))
-
-#define IS_IPV4_VXLAN_TCP4(flag) (((flag) & (PKT_TX_TCP_SEG | PKT_TX_IPV4 | \
-				PKT_TX_OUTER_IPV4 | PKT_TX_TUNNEL_MASK)) == \
-		(PKT_TX_TCP_SEG | PKT_TX_IPV4 | PKT_TX_OUTER_IPV4 | \
-		 PKT_TX_TUNNEL_VXLAN))
-
-#define IS_IPV4_VXLAN_UDP4(flag) (((flag) & (PKT_TX_UDP_SEG | PKT_TX_IPV4 | \
-				PKT_TX_OUTER_IPV4 | PKT_TX_TUNNEL_MASK)) == \
-		(PKT_TX_UDP_SEG | PKT_TX_IPV4 | PKT_TX_OUTER_IPV4 | \
-		 PKT_TX_TUNNEL_VXLAN))
-
-#define IS_IPV4_GRE_TCP4(flag) (((flag) & (PKT_TX_TCP_SEG | PKT_TX_IPV4 | \
-				PKT_TX_OUTER_IPV4 | PKT_TX_TUNNEL_MASK)) == \
-		(PKT_TX_TCP_SEG | PKT_TX_IPV4 | PKT_TX_OUTER_IPV4 | \
-		 PKT_TX_TUNNEL_GRE))
-
-#define IS_IPV4_UDP(flag) (((flag) & (PKT_TX_UDP_SEG | PKT_TX_IPV4)) == \
-		(PKT_TX_UDP_SEG | PKT_TX_IPV4))
+#define IS_IPV4_TCP(flag) (((flag) & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4)) == \
+		(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4))
+
+#define IS_IPV4_VXLAN_TCP4(flag) (((flag) & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 | \
+				RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_TUNNEL_MASK)) == \
+		(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_OUTER_IPV4 | \
+		 RTE_MBUF_F_TX_TUNNEL_VXLAN))
+
+#define IS_IPV4_VXLAN_UDP4(flag) (((flag) & (RTE_MBUF_F_TX_UDP_SEG | RTE_MBUF_F_TX_IPV4 | \
+				RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_TUNNEL_MASK)) == \
+		(RTE_MBUF_F_TX_UDP_SEG | RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_OUTER_IPV4 | \
+		 RTE_MBUF_F_TX_TUNNEL_VXLAN))
+
+#define IS_IPV4_GRE_TCP4(flag) (((flag) & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 | \
+				RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_TUNNEL_MASK)) == \
+		(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_OUTER_IPV4 | \
+		 RTE_MBUF_F_TX_TUNNEL_GRE))
+
+#define IS_IPV4_UDP(flag) (((flag) & (RTE_MBUF_F_TX_UDP_SEG | RTE_MBUF_F_TX_IPV4)) == \
+		(RTE_MBUF_F_TX_UDP_SEG | RTE_MBUF_F_TX_IPV4))
 
 /**
  * Internal function which updates the UDP header of a packet, following
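
One detail these classification macros rely on: tunnel types are encoded as
values inside RTE_MBUF_F_TX_TUNNEL_MASK, not as independent bits, so the mask
must be part of the comparison. The flag combination IS_IPV4_VXLAN_TCP4()
expects, sketched:

    #include <rte_mbuf.h>

    /* Mark an mbuf as TCP/IPv4 carried in VXLAN over IPv4. */
    static void
    mark_vxlan_tcp4(struct rte_mbuf *m)
    {
        m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 |
                       RTE_MBUF_F_TX_OUTER_IPV4 |
                       RTE_MBUF_F_TX_TUNNEL_VXLAN;
        /* outer_l2_len/outer_l3_len and the inner lengths must be
         * filled in as well (omitted here). */
    }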
diff --git a/lib/gso/gso_tunnel_tcp4.c b/lib/gso/gso_tunnel_tcp4.c
index 166aace73a..1a7ef30dde 100644
--- a/lib/gso/gso_tunnel_tcp4.c
+++ b/lib/gso/gso_tunnel_tcp4.c
@@ -37,7 +37,7 @@ update_tunnel_ipv4_tcp_headers(struct rte_mbuf *pkt, uint8_t ipid_delta,
 	tail_idx = nb_segs - 1;
 
 	/* Only update UDP header for VxLAN packets. */
-	update_udp_hdr = (pkt->ol_flags & PKT_TX_TUNNEL_VXLAN) ? 1 : 0;
+	update_udp_hdr = (pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN) ? 1 : 0;
 
 	for (i = 0; i < nb_segs; i++) {
 		update_ipv4_header(segs[i], outer_ipv4_offset, outer_id);
diff --git a/lib/gso/rte_gso.c b/lib/gso/rte_gso.c
index 0d02ec3cee..58037d6b5d 100644
--- a/lib/gso/rte_gso.c
+++ b/lib/gso/rte_gso.c
@@ -43,7 +43,7 @@ rte_gso_segment(struct rte_mbuf *pkt,
 		return -EINVAL;
 
 	if (gso_ctx->gso_size >= pkt->pkt_len) {
-		pkt->ol_flags &= (~(PKT_TX_TCP_SEG | PKT_TX_UDP_SEG));
+		pkt->ol_flags &= (~(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG));
 		return 0;
 	}
 
@@ -57,26 +57,26 @@ rte_gso_segment(struct rte_mbuf *pkt,
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
 			((IS_IPV4_GRE_TCP4(pkt->ol_flags) &&
 			 (gso_ctx->gso_types & DEV_TX_OFFLOAD_GRE_TNL_TSO)))) {
-		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
+		pkt->ol_flags &= (~RTE_MBUF_F_TX_TCP_SEG);
 		ret = gso_tunnel_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_VXLAN_UDP4(pkt->ol_flags) &&
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO) &&
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
-		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
+		pkt->ol_flags &= (~RTE_MBUF_F_TX_UDP_SEG);
 		ret = gso_tunnel_udp4_segment(pkt, gso_size,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_TCP(pkt->ol_flags) &&
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_TCP_TSO)) {
-		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
+		pkt->ol_flags &= (~RTE_MBUF_F_TX_TCP_SEG);
 		ret = gso_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_UDP(pkt->ol_flags) &&
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
-		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
+		pkt->ol_flags &= (~RTE_MBUF_F_TX_UDP_SEG);
 		ret = gso_udp4_segment(pkt, gso_size, direct_pool,
 				indirect_pool, pkts_out, nb_pkts_out);
 	} else {
diff --git a/lib/gso/rte_gso.h b/lib/gso/rte_gso.h
index d93ee8e5b1..777d0a55fb 100644
--- a/lib/gso/rte_gso.h
+++ b/lib/gso/rte_gso.h
@@ -77,8 +77,8 @@ struct rte_gso_ctx {
  *
  * Before calling rte_gso_segment(), applications must set proper ol_flags
  * for the packet. The GSO library uses the same macros as that of TSO.
- * For example, set PKT_TX_TCP_SEG and PKT_TX_IPV4 in ol_flags to segment
- * a TCP/IPv4 packet. If rte_gso_segment() succeeds, the PKT_TX_TCP_SEG
+ * For example, set RTE_MBUF_F_TX_TCP_SEG and RTE_MBUF_F_TX_IPV4 in ol_flags to segment
+ * a TCP/IPv4 packet. If rte_gso_segment() succeeds, the RTE_MBUF_F_TX_TCP_SEG
  * flag is removed for all GSO segments and the input packet.
  *
  * Each of the newly-created GSO segments is organized as a two-segment
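
In code, the contract described in this comment looks roughly like the
following; the context fields and output array size are placeholders:

    #include <rte_gso.h>
    #include <rte_mbuf.h>

    #define GSO_SEGS_MAX 64

    /* Segment an oversized TCP/IPv4 packet; returns the number of
     * segments written to out[], or a negative errno. */
    static int
    gso_one_packet(const struct rte_gso_ctx *ctx, struct rte_mbuf *pkt,
                   struct rte_mbuf *out[GSO_SEGS_MAX])
    {
        pkt->ol_flags |= RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4;
        /* On success the TCP_SEG flag is cleared on every output. */
        return rte_gso_segment(pkt, ctx, out, GSO_SEGS_MAX);
    }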
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 2b1df6a032..17442a98f2 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -399,7 +399,7 @@ static inline int32_t
 trs_process_check(struct rte_mbuf *mb, struct rte_mbuf **ml,
 	uint32_t *tofs, struct rte_esp_tail espt, uint32_t hlen, uint32_t tlen)
 {
-	if ((mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) != 0 ||
+	if ((mb->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED) != 0 ||
 			tlen + hlen > mb->pkt_len)
 		return -EBADMSG;
 
@@ -487,8 +487,8 @@ trs_process_step3(struct rte_mbuf *mb)
 	/* reset mbuf packet type */
 	mb->packet_type &= (RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK);
 
-	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
-	mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+	/* clear the RTE_MBUF_F_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~RTE_MBUF_F_RX_SEC_OFFLOAD;
 }
 
 /*
@@ -505,8 +505,8 @@ tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
 	mb->packet_type = RTE_PTYPE_UNKNOWN;
 	mb->tx_offload = (mb->tx_offload & txof_msk) | txof_val;
 
-	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
-	mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+	/* clear the RTE_MBUF_F_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~RTE_MBUF_F_RX_SEC_OFFLOAD;
 }
 
 /*
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 1e181cf2ce..2bbd5df2b8 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -544,7 +544,7 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	icv_len = sa->icv_len;
 
 	for (i = 0; i != num; i++) {
-		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
+		if ((mb[i]->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED) == 0) {
 			ml = rte_pktmbuf_lastseg(mb[i]);
 			/* remove high-order 32 bits of esn from packet len */
 			mb[i]->pkt_len -= sa->sqh_len;
@@ -580,7 +580,7 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
 	ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
 	for (i = 0; i != num; i++) {
 
-		mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+		mb[i]->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
 		if (ol_flags != 0)
 			rte_security_set_pkt_metadata(ss->security.ctx,
 				ss->security.ses, mb[i], NULL);
diff --git a/lib/ipsec/misc.h b/lib/ipsec/misc.h
index 79b9a20762..8e72ca992d 100644
--- a/lib/ipsec/misc.h
+++ b/lib/ipsec/misc.h
@@ -173,7 +173,7 @@ cpu_crypto_bulk(const struct rte_ipsec_session *ss,
 	j = num - n;
 	for (i = 0; j != 0 && i != num; i++) {
 		if (st[i] != 0) {
-			mb[i]->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			mb[i]->ol_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 			j--;
 		}
 	}
diff --git a/lib/ipsec/rte_ipsec_group.h b/lib/ipsec/rte_ipsec_group.h
index ea3bdfad95..60ab297710 100644
--- a/lib/ipsec/rte_ipsec_group.h
+++ b/lib/ipsec/rte_ipsec_group.h
@@ -61,7 +61,7 @@ rte_ipsec_ses_from_crypto(const struct rte_crypto_op *cop)
  * Take as input completed crypto ops, extract related mbufs
  * and group them by rte_ipsec_session they belong to.
  * For mbuf which crypto-op wasn't completed successfully
- * PKT_RX_SEC_OFFLOAD_FAILED will be raised in ol_flags.
+ * RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED will be raised in ol_flags.
  * Note that mbufs with undetermined SA (session-less) are not freed
  * by the function, but are placed beyond mbufs for the last valid group.
  * It is a user responsibility to handle them further.
@@ -95,9 +95,9 @@ rte_ipsec_pkt_crypto_group(const struct rte_crypto_op *cop[],
 		m = cop[i]->sym[0].m_src;
 		ns = cop[i]->sym[0].session;
 
-		m->ol_flags |= PKT_RX_SEC_OFFLOAD;
+		m->ol_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
 		if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
-			m->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			m->ol_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 
 		/* no valid session found */
 		if (ns == NULL) {
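
A hedged sketch of the caller side, after dequeuing completed crypto ops (the
burst size is a placeholder); since the FAILED bit is already set per mbuf,
each group can be filtered afterwards:

    #include <rte_crypto.h>
    #include <rte_ipsec_group.h>

    #define BURST 32

    /* Sort completed ops into per-session groups of mbufs. */
    static void
    handle_completed(const struct rte_crypto_op *cop[BURST], uint16_t n)
    {
        struct rte_mbuf *mb[BURST];
        struct rte_ipsec_group grp[BURST];
        uint16_t i, ng;

        ng = rte_ipsec_pkt_crypto_group(cop, mb, grp, n);
        for (i = 0; i != ng; i++) {
            /* grp[i].m points at grp[i].cnt mbufs belonging to the
             * session in grp[i].id.ptr (NULL if undetermined). */
        }
    }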
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index e59189d215..4754093873 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -590,7 +590,7 @@ pkt_flag_process(const struct rte_ipsec_session *ss,
 
 	k = 0;
 	for (i = 0; i != num; i++) {
-		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+		if ((mb[i]->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED) == 0)
 			k++;
 		else
 			dr[i - k] = i;
diff --git a/lib/mbuf/rte_mbuf.c b/lib/mbuf/rte_mbuf.c
index f7e3c1a187..f2e740c363 100644
--- a/lib/mbuf/rte_mbuf.c
+++ b/lib/mbuf/rte_mbuf.c
@@ -133,7 +133,7 @@ rte_pktmbuf_free_pinned_extmem(void *addr, void *opaque)
 	RTE_ASSERT(m->shinfo->fcb_opaque == m);
 
 	rte_mbuf_ext_refcnt_set(m->shinfo, 1);
-	m->ol_flags = EXT_ATTACHED_MBUF;
+	m->ol_flags = RTE_MBUF_F_EXTERNAL;
 	if (m->next != NULL) {
 		m->next = NULL;
 		m->nb_segs = 1;
@@ -213,7 +213,7 @@ __rte_pktmbuf_init_extmem(struct rte_mempool *mp,
 	m->pool = mp;
 	m->nb_segs = 1;
 	m->port = RTE_MBUF_PORT_INVALID;
-	m->ol_flags = EXT_ATTACHED_MBUF;
+	m->ol_flags = RTE_MBUF_F_EXTERNAL;
 	rte_mbuf_refcnt_set(m, 1);
 	m->next = NULL;
 
@@ -620,7 +620,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
 	__rte_pktmbuf_copy_hdr(mc, m);
 
 	/* copied mbuf is not indirect or external */
-	mc->ol_flags = m->ol_flags & ~(IND_ATTACHED_MBUF|EXT_ATTACHED_MBUF);
+	mc->ol_flags = m->ol_flags & ~(RTE_MBUF_F_INDIRECT|RTE_MBUF_F_EXTERNAL);
 
 	prev = &mc->next;
 	m_last = mc;
@@ -685,7 +685,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
 	fprintf(f, "  pkt_len=%u, ol_flags=%#"PRIx64", nb_segs=%u, port=%u",
 		m->pkt_len, m->ol_flags, m->nb_segs, m->port);
 
-	if (m->ol_flags & (PKT_RX_VLAN | PKT_TX_VLAN))
+	if (m->ol_flags & (RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_TX_VLAN))
 		fprintf(f, ", vlan_tci=%u", m->vlan_tci);
 
 	fprintf(f, ", ptype=%#"PRIx32"\n", m->packet_type);
@@ -751,30 +751,30 @@ const void *__rte_pktmbuf_read(const struct rte_mbuf *m, uint32_t off,
 const char *rte_get_rx_ol_flag_name(uint64_t mask)
 {
 	switch (mask) {
-	case PKT_RX_VLAN: return "PKT_RX_VLAN";
-	case PKT_RX_RSS_HASH: return "PKT_RX_RSS_HASH";
-	case PKT_RX_FDIR: return "PKT_RX_FDIR";
-	case PKT_RX_L4_CKSUM_BAD: return "PKT_RX_L4_CKSUM_BAD";
-	case PKT_RX_L4_CKSUM_GOOD: return "PKT_RX_L4_CKSUM_GOOD";
-	case PKT_RX_L4_CKSUM_NONE: return "PKT_RX_L4_CKSUM_NONE";
-	case PKT_RX_IP_CKSUM_BAD: return "PKT_RX_IP_CKSUM_BAD";
-	case PKT_RX_IP_CKSUM_GOOD: return "PKT_RX_IP_CKSUM_GOOD";
-	case PKT_RX_IP_CKSUM_NONE: return "PKT_RX_IP_CKSUM_NONE";
-	case PKT_RX_OUTER_IP_CKSUM_BAD: return "PKT_RX_OUTER_IP_CKSUM_BAD";
-	case PKT_RX_VLAN_STRIPPED: return "PKT_RX_VLAN_STRIPPED";
-	case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
-	case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
-	case PKT_RX_FDIR_ID: return "PKT_RX_FDIR_ID";
-	case PKT_RX_FDIR_FLX: return "PKT_RX_FDIR_FLX";
-	case PKT_RX_QINQ_STRIPPED: return "PKT_RX_QINQ_STRIPPED";
-	case PKT_RX_QINQ: return "PKT_RX_QINQ";
-	case PKT_RX_LRO: return "PKT_RX_LRO";
-	case PKT_RX_SEC_OFFLOAD: return "PKT_RX_SEC_OFFLOAD";
-	case PKT_RX_SEC_OFFLOAD_FAILED: return "PKT_RX_SEC_OFFLOAD_FAILED";
-	case PKT_RX_OUTER_L4_CKSUM_BAD: return "PKT_RX_OUTER_L4_CKSUM_BAD";
-	case PKT_RX_OUTER_L4_CKSUM_GOOD: return "PKT_RX_OUTER_L4_CKSUM_GOOD";
-	case PKT_RX_OUTER_L4_CKSUM_INVALID:
-		return "PKT_RX_OUTER_L4_CKSUM_INVALID";
+	case RTE_MBUF_F_RX_VLAN: return "RTE_MBUF_F_RX_VLAN";
+	case RTE_MBUF_F_RX_RSS_HASH: return "RTE_MBUF_F_RX_RSS_HASH";
+	case RTE_MBUF_F_RX_FDIR: return "RTE_MBUF_F_RX_FDIR";
+	case RTE_MBUF_F_RX_L4_CKSUM_BAD: return "RTE_MBUF_F_RX_L4_CKSUM_BAD";
+	case RTE_MBUF_F_RX_L4_CKSUM_GOOD: return "RTE_MBUF_F_RX_L4_CKSUM_GOOD";
+	case RTE_MBUF_F_RX_L4_CKSUM_NONE: return "RTE_MBUF_F_RX_L4_CKSUM_NONE";
+	case RTE_MBUF_F_RX_IP_CKSUM_BAD: return "RTE_MBUF_F_RX_IP_CKSUM_BAD";
+	case RTE_MBUF_F_RX_IP_CKSUM_GOOD: return "RTE_MBUF_F_RX_IP_CKSUM_GOOD";
+	case RTE_MBUF_F_RX_IP_CKSUM_NONE: return "RTE_MBUF_F_RX_IP_CKSUM_NONE";
+	case RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD: return "RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD";
+	case RTE_MBUF_F_RX_VLAN_STRIPPED: return "RTE_MBUF_F_RX_VLAN_STRIPPED";
+	case RTE_MBUF_F_RX_IEEE1588_PTP: return "RTE_MBUF_F_RX_IEEE1588_PTP";
+	case RTE_MBUF_F_RX_IEEE1588_TMST: return "RTE_MBUF_F_RX_IEEE1588_TMST";
+	case RTE_MBUF_F_RX_FDIR_ID: return "RTE_MBUF_F_RX_FDIR_ID";
+	case RTE_MBUF_F_RX_FDIR_FLX: return "RTE_MBUF_F_RX_FDIR_FLX";
+	case RTE_MBUF_F_RX_QINQ_STRIPPED: return "RTE_MBUF_F_RX_QINQ_STRIPPED";
+	case RTE_MBUF_F_RX_QINQ: return "RTE_MBUF_F_RX_QINQ";
+	case RTE_MBUF_F_RX_LRO: return "RTE_MBUF_F_RX_LRO";
+	case RTE_MBUF_F_RX_SEC_OFFLOAD: return "RTE_MBUF_F_RX_SEC_OFFLOAD";
+	case RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED: return "RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED";
+	case RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD: return "RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD";
+	case RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD: return "RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD";
+	case RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID:
+		return "RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID";
 
 	default: return NULL;
 	}
@@ -791,37 +791,37 @@ int
 rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
 {
 	const struct flag_mask rx_flags[] = {
-		{ PKT_RX_VLAN, PKT_RX_VLAN, NULL },
-		{ PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, NULL },
-		{ PKT_RX_FDIR, PKT_RX_FDIR, NULL },
-		{ PKT_RX_L4_CKSUM_BAD, PKT_RX_L4_CKSUM_MASK, NULL },
-		{ PKT_RX_L4_CKSUM_GOOD, PKT_RX_L4_CKSUM_MASK, NULL },
-		{ PKT_RX_L4_CKSUM_NONE, PKT_RX_L4_CKSUM_MASK, NULL },
-		{ PKT_RX_L4_CKSUM_UNKNOWN, PKT_RX_L4_CKSUM_MASK,
-		  "PKT_RX_L4_CKSUM_UNKNOWN" },
-		{ PKT_RX_IP_CKSUM_BAD, PKT_RX_IP_CKSUM_MASK, NULL },
-		{ PKT_RX_IP_CKSUM_GOOD, PKT_RX_IP_CKSUM_MASK, NULL },
-		{ PKT_RX_IP_CKSUM_NONE, PKT_RX_IP_CKSUM_MASK, NULL },
-		{ PKT_RX_IP_CKSUM_UNKNOWN, PKT_RX_IP_CKSUM_MASK,
-		  "PKT_RX_IP_CKSUM_UNKNOWN" },
-		{ PKT_RX_OUTER_IP_CKSUM_BAD, PKT_RX_OUTER_IP_CKSUM_BAD, NULL },
-		{ PKT_RX_VLAN_STRIPPED, PKT_RX_VLAN_STRIPPED, NULL },
-		{ PKT_RX_IEEE1588_PTP, PKT_RX_IEEE1588_PTP, NULL },
-		{ PKT_RX_IEEE1588_TMST, PKT_RX_IEEE1588_TMST, NULL },
-		{ PKT_RX_FDIR_ID, PKT_RX_FDIR_ID, NULL },
-		{ PKT_RX_FDIR_FLX, PKT_RX_FDIR_FLX, NULL },
-		{ PKT_RX_QINQ_STRIPPED, PKT_RX_QINQ_STRIPPED, NULL },
-		{ PKT_RX_LRO, PKT_RX_LRO, NULL },
-		{ PKT_RX_SEC_OFFLOAD, PKT_RX_SEC_OFFLOAD, NULL },
-		{ PKT_RX_SEC_OFFLOAD_FAILED, PKT_RX_SEC_OFFLOAD_FAILED, NULL },
-		{ PKT_RX_QINQ, PKT_RX_QINQ, NULL },
-		{ PKT_RX_OUTER_L4_CKSUM_BAD, PKT_RX_OUTER_L4_CKSUM_MASK, NULL },
-		{ PKT_RX_OUTER_L4_CKSUM_GOOD, PKT_RX_OUTER_L4_CKSUM_MASK,
+		{ RTE_MBUF_F_RX_VLAN, RTE_MBUF_F_RX_VLAN, NULL },
+		{ RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, NULL },
+		{ RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR, NULL },
+		{ RTE_MBUF_F_RX_L4_CKSUM_BAD, RTE_MBUF_F_RX_L4_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_L4_CKSUM_GOOD, RTE_MBUF_F_RX_L4_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_L4_CKSUM_NONE, RTE_MBUF_F_RX_L4_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN, RTE_MBUF_F_RX_L4_CKSUM_MASK,
+		  "RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN" },
+		{ RTE_MBUF_F_RX_IP_CKSUM_BAD, RTE_MBUF_F_RX_IP_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_IP_CKSUM_GOOD, RTE_MBUF_F_RX_IP_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_IP_CKSUM_NONE, RTE_MBUF_F_RX_IP_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN, RTE_MBUF_F_RX_IP_CKSUM_MASK,
+		  "RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN" },
+		{ RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, NULL },
+		{ RTE_MBUF_F_RX_VLAN_STRIPPED, RTE_MBUF_F_RX_VLAN_STRIPPED, NULL },
+		{ RTE_MBUF_F_RX_IEEE1588_PTP, RTE_MBUF_F_RX_IEEE1588_PTP, NULL },
+		{ RTE_MBUF_F_RX_IEEE1588_TMST, RTE_MBUF_F_RX_IEEE1588_TMST, NULL },
+		{ RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID, NULL },
+		{ RTE_MBUF_F_RX_FDIR_FLX, RTE_MBUF_F_RX_FDIR_FLX, NULL },
+		{ RTE_MBUF_F_RX_QINQ_STRIPPED, RTE_MBUF_F_RX_QINQ_STRIPPED, NULL },
+		{ RTE_MBUF_F_RX_LRO, RTE_MBUF_F_RX_LRO, NULL },
+		{ RTE_MBUF_F_RX_SEC_OFFLOAD, RTE_MBUF_F_RX_SEC_OFFLOAD, NULL },
+		{ RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED, RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED, NULL },
+		{ RTE_MBUF_F_RX_QINQ, RTE_MBUF_F_RX_QINQ, NULL },
+		{ RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD, RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD, RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK,
 		  NULL },
-		{ PKT_RX_OUTER_L4_CKSUM_INVALID, PKT_RX_OUTER_L4_CKSUM_MASK,
+		{ RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID, RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK,
 		  NULL },
-		{ PKT_RX_OUTER_L4_CKSUM_UNKNOWN, PKT_RX_OUTER_L4_CKSUM_MASK,
-		  "PKT_RX_OUTER_L4_CKSUM_UNKNOWN" },
+		{ RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN, RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK,
+		  "RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN" },
 	};
 	const char *name;
 	unsigned int i;
@@ -856,32 +856,32 @@ rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
 const char *rte_get_tx_ol_flag_name(uint64_t mask)
 {
 	switch (mask) {
-	case PKT_TX_VLAN: return "PKT_TX_VLAN";
-	case PKT_TX_IP_CKSUM: return "PKT_TX_IP_CKSUM";
-	case PKT_TX_TCP_CKSUM: return "PKT_TX_TCP_CKSUM";
-	case PKT_TX_SCTP_CKSUM: return "PKT_TX_SCTP_CKSUM";
-	case PKT_TX_UDP_CKSUM: return "PKT_TX_UDP_CKSUM";
-	case PKT_TX_IEEE1588_TMST: return "PKT_TX_IEEE1588_TMST";
-	case PKT_TX_TCP_SEG: return "PKT_TX_TCP_SEG";
-	case PKT_TX_IPV4: return "PKT_TX_IPV4";
-	case PKT_TX_IPV6: return "PKT_TX_IPV6";
-	case PKT_TX_OUTER_IP_CKSUM: return "PKT_TX_OUTER_IP_CKSUM";
-	case PKT_TX_OUTER_IPV4: return "PKT_TX_OUTER_IPV4";
-	case PKT_TX_OUTER_IPV6: return "PKT_TX_OUTER_IPV6";
-	case PKT_TX_TUNNEL_VXLAN: return "PKT_TX_TUNNEL_VXLAN";
-	case PKT_TX_TUNNEL_GTP: return "PKT_TX_TUNNEL_GTP";
-	case PKT_TX_TUNNEL_GRE: return "PKT_TX_TUNNEL_GRE";
-	case PKT_TX_TUNNEL_IPIP: return "PKT_TX_TUNNEL_IPIP";
-	case PKT_TX_TUNNEL_GENEVE: return "PKT_TX_TUNNEL_GENEVE";
-	case PKT_TX_TUNNEL_MPLSINUDP: return "PKT_TX_TUNNEL_MPLSINUDP";
-	case PKT_TX_TUNNEL_VXLAN_GPE: return "PKT_TX_TUNNEL_VXLAN_GPE";
-	case PKT_TX_TUNNEL_IP: return "PKT_TX_TUNNEL_IP";
-	case PKT_TX_TUNNEL_UDP: return "PKT_TX_TUNNEL_UDP";
-	case PKT_TX_QINQ: return "PKT_TX_QINQ";
-	case PKT_TX_MACSEC: return "PKT_TX_MACSEC";
-	case PKT_TX_SEC_OFFLOAD: return "PKT_TX_SEC_OFFLOAD";
-	case PKT_TX_UDP_SEG: return "PKT_TX_UDP_SEG";
-	case PKT_TX_OUTER_UDP_CKSUM: return "PKT_TX_OUTER_UDP_CKSUM";
+	case RTE_MBUF_F_TX_VLAN: return "RTE_MBUF_F_TX_VLAN";
+	case RTE_MBUF_F_TX_IP_CKSUM: return "RTE_MBUF_F_TX_IP_CKSUM";
+	case RTE_MBUF_F_TX_TCP_CKSUM: return "RTE_MBUF_F_TX_TCP_CKSUM";
+	case RTE_MBUF_F_TX_SCTP_CKSUM: return "RTE_MBUF_F_TX_SCTP_CKSUM";
+	case RTE_MBUF_F_TX_UDP_CKSUM: return "RTE_MBUF_F_TX_UDP_CKSUM";
+	case RTE_MBUF_F_TX_IEEE1588_TMST: return "RTE_MBUF_F_TX_IEEE1588_TMST";
+	case RTE_MBUF_F_TX_TCP_SEG: return "RTE_MBUF_F_TX_TCP_SEG";
+	case RTE_MBUF_F_TX_IPV4: return "RTE_MBUF_F_TX_IPV4";
+	case RTE_MBUF_F_TX_IPV6: return "RTE_MBUF_F_TX_IPV6";
+	case RTE_MBUF_F_TX_OUTER_IP_CKSUM: return "RTE_MBUF_F_TX_OUTER_IP_CKSUM";
+	case RTE_MBUF_F_TX_OUTER_IPV4: return "RTE_MBUF_F_TX_OUTER_IPV4";
+	case RTE_MBUF_F_TX_OUTER_IPV6: return "RTE_MBUF_F_TX_OUTER_IPV6";
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN: return "RTE_MBUF_F_TX_TUNNEL_VXLAN";
+	case RTE_MBUF_F_TX_TUNNEL_GTP: return "RTE_MBUF_F_TX_TUNNEL_GTP";
+	case RTE_MBUF_F_TX_TUNNEL_GRE: return "RTE_MBUF_F_TX_TUNNEL_GRE";
+	case RTE_MBUF_F_TX_TUNNEL_IPIP: return "RTE_MBUF_F_TX_TUNNEL_IPIP";
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE: return "RTE_MBUF_F_TX_TUNNEL_GENEVE";
+	case RTE_MBUF_F_TX_TUNNEL_MPLSINUDP: return "RTE_MBUF_F_TX_TUNNEL_MPLSINUDP";
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE: return "RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE";
+	case RTE_MBUF_F_TX_TUNNEL_IP: return "RTE_MBUF_F_TX_TUNNEL_IP";
+	case RTE_MBUF_F_TX_TUNNEL_UDP: return "RTE_MBUF_F_TX_TUNNEL_UDP";
+	case RTE_MBUF_F_TX_QINQ: return "RTE_MBUF_F_TX_QINQ";
+	case RTE_MBUF_F_TX_MACSEC: return "RTE_MBUF_F_TX_MACSEC";
+	case RTE_MBUF_F_TX_SEC_OFFLOAD: return "RTE_MBUF_F_TX_SEC_OFFLOAD";
+	case RTE_MBUF_F_TX_UDP_SEG: return "RTE_MBUF_F_TX_UDP_SEG";
+	case RTE_MBUF_F_TX_OUTER_UDP_CKSUM: return "RTE_MBUF_F_TX_OUTER_UDP_CKSUM";
 	default: return NULL;
 	}
 }
@@ -891,33 +891,33 @@ int
 rte_get_tx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
 {
 	const struct flag_mask tx_flags[] = {
-		{ PKT_TX_VLAN, PKT_TX_VLAN, NULL },
-		{ PKT_TX_IP_CKSUM, PKT_TX_IP_CKSUM, NULL },
-		{ PKT_TX_TCP_CKSUM, PKT_TX_L4_MASK, NULL },
-		{ PKT_TX_SCTP_CKSUM, PKT_TX_L4_MASK, NULL },
-		{ PKT_TX_UDP_CKSUM, PKT_TX_L4_MASK, NULL },
-		{ PKT_TX_L4_NO_CKSUM, PKT_TX_L4_MASK, "PKT_TX_L4_NO_CKSUM" },
-		{ PKT_TX_IEEE1588_TMST, PKT_TX_IEEE1588_TMST, NULL },
-		{ PKT_TX_TCP_SEG, PKT_TX_TCP_SEG, NULL },
-		{ PKT_TX_IPV4, PKT_TX_IPV4, NULL },
-		{ PKT_TX_IPV6, PKT_TX_IPV6, NULL },
-		{ PKT_TX_OUTER_IP_CKSUM, PKT_TX_OUTER_IP_CKSUM, NULL },
-		{ PKT_TX_OUTER_IPV4, PKT_TX_OUTER_IPV4, NULL },
-		{ PKT_TX_OUTER_IPV6, PKT_TX_OUTER_IPV6, NULL },
-		{ PKT_TX_TUNNEL_VXLAN, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_GTP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_GRE, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_IPIP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_GENEVE, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_MPLSINUDP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_VXLAN_GPE, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_IP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_UDP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_QINQ, PKT_TX_QINQ, NULL },
-		{ PKT_TX_MACSEC, PKT_TX_MACSEC, NULL },
-		{ PKT_TX_SEC_OFFLOAD, PKT_TX_SEC_OFFLOAD, NULL },
-		{ PKT_TX_UDP_SEG, PKT_TX_UDP_SEG, NULL },
-		{ PKT_TX_OUTER_UDP_CKSUM, PKT_TX_OUTER_UDP_CKSUM, NULL },
+		{ RTE_MBUF_F_TX_VLAN, RTE_MBUF_F_TX_VLAN, NULL },
+		{ RTE_MBUF_F_TX_IP_CKSUM, RTE_MBUF_F_TX_IP_CKSUM, NULL },
+		{ RTE_MBUF_F_TX_TCP_CKSUM, RTE_MBUF_F_TX_L4_MASK, NULL },
+		{ RTE_MBUF_F_TX_SCTP_CKSUM, RTE_MBUF_F_TX_L4_MASK, NULL },
+		{ RTE_MBUF_F_TX_UDP_CKSUM, RTE_MBUF_F_TX_L4_MASK, NULL },
+		{ RTE_MBUF_F_TX_L4_NO_CKSUM, RTE_MBUF_F_TX_L4_MASK, "RTE_MBUF_F_TX_L4_NO_CKSUM" },
+		{ RTE_MBUF_F_TX_IEEE1588_TMST, RTE_MBUF_F_TX_IEEE1588_TMST, NULL },
+		{ RTE_MBUF_F_TX_TCP_SEG, RTE_MBUF_F_TX_TCP_SEG, NULL },
+		{ RTE_MBUF_F_TX_IPV4, RTE_MBUF_F_TX_IPV4, NULL },
+		{ RTE_MBUF_F_TX_IPV6, RTE_MBUF_F_TX_IPV6, NULL },
+		{ RTE_MBUF_F_TX_OUTER_IP_CKSUM, RTE_MBUF_F_TX_OUTER_IP_CKSUM, NULL },
+		{ RTE_MBUF_F_TX_OUTER_IPV4, RTE_MBUF_F_TX_OUTER_IPV4, NULL },
+		{ RTE_MBUF_F_TX_OUTER_IPV6, RTE_MBUF_F_TX_OUTER_IPV6, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_VXLAN, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_GTP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_GRE, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_IPIP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_GENEVE, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_MPLSINUDP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_IP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_UDP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_QINQ, RTE_MBUF_F_TX_QINQ, NULL },
+		{ RTE_MBUF_F_TX_MACSEC, RTE_MBUF_F_TX_MACSEC, NULL },
+		{ RTE_MBUF_F_TX_SEC_OFFLOAD, RTE_MBUF_F_TX_SEC_OFFLOAD, NULL },
+		{ RTE_MBUF_F_TX_UDP_SEG, RTE_MBUF_F_TX_UDP_SEG, NULL },
+		{ RTE_MBUF_F_TX_OUTER_UDP_CKSUM, RTE_MBUF_F_TX_OUTER_UDP_CKSUM, NULL },
 	};
 	const char *name;
 	unsigned int i;
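
These tables back rte_get_rx_ol_flag_list() and rte_get_tx_ol_flag_list(),
which let applications log the renamed flags without hard-coding strings; for
example:

    #include <stdio.h>
    #include <rte_mbuf.h>

    /* Print the symbolic names of all RX offload flags on an mbuf. */
    static void
    log_rx_flags(const struct rte_mbuf *m)
    {
        char buf[512];  /* assumed large enough for every name */

        if (rte_get_rx_ol_flag_list(m->ol_flags, buf, sizeof(buf)) == 0)
            printf("rx ol_flags: %s\n", buf);
    }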
diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
index ec2f4bb188..3ae01c8b3d 100644
--- a/lib/mbuf/rte_mbuf.h
+++ b/lib/mbuf/rte_mbuf.h
@@ -77,7 +77,7 @@ int rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen);
  * @param mask
  *   The mask describing the flag. Usually only one bit must be set.
  *   Several bits can be given if they belong to the same mask.
- *   Ex: PKT_TX_L4_MASK.
+ *   Ex: RTE_MBUF_F_TX_L4_MASK.
  * @return
  *   The name of this flag, or NULL if it's not a valid TX flag.
  */
@@ -849,7 +849,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
 	m->nb_segs = 1;
 	m->port = RTE_MBUF_PORT_INVALID;
 
-	m->ol_flags &= EXT_ATTACHED_MBUF;
+	m->ol_flags &= RTE_MBUF_F_EXTERNAL;
 	m->packet_type = 0;
 	rte_pktmbuf_reset_headroom(m);
 
@@ -1064,7 +1064,7 @@ rte_pktmbuf_attach_extbuf(struct rte_mbuf *m, void *buf_addr,
 	m->data_len = 0;
 	m->data_off = 0;
 
-	m->ol_flags |= EXT_ATTACHED_MBUF;
+	m->ol_flags |= RTE_MBUF_F_EXTERNAL;
 	m->shinfo = shinfo;
 }
 
@@ -1138,7 +1138,7 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
 		/* if m is not direct, get the mbuf that embeds the data */
 		rte_mbuf_refcnt_update(rte_mbuf_from_indirect(m), 1);
 		mi->priv_size = m->priv_size;
-		mi->ol_flags = m->ol_flags | IND_ATTACHED_MBUF;
+		mi->ol_flags = m->ol_flags | RTE_MBUF_F_INDIRECT;
 	}
 
 	__rte_pktmbuf_copy_hdr(mi, m);
@@ -1272,7 +1272,7 @@ static inline int __rte_pktmbuf_pinned_extbuf_decref(struct rte_mbuf *m)
 	struct rte_mbuf_ext_shared_info *shinfo;
 
 	/* Clear flags, mbuf is being freed. */
-	m->ol_flags = EXT_ATTACHED_MBUF;
+	m->ol_flags = RTE_MBUF_F_EXTERNAL;
 	shinfo = m->shinfo;
 
 	/* Optimize for performance - do not dec/reinit */
@@ -1798,28 +1798,28 @@ rte_validate_tx_offload(const struct rte_mbuf *m)
 	uint64_t ol_flags = m->ol_flags;
 
 	/* Does packet set any of available offloads? */
-	if (!(ol_flags & PKT_TX_OFFLOAD_MASK))
+	if (!(ol_flags & RTE_MBUF_F_TX_OFFLOAD_MASK))
 		return 0;
 
 	/* IP checksum can be counted only for IPv4 packet */
-	if ((ol_flags & PKT_TX_IP_CKSUM) && (ol_flags & PKT_TX_IPV6))
+	if ((ol_flags & RTE_MBUF_F_TX_IP_CKSUM) && (ol_flags & RTE_MBUF_F_TX_IPV6))
 		return -EINVAL;
 
 	/* IP type not set when required */
-	if (ol_flags & (PKT_TX_L4_MASK | PKT_TX_TCP_SEG))
-		if (!(ol_flags & (PKT_TX_IPV4 | PKT_TX_IPV6)))
+	if (ol_flags & (RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_TCP_SEG))
+		if (!(ol_flags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)))
 			return -EINVAL;
 
 	/* Check requirements for TSO packet */
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		if ((m->tso_segsz == 0) ||
-				((ol_flags & PKT_TX_IPV4) &&
-				!(ol_flags & PKT_TX_IP_CKSUM)))
+				((ol_flags & RTE_MBUF_F_TX_IPV4) &&
+				 !(ol_flags & RTE_MBUF_F_TX_IP_CKSUM)))
 			return -EINVAL;
 
-	/* PKT_TX_OUTER_IP_CKSUM set for non outer IPv4 packet. */
-	if ((ol_flags & PKT_TX_OUTER_IP_CKSUM) &&
-			!(ol_flags & PKT_TX_OUTER_IPV4))
+	/* RTE_MBUF_F_TX_OUTER_IP_CKSUM set for non outer IPv4 packet. */
+	if ((ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) &&
+			!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4))
 		return -EINVAL;
 
 	return 0;
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index 8e7eef319b..1867d580ff 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -43,273 +43,378 @@ extern "C" {
 /**
  * The RX packet is an 802.1q VLAN packet, and the tci has been
  * saved in mbuf->vlan_tci.
- * If the flag PKT_RX_VLAN_STRIPPED is also present, the VLAN
+ * If the flag RTE_MBUF_F_RX_VLAN_STRIPPED is also present, the VLAN
  * header has been stripped from mbuf data, else it is still
  * present.
  */
-#define PKT_RX_VLAN          (1ULL << 0)
+#define RTE_MBUF_F_RX_VLAN          (1ULL << 0)
+#define PKT_RX_VLAN RTE_DEPRECATED(PKT_RX_VLAN) RTE_MBUF_F_RX_VLAN
 
 /** RX packet with RSS hash result. */
-#define PKT_RX_RSS_HASH      (1ULL << 1)
+#define RTE_MBUF_F_RX_RSS_HASH      (1ULL << 1)
+#define PKT_RX_RSS_HASH RTE_DEPRECATED(PKT_RX_RSS_HASH) RTE_MBUF_F_RX_RSS_HASH
 
  /** RX packet with FDIR match indicate. */
-#define PKT_RX_FDIR          (1ULL << 2)
+#define RTE_MBUF_F_RX_FDIR          (1ULL << 2)
+#define PKT_RX_FDIR RTE_DEPRECATED(PKT_RX_FDIR) RTE_MBUF_F_RX_FDIR
 
 /**
  * This flag is set when the outermost IP header checksum is detected as
  * wrong by the hardware.
  */
-#define PKT_RX_OUTER_IP_CKSUM_BAD (1ULL << 5)
+#define RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD (1ULL << 5)
+#define PKT_RX_OUTER_IP_CKSUM_BAD RTE_DEPRECATED(PKT_RX_OUTER_IP_CKSUM_BAD) \
+		RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD
 
 /**
  * A vlan has been stripped by the hardware and its tci is saved in
  * mbuf->vlan_tci. This can only happen if vlan stripping is enabled
  * in the RX configuration of the PMD.
- * When PKT_RX_VLAN_STRIPPED is set, PKT_RX_VLAN must also be set.
+ * When RTE_MBUF_F_RX_VLAN_STRIPPED is set, RTE_MBUF_F_RX_VLAN must also be set.
  */
-#define PKT_RX_VLAN_STRIPPED (1ULL << 6)
+#define RTE_MBUF_F_RX_VLAN_STRIPPED (1ULL << 6)
+#define PKT_RX_VLAN_STRIPPED RTE_DEPRECATED(PKT_RX_VLAN_STRIPPED) \
+		RTE_MBUF_F_RX_VLAN_STRIPPED
 
 /**
  * Mask of bits used to determine the status of RX IP checksum.
- * - PKT_RX_IP_CKSUM_UNKNOWN: no information about the RX IP checksum
- * - PKT_RX_IP_CKSUM_BAD: the IP checksum in the packet is wrong
- * - PKT_RX_IP_CKSUM_GOOD: the IP checksum in the packet is valid
- * - PKT_RX_IP_CKSUM_NONE: the IP checksum is not correct in the packet
+ * - RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN: no information about the RX IP checksum
+ * - RTE_MBUF_F_RX_IP_CKSUM_BAD: the IP checksum in the packet is wrong
+ * - RTE_MBUF_F_RX_IP_CKSUM_GOOD: the IP checksum in the packet is valid
+ * - RTE_MBUF_F_RX_IP_CKSUM_NONE: the IP checksum is not correct in the packet
  *   data, but the integrity of the IP header is verified.
  */
-#define PKT_RX_IP_CKSUM_MASK ((1ULL << 4) | (1ULL << 7))
+#define RTE_MBUF_F_RX_IP_CKSUM_MASK ((1ULL << 4) | (1ULL << 7))
+#define PKT_RX_IP_CKSUM_MASK RTE_DEPRECATED(PKT_RX_IP_CKSUM_MASK) \
+		RTE_MBUF_F_RX_IP_CKSUM_MASK
 
-#define PKT_RX_IP_CKSUM_UNKNOWN 0
-#define PKT_RX_IP_CKSUM_BAD     (1ULL << 4)
-#define PKT_RX_IP_CKSUM_GOOD    (1ULL << 7)
-#define PKT_RX_IP_CKSUM_NONE    ((1ULL << 4) | (1ULL << 7))
+#define RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN 0
+#define RTE_MBUF_F_RX_IP_CKSUM_BAD     (1ULL << 4)
+#define RTE_MBUF_F_RX_IP_CKSUM_GOOD    (1ULL << 7)
+#define RTE_MBUF_F_RX_IP_CKSUM_NONE    ((1ULL << 4) | (1ULL << 7))
+#define PKT_RX_IP_CKSUM_UNKNOWN RTE_DEPRECATED(PKT_RX_IP_CKSUM_UNKNOWN) \
+		RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN
+#define PKT_RX_IP_CKSUM_BAD RTE_DEPRECATED(PKT_RX_IP_CKSUM_BAD) \
+		RTE_MBUF_F_RX_IP_CKSUM_BAD
+#define PKT_RX_IP_CKSUM_GOOD RTE_DEPRECATED(PKT_RX_IP_CKSUM_GOOD) \
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD
+#define PKT_RX_IP_CKSUM_NONE RTE_DEPRECATED(PKT_RX_IP_CKSUM_NONE) \
+		RTE_MBUF_F_RX_IP_CKSUM_NONE
 
 /**
  * Mask of bits used to determine the status of RX L4 checksum.
- * - PKT_RX_L4_CKSUM_UNKNOWN: no information about the RX L4 checksum
- * - PKT_RX_L4_CKSUM_BAD: the L4 checksum in the packet is wrong
- * - PKT_RX_L4_CKSUM_GOOD: the L4 checksum in the packet is valid
- * - PKT_RX_L4_CKSUM_NONE: the L4 checksum is not correct in the packet
+ * - RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN: no information about the RX L4 checksum
+ * - RTE_MBUF_F_RX_L4_CKSUM_BAD: the L4 checksum in the packet is wrong
+ * - RTE_MBUF_F_RX_L4_CKSUM_GOOD: the L4 checksum in the packet is valid
+ * - RTE_MBUF_F_RX_L4_CKSUM_NONE: the L4 checksum is not correct in the packet
  *   data, but the integrity of the L4 data is verified.
  */
-#define PKT_RX_L4_CKSUM_MASK ((1ULL << 3) | (1ULL << 8))
-
-#define PKT_RX_L4_CKSUM_UNKNOWN 0
-#define PKT_RX_L4_CKSUM_BAD     (1ULL << 3)
-#define PKT_RX_L4_CKSUM_GOOD    (1ULL << 8)
-#define PKT_RX_L4_CKSUM_NONE    ((1ULL << 3) | (1ULL << 8))
+#define RTE_MBUF_F_RX_L4_CKSUM_MASK ((1ULL << 3) | (1ULL << 8))
+#define PKT_RX_L4_CKSUM_MASK RTE_DEPRECATED(PKT_RX_L4_CKSUM_MASK) \
+		RTE_MBUF_F_RX_L4_CKSUM_MASK
+
+#define RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN 0
+#define RTE_MBUF_F_RX_L4_CKSUM_BAD     (1ULL << 3)
+#define RTE_MBUF_F_RX_L4_CKSUM_GOOD    (1ULL << 8)
+#define RTE_MBUF_F_RX_L4_CKSUM_NONE    ((1ULL << 3) | (1ULL << 8))
+#define PKT_RX_L4_CKSUM_UNKNOWN RTE_DEPRECATED(PKT_RX_L4_CKSUM_UNKNOWN) \
+		RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN
+#define PKT_RX_L4_CKSUM_BAD RTE_DEPRECATED(PKT_RX_L4_CKSUM_BAD) \
+		RTE_MBUF_F_RX_L4_CKSUM_BAD
+#define PKT_RX_L4_CKSUM_GOOD RTE_DEPRECATED(PKT_RX_L4_CKSUM_GOOD) \
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD
+#define PKT_RX_L4_CKSUM_NONE RTE_DEPRECATED(PKT_RX_L4_CKSUM_NONE) \
+		RTE_MBUF_F_RX_L4_CKSUM_NONE
 
 /** RX IEEE1588 L2 Ethernet PT Packet. */
-#define PKT_RX_IEEE1588_PTP  (1ULL << 9)
+#define RTE_MBUF_F_RX_IEEE1588_PTP  (1ULL << 9)
+#define PKT_RX_IEEE1588_PTP RTE_DEPRECATED(PKT_RX_IEEE1588_PTP) \
+		RTE_MBUF_F_RX_IEEE1588_PTP
 
 /** RX IEEE1588 L2/L4 timestamped packet.*/
-#define PKT_RX_IEEE1588_TMST (1ULL << 10)
+#define RTE_MBUF_F_RX_IEEE1588_TMST (1ULL << 10)
+#define PKT_RX_IEEE1588_TMST RTE_DEPRECATED(PKT_RX_IEEE1588_TMST) \
+		RTE_MBUF_F_RX_IEEE1588_TMST
 
 /** FD id reported if FDIR match. */
-#define PKT_RX_FDIR_ID       (1ULL << 13)
+#define RTE_MBUF_F_RX_FDIR_ID       (1ULL << 13)
+#define PKT_RX_FDIR_ID RTE_DEPRECATED(PKT_RX_FDIR_ID) \
+		RTE_MBUF_F_RX_FDIR_ID
 
 /** Flexible bytes reported if FDIR match. */
-#define PKT_RX_FDIR_FLX      (1ULL << 14)
+#define RTE_MBUF_F_RX_FDIR_FLX      (1ULL << 14)
+#define PKT_RX_FDIR_FLX RTE_DEPRECATED(PKT_RX_FDIR_FLX) \
+		RTE_MBUF_F_RX_FDIR_FLX
 
 /**
  * The outer VLAN has been stripped by the hardware and its TCI is
  * saved in mbuf->vlan_tci_outer.
  * This can only happen if VLAN stripping is enabled in the Rx
  * configuration of the PMD.
- * When PKT_RX_QINQ_STRIPPED is set, the flags PKT_RX_VLAN and PKT_RX_QINQ
- * must also be set.
+ * When RTE_MBUF_F_RX_QINQ_STRIPPED is set, the flags RTE_MBUF_F_RX_VLAN
+ * and RTE_MBUF_F_RX_QINQ must also be set.
  *
- * - If both PKT_RX_QINQ_STRIPPED and PKT_RX_VLAN_STRIPPED are set, the 2 VLANs
- *   have been stripped by the hardware and their TCIs are saved in
- *   mbuf->vlan_tci (inner) and mbuf->vlan_tci_outer (outer).
- * - If PKT_RX_QINQ_STRIPPED is set and PKT_RX_VLAN_STRIPPED is unset, only the
- *   outer VLAN is removed from packet data, but both tci are saved in
- *   mbuf->vlan_tci (inner) and mbuf->vlan_tci_outer (outer).
+ * - If both RTE_MBUF_F_RX_QINQ_STRIPPED and RTE_MBUF_F_RX_VLAN_STRIPPED are
+ *   set, the 2 VLANs have been stripped by the hardware and their TCIs are
+ *   saved in mbuf->vlan_tci (inner) and mbuf->vlan_tci_outer (outer).
+ * - If RTE_MBUF_F_RX_QINQ_STRIPPED is set and RTE_MBUF_F_RX_VLAN_STRIPPED
+ *   is unset, only the outer VLAN is removed from packet data, but both tci
+ *   are saved in mbuf->vlan_tci (inner) and mbuf->vlan_tci_outer (outer).
  */
-#define PKT_RX_QINQ_STRIPPED (1ULL << 15)
+#define RTE_MBUF_F_RX_QINQ_STRIPPED (1ULL << 15)
+#define PKT_RX_QINQ_STRIPPED RTE_DEPRECATED(PKT_RX_QINQ_STRIPPED) \
+		RTE_MBUF_F_RX_QINQ_STRIPPED
 
 /**
  * When packets are coalesced by a hardware or virtual driver, this flag
  * can be set in the RX mbuf, meaning that the m->tso_segsz field is
  * valid and is set to the segment size of original packets.
  */
-#define PKT_RX_LRO           (1ULL << 16)
+#define RTE_MBUF_F_RX_LRO           (1ULL << 16)
+#define PKT_RX_LRO RTE_DEPRECATED(PKT_RX_LRO) RTE_MBUF_F_RX_LRO
 
 /* There is no flag defined at offset 17. It is free for any future use. */
 
 /**
  * Indicate that security offload processing was applied on the RX packet.
  */
-#define PKT_RX_SEC_OFFLOAD	(1ULL << 18)
+#define RTE_MBUF_F_RX_SEC_OFFLOAD	(1ULL << 18)
+#define PKT_RX_SEC_OFFLOAD RTE_DEPRECATED(PKT_RX_SEC_OFFLOAD) \
+		RTE_MBUF_F_RX_SEC_OFFLOAD
 
 /**
  * Indicate that security offload processing failed on the RX packet.
  */
-#define PKT_RX_SEC_OFFLOAD_FAILED	(1ULL << 19)
+#define RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED	(1ULL << 19)
+#define PKT_RX_SEC_OFFLOAD_FAILED RTE_DEPRECATED(PKT_RX_SEC_OFFLOAD_FAILED) \
+		RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED
 
 /**
  * The RX packet is a double VLAN, and the outer tci has been
- * saved in mbuf->vlan_tci_outer. If this flag is set, PKT_RX_VLAN
+ * saved in mbuf->vlan_tci_outer. If this flag is set, RTE_MBUF_F_RX_VLAN
  * must also be set and the inner tci is saved in mbuf->vlan_tci.
- * If the flag PKT_RX_QINQ_STRIPPED is also present, both VLANs
+ * If the flag RTE_MBUF_F_RX_QINQ_STRIPPED is also present, both VLANs
  * headers have been stripped from mbuf data, else they are still
  * present.
  */
-#define PKT_RX_QINQ          (1ULL << 20)
+#define RTE_MBUF_F_RX_QINQ          (1ULL << 20)
+#define PKT_RX_QINQ RTE_DEPRECATED(PKT_RX_QINQ) RTE_MBUF_F_RX_QINQ
 
 /**
  * Mask of bits used to determine the status of outer RX L4 checksum.
- * - PKT_RX_OUTER_L4_CKSUM_UNKNOWN: no info about the outer RX L4 checksum
- * - PKT_RX_OUTER_L4_CKSUM_BAD: the outer L4 checksum in the packet is wrong
- * - PKT_RX_OUTER_L4_CKSUM_GOOD: the outer L4 checksum in the packet is valid
- * - PKT_RX_OUTER_L4_CKSUM_INVALID: invalid outer L4 checksum state.
+ * - RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN: no info about the outer RX L4
+ *   checksum
+ * - RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD: the outer L4 checksum in the packet
+ *   is wrong
+ * - RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD: the outer L4 checksum in the packet
+ *   is valid
+ * - RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID: invalid outer L4 checksum state.
  *
- * The detection of PKT_RX_OUTER_L4_CKSUM_GOOD shall be based on the given
- * HW capability, At minimum, the PMD should support
- * PKT_RX_OUTER_L4_CKSUM_UNKNOWN and PKT_RX_OUTER_L4_CKSUM_BAD states
- * if the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
- */
-#define PKT_RX_OUTER_L4_CKSUM_MASK	((1ULL << 21) | (1ULL << 22))
-
-#define PKT_RX_OUTER_L4_CKSUM_UNKNOWN	0
-#define PKT_RX_OUTER_L4_CKSUM_BAD	(1ULL << 21)
-#define PKT_RX_OUTER_L4_CKSUM_GOOD	(1ULL << 22)
-#define PKT_RX_OUTER_L4_CKSUM_INVALID	((1ULL << 21) | (1ULL << 22))
-
-/* add new RX flags here, don't forget to update PKT_FIRST_FREE */
-
-#define PKT_FIRST_FREE (1ULL << 23)
-#define PKT_LAST_FREE (1ULL << 40)
-
-/* add new TX flags here, don't forget to update PKT_LAST_FREE  */
+ * The detection of RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD shall be based on the
+ * given HW capability. At minimum, the PMD should support
+ * RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN and RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD
+ * states if the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
+ */
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK	((1ULL << 21) | (1ULL << 22))
+#define PKT_RX_OUTER_L4_CKSUM_MASK RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_MASK) \
+		RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK
+
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN	0
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD	(1ULL << 21)
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD	(1ULL << 22)
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID	((1ULL << 21) | (1ULL << 22))
+#define PKT_RX_OUTER_L4_CKSUM_UNKNOWN \
+	RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_UNKNOWN) \
+	RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
+#define PKT_RX_OUTER_L4_CKSUM_BAD RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_BAD) \
+		RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD
+#define PKT_RX_OUTER_L4_CKSUM_GOOD RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_GOOD) \
+		RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD
+#define PKT_RX_OUTER_L4_CKSUM_INVALID \
+	RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_INVALID) \
+	RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID
+
+/* add new RX flags here, don't forget to update RTE_MBUF_F_FIRST_FREE */
+
+#define RTE_MBUF_F_FIRST_FREE (1ULL << 23)
+#define PKT_FIRST_FREE RTE_DEPRECATED(PKT_FIRST_FREE) RTE_MBUF_F_FIRST_FREE
+#define RTE_MBUF_F_LAST_FREE (1ULL << 40)
+#define PKT_LAST_FREE RTE_DEPRECATED(PKT_LAST_FREE) RTE_MBUF_F_LAST_FREE
+
+/* add new TX flags here, don't forget to update RTE_MBUF_F_LAST_FREE  */
 
 /**
  * Outer UDP checksum offload flag. This flag is used for enabling
  * outer UDP checksum in PMD. To use outer UDP checksum, the user needs to
  * 1) Enable the following in mbuf,
  * a) Fill outer_l2_len and outer_l3_len in mbuf.
- * b) Set the PKT_TX_OUTER_UDP_CKSUM flag.
- * c) Set the PKT_TX_OUTER_IPV4 or PKT_TX_OUTER_IPV6 flag.
+ * b) Set the RTE_MBUF_F_TX_OUTER_UDP_CKSUM flag.
+ * c) Set the RTE_MBUF_F_TX_OUTER_IPV4 or RTE_MBUF_F_TX_OUTER_IPV6 flag.
  * 2) Configure DEV_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
  */
-#define PKT_TX_OUTER_UDP_CKSUM     (1ULL << 41)
+#define RTE_MBUF_F_TX_OUTER_UDP_CKSUM     (1ULL << 41)
+#define PKT_TX_OUTER_UDP_CKSUM RTE_DEPRECATED(PKT_TX_OUTER_UDP_CKSUM) \
+		RTE_MBUF_F_TX_OUTER_UDP_CKSUM
 
 /**
  * UDP Fragmentation Offload flag. This flag is used for enabling UDP
  * fragmentation in SW or in HW. When using UFO, mbuf->tso_segsz is used
  * to store the MSS of UDP fragments.
  */
-#define PKT_TX_UDP_SEG	(1ULL << 42)
+#define RTE_MBUF_F_TX_UDP_SEG	(1ULL << 42)
+#define PKT_TX_UDP_SEG RTE_DEPRECATED(PKT_TX_UDP_SEG) RTE_MBUF_F_TX_UDP_SEG
 
 /**
  * Request security offload processing on the TX packet.
  * To use Tx security offload, the user needs to fill l2_len in mbuf
  * indicating L2 header size and where L3 header starts.
  */
-#define PKT_TX_SEC_OFFLOAD	(1ULL << 43)
+#define RTE_MBUF_F_TX_SEC_OFFLOAD	(1ULL << 43)
+#define PKT_TX_SEC_OFFLOAD RTE_DEPRECATED(PKT_TX_SEC_OFFLOAD) \
+		RTE_MBUF_F_TX_SEC_OFFLOAD
 
 /**
  * Offload the MACsec. This flag must be set by the application to enable
  * this offload feature for a packet to be transmitted.
  */
-#define PKT_TX_MACSEC        (1ULL << 44)
+#define RTE_MBUF_F_TX_MACSEC        (1ULL << 44)
+#define PKT_TX_MACSEC RTE_DEPRECATED(PKT_TX_MACSEC) RTE_MBUF_F_TX_MACSEC
 
 /**
  * Bits 45:48 used for the tunnel type.
  * The tunnel type must be specified for TSO or checksum on the inner part
  * of tunnel packets.
- * These flags can be used with PKT_TX_TCP_SEG for TSO, or PKT_TX_xxx_CKSUM.
+ * These flags can be used with RTE_MBUF_F_TX_TCP_SEG for TSO, or
+ * RTE_MBUF_F_TX_xxx_CKSUM.
  * The mbuf fields for inner and outer header lengths are required:
  * outer_l2_len, outer_l3_len, l2_len, l3_len, l4_len and tso_segsz for TSO.
  */
-#define PKT_TX_TUNNEL_VXLAN   (0x1ULL << 45)
-#define PKT_TX_TUNNEL_GRE     (0x2ULL << 45)
-#define PKT_TX_TUNNEL_IPIP    (0x3ULL << 45)
-#define PKT_TX_TUNNEL_GENEVE  (0x4ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_VXLAN   (0x1ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_GRE     (0x2ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_IPIP    (0x3ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_GENEVE  (0x4ULL << 45)
 /** TX packet with MPLS-in-UDP RFC 7510 header. */
-#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
-#define PKT_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
-#define PKT_TX_TUNNEL_GTP       (0x7ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_GTP       (0x7ULL << 45)
 /**
  * Generic IP encapsulated tunnel type, used for TSO and checksum offload.
  * It can be used for tunnels which are not standards or listed above.
- * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_GRE
- * or PKT_TX_TUNNEL_IPIP if possible.
+ * It is preferred to use specific tunnel flags like RTE_MBUF_F_TX_TUNNEL_GRE
+ * or RTE_MBUF_F_TX_TUNNEL_IPIP if possible.
  * The ethdev must be configured with DEV_TX_OFFLOAD_IP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
- * PKT_TX_xxx_CKSUM.
+ * RTE_MBUF_F_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
  * or checksum are not expected to be updated.
  */
-#define PKT_TX_TUNNEL_IP (0xDULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_IP (0xDULL << 45)
 /**
  * Generic UDP encapsulated tunnel type, used for TSO and checksum offload.
  * UDP tunnel type implies outer IP layer.
  * It can be used for tunnels which are not standards or listed above.
- * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_VXLAN
+ * It is preferred to use specific tunnel flags like RTE_MBUF_F_TX_TUNNEL_VXLAN
  * if possible.
  * The ethdev must be configured with DEV_TX_OFFLOAD_UDP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
- * PKT_TX_xxx_CKSUM.
+ * RTE_MBUF_F_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
  * or checksum are not expected to be updated.
  */
-#define PKT_TX_TUNNEL_UDP (0xEULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_UDP (0xEULL << 45)
 /* add new TX TUNNEL type here */
-#define PKT_TX_TUNNEL_MASK    (0xFULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_MASK    (0xFULL << 45)
+
+#define PKT_TX_TUNNEL_VXLAN RTE_DEPRECATED(PKT_TX_TUNNEL_VXLAN) \
+		RTE_MBUF_F_TX_TUNNEL_VXLAN
+#define PKT_TX_TUNNEL_GRE RTE_DEPRECATED(PKT_TX_TUNNEL_GRE) \
+		RTE_MBUF_F_TX_TUNNEL_GRE
+#define PKT_TX_TUNNEL_IPIP RTE_DEPRECATED(PKT_TX_TUNNEL_IPIP) \
+		RTE_MBUF_F_TX_TUNNEL_IPIP
+#define PKT_TX_TUNNEL_GENEVE RTE_DEPRECATED(PKT_TX_TUNNEL_GENEVE) \
+		RTE_MBUF_F_TX_TUNNEL_GENEVE
+#define PKT_TX_TUNNEL_MPLSINUDP RTE_DEPRECATED(PKT_TX_TUNNEL_MPLSINUDP) \
+		RTE_MBUF_F_TX_TUNNEL_MPLSINUDP
+#define PKT_TX_TUNNEL_VXLAN_GPE RTE_DEPRECATED(PKT_TX_TUNNEL_VXLAN_GPE) \
+		RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE
+#define PKT_TX_TUNNEL_GTP RTE_DEPRECATED(PKT_TX_TUNNEL_GTP) \
+		RTE_MBUF_F_TX_TUNNEL_GTP
+#define PKT_TX_TUNNEL_IP RTE_DEPRECATED(PKT_TX_TUNNEL_IP) \
+		RTE_MBUF_F_TX_TUNNEL_IP
+#define PKT_TX_TUNNEL_UDP RTE_DEPRECATED(PKT_TX_TUNNEL_UDP) \
+		RTE_MBUF_F_TX_TUNNEL_UDP
+#define PKT_TX_TUNNEL_MASK RTE_DEPRECATED(PKT_TX_TUNNEL_MASK) \
+		RTE_MBUF_F_TX_TUNNEL_MASK
 
 /**
  * Double VLAN insertion (QinQ) request to driver, driver may offload the
  * insertion based on device capability.
  * mbuf 'vlan_tci' & 'vlan_tci_outer' must be valid when this flag is set.
  */
-#define PKT_TX_QINQ        (1ULL << 49)
-/** This old name is deprecated. */
-#define PKT_TX_QINQ_PKT RTE_DEPRECATED(PKT_TX_QINQ_PKT) PKT_TX_QINQ
+#define RTE_MBUF_F_TX_QINQ        (1ULL << 49)
+#define PKT_TX_QINQ RTE_DEPRECATED(PKT_TX_QINQ) RTE_MBUF_F_TX_QINQ
+#define PKT_TX_QINQ_PKT RTE_DEPRECATED(PKT_TX_QINQ_PKT) RTE_MBUF_F_TX_QINQ
 
 /**
  * TCP segmentation offload. To enable this offload feature for a
  * packet to be transmitted on hardware supporting TSO:
- *  - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
- *    PKT_TX_TCP_CKSUM)
- *  - set the flag PKT_TX_IPV4 or PKT_TX_IPV6
- *  - if it's IPv4, set the PKT_TX_IP_CKSUM flag
+ *  - set the RTE_MBUF_F_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
+ *    RTE_MBUF_F_TX_TCP_CKSUM)
+ *  - set the flag RTE_MBUF_F_TX_IPV4 or RTE_MBUF_F_TX_IPV6
+ *  - if it's IPv4, set the RTE_MBUF_F_TX_IP_CKSUM flag
  *  - fill the mbuf offload information: l2_len, l3_len, l4_len, tso_segsz
  */
-#define PKT_TX_TCP_SEG       (1ULL << 50)
+#define RTE_MBUF_F_TX_TCP_SEG       (1ULL << 50)
+#define PKT_TX_TCP_SEG RTE_DEPRECATED(PKT_TX_TCP_SEG) RTE_MBUF_F_TX_TCP_SEG
 
 /** TX IEEE1588 packet to timestamp. */
-#define PKT_TX_IEEE1588_TMST (1ULL << 51)
+#define RTE_MBUF_F_TX_IEEE1588_TMST (1ULL << 51)
+#define PKT_TX_IEEE1588_TMST RTE_DEPRECATED(PKT_TX_IEEE1588_TMST) \
+		RTE_MBUF_F_TX_IEEE1588_TMST
 
-/**
+/*
  * Bits 52+53 used for L4 packet type with checksum enabled: 00: Reserved,
  * 01: TCP checksum, 10: SCTP checksum, 11: UDP checksum. To use hardware
  * L4 checksum offload, the user needs to:
  *  - fill l2_len and l3_len in mbuf
- *  - set the flags PKT_TX_TCP_CKSUM, PKT_TX_SCTP_CKSUM or PKT_TX_UDP_CKSUM
- *  - set the flag PKT_TX_IPV4 or PKT_TX_IPV6
+ *  - set the flags RTE_MBUF_F_TX_TCP_CKSUM, RTE_MBUF_F_TX_SCTP_CKSUM or
+ *    RTE_MBUF_F_TX_UDP_CKSUM
+ *  - set the flag RTE_MBUF_F_TX_IPV4 or RTE_MBUF_F_TX_IPV6
  */
-#define PKT_TX_L4_NO_CKSUM   (0ULL << 52) /**< Disable L4 cksum of TX pkt. */
+
+/** Disable L4 cksum of TX pkt. */
+#define RTE_MBUF_F_TX_L4_NO_CKSUM   (0ULL << 52)
 
 /** TCP cksum of TX pkt. computed by NIC. */
-#define PKT_TX_TCP_CKSUM     (1ULL << 52)
+#define RTE_MBUF_F_TX_TCP_CKSUM     (1ULL << 52)
 
 /** SCTP cksum of TX pkt. computed by NIC. */
-#define PKT_TX_SCTP_CKSUM    (2ULL << 52)
+#define RTE_MBUF_F_TX_SCTP_CKSUM    (2ULL << 52)
 
 /** UDP cksum of TX pkt. computed by NIC. */
-#define PKT_TX_UDP_CKSUM     (3ULL << 52)
+#define RTE_MBUF_F_TX_UDP_CKSUM     (3ULL << 52)
 
 /** Mask for L4 cksum offload request. */
-#define PKT_TX_L4_MASK       (3ULL << 52)
+#define RTE_MBUF_F_TX_L4_MASK       (3ULL << 52)
+
+#define PKT_TX_L4_NO_CKSUM RTE_DEPRECATED(PKT_TX_L4_NO_CKSUM) \
+		RTE_MBUF_F_TX_L4_NO_CKSUM
+#define PKT_TX_TCP_CKSUM RTE_DEPRECATED(PKT_TX_TCP_CKSUM) \
+		RTE_MBUF_F_TX_TCP_CKSUM
+#define PKT_TX_SCTP_CKSUM RTE_DEPRECATED(PKT_TX_SCTP_CKSUM) \
+		RTE_MBUF_F_TX_SCTP_CKSUM
+#define PKT_TX_UDP_CKSUM RTE_DEPRECATED(PKT_TX_UDP_CKSUM) \
+		RTE_MBUF_F_TX_UDP_CKSUM
+#define PKT_TX_L4_MASK RTE_DEPRECATED(PKT_TX_L4_MASK) RTE_MBUF_F_TX_L4_MASK
 
 /**
- * Offload the IP checksum in the hardware. The flag PKT_TX_IPV4 should
+ * Offload the IP checksum in the hardware. The flag RTE_MBUF_F_TX_IPV4 should
  * also be set by the application, although a PMD will only check
- * PKT_TX_IP_CKSUM.
+ * RTE_MBUF_F_TX_IP_CKSUM.
  *  - fill the mbuf offload information: l2_len, l3_len
  */
-#define PKT_TX_IP_CKSUM      (1ULL << 54)
+#define RTE_MBUF_F_TX_IP_CKSUM      (1ULL << 54)
+#define PKT_TX_IP_CKSUM RTE_DEPRECATED(PKT_TX_IP_CKSUM) RTE_MBUF_F_TX_IP_CKSUM
 
 /**
  * Packet is IPv4. This flag must be set when using any offload feature
@@ -317,7 +422,8 @@ extern "C" {
  * packet. If the packet is a tunneled packet, this flag is related to
  * the inner headers.
  */
-#define PKT_TX_IPV4          (1ULL << 55)
+#define RTE_MBUF_F_TX_IPV4          (1ULL << 55)
+#define PKT_TX_IPV4 RTE_DEPRECATED(PKT_TX_IPV4) RTE_MBUF_F_TX_IPV4
 
 /**
  * Packet is IPv6. This flag must be set when using an offload feature
@@ -325,67 +431,77 @@ extern "C" {
  * packet. If the packet is a tunneled packet, this flag is related to
  * the inner headers.
  */
-#define PKT_TX_IPV6          (1ULL << 56)
+#define RTE_MBUF_F_TX_IPV6          (1ULL << 56)
+#define PKT_TX_IPV6 RTE_DEPRECATED(PKT_TX_IPV6) RTE_MBUF_F_TX_IPV6
 
 /**
  * VLAN tag insertion request to driver, driver may offload the insertion
  * based on the device capability.
  * mbuf 'vlan_tci' field must be valid when this flag is set.
  */
-#define PKT_TX_VLAN          (1ULL << 57)
-/* this old name is deprecated */
-#define PKT_TX_VLAN_PKT RTE_DEPRECATED(PKT_TX_VLAN_PKT) PKT_TX_VLAN
+#define RTE_MBUF_F_TX_VLAN          (1ULL << 57)
+#define PKT_TX_VLAN RTE_DEPRECATED(PKT_TX_VLAN) RTE_MBUF_F_TX_VLAN
+#define PKT_TX_VLAN_PKT RTE_DEPRECATED(PKT_TX_VLAN_PKT) RTE_MBUF_F_TX_VLAN
 
 /**
  * Offload the IP checksum of an external header in the hardware. The
- * flag PKT_TX_OUTER_IPV4 should also be set by the application, although
- * a PMD will only check PKT_TX_OUTER_IP_CKSUM.
+ * flag RTE_MBUF_F_TX_OUTER_IPV4 should also be set by the application, although
+ * a PMD will only check RTE_MBUF_F_TX_OUTER_IP_CKSUM.
  *  - fill the mbuf offload information: outer_l2_len, outer_l3_len
  */
-#define PKT_TX_OUTER_IP_CKSUM   (1ULL << 58)
+#define RTE_MBUF_F_TX_OUTER_IP_CKSUM   (1ULL << 58)
+#define PKT_TX_OUTER_IP_CKSUM RTE_DEPRECATED(PKT_TX_OUTER_IP_CKSUM) \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM
 
 /**
  * Packet outer header is IPv4. This flag must be set when using any
  * outer offload feature (L3 or L4 checksum) to tell the NIC that the
  * outer header of the tunneled packet is an IPv4 packet.
  */
-#define PKT_TX_OUTER_IPV4   (1ULL << 59)
+#define RTE_MBUF_F_TX_OUTER_IPV4   (1ULL << 59)
+#define PKT_TX_OUTER_IPV4 RTE_DEPRECATED(PKT_TX_OUTER_IPV4) \
+		RTE_MBUF_F_TX_OUTER_IPV4
 
 /**
  * Packet outer header is IPv6. This flag must be set when using any
  * outer offload feature (L4 checksum) to tell the NIC that the outer
  * header of the tunneled packet is an IPv6 packet.
  */
-#define PKT_TX_OUTER_IPV6    (1ULL << 60)
+#define RTE_MBUF_F_TX_OUTER_IPV6    (1ULL << 60)
+#define PKT_TX_OUTER_IPV6 RTE_DEPRECATED(PKT_TX_OUTER_IPV6) \
+		RTE_MBUF_F_TX_OUTER_IPV6
 
 /**
  * Bitmask of all supported packet Tx offload features flags,
  * which can be set for packet.
  */
-#define PKT_TX_OFFLOAD_MASK (    \
-		PKT_TX_OUTER_IPV6 |	 \
-		PKT_TX_OUTER_IPV4 |	 \
-		PKT_TX_OUTER_IP_CKSUM |  \
-		PKT_TX_VLAN |        \
-		PKT_TX_IPV6 |		 \
-		PKT_TX_IPV4 |		 \
-		PKT_TX_IP_CKSUM |        \
-		PKT_TX_L4_MASK |         \
-		PKT_TX_IEEE1588_TMST |	 \
-		PKT_TX_TCP_SEG |         \
-		PKT_TX_QINQ |        \
-		PKT_TX_TUNNEL_MASK |	 \
-		PKT_TX_MACSEC |		 \
-		PKT_TX_SEC_OFFLOAD |	 \
-		PKT_TX_UDP_SEG |	 \
-		PKT_TX_OUTER_UDP_CKSUM)
+#define RTE_MBUF_F_TX_OFFLOAD_MASK (    \
+		RTE_MBUF_F_TX_OUTER_IPV6 |	 \
+		RTE_MBUF_F_TX_OUTER_IPV4 |	 \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |  \
+		RTE_MBUF_F_TX_VLAN |        \
+		RTE_MBUF_F_TX_IPV6 |		 \
+		RTE_MBUF_F_TX_IPV4 |		 \
+		RTE_MBUF_F_TX_IP_CKSUM |        \
+		RTE_MBUF_F_TX_L4_MASK |         \
+		RTE_MBUF_F_TX_IEEE1588_TMST |	 \
+		RTE_MBUF_F_TX_TCP_SEG |         \
+		RTE_MBUF_F_TX_QINQ |        \
+		RTE_MBUF_F_TX_TUNNEL_MASK |	 \
+		RTE_MBUF_F_TX_MACSEC |		 \
+		RTE_MBUF_F_TX_SEC_OFFLOAD |	 \
+		RTE_MBUF_F_TX_UDP_SEG |	 \
+		RTE_MBUF_F_TX_OUTER_UDP_CKSUM)
+#define PKT_TX_OFFLOAD_MASK RTE_DEPRECATED(PKT_TX_OFFLOAD_MASK) RTE_MBUF_F_TX_OFFLOAD_MASK
 
 /**
  * Mbuf having an external buffer attached. shinfo in mbuf must be filled.
  */
-#define EXT_ATTACHED_MBUF    (1ULL << 61)
+#define RTE_MBUF_F_EXTERNAL    (1ULL << 61)
+#define EXT_ATTACHED_MBUF RTE_DEPRECATED(EXT_ATTACHED_MBUF) RTE_MBUF_F_EXTERNAL
 
-#define IND_ATTACHED_MBUF    (1ULL << 62) /**< Indirect attached mbuf */
+#define RTE_MBUF_F_INDIRECT    (1ULL << 62) /**< Indirect attached mbuf */
+#define IND_ATTACHED_MBUF RTE_DEPRECATED(IND_ATTACHED_MBUF) RTE_MBUF_F_INDIRECT
 
 /** Alignment constraint of mbuf private area. */
 #define RTE_MBUF_PRIV_ALIGN 8
@@ -532,7 +648,7 @@ struct rte_mbuf {
 
 	uint32_t pkt_len;         /**< Total pkt len: sum of all segments. */
 	uint16_t data_len;        /**< Amount of data in segment buffer. */
-	/** VLAN TCI (CPU order), valid if PKT_RX_VLAN is set. */
+	/** VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_VLAN is set. */
 	uint16_t vlan_tci;
 
 	RTE_STD_C11
@@ -550,7 +666,7 @@ struct rte_mbuf {
 				};
 				uint32_t hi;
 				/**< First 4 flexible bytes or FD ID, dependent
-				 * on PKT_RX_FDIR_* flag in ol_flags.
+				 * on RTE_MBUF_F_RX_FDIR_* flag in ol_flags.
 				 */
 			} fdir;	/**< Filter identifier if FDIR enabled */
 			struct rte_mbuf_sched sched;
@@ -569,7 +685,7 @@ struct rte_mbuf {
 		} hash;                   /**< hash information */
 	};
 
-	/** Outer VLAN TCI (CPU order), valid if PKT_RX_QINQ is set. */
+	/** Outer VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_QINQ is set. */
 	uint16_t vlan_tci_outer;
 
 	uint16_t buf_len;         /**< Length of segment buffer. */
@@ -659,14 +775,14 @@ struct rte_mbuf_ext_shared_info {
  * If a mbuf has its data in another mbuf and references it by mbuf
  * indirection, this mbuf can be defined as a cloned mbuf.
  */
-#define RTE_MBUF_CLONED(mb)     ((mb)->ol_flags & IND_ATTACHED_MBUF)
+#define RTE_MBUF_CLONED(mb)     ((mb)->ol_flags & RTE_MBUF_F_INDIRECT)
 
 /**
  * Returns TRUE if given mbuf has an external buffer, or FALSE otherwise.
  *
  * External buffer is a user-provided anonymous buffer.
  */
-#define RTE_MBUF_HAS_EXTBUF(mb) ((mb)->ol_flags & EXT_ATTACHED_MBUF)
+#define RTE_MBUF_HAS_EXTBUF(mb) ((mb)->ol_flags & RTE_MBUF_F_EXTERNAL)
 
 /**
  * Returns TRUE if given mbuf is direct, or FALSE otherwise.
@@ -675,7 +791,7 @@ struct rte_mbuf_ext_shared_info {
  * can be defined as a direct mbuf.
  */
 #define RTE_MBUF_DIRECT(mb) \
-	(!((mb)->ol_flags & (IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF)))
+	(!((mb)->ol_flags & (RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL)))
 
 /** Uninitialized or unspecified port. */
 #define RTE_MBUF_PORT_INVALID UINT16_MAX
diff --git a/lib/mbuf/rte_mbuf_dyn.c b/lib/mbuf/rte_mbuf_dyn.c
index d55e162a68..db8e020665 100644
--- a/lib/mbuf/rte_mbuf_dyn.c
+++ b/lib/mbuf/rte_mbuf_dyn.c
@@ -130,7 +130,7 @@ init_shared_mem(void)
 		mark_free(dynfield1);
 
 		/* init free_flags */
-		for (mask = PKT_FIRST_FREE; mask <= PKT_LAST_FREE; mask <<= 1)
+		for (mask = RTE_MBUF_F_FIRST_FREE; mask <= RTE_MBUF_F_LAST_FREE; mask <<= 1)
 			shm->free_flags |= mask;
 
 		process_score();
diff --git a/lib/net/rte_ether.h b/lib/net/rte_ether.h
index b83e0d3fce..2c7da55b6b 100644
--- a/lib/net/rte_ether.h
+++ b/lib/net/rte_ether.h
@@ -331,7 +331,7 @@ static inline int rte_vlan_strip(struct rte_mbuf *m)
 		return -1;
 
 	vh = (struct rte_vlan_hdr *)(eh + 1);
-	m->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+	m->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 	m->vlan_tci = rte_be_to_cpu_16(vh->vlan_tci);
 
 	/* Copy ether header over rather than moving whole packet */
@@ -378,9 +378,9 @@ static inline int rte_vlan_insert(struct rte_mbuf **m)
 	vh = (struct rte_vlan_hdr *) (nh + 1);
 	vh->vlan_tci = rte_cpu_to_be_16((*m)->vlan_tci);
 
-	(*m)->ol_flags &= ~(PKT_RX_VLAN_STRIPPED | PKT_TX_VLAN);
+	(*m)->ol_flags &= ~(RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_TX_VLAN);
 
-	if ((*m)->ol_flags & PKT_TX_TUNNEL_MASK)
+	if ((*m)->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 		(*m)->outer_l2_len += sizeof(struct rte_vlan_hdr);
 	else
 		(*m)->l2_len += sizeof(struct rte_vlan_hdr);
diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
index b3d45e85db..c37cb04b60 100644
--- a/lib/net/rte_ip.h
+++ b/lib/net/rte_ip.h
@@ -334,7 +334,7 @@ rte_ipv4_phdr_cksum(const struct rte_ipv4_hdr *ipv4_hdr, uint64_t ol_flags)
 	psd_hdr.dst_addr = ipv4_hdr->dst_addr;
 	psd_hdr.zero = 0;
 	psd_hdr.proto = ipv4_hdr->next_proto_id;
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		psd_hdr.len = 0;
 	} else {
 		l3_len = rte_be_to_cpu_16(ipv4_hdr->total_length);
@@ -474,7 +474,7 @@ rte_ipv6_phdr_cksum(const struct rte_ipv6_hdr *ipv6_hdr, uint64_t ol_flags)
 	} psd_hdr;
 
 	psd_hdr.proto = (uint32_t)(ipv6_hdr->proto << 24);
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		psd_hdr.len = 0;
 	} else {
 		psd_hdr.len = ipv6_hdr->payload_len;
diff --git a/lib/net/rte_net.h b/lib/net/rte_net.h
index f4460202c0..53a7f4d360 100644
--- a/lib/net/rte_net.h
+++ b/lib/net/rte_net.h
@@ -121,17 +121,17 @@ rte_net_intel_cksum_flags_prepare(struct rte_mbuf *m, uint64_t ol_flags)
 	 * Mainly it is required to avoid fragmented headers check if
 	 * no offloads are requested.
 	 */
-	if (!(ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK | PKT_TX_TCP_SEG |
-			  PKT_TX_OUTER_IP_CKSUM)))
+	if (!(ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_TCP_SEG |
+			  RTE_MBUF_F_TX_OUTER_IP_CKSUM)))
 		return 0;
 
-	if (ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6)) {
+	if (ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6)) {
 		inner_l3_offset += m->outer_l2_len + m->outer_l3_len;
 		/*
 		 * prepare outer IPv4 header checksum by setting it to 0,
 		 * in order to be computed by hardware NICs.
 		 */
-		if (ol_flags & PKT_TX_OUTER_IP_CKSUM) {
+		if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) {
 			ipv4_hdr = rte_pktmbuf_mtod_offset(m,
 					struct rte_ipv4_hdr *, m->outer_l2_len);
 			ipv4_hdr->hdr_checksum = 0;
@@ -147,16 +147,16 @@ rte_net_intel_cksum_flags_prepare(struct rte_mbuf *m, uint64_t ol_flags)
 		     inner_l3_offset + m->l3_len + m->l4_len))
 		return -ENOTSUP;
 
-	if (ol_flags & PKT_TX_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
 				inner_l3_offset);
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			ipv4_hdr->hdr_checksum = 0;
 	}
 
-	if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM) {
-		if (ol_flags & PKT_TX_IPV4) {
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM) {
+		if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 			udp_hdr = (struct rte_udp_hdr *)((char *)ipv4_hdr +
 					m->l3_len);
 			udp_hdr->dgram_cksum = rte_ipv4_phdr_cksum(ipv4_hdr,
@@ -171,9 +171,9 @@ rte_net_intel_cksum_flags_prepare(struct rte_mbuf *m, uint64_t ol_flags)
 			udp_hdr->dgram_cksum = rte_ipv6_phdr_cksum(ipv6_hdr,
 					ol_flags);
 		}
-	} else if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM ||
-			(ol_flags & PKT_TX_TCP_SEG)) {
-		if (ol_flags & PKT_TX_IPV4) {
+	} else if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM ||
+			(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
+		if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 			/* non-TSO tcp or TSO */
 			tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr +
 					m->l3_len);
diff --git a/lib/pipeline/rte_table_action.c b/lib/pipeline/rte_table_action.c
index 4b0316bfed..ebab2444d3 100644
--- a/lib/pipeline/rte_table_action.c
+++ b/lib/pipeline/rte_table_action.c
@@ -2085,7 +2085,7 @@ pkt_work_tag(struct rte_mbuf *mbuf,
 	struct tag_data *data)
 {
 	mbuf->hash.fdir.hi = data->tag;
-	mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	mbuf->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 }
 
 static __rte_always_inline void
@@ -2103,10 +2103,10 @@ pkt4_work_tag(struct rte_mbuf *mbuf0,
 	mbuf2->hash.fdir.hi = data2->tag;
 	mbuf3->hash.fdir.hi = data3->tag;
 
-	mbuf0->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-	mbuf1->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-	mbuf2->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-	mbuf3->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	mbuf0->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
+	mbuf1->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
+	mbuf2->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
+	mbuf3->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 }
 
 /**
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 7eb9f109ae..eb1b38be4d 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -236,10 +236,10 @@ struct rte_security_ipsec_sa_options {
 	 * * 0: Inner packet IP header checksum is not computed/verified.
 	 *
 	 * The checksum verification status would be set in mbuf using
-	 * PKT_RX_IP_CKSUM_xxx flags.
+	 * RTE_MBUF_F_RX_IP_CKSUM_xxx flags.
 	 *
 	 * Inner IP checksum computation can also be enabled (per operation)
-	 * by setting the flag PKT_TX_IP_CKSUM in mbuf.
+	 * by setting the flag RTE_MBUF_F_TX_IP_CKSUM in mbuf.
 	 */
 	uint32_t ip_csum_enable : 1;
 
@@ -251,11 +251,11 @@ struct rte_security_ipsec_sa_options {
 	 * * 0: Inner packet L4 checksum is not computed/verified.
 	 *
 	 * The checksum verification status would be set in mbuf using
-	 * PKT_RX_L4_CKSUM_xxx flags.
+	 * RTE_MBUF_F_RX_L4_CKSUM_xxx flags.
 	 *
 	 * Inner L4 checksum computation can also be enabled (per operation)
-	 * by setting the flags PKT_TX_TCP_CKSUM or PKT_TX_SCTP_CKSUM or
-	 * PKT_TX_UDP_CKSUM or PKT_TX_L4_MASK in mbuf.
+	 * by setting the flags RTE_MBUF_F_TX_TCP_CKSUM or RTE_MBUF_F_TX_SCTP_CKSUM or
+	 * RTE_MBUF_F_TX_UDP_CKSUM or RTE_MBUF_F_TX_L4_MASK in mbuf.
 	 */
 	uint32_t l4_csum_enable : 1;
 };
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index e481906113..b6140a643b 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -411,25 +411,25 @@ vhost_shadow_enqueue_single_packed(struct virtio_net *dev,
 static __rte_always_inline void
 virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
 {
-	uint64_t csum_l4 = m_buf->ol_flags & PKT_TX_L4_MASK;
+	uint64_t csum_l4 = m_buf->ol_flags & RTE_MBUF_F_TX_L4_MASK;
 
-	if (m_buf->ol_flags & PKT_TX_TCP_SEG)
-		csum_l4 |= PKT_TX_TCP_CKSUM;
+	if (m_buf->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
+		csum_l4 |= RTE_MBUF_F_TX_TCP_CKSUM;
 
 	if (csum_l4) {
 		net_hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
 		net_hdr->csum_start = m_buf->l2_len + m_buf->l3_len;
 
 		switch (csum_l4) {
-		case PKT_TX_TCP_CKSUM:
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			net_hdr->csum_offset = (offsetof(struct rte_tcp_hdr,
 						cksum));
 			break;
-		case PKT_TX_UDP_CKSUM:
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			net_hdr->csum_offset = (offsetof(struct rte_udp_hdr,
 						dgram_cksum));
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			net_hdr->csum_offset = (offsetof(struct rte_sctp_hdr,
 						cksum));
 			break;
@@ -441,7 +441,7 @@ virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
 	}
 
 	/* IP cksum verification cannot be bypassed, then calculate here */
-	if (m_buf->ol_flags & PKT_TX_IP_CKSUM) {
+	if (m_buf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		struct rte_ipv4_hdr *ipv4_hdr;
 
 		ipv4_hdr = rte_pktmbuf_mtod_offset(m_buf, struct rte_ipv4_hdr *,
@@ -450,15 +450,15 @@ virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
 		ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
 	}
 
-	if (m_buf->ol_flags & PKT_TX_TCP_SEG) {
-		if (m_buf->ol_flags & PKT_TX_IPV4)
+	if (m_buf->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
+		if (m_buf->ol_flags & RTE_MBUF_F_TX_IPV4)
 			net_hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
 		else
 			net_hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV6;
 		net_hdr->gso_size = m_buf->tso_segsz;
 		net_hdr->hdr_len = m_buf->l2_len + m_buf->l3_len
 					+ m_buf->l4_len;
-	} else if (m_buf->ol_flags & PKT_TX_UDP_SEG) {
+	} else if (m_buf->ol_flags & RTE_MBUF_F_TX_UDP_SEG) {
 		net_hdr->gso_type = VIRTIO_NET_HDR_GSO_UDP;
 		net_hdr->gso_size = m_buf->tso_segsz;
 		net_hdr->hdr_len = m_buf->l2_len + m_buf->l3_len +
@@ -2259,7 +2259,7 @@ parse_headers(struct rte_mbuf *m, uint8_t *l4_proto)
 		m->l3_len = rte_ipv4_hdr_len(ipv4_hdr);
 		if (data_len < m->l2_len + m->l3_len)
 			goto error;
-		m->ol_flags |= PKT_TX_IPV4;
+		m->ol_flags |= RTE_MBUF_F_TX_IPV4;
 		*l4_proto = ipv4_hdr->next_proto_id;
 		break;
 	case RTE_ETHER_TYPE_IPV6:
@@ -2268,7 +2268,7 @@ parse_headers(struct rte_mbuf *m, uint8_t *l4_proto)
 		ipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,
 				m->l2_len);
 		m->l3_len = sizeof(struct rte_ipv6_hdr);
-		m->ol_flags |= PKT_TX_IPV6;
+		m->ol_flags |= RTE_MBUF_F_TX_IPV6;
 		*l4_proto = ipv6_hdr->proto;
 		break;
 	default:
@@ -2323,17 +2323,17 @@ vhost_dequeue_offload_legacy(struct virtio_net_hdr *hdr, struct rte_mbuf *m)
 			case (offsetof(struct rte_tcp_hdr, cksum)):
 				if (l4_proto != IPPROTO_TCP)
 					goto error;
-				m->ol_flags |= PKT_TX_TCP_CKSUM;
+				m->ol_flags |= RTE_MBUF_F_TX_TCP_CKSUM;
 				break;
 			case (offsetof(struct rte_udp_hdr, dgram_cksum)):
 				if (l4_proto != IPPROTO_UDP)
 					goto error;
-				m->ol_flags |= PKT_TX_UDP_CKSUM;
+				m->ol_flags |= RTE_MBUF_F_TX_UDP_CKSUM;
 				break;
 			case (offsetof(struct rte_sctp_hdr, cksum)):
 				if (l4_proto != IPPROTO_SCTP)
 					goto error;
-				m->ol_flags |= PKT_TX_SCTP_CKSUM;
+				m->ol_flags |= RTE_MBUF_F_TX_SCTP_CKSUM;
 				break;
 			default:
 				goto error;
@@ -2355,14 +2355,14 @@ vhost_dequeue_offload_legacy(struct virtio_net_hdr *hdr, struct rte_mbuf *m)
 			tcp_len = (tcp_hdr->data_off & 0xf0) >> 2;
 			if (data_len < m->l2_len + m->l3_len + tcp_len)
 				goto error;
-			m->ol_flags |= PKT_TX_TCP_SEG;
+			m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 			m->tso_segsz = hdr->gso_size;
 			m->l4_len = tcp_len;
 			break;
 		case VIRTIO_NET_HDR_GSO_UDP:
 			if (l4_proto != IPPROTO_UDP)
 				goto error;
-			m->ol_flags |= PKT_TX_UDP_SEG;
+			m->ol_flags |= RTE_MBUF_F_TX_UDP_SEG;
 			m->tso_segsz = hdr->gso_size;
 			m->l4_len = sizeof(struct rte_udp_hdr);
 			break;
@@ -2396,7 +2396,7 @@ vhost_dequeue_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *m,
 		return;
 	}
 
-	m->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+	m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	ptype = rte_net_get_ptype(m, &hdr_lens, RTE_PTYPE_ALL_MASK);
 	m->packet_type = ptype;
@@ -2423,7 +2423,7 @@ vhost_dequeue_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *m,
 
 		hdrlen = hdr_lens.l2_len + hdr_lens.l3_len + hdr_lens.l4_len;
 		if (hdr->csum_start <= hdrlen && l4_supported != 0) {
-			m->ol_flags |= PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_NONE;
 		} else {
 			/* Unknown proto or tunnel, do sw cksum. We can assume
 			 * the cksum field is in the first segment since the
@@ -2453,13 +2453,13 @@ vhost_dequeue_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *m,
 		case VIRTIO_NET_HDR_GSO_TCPV6:
 			if ((ptype & RTE_PTYPE_L4_MASK) != RTE_PTYPE_L4_TCP)
 				break;
-			m->ol_flags |= PKT_RX_LRO | PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_LRO | RTE_MBUF_F_RX_L4_CKSUM_NONE;
 			m->tso_segsz = hdr->gso_size;
 			break;
 		case VIRTIO_NET_HDR_GSO_UDP:
 			if ((ptype & RTE_PTYPE_L4_MASK) != RTE_PTYPE_L4_UDP)
 				break;
-			m->ol_flags |= PKT_RX_LRO | PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_LRO | RTE_MBUF_F_RX_L4_CKSUM_NONE;
 			m->tso_segsz = hdr->gso_size;
 			break;
 		default:
-- 
2.30.2


^ permalink raw reply	[relevance 1%]
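
The compatibility shims above all follow one pattern: each old PKT_* macro
expands to RTE_DEPRECATED(<old name>) followed by the new RTE_MBUF_F_* value,
so legacy code keeps compiling with the same flag bits while emitting a
build-time deprecation warning. A minimal sketch of the effect in application
code (the helper function is hypothetical; the flag names are from the patch
above):

#include <rte_mbuf.h>

/* Hypothetical helper: request Tx checksum offloads on an mbuf. */
static void
mark_tx_cksum(struct rte_mbuf *m)
{
	/* New names compile cleanly. */
	m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM;

	/* The old name still yields the same bit, but the RTE_DEPRECATED()
	 * expansion makes the compiler print a warning naming the
	 * deprecated macro.
	 */
	m->ol_flags |= PKT_TX_TCP_CKSUM; /* == RTE_MBUF_F_TX_TCP_CKSUM */
}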

* [dpdk-dev] [PATCH v4 14/14] eventdev: mark trace variables as internal
    2021-10-15 19:02  2%   ` [dpdk-dev] [PATCH v4 04/14] eventdev: move inline APIs into separate structure pbhagavatula
@ 2021-10-15 19:02  9%   ` pbhagavatula
  2021-10-17  5:58  0%     ` Jerin Jacob
    2 siblings, 1 reply; 200+ results
From: pbhagavatula @ 2021-10-15 19:02 UTC (permalink / raw)
  To: jerinj, Ray Kinsella; +Cc: dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Mark rte_trace global variables as internal, i.e. remove them
from the experimental section of the version map.
Some of them are used in inline APIs; mark those as global.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
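
The global/internal split here is driven by linkage: trace points referenced
from static inline fast-path functions are compiled into the application
itself, so those symbols must stay exported even though they are not part of
the public API. A minimal analogue of the constraint, with hypothetical
names:

#include <stdint.h>

/* Exported by the shared library; referenced from the inline below. */
extern uint64_t __trace_deq_burst_hits;

/* Lives in a public header, so it is inlined into application code; the
 * reference to __trace_deq_burst_hits is then resolved against the library
 * when the application links, which is why the symbol must remain visible
 * (global) in the version map instead of being hidden under local: *.
 */
static inline void
trace_deq_burst(void)
{
	__trace_deq_burst_hits++;
}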
---
 doc/guides/rel_notes/release_21_11.rst | 12 +++++
 lib/eventdev/version.map               | 71 ++++++++++++--------------
 2 files changed, 44 insertions(+), 39 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 38e601c236..5b4a05c3ae 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -226,6 +226,9 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* eventdev: Event vector configuration APIs have been made stable.
+  Memory used by timer adapters has been moved to hugepages to prevent
+  TLB misses and to align with the memory layout of other subsystems.
 
 ABI Changes
 -----------
@@ -277,6 +280,15 @@ ABI Changes
   were added in structure ``rte_event_eth_rx_adapter_stats`` to get additional
   status.
 
+* eventdev: A new structure ``rte_event_fp_ops`` has been added which is now used
+  by the fastpath inline functions. The structures ``rte_eventdev``,
+  ``rte_eventdev_data`` have been made internal. ``rte_eventdevs[]`` can no
+  longer be accessed directly by the user. This change is transparent to both
+  applications and PMDs.
+
+* eventdev: Re-arrange fields in ``rte_event_timer`` to remove holes.
+  ``rte_event_timer_adapter_pmd.h`` has been made internal.
+
 
 Known Issues
 ------------
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 8f2fb0cf14..cd37164141 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -1,6 +1,13 @@
 DPDK_22 {
 	global:
 
+	__rte_eventdev_trace_crypto_adapter_enqueue;
+	__rte_eventdev_trace_deq_burst;
+	__rte_eventdev_trace_enq_burst;
+	__rte_eventdev_trace_eth_tx_adapter_enqueue;
+	__rte_eventdev_trace_timer_arm_burst;
+	__rte_eventdev_trace_timer_arm_tmo_tick_burst;
+	__rte_eventdev_trace_timer_cancel_burst;
 	rte_event_crypto_adapter_caps_get;
 	rte_event_crypto_adapter_create;
 	rte_event_crypto_adapter_create_ext;
@@ -42,8 +49,8 @@ DPDK_22 {
 	rte_event_eth_rx_adapter_start;
 	rte_event_eth_rx_adapter_stats_get;
 	rte_event_eth_rx_adapter_stats_reset;
-	rte_event_eth_rx_adapter_vector_limits_get;
 	rte_event_eth_rx_adapter_stop;
+	rte_event_eth_rx_adapter_vector_limits_get;
 	rte_event_eth_tx_adapter_caps_get;
 	rte_event_eth_tx_adapter_create;
 	rte_event_eth_tx_adapter_create_ext;
@@ -56,6 +63,7 @@ DPDK_22 {
 	rte_event_eth_tx_adapter_stats_get;
 	rte_event_eth_tx_adapter_stats_reset;
 	rte_event_eth_tx_adapter_stop;
+	rte_event_fp_ops;
 	rte_event_port_attr_get;
 	rte_event_port_default_conf_get;
 	rte_event_port_link;
@@ -86,25 +94,28 @@ DPDK_22 {
 	rte_event_timer_cancel_burst;
 	rte_event_vector_pool_create;
 
-	#added in 21.11
-	rte_event_fp_ops;
-
 	local: *;
 };
 
 EXPERIMENTAL {
 	global:
 
-	# added in 20.05
-	__rte_eventdev_trace_configure;
-	__rte_eventdev_trace_queue_setup;
-	__rte_eventdev_trace_port_link;
-	__rte_eventdev_trace_port_unlink;
-	__rte_eventdev_trace_start;
-	__rte_eventdev_trace_stop;
+	# added in 21.11
+	rte_event_eth_rx_adapter_create_with_params;
+	rte_event_eth_rx_adapter_queue_conf_get;
+};
+
+INTERNAL {
+	global:
+
 	__rte_eventdev_trace_close;
-	__rte_eventdev_trace_deq_burst;
-	__rte_eventdev_trace_enq_burst;
+	__rte_eventdev_trace_configure;
+	__rte_eventdev_trace_crypto_adapter_create;
+	__rte_eventdev_trace_crypto_adapter_free;
+	__rte_eventdev_trace_crypto_adapter_queue_pair_add;
+	__rte_eventdev_trace_crypto_adapter_queue_pair_del;
+	__rte_eventdev_trace_crypto_adapter_start;
+	__rte_eventdev_trace_crypto_adapter_stop;
 	__rte_eventdev_trace_eth_rx_adapter_create;
 	__rte_eventdev_trace_eth_rx_adapter_free;
 	__rte_eventdev_trace_eth_rx_adapter_queue_add;
@@ -117,38 +128,19 @@ EXPERIMENTAL {
 	__rte_eventdev_trace_eth_tx_adapter_queue_del;
 	__rte_eventdev_trace_eth_tx_adapter_start;
 	__rte_eventdev_trace_eth_tx_adapter_stop;
-	__rte_eventdev_trace_eth_tx_adapter_enqueue;
+	__rte_eventdev_trace_port_link;
+	__rte_eventdev_trace_port_setup;
+	__rte_eventdev_trace_port_unlink;
+	__rte_eventdev_trace_queue_setup;
+	__rte_eventdev_trace_start;
+	__rte_eventdev_trace_stop;
 	__rte_eventdev_trace_timer_adapter_create;
+	__rte_eventdev_trace_timer_adapter_free;
 	__rte_eventdev_trace_timer_adapter_start;
 	__rte_eventdev_trace_timer_adapter_stop;
-	__rte_eventdev_trace_timer_adapter_free;
-	__rte_eventdev_trace_timer_arm_burst;
-	__rte_eventdev_trace_timer_arm_tmo_tick_burst;
-	__rte_eventdev_trace_timer_cancel_burst;
-	__rte_eventdev_trace_crypto_adapter_create;
-	__rte_eventdev_trace_crypto_adapter_free;
-	__rte_eventdev_trace_crypto_adapter_queue_pair_add;
-	__rte_eventdev_trace_crypto_adapter_queue_pair_del;
-	__rte_eventdev_trace_crypto_adapter_start;
-	__rte_eventdev_trace_crypto_adapter_stop;
-
-	# changed in 20.11
-	__rte_eventdev_trace_port_setup;
-	# added in 21.11
-	rte_event_eth_rx_adapter_create_with_params;
-
-	#added in 21.05
-	__rte_eventdev_trace_crypto_adapter_enqueue;
-	rte_event_eth_rx_adapter_queue_conf_get;
-};
-
-INTERNAL {
-	global:
-
 	event_dev_fp_ops_reset;
 	event_dev_fp_ops_set;
 	event_dev_probing_finish;
-	rte_event_pmd_selftest_seqn_dynfield_offset;
 	rte_event_pmd_allocate;
 	rte_event_pmd_get_named_dev;
 	rte_event_pmd_is_valid_dev;
@@ -156,6 +148,7 @@ INTERNAL {
 	rte_event_pmd_pci_probe_named;
 	rte_event_pmd_pci_remove;
 	rte_event_pmd_release;
+	rte_event_pmd_selftest_seqn_dynfield_offset;
 	rte_event_pmd_vdev_init;
 	rte_event_pmd_vdev_uninit;
 	rte_eventdevs;
-- 
2.17.1


^ permalink raw reply	[relevance 9%]

* [dpdk-dev] [PATCH v4 04/14] eventdev: move inline APIs into separate structure
  @ 2021-10-15 19:02  2%   ` pbhagavatula
  2021-10-15 19:02  9%   ` [dpdk-dev] [PATCH v4 14/14] eventdev: mark trace variables as internal pbhagavatula
    2 siblings, 0 replies; 200+ results
From: pbhagavatula @ 2021-10-15 19:02 UTC (permalink / raw)
  To: jerinj, Ray Kinsella; +Cc: dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Move fastpath inline function pointers from rte_eventdev into a
separate structure accessed via a flat array.
The intention is to make rte_eventdev and related structures private
to avoid future API/ABI breakages.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
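
The resulting fast-path dispatch is a single flat-array lookup. A sketch of
the shape of an inline API built on top of the new structure (debug checks
omitted; the wrapper body is an approximation, while the structure and array
are the ones added in this patch):

static __rte_always_inline uint16_t
rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
			const struct rte_event ev[], uint16_t nb_events)
{
	const struct rte_event_fp_ops *fp_ops = &rte_event_fp_ops[dev_id];

	/* fp_ops->data[] holds the per-port private pointers, replacing
	 * the old dereference of rte_eventdevs[dev_id].data->ports[].
	 */
	return fp_ops->enqueue_burst(fp_ops->data[port_id], ev, nb_events);
}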
---
 lib/eventdev/eventdev_pmd.h      |  38 +++++++++++
 lib/eventdev/eventdev_pmd_pci.h  |   4 +-
 lib/eventdev/eventdev_private.c  | 112 +++++++++++++++++++++++++++++++
 lib/eventdev/meson.build         |  21 +++---
 lib/eventdev/rte_eventdev.c      |  22 +++++-
 lib/eventdev/rte_eventdev_core.h |  26 +++++++
 lib/eventdev/version.map         |   6 ++
 7 files changed, 217 insertions(+), 12 deletions(-)
 create mode 100644 lib/eventdev/eventdev_private.c

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 9b2aec8371..0532b542d4 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -1188,4 +1188,42 @@ __rte_internal
 int
 rte_event_pmd_release(struct rte_eventdev *eventdev);
 
+/**
+ *
+ * @internal
+ * This is the last step of device probing.
+ * It must be called after a port is allocated and initialized successfully.
+ *
+ * @param eventdev
+ *  New event device.
+ */
+__rte_internal
+void
+event_dev_probing_finish(struct rte_eventdev *eventdev);
+
+/**
+ * Reset eventdevice fastpath APIs to dummy values.
+ *
+ * @param fp_ops
+ * The *fp_ops* pointer to reset.
+ */
+__rte_internal
+void
+event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op);
+
+/**
+ * Set eventdevice fastpath APIs to event device values.
+ *
+ * @param fp_ops
+ * The *fp_ops* pointer to set.
+ */
+__rte_internal
+void
+event_dev_fp_ops_set(struct rte_event_fp_ops *fp_ops,
+		     const struct rte_eventdev *dev);
+
+#ifdef __cplusplus
+}
+#endif
+
 #endif /* _RTE_EVENTDEV_PMD_H_ */
diff --git a/lib/eventdev/eventdev_pmd_pci.h b/lib/eventdev/eventdev_pmd_pci.h
index 2f12a5eb24..499852db16 100644
--- a/lib/eventdev/eventdev_pmd_pci.h
+++ b/lib/eventdev/eventdev_pmd_pci.h
@@ -67,8 +67,10 @@ rte_event_pmd_pci_probe_named(struct rte_pci_driver *pci_drv,
 
 	/* Invoke PMD device initialization function */
 	retval = devinit(eventdev);
-	if (retval == 0)
+	if (retval == 0) {
+		event_dev_probing_finish(eventdev);
 		return 0;
+	}
 
 	RTE_EDEV_LOG_ERR("driver %s: (vendor_id=0x%x device_id=0x%x)"
 			" failed", pci_drv->driver.name,
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
new file mode 100644
index 0000000000..9084833847
--- /dev/null
+++ b/lib/eventdev/eventdev_private.c
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "eventdev_pmd.h"
+#include "rte_eventdev.h"
+
+static uint16_t
+dummy_event_enqueue(__rte_unused void *port,
+		    __rte_unused const struct rte_event *ev)
+{
+	RTE_EDEV_LOG_ERR(
+		"event enqueue requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_enqueue_burst(__rte_unused void *port,
+			  __rte_unused const struct rte_event ev[],
+			  __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event enqueue burst requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_dequeue(__rte_unused void *port, __rte_unused struct rte_event *ev,
+		    __rte_unused uint64_t timeout_ticks)
+{
+	RTE_EDEV_LOG_ERR(
+		"event dequeue requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_dequeue_burst(__rte_unused void *port,
+			  __rte_unused struct rte_event ev[],
+			  __rte_unused uint16_t nb_events,
+			  __rte_unused uint64_t timeout_ticks)
+{
+	RTE_EDEV_LOG_ERR(
+		"event dequeue burst requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_tx_adapter_enqueue(__rte_unused void *port,
+			       __rte_unused struct rte_event ev[],
+			       __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event Tx adapter enqueue requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_tx_adapter_enqueue_same_dest(__rte_unused void *port,
+					 __rte_unused struct rte_event ev[],
+					 __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event Tx adapter enqueue same destination requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_crypto_adapter_enqueue(__rte_unused void *port,
+				   __rte_unused struct rte_event ev[],
+				   __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event crypto adapter enqueue requested for unconfigured event device");
+	return 0;
+}
+
+void
+event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
+{
+	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+	static const struct rte_event_fp_ops dummy = {
+		.enqueue = dummy_event_enqueue,
+		.enqueue_burst = dummy_event_enqueue_burst,
+		.enqueue_new_burst = dummy_event_enqueue_burst,
+		.enqueue_forward_burst = dummy_event_enqueue_burst,
+		.dequeue = dummy_event_dequeue,
+		.dequeue_burst = dummy_event_dequeue_burst,
+		.txa_enqueue = dummy_event_tx_adapter_enqueue,
+		.txa_enqueue_same_dest =
+			dummy_event_tx_adapter_enqueue_same_dest,
+		.ca_enqueue = dummy_event_crypto_adapter_enqueue,
+		.data = dummy_data,
+	};
+
+	*fp_op = dummy;
+}
+
+void
+event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
+		     const struct rte_eventdev *dev)
+{
+	fp_op->enqueue = dev->enqueue;
+	fp_op->enqueue_burst = dev->enqueue_burst;
+	fp_op->enqueue_new_burst = dev->enqueue_new_burst;
+	fp_op->enqueue_forward_burst = dev->enqueue_forward_burst;
+	fp_op->dequeue = dev->dequeue;
+	fp_op->dequeue_burst = dev->dequeue_burst;
+	fp_op->txa_enqueue = dev->txa_enqueue;
+	fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest;
+	fp_op->ca_enqueue = dev->ca_enqueue;
+	fp_op->data = dev->data->ports;
+}
diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build
index 8b51fde361..cb9abe92f6 100644
--- a/lib/eventdev/meson.build
+++ b/lib/eventdev/meson.build
@@ -8,24 +8,25 @@ else
 endif
 
 sources = files(
-        'rte_eventdev.c',
-        'rte_event_ring.c',
+        'eventdev_private.c',
         'eventdev_trace_points.c',
-        'rte_event_eth_rx_adapter.c',
-        'rte_event_timer_adapter.c',
         'rte_event_crypto_adapter.c',
+        'rte_event_eth_rx_adapter.c',
         'rte_event_eth_tx_adapter.c',
+        'rte_event_ring.c',
+        'rte_event_timer_adapter.c',
+        'rte_eventdev.c',
 )
 headers = files(
-        'rte_eventdev.h',
-        'rte_eventdev_trace.h',
-        'rte_eventdev_trace_fp.h',
-        'rte_event_ring.h',
+        'rte_event_crypto_adapter.h',
         'rte_event_eth_rx_adapter.h',
+        'rte_event_eth_tx_adapter.h',
+        'rte_event_ring.h',
         'rte_event_timer_adapter.h',
         'rte_event_timer_adapter_pmd.h',
-        'rte_event_crypto_adapter.h',
-        'rte_event_eth_tx_adapter.h',
+        'rte_eventdev.h',
+        'rte_eventdev_trace.h',
+        'rte_eventdev_trace_fp.h',
 )
 indirect_headers += files(
         'rte_eventdev_core.h',
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index bfcfa31cd1..4c30a37831 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -46,6 +46,9 @@ static struct rte_eventdev_global eventdev_globals = {
 	.nb_devs		= 0
 };
 
+/* Public fastpath APIs. */
+struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
+
 /* Event dev north bound API implementation */
 
 uint8_t
@@ -300,8 +303,8 @@ int
 rte_event_dev_configure(uint8_t dev_id,
 			const struct rte_event_dev_config *dev_conf)
 {
-	struct rte_eventdev *dev;
 	struct rte_event_dev_info info;
+	struct rte_eventdev *dev;
 	int diag;
 
 	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
@@ -470,10 +473,13 @@ rte_event_dev_configure(uint8_t dev_id,
 		return diag;
 	}
 
+	event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
+
 	/* Configure the device */
 	diag = (*dev->dev_ops->dev_configure)(dev);
 	if (diag != 0) {
 		RTE_EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id, diag);
+		event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
 		event_dev_queue_config(dev, 0);
 		event_dev_port_config(dev, 0);
 	}
@@ -1244,6 +1250,8 @@ rte_event_dev_start(uint8_t dev_id)
 	else
 		return diag;
 
+	event_dev_fp_ops_set(rte_event_fp_ops + dev_id, dev);
+
 	return 0;
 }
 
@@ -1284,6 +1292,7 @@ rte_event_dev_stop(uint8_t dev_id)
 	dev->data->dev_started = 0;
 	(*dev->dev_ops->dev_stop)(dev);
 	rte_eventdev_trace_stop(dev_id);
+	event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
 }
 
 int
@@ -1302,6 +1311,7 @@ rte_event_dev_close(uint8_t dev_id)
 		return -EBUSY;
 	}
 
+	event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
 	rte_eventdev_trace_close(dev_id);
 	return (*dev->dev_ops->dev_close)(dev);
 }
@@ -1435,6 +1445,7 @@ rte_event_pmd_release(struct rte_eventdev *eventdev)
 	if (eventdev == NULL)
 		return -EINVAL;
 
+	event_dev_fp_ops_reset(rte_event_fp_ops + eventdev->data->dev_id);
 	eventdev->attached = RTE_EVENTDEV_DETACHED;
 	eventdev_globals.nb_devs--;
 
@@ -1460,6 +1471,15 @@ rte_event_pmd_release(struct rte_eventdev *eventdev)
 	return 0;
 }
 
+void
+event_dev_probing_finish(struct rte_eventdev *eventdev)
+{
+	if (eventdev == NULL)
+		return;
+
+	event_dev_fp_ops_set(rte_event_fp_ops + eventdev->data->dev_id,
+			     eventdev);
+}
 
 static int
 handle_dev_list(const char *cmd __rte_unused,
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index 115b97e431..916023f71f 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -39,6 +39,32 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
 						   uint16_t nb_events);
 /**< @internal Enqueue burst of events on crypto adapter */
 
+struct rte_event_fp_ops {
+	void **data;
+	/**< points to array of internal port data pointers */
+	event_enqueue_t enqueue;
+	/**< PMD enqueue function. */
+	event_enqueue_burst_t enqueue_burst;
+	/**< PMD enqueue burst function. */
+	event_enqueue_burst_t enqueue_new_burst;
+	/**< PMD enqueue burst new function. */
+	event_enqueue_burst_t enqueue_forward_burst;
+	/**< PMD enqueue burst fwd function. */
+	event_dequeue_t dequeue;
+	/**< PMD dequeue function. */
+	event_dequeue_burst_t dequeue_burst;
+	/**< PMD dequeue burst function. */
+	event_tx_adapter_enqueue_t txa_enqueue;
+	/**< PMD Tx adapter enqueue function. */
+	event_tx_adapter_enqueue_t txa_enqueue_same_dest;
+	/**< PMD Tx adapter enqueue same destination function. */
+	event_crypto_adapter_enqueue_t ca_enqueue;
+	/**< PMD Crypto adapter enqueue function. */
+	uintptr_t reserved[6];
+} __rte_cache_aligned;
+
+extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
+
 #define RTE_EVENTDEV_NAME_MAX_LEN (64)
 /**< @internal Max length of name of event PMD */
 
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index cd72f45d29..e684154bf9 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -85,6 +85,9 @@ DPDK_22 {
 	rte_event_timer_cancel_burst;
 	rte_eventdevs;
 
+	# added in 21.11
+	rte_event_fp_ops;
+
 	local: *;
 };
 
@@ -143,6 +146,9 @@ EXPERIMENTAL {
 INTERNAL {
 	global:
 
+	event_dev_fp_ops_reset;
+	event_dev_fp_ops_set;
+	event_dev_probing_finish;
 	rte_event_pmd_selftest_seqn_dynfield_offset;
 	rte_event_pmd_allocate;
 	rte_event_pmd_get_named_dev;
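
For reference, a minimal sketch of how a public fast-path inline could
dispatch through the new flat ops array (illustrative only, not the
exact wrapper from rte_eventdev.h; the per-port 'data' pointer is the
one filled in by event_dev_fp_ops_set()):

	static inline uint16_t
	enqueue_burst_sketch(uint8_t dev_id, uint8_t port_id,
			     const struct rte_event ev[],
			     uint16_t nb_events)
	{
		const struct rte_event_fp_ops *fp_ops =
			&rte_event_fp_ops[dev_id];
		void *port = fp_ops->data[port_id];

		return fp_ops->enqueue_burst(port, ev, nb_events);
	}

Because event_dev_fp_ops_reset() installs the logging dummy callbacks,
a call made before configure or after stop/close lands in those stubs
instead of dereferencing stale PMD pointers.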
-- 
2.17.1


^ permalink raw reply	[relevance 2%]

* [dpdk-dev] [PATCH v13 11/12] doc: changes for new pcapng and dumpcap utility
    2021-10-15 18:28  1%   ` [dpdk-dev] [PATCH v13 06/12] pdump: support pcapng and filtering Stephen Hemminger
@ 2021-10-15 18:29  1%   ` Stephen Hemminger
  1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-10-15 18:29 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger, Reshma Pattan

Describe the new packet capture library and utility.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 doc/api/doxy-api-index.md                     |  1 +
 doc/api/doxy-api.conf.in                      |  1 +
 .../howto/img/packet_capture_framework.svg    | 96 +++++++++----------
 doc/guides/howto/packet_capture_framework.rst | 69 ++++++-------
 doc/guides/prog_guide/index.rst               |  1 +
 doc/guides/prog_guide/pcapng_lib.rst          | 25 +++++
 doc/guides/prog_guide/pdump_lib.rst           | 28 ++++--
 doc/guides/rel_notes/release_21_11.rst        | 10 ++
 doc/guides/tools/dumpcap.rst                  | 86 +++++++++++++++++
 doc/guides/tools/index.rst                    |  1 +
 10 files changed, 230 insertions(+), 88 deletions(-)
 create mode 100644 doc/guides/prog_guide/pcapng_lib.rst
 create mode 100644 doc/guides/tools/dumpcap.rst

diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 1992107a0356..ee07394d1c78 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -223,3 +223,4 @@ The public API headers are grouped by topics:
   [experimental APIs]  (@ref rte_compat.h),
   [ABI versioning]     (@ref rte_function_versioning.h),
   [version]            (@ref rte_version.h)
+  [pcapng]             (@ref rte_pcapng.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 325a0195c6ab..aba17799a9a1 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -58,6 +58,7 @@ INPUT                   = @TOPDIR@/doc/api/doxy-api-index.md \
                           @TOPDIR@/lib/metrics \
                           @TOPDIR@/lib/node \
                           @TOPDIR@/lib/net \
+                          @TOPDIR@/lib/pcapng \
                           @TOPDIR@/lib/pci \
                           @TOPDIR@/lib/pdump \
                           @TOPDIR@/lib/pipeline \
diff --git a/doc/guides/howto/img/packet_capture_framework.svg b/doc/guides/howto/img/packet_capture_framework.svg
index a76baf71fdee..1c2646a81096 100644
--- a/doc/guides/howto/img/packet_capture_framework.svg
+++ b/doc/guides/howto/img/packet_capture_framework.svg
@@ -1,6 +1,4 @@
 <?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
 <svg
    xmlns:osb="http://www.openswatchbook.org/uri/2009/osb"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
@@ -16,8 +14,8 @@
    viewBox="0 0 425.19685 283.46457"
    id="svg2"
    version="1.1"
-   inkscape:version="0.91 r13725"
-   sodipodi:docname="drawing-pcap.svg">
+   inkscape:version="1.0.2 (e86c870879, 2021-01-15)"
+   sodipodi:docname="packet_capture_framework.svg">
   <defs
      id="defs4">
     <marker
@@ -228,7 +226,7 @@
        x2="487.64606"
        y2="258.38232"
        gradientUnits="userSpaceOnUse"
-       gradientTransform="translate(-84.916417,744.90779)" />
+       gradientTransform="matrix(1.1457977,0,0,0.99944907,-151.97019,745.05014)" />
     <linearGradient
        inkscape:collect="always"
        xlink:href="#linearGradient5784"
@@ -277,17 +275,18 @@
      borderopacity="1.0"
      inkscape:pageopacity="0.0"
      inkscape:pageshadow="2"
-     inkscape:zoom="0.57434918"
-     inkscape:cx="215.17857"
-     inkscape:cy="285.26445"
+     inkscape:zoom="1"
+     inkscape:cx="226.77165"
+     inkscape:cy="78.124511"
      inkscape:document-units="px"
      inkscape:current-layer="layer1"
      showgrid="false"
-     inkscape:window-width="1874"
-     inkscape:window-height="971"
-     inkscape:window-x="2"
-     inkscape:window-y="24"
-     inkscape:window-maximized="0" />
+     inkscape:window-width="2560"
+     inkscape:window-height="1414"
+     inkscape:window-x="0"
+     inkscape:window-y="0"
+     inkscape:window-maximized="1"
+     inkscape:document-rotation="0" />
   <metadata
      id="metadata7">
     <rdf:RDF>
@@ -296,7 +295,7 @@
         <dc:format>image/svg+xml</dc:format>
         <dc:type
            rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title></dc:title>
+        <dc:title />
       </cc:Work>
     </rdf:RDF>
   </metadata>
@@ -321,15 +320,15 @@
        y="790.82452" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="61.050636"
        y="807.3205"
-       id="text4152"
-       sodipodi:linespacing="125%"><tspan
+       id="text4152"><tspan
          sodipodi:role="line"
          id="tspan4154"
          x="61.050636"
-         y="807.3205">DPDK Primary Application</tspan></text>
+         y="807.3205"
+         style="font-size:12.5px;line-height:1.25">DPDK Primary Application</tspan></text>
     <rect
        style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6"
@@ -339,19 +338,20 @@
        y="827.01843" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="350.68585"
        y="841.16058"
-       id="text4189"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189"><tspan
          sodipodi:role="line"
          id="tspan4191"
          x="350.68585"
-         y="841.16058">dpdk-pdump</tspan><tspan
+         y="841.16058"
+         style="font-size:12.5px;line-height:1.25">dpdk-dumpcap</tspan><tspan
          sodipodi:role="line"
          x="350.68585"
          y="856.78558"
-         id="tspan4193">tool</tspan></text>
+         id="tspan4193"
+         style="font-size:12.5px;line-height:1.25">tool</tspan></text>
     <rect
        style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-4"
@@ -361,15 +361,15 @@
        y="891.16315" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="352.70612"
        y="905.3053"
-       id="text4189-1"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-1"><tspan
          sodipodi:role="line"
          x="352.70612"
          y="905.3053"
-         id="tspan4193-3">PCAP PMD</tspan></text>
+         id="tspan4193-3"
+         style="font-size:12.5px;line-height:1.25">librte_pcapng</tspan></text>
     <rect
        style="fill:url(#linearGradient5745);fill-opacity:1;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-6"
@@ -379,15 +379,15 @@
        y="923.9931" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="136.02846"
        y="938.13525"
-       id="text4189-0"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-0"><tspan
          sodipodi:role="line"
          x="136.02846"
          y="938.13525"
-         id="tspan4193-6">dpdk_port0</tspan></text>
+         id="tspan4193-6"
+         style="font-size:12.5px;line-height:1.25">dpdk_port0</tspan></text>
     <rect
        style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-5"
@@ -397,33 +397,33 @@
        y="824.99817" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="137.54369"
        y="839.14026"
-       id="text4189-4"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-4"><tspan
          sodipodi:role="line"
          x="137.54369"
          y="839.14026"
-         id="tspan4193-2">librte_pdump</tspan></text>
+         id="tspan4193-2"
+         style="font-size:12.5px;line-height:1.25">librte_pdump</tspan></text>
     <rect
-       style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+       style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1.07013;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-4-5"
-       width="94.449265"
-       height="35.355339"
-       x="307.7804"
-       y="985.61243" />
+       width="108.21974"
+       height="35.335861"
+       x="297.9809"
+       y="985.62219" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="352.70618"
        y="999.75458"
-       id="text4189-1-8"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-1-8"><tspan
          sodipodi:role="line"
          x="352.70618"
          y="999.75458"
-         id="tspan4193-3-2">capture.pcap</tspan></text>
+         id="tspan4193-3-2"
+         style="font-size:12.5px;line-height:1.25">capture.pcapng</tspan></text>
     <rect
        style="fill:url(#linearGradient5788-1);fill-opacity:1;stroke:#257cdc;stroke-width:1.12555885;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-4-5-1"
@@ -433,15 +433,15 @@
        y="983.14984" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="136.53352"
        y="1002.785"
-       id="text4189-1-8-4"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-1-8-4"><tspan
          sodipodi:role="line"
          x="136.53352"
          y="1002.785"
-         id="tspan4193-3-2-7">Traffic Generator</tspan></text>
+         id="tspan4193-3-2-7"
+         style="font-size:12.5px;line-height:1.25">Traffic Generator</tspan></text>
     <path
        style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker7331)"
        d="m 351.46948,927.02357 c 0,57.5787 0,57.5787 0,57.5787"
diff --git a/doc/guides/howto/packet_capture_framework.rst b/doc/guides/howto/packet_capture_framework.rst
index c31bac52340e..f933cc7e9311 100644
--- a/doc/guides/howto/packet_capture_framework.rst
+++ b/doc/guides/howto/packet_capture_framework.rst
@@ -1,18 +1,19 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright(c) 2017 Intel Corporation.
+    Copyright(c) 2017-2021 Intel Corporation.
 
-DPDK pdump Library and pdump Tool
-=================================
+DPDK packet capture libraries and tools
+=======================================
 
 This document describes how the Data Plane Development Kit (DPDK) Packet
 Capture Framework is used for capturing packets on DPDK ports. It is intended
 for users of DPDK who want to know more about the Packet Capture feature and
 for those who want to monitor traffic on DPDK-controlled devices.
 
-The DPDK packet capture framework was introduced in DPDK v16.07. The DPDK
-packet capture framework consists of the DPDK pdump library and DPDK pdump
-tool.
-
+The DPDK packet capture framework was introduced in DPDK v16.07 and
+enhanced in 21.11. The framework consists of two libraries:
+``librte_pdump`` for collecting packets and ``librte_pcapng`` for
+writing packets to a file. There are two sample applications:
+``dpdk-dumpcap`` and the older ``dpdk-pdump``.
 
 Introduction
 ------------
@@ -22,43 +23,46 @@ allow users to initialize the packet capture framework and to enable or
 disable packet capture. The library works on a multi process communication model and its
 usage is recommended for debugging purposes.
 
-The :ref:`dpdk-pdump <pdump_tool>` tool is developed based on the
-``librte_pdump`` library.  It runs as a DPDK secondary process and is capable
-of enabling or disabling packet capture on DPDK ports. The ``dpdk-pdump`` tool
-provides command-line options with which users can request enabling or
-disabling of the packet capture on DPDK ports.
+The :ref:`librte_pcapng <pcapng_library>` library provides the APIs to format
+packets and write them to a file in Pcapng format.
+
+
+The :ref:`dpdk-dumpcap <dumpcap_tool>` tool captures packets much as the
+Wireshark ``dumpcap`` tool does on Linux. It runs as a DPDK secondary process and
+captures packets from one or more interfaces and writes them to a file
+in Pcapng format.  The ``dpdk-dumpcap`` tool is designed to take
+most of the same options as the Wireshark ``dumpcap`` command.
 
-The application which initializes the packet capture framework will be a primary process
-and the application that enables or disables the packet capture will
-be a secondary process. The primary process sends the Rx and Tx packets from the DPDK ports
-to the secondary process.
+Without any options, it will use the packet capture framework to
+capture traffic from the first available DPDK port.
 
 In DPDK the ``testpmd`` application can be used to initialize the packet
-capture framework and acts as a server, and the ``dpdk-pdump`` tool acts as a
+capture framework and acts as a server, and the ``dpdk-dumpcap`` tool acts as a
 client. To view Rx or Tx packets of ``testpmd``, the application should be
-launched first, and then the ``dpdk-pdump`` tool. Packets from ``testpmd``
-will be sent to the tool, which then sends them on to the Pcap PMD device and
-that device writes them to the Pcap file or to an external interface depending
-on the command-line option used.
+launched first, and then the ``dpdk-dumpcap`` tool. Packets from ``testpmd``
+will be sent to the tool, and then to the Pcapng file.
 
 Some things to note:
 
-* The ``dpdk-pdump`` tool can only be used in conjunction with a primary
+* All tools using ``librte_pdump`` can only be used in conjunction with a primary
   application which has the packet capture framework initialized already. In
   dpdk, only ``testpmd`` is modified to initialize packet capture framework,
-  other applications remain untouched. So, if the ``dpdk-pdump`` tool has to
+  other applications remain untouched. So, if the ``dpdk-dumpcap`` tool has to
   be used with any application other than the testpmd, the user needs to
   explicitly modify that application to call the packet capture framework
   initialization code. Refer to the ``app/test-pmd/testpmd.c`` code and look
   for ``pdump`` keyword to see how this is done.
 
-* The ``dpdk-pdump`` tool depends on the libpcap based PMD.
+* The ``dpdk-pdump`` tool is an older tool created as a demonstration of the
+  ``librte_pdump`` library. It provides more limited functionality and
+  depends on the Pcap PMD. It is retained only for compatibility reasons;
+  users should use ``dpdk-dumpcap`` instead.
 
 
 Test Environment
 ----------------
 
-The overview of using the Packet Capture Framework and the ``dpdk-pdump`` tool
+The overview of using the Packet Capture Framework and the ``dpdk-dumpcap`` utility
 for packet capturing on the DPDK port is shown in
 :numref:`figure_packet_capture_framework`.
 
@@ -66,13 +70,13 @@ for packet capturing on the DPDK port in
 
 .. figure:: img/packet_capture_framework.*
 
-   Packet capturing on a DPDK port using the dpdk-pdump tool.
+   Packet capturing on a DPDK port using the dpdk-dumpcap utility.
 
 
 Running the Application
 -----------------------
 
-The following steps demonstrate how to run the ``dpdk-pdump`` tool to capture
+The following steps demonstrate how to run the ``dpdk-dumpcap`` tool to capture
 Rx side packets on dpdk_port0 in :numref:`figure_packet_capture_framework` and
 inspect them using ``tcpdump``.
 
@@ -80,16 +84,15 @@ inspect them using ``tcpdump``.
 
      sudo <build_dir>/app/dpdk-testpmd -c 0xf0 -n 4 -- -i --port-topology=chained
 
-#. Launch the pdump tool as follows::
+#. Launch the ``dpdk-dumpcap`` tool as follows::
 
-     sudo <build_dir>/app/dpdk-pdump -- \
-          --pdump 'port=0,queue=*,rx-dev=/tmp/capture.pcap'
+     sudo <build_dir>/app/dpdk-dumpcap -w /tmp/capture.pcapng
 
 #. Send traffic to dpdk_port0 from traffic generator.
-   Inspect packets captured in the file capture.pcap using a tool
-   that can interpret Pcap files, for example tcpdump::
+   Inspect packets captured in the file capture.pcapng using a tool such as
+   tcpdump or tshark that can interpret Pcapng files::
 
-     $tcpdump -nr /tmp/capture.pcap
+     $ tcpdump -nr /tmp/capture.pcapng
     reading from file /tmp/capture.pcapng, link-type EN10MB (Ethernet)
      11:11:36.891404 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
      11:11:36.891442 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 2dce507f46a3..b440c77c2ba1 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -43,6 +43,7 @@ Programmer's Guide
     ip_fragment_reassembly_lib
     generic_receive_offload_lib
     generic_segmentation_offload_lib
+    pcapng_lib
     pdump_lib
     multi_proc_support
     kernel_nic_interface
diff --git a/doc/guides/prog_guide/pcapng_lib.rst b/doc/guides/prog_guide/pcapng_lib.rst
new file mode 100644
index 000000000000..7b2d82d7bd3b
--- /dev/null
+++ b/doc/guides/prog_guide/pcapng_lib.rst
@@ -0,0 +1,25 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2016 Intel Corporation.
+
+.. _pcapng_library:
+
+Packet Capture File Writer
+==========================
+
+The ``pcapng`` library provides support for creating files in the
+Pcapng file format, the default capture file format for modern
+network capture processing tools. It can be read by Wireshark and tcpdump.
+
+Usage
+-----
+
+Before the library can be used, the function ``rte_pcapng_init``
+should be called once to initialize timestamp computation.
+
+
+References
+----------
+* Project repository  https://github.com/pcapng/pcapng/
+
+* PCAP Next Generation (pcapng) Capture File Format
+  https://pcapng.github.io/pcapng/draft-tuexen-opsawg-pcapng.html
diff --git a/doc/guides/prog_guide/pdump_lib.rst b/doc/guides/prog_guide/pdump_lib.rst
index 62c0b015b2fe..d04d9709e364 100644
--- a/doc/guides/prog_guide/pdump_lib.rst
+++ b/doc/guides/prog_guide/pdump_lib.rst
@@ -3,10 +3,10 @@
 
 .. _pdump_library:
 
-The librte_pdump Library
-========================
+The Packet Capture Library
+==========================
 
-The ``librte_pdump`` library provides a framework for packet capturing in DPDK.
+The ``pdump`` library provides a framework for packet capturing in DPDK.
 The library does the complete copy of the Rx and Tx mbufs to a new mempool and
 hence it slows down the performance of the applications, so it is recommended
 to use this library for debugging purposes.
@@ -23,11 +23,19 @@ or disable the packet capture, and to uninitialize it.
 
 * ``rte_pdump_enable()``:
   This API enables the packet capture on a given port and queue.
-  Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf()``:
+  This API enables the packet capture on a given port and queue.
+  It also allows setting an optional filter using DPDK BPF interpreter and
+  setting the captured packet length.
 
 * ``rte_pdump_enable_by_deviceid()``:
   This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
-  Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf_by_deviceid()``:
+  This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
+  It also allows setting an optional filter using DPDK BPF interpreter and
+  setting the captured packet length.
 
 * ``rte_pdump_disable()``:
   This API disables the packet capture on a given port and queue.
@@ -61,6 +69,12 @@ and enables the packet capture by registering the Ethernet RX and TX callbacks f
 and queue combinations. Then the primary process will mirror the packets to the new mempool and enqueue them to
 the rte_ring that the secondary process has passed to these APIs.
 
+The packet ring supports one of two formats. The default format enqueues copies of the original packets
+into the rte_ring. If ``RTE_PDUMP_FLAG_PCAPNG`` is set, the mbuf data is extended with a header and trailer
+to match the format of the Pcapng enhanced packet block. The enhanced packet block carries meta-data such as the
+timestamp and the port and queue the packet was captured on. It is up to the application consuming the
+packets from the ring to select the desired format.
+
 The library APIs ``rte_pdump_disable()`` and ``rte_pdump_disable_by_deviceid()`` disables the packet capture.
 For the calls to these APIs from secondary process, the library creates the "pdump disable" request and sends
 the request to the primary process over the multi process channel. The primary process takes this request and
@@ -74,5 +88,5 @@ function.
 Use Case: Packet Capturing
 --------------------------
 
-The DPDK ``app/pdump`` tool is developed based on this library to capture packets in DPDK.
-Users can use this as an example to develop their own packet capturing tools.
+The DPDK ``app/dpdk-dumpcap`` utility uses this library
+to capture packets in DPDK.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 4c56cdfeaaa2..0909f4258cf8 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -159,6 +159,16 @@ New Features
   * Added tests to verify tunnel header verification in IPsec inbound.
   * Added tests to verify inner checksum.
 
+* **Revised packet capture framework.**
+
+  * New dpdk-dumpcap program that has most of the features of the
+    Wireshark dumpcap utility, including capture of multiple interfaces,
+    filtering, and stopping after a number of bytes or packets.
+  * New library for writing pcapng packet capture files.
+  * Enhancements to the pdump library to support:
+    * Packet filtering with BPF.
+    * Pcapng format with timestamps and meta-data.
+    * Fixed packet capture with stripped VLAN tags.
 
 Removed Items
 -------------
diff --git a/doc/guides/tools/dumpcap.rst b/doc/guides/tools/dumpcap.rst
new file mode 100644
index 000000000000..664ea0c79802
--- /dev/null
+++ b/doc/guides/tools/dumpcap.rst
@@ -0,0 +1,86 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2020 Microsoft Corporation.
+
+.. _dumpcap_tool:
+
+dpdk-dumpcap Application
+========================
+
+The ``dpdk-dumpcap`` tool is a Data Plane Development Kit (DPDK)
+network traffic dump tool.  The interface is similar to the dumpcap tool in Wireshark.
+It runs as a secondary DPDK process and lets you capture packets that are
+coming into and out of a DPDK primary process.
+The ``dpdk-dumpcap`` tool writes captured packets to a file
+in the pcapng capture file format.
+
+Without any options set, it will use DPDK to capture traffic from the first
+available DPDK interface and write the received raw packet data, along
+with timestamps, into a pcapng file.
+
+If the ``-w`` option is not specified, ``dpdk-dumpcap`` writes to a newly
+created file with a name chosen based on the interface name and timestamp.
+If the ``-w`` option is specified, then that file is used.
+
+   .. Note::
+      * The ``dpdk-dumpcap`` tool can only be used in conjunction with a primary
+        application which has the packet capture framework initialized already.
+        In DPDK, only ``testpmd`` is modified to initialize the packet capture
+        framework; other applications remain untouched. So, if the ``dpdk-dumpcap``
+        tool has to be used with any application other than testpmd, the user
+        needs to explicitly modify that application to call the packet capture
+        framework initialization code. Refer to the ``app/test-pmd/testpmd.c``
+        code to see how this is done.
+
+      * The ``dpdk-dumpcap`` tool runs as a DPDK secondary process. It exits when
+        the primary application exits.
+
+
+Running the Application
+-----------------------
+
+To list interfaces available for capture use ``--list-interfaces``.
+
+To filter packets in the style of *tshark*, use the ``-f`` flag.
+
+To capture on multiple interfaces at once, use multiple ``-I`` flags.
+
+Example
+-------
+
+.. code-block:: console
+
+   # ./<build_dir>/app/dpdk-dumpcap --list-interfaces
+   0. 0000:00:03.0
+   1. 0000:00:03.1
+
+   # ./<build_dir>/app/dpdk-dumpcap -I 0000:00:03.0 -c 6 -w /tmp/sample.pcapng
+   Packets captured: 6
+   Packets received/dropped on interface '0000:00:03.0' 6/0
+
+   # ./<build_dir>/app/dpdk-dumpcap -f 'tcp port 80'
+   Packets captured: 6
+   Packets received/dropped on interface '0000:00:03.0' 10/8
+
+
+Limitations
+-----------
+The following option of Wireshark ``dumpcap`` is not yet implemented:
+
+   * ``-b|--ring-buffer`` -- more complex file management.
+
+The following options do not make sense in the context of DPDK:
+
+   * ``-C <byte_limit>`` -- it's a kernel thing
+
+   * ``-t`` -- use a thread per interface
+
+   * Timestamp type.
+
+   * Link data types. Only EN10MB (Ethernet) is supported.
+
+   * Wireless related options:  ``-I|--monitor-mode`` and  ``-k <freq>``
+
+
+.. Note::
+   * The options to ``dpdk-dumpcap`` are like those of the Wireshark dumpcap program
+     and are not the same as those of ``dpdk-pdump`` and other DPDK applications.
diff --git a/doc/guides/tools/index.rst b/doc/guides/tools/index.rst
index 93dde4148e90..b71c12b8f2dd 100644
--- a/doc/guides/tools/index.rst
+++ b/doc/guides/tools/index.rst
@@ -8,6 +8,7 @@ DPDK Tools User Guides
     :maxdepth: 2
     :numbered:
 
+    dumpcap
     proc_info
     pdump
     pmdinfo
-- 
2.30.2


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v13 06/12] pdump: support pcapng and filtering
  @ 2021-10-15 18:28  1%   ` Stephen Hemminger
  2021-10-15 18:29  1%   ` [dpdk-dev] [PATCH v13 11/12] doc: changes for new pcapng and dumpcap utility Stephen Hemminger
  1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-10-15 18:28 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger, Reshma Pattan, Ray Kinsella, Anatoly Burakov

This enhances the DPDK pdump library to support the new
pcapng format and filtering via BPF.

The internal client/server protocol is changed to support
two versions: the original pdump basic version and a
new pcapng version.

The internal version number (not part of exposed API or ABI)
is intentionally increased to cause any attempt to try
mismatched primary/secondary process to fail.

Add a new API to allow filtering of captured packets with a
DPDK BPF (eBPF) filter program. It keeps statistics
on packets captured, filtered, and missed (because the ring was full).

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
---
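A minimal usage sketch of the new filter API, from the capture process
side; 'ring', 'mp' and 'prm' are assumed to have been created elsewhere
(e.g. with rte_ring_create() and rte_pktmbuf_pool_create(); BPF program
construction is not shown):

	int ret;
	struct rte_pdump_stats stats;

	/* Capture both directions of all queues of port 0 in pcapng
	 * format with full packet length; prm may be NULL to capture
	 * without filtering. */
	ret = rte_pdump_enable_bpf(0, RTE_PDUMP_ALL_QUEUES,
				   RTE_PDUMP_FLAG_RXTX |
				   RTE_PDUMP_FLAG_PCAPNG,
				   UINT32_MAX /* snaplen */,
				   ring, mp, prm);

	/* Per-port counters, summed over all queues. */
	if (ret == 0)
		ret = rte_pdump_stats(0, &stats);
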
 lib/meson.build       |   4 +-
 lib/pdump/meson.build |   2 +-
 lib/pdump/rte_pdump.c | 432 ++++++++++++++++++++++++++++++------------
 lib/pdump/rte_pdump.h | 113 ++++++++++-
 lib/pdump/version.map |   8 +
 5 files changed, 433 insertions(+), 126 deletions(-)

diff --git a/lib/meson.build b/lib/meson.build
index 15150efa19a7..c71c6917dbb7 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -27,6 +27,7 @@ libraries = [
         'acl',
         'bbdev',
         'bitratestats',
+        'bpf',
         'cfgfile',
         'compressdev',
         'cryptodev',
@@ -43,7 +44,6 @@ libraries = [
         'member',
         'pcapng',
         'power',
-        'pdump',
         'rawdev',
         'regexdev',
         'rib',
@@ -55,10 +55,10 @@ libraries = [
         'ipsec', # ipsec lib depends on net, crypto and security
         'fib', #fib lib depends on rib
         'port', # pkt framework libs which use other libs from above
+        'pdump', # pdump lib depends on bpf
         'table',
         'pipeline',
         'flow_classify', # flow_classify lib depends on pkt framework table lib
-        'bpf',
         'graph',
         'node',
 ]
diff --git a/lib/pdump/meson.build b/lib/pdump/meson.build
index 3a95eabde6a6..51ceb2afdec5 100644
--- a/lib/pdump/meson.build
+++ b/lib/pdump/meson.build
@@ -3,4 +3,4 @@
 
 sources = files('rte_pdump.c')
 headers = files('rte_pdump.h')
-deps += ['ethdev']
+deps += ['ethdev', 'bpf', 'pcapng']
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 382217bc1564..2636a216994b 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -7,8 +7,10 @@
 #include <rte_ethdev.h>
 #include <rte_lcore.h>
 #include <rte_log.h>
+#include <rte_memzone.h>
 #include <rte_errno.h>
 #include <rte_string_fns.h>
+#include <rte_pcapng.h>
 
 #include "rte_pdump.h"
 
@@ -27,30 +29,23 @@ enum pdump_operation {
 	ENABLE = 2
 };
 
+/* Internal version number in request */
 enum pdump_version {
-	V1 = 1
+	V1 = 1,		    /* no filtering or snap */
+	V2 = 2,
 };
 
 struct pdump_request {
 	uint16_t ver;
 	uint16_t op;
 	uint32_t flags;
-	union pdump_data {
-		struct enable_v1 {
-			char device[RTE_DEV_NAME_MAX_LEN];
-			uint16_t queue;
-			struct rte_ring *ring;
-			struct rte_mempool *mp;
-			void *filter;
-		} en_v1;
-		struct disable_v1 {
-			char device[RTE_DEV_NAME_MAX_LEN];
-			uint16_t queue;
-			struct rte_ring *ring;
-			struct rte_mempool *mp;
-			void *filter;
-		} dis_v1;
-	} data;
+	char device[RTE_DEV_NAME_MAX_LEN];
+	uint16_t queue;
+	struct rte_ring *ring;
+	struct rte_mempool *mp;
+
+	const struct rte_bpf_prm *prm;
+	uint32_t snaplen;
 };
 
 struct pdump_response {
@@ -63,80 +58,140 @@ static struct pdump_rxtx_cbs {
 	struct rte_ring *ring;
 	struct rte_mempool *mp;
 	const struct rte_eth_rxtx_callback *cb;
-	void *filter;
+	const struct rte_bpf *filter;
+	enum pdump_version ver;
+	uint32_t snaplen;
 } rx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
 tx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
 
 
-static inline void
-pdump_copy(struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
+/*
+ * The packet capture statistics keep track of packets
+ * accepted, filtered and dropped. They are kept per-queue
+ * in a memzone shared between the primary and secondary processes.
+ */
+static const char MZ_RTE_PDUMP_STATS[] = "rte_pdump_stats";
+static struct {
+	struct rte_pdump_stats rx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+	struct rte_pdump_stats tx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+} *pdump_stats;
+
+/* Create a clone of the mbuf to be placed into the ring. */
+static void
+pdump_copy(uint16_t port_id, uint16_t queue,
+	   enum rte_pcapng_direction direction,
+	   struct rte_mbuf **pkts, uint16_t nb_pkts,
+	   const struct pdump_rxtx_cbs *cbs,
+	   struct rte_pdump_stats *stats)
 {
 	unsigned int i;
 	int ring_enq;
 	uint16_t d_pkts = 0;
 	struct rte_mbuf *dup_bufs[nb_pkts];
-	struct pdump_rxtx_cbs *cbs;
+	uint64_t ts;
 	struct rte_ring *ring;
 	struct rte_mempool *mp;
 	struct rte_mbuf *p;
+	uint64_t rcs[nb_pkts];
+
+	if (cbs->filter)
+		rte_bpf_exec_burst(cbs->filter, (void **)pkts, rcs, nb_pkts);
 
-	cbs  = user_params;
+	ts = rte_get_tsc_cycles();
 	ring = cbs->ring;
 	mp = cbs->mp;
 	for (i = 0; i < nb_pkts; i++) {
-		p = rte_pktmbuf_copy(pkts[i], mp, 0, UINT32_MAX);
-		if (p)
+		/*
+		 * This uses the same BPF return value convention as socket
+		 * filter and pcap_offline_filter:
+		 * if the program returns zero,
+		 * then the packet doesn't match the filter (will be ignored).
+		 */
+		if (cbs->filter && rcs[i] == 0) {
+			__atomic_fetch_add(&stats->filtered,
+					   1, __ATOMIC_RELAXED);
+			continue;
+		}
+
+		/*
+		 * If using pcapng then want to wrap packets
+		 * otherwise a simple copy.
+		 */
+		if (cbs->ver == V2)
+			p = rte_pcapng_copy(port_id, queue,
+					    pkts[i], mp, cbs->snaplen,
+					    ts, direction);
+		else
+			p = rte_pktmbuf_copy(pkts[i], mp, 0, cbs->snaplen);
+
+		if (unlikely(p == NULL))
+			__atomic_fetch_add(&stats->nombuf, 1, __ATOMIC_RELAXED);
+		else
 			dup_bufs[d_pkts++] = p;
 	}
 
+	__atomic_fetch_add(&stats->accepted, d_pkts, __ATOMIC_RELAXED);
+
 	ring_enq = rte_ring_enqueue_burst(ring, (void *)dup_bufs, d_pkts, NULL);
 	if (unlikely(ring_enq < d_pkts)) {
 		unsigned int drops = d_pkts - ring_enq;
 
-		PDUMP_LOG(DEBUG,
-			"only %d of packets enqueued to ring\n", ring_enq);
+		__atomic_fetch_add(&stats->ringfull, drops, __ATOMIC_RELAXED);
 		rte_pktmbuf_free_bulk(&dup_bufs[ring_enq], drops);
 	}
 }
 
 static uint16_t
-pdump_rx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_rx(uint16_t port, uint16_t queue,
 	struct rte_mbuf **pkts, uint16_t nb_pkts,
-	uint16_t max_pkts __rte_unused,
-	void *user_params)
+	uint16_t max_pkts __rte_unused, void *user_params)
 {
-	pdump_copy(pkts, nb_pkts, user_params);
+	const struct pdump_rxtx_cbs *cbs = user_params;
+	struct rte_pdump_stats *stats = &pdump_stats->rx[port][queue];
+
+	pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_IN,
+		   pkts, nb_pkts, cbs, stats);
 	return nb_pkts;
 }
 
 static uint16_t
-pdump_tx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_tx(uint16_t port, uint16_t queue,
 		struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
 {
-	pdump_copy(pkts, nb_pkts, user_params);
+	const struct pdump_rxtx_cbs *cbs = user_params;
+	struct rte_pdump_stats *stats = &pdump_stats->tx[port][queue];
+
+	pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_OUT,
+		   pkts, nb_pkts, cbs, stats);
 	return nb_pkts;
 }
 
 static int
-pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
-				struct rte_ring *ring, struct rte_mempool *mp,
-				uint16_t operation)
+pdump_register_rx_callbacks(enum pdump_version ver,
+			    uint16_t end_q, uint16_t port, uint16_t queue,
+			    struct rte_ring *ring, struct rte_mempool *mp,
+			    struct rte_bpf *filter,
+			    uint16_t operation, uint32_t snaplen)
 {
 	uint16_t qid;
-	struct pdump_rxtx_cbs *cbs = NULL;
 
 	qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
 	for (; qid < end_q; qid++) {
-		cbs = &rx_cbs[port][qid];
-		if (cbs && operation == ENABLE) {
+		struct pdump_rxtx_cbs *cbs = &rx_cbs[port][qid];
+
+		if (operation == ENABLE) {
 			if (cbs->cb) {
 				PDUMP_LOG(ERR,
 					"rx callback for port=%d queue=%d, already exists\n",
 					port, qid);
 				return -EEXIST;
 			}
+			cbs->ver = ver;
 			cbs->ring = ring;
 			cbs->mp = mp;
+			cbs->snaplen = snaplen;
+			cbs->filter = filter;
+
 			cbs->cb = rte_eth_add_first_rx_callback(port, qid,
 								pdump_rx, cbs);
 			if (cbs->cb == NULL) {
@@ -145,8 +200,7 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
 					rte_errno);
 				return rte_errno;
 			}
-		}
-		if (cbs && operation == DISABLE) {
+		} else if (operation == DISABLE) {
 			int ret;
 
 			if (cbs->cb == NULL) {
@@ -170,26 +224,32 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
 }
 
 static int
-pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
-				struct rte_ring *ring, struct rte_mempool *mp,
-				uint16_t operation)
+pdump_register_tx_callbacks(enum pdump_version ver,
+			    uint16_t end_q, uint16_t port, uint16_t queue,
+			    struct rte_ring *ring, struct rte_mempool *mp,
+			    struct rte_bpf *filter,
+			    uint16_t operation, uint32_t snaplen)
 {
 
 	uint16_t qid;
-	struct pdump_rxtx_cbs *cbs = NULL;
 
 	qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
 	for (; qid < end_q; qid++) {
-		cbs = &tx_cbs[port][qid];
-		if (cbs && operation == ENABLE) {
+		struct pdump_rxtx_cbs *cbs = &tx_cbs[port][qid];
+
+		if (operation == ENABLE) {
 			if (cbs->cb) {
 				PDUMP_LOG(ERR,
 					"tx callback for port=%d queue=%d, already exists\n",
 					port, qid);
 				return -EEXIST;
 			}
+			cbs->ver = ver;
 			cbs->ring = ring;
 			cbs->mp = mp;
+			cbs->snaplen = snaplen;
+			cbs->filter = filter;
+
 			cbs->cb = rte_eth_add_tx_callback(port, qid, pdump_tx,
 								cbs);
 			if (cbs->cb == NULL) {
@@ -198,8 +258,7 @@ pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
 					rte_errno);
 				return rte_errno;
 			}
-		}
-		if (cbs && operation == DISABLE) {
+		} else if (operation == DISABLE) {
 			int ret;
 
 			if (cbs->cb == NULL) {
@@ -228,37 +287,47 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
 	uint16_t nb_rx_q = 0, nb_tx_q = 0, end_q, queue;
 	uint16_t port;
 	int ret = 0;
+	struct rte_bpf *filter = NULL;
 	uint32_t flags;
 	uint16_t operation;
 	struct rte_ring *ring;
 	struct rte_mempool *mp;
 
-	flags = p->flags;
-	operation = p->op;
-	if (operation == ENABLE) {
-		ret = rte_eth_dev_get_port_by_name(p->data.en_v1.device,
-				&port);
-		if (ret < 0) {
+	/* Check for possible DPDK version mismatch */
+	if (!(p->ver == V1 || p->ver == V2)) {
+		PDUMP_LOG(ERR,
+			  "incorrect client version %u\n", p->ver);
+		return -EINVAL;
+	}
+
+	if (p->prm) {
+		if (p->prm->prog_arg.type != RTE_BPF_ARG_PTR_MBUF) {
 			PDUMP_LOG(ERR,
-				"failed to get port id for device id=%s\n",
-				p->data.en_v1.device);
+				  "invalid BPF program type: %u\n",
+				  p->prm->prog_arg.type);
 			return -EINVAL;
 		}
-		queue = p->data.en_v1.queue;
-		ring = p->data.en_v1.ring;
-		mp = p->data.en_v1.mp;
-	} else {
-		ret = rte_eth_dev_get_port_by_name(p->data.dis_v1.device,
-				&port);
-		if (ret < 0) {
-			PDUMP_LOG(ERR,
-				"failed to get port id for device id=%s\n",
-				p->data.dis_v1.device);
-			return -EINVAL;
+
+		filter = rte_bpf_load(p->prm);
+		if (filter == NULL) {
+			PDUMP_LOG(ERR, "cannot load BPF filter: %s\n",
+				  rte_strerror(rte_errno));
+			return -rte_errno;
 		}
-		queue = p->data.dis_v1.queue;
-		ring = p->data.dis_v1.ring;
-		mp = p->data.dis_v1.mp;
+	}
+
+	flags = p->flags;
+	operation = p->op;
+	queue = p->queue;
+	ring = p->ring;
+	mp = p->mp;
+
+	ret = rte_eth_dev_get_port_by_name(p->device, &port);
+	if (ret < 0) {
+		PDUMP_LOG(ERR,
+			  "failed to get port id for device id=%s\n",
+			  p->device);
+		return -EINVAL;
 	}
 
 	/* validation if packet capture is for all queues */
@@ -296,8 +365,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
 	/* register RX callback */
 	if (flags & RTE_PDUMP_FLAG_RX) {
 		end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_rx_q : queue + 1;
-		ret = pdump_register_rx_callbacks(end_q, port, queue, ring, mp,
-							operation);
+		ret = pdump_register_rx_callbacks(p->ver, end_q, port, queue,
+						  ring, mp, filter,
+						  operation, p->snaplen);
 		if (ret < 0)
 			return ret;
 	}
@@ -305,8 +375,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
 	/* register TX callback */
 	if (flags & RTE_PDUMP_FLAG_TX) {
 		end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_tx_q : queue + 1;
-		ret = pdump_register_tx_callbacks(end_q, port, queue, ring, mp,
-							operation);
+		ret = pdump_register_tx_callbacks(p->ver, end_q, port, queue,
+						  ring, mp, filter,
+						  operation, p->snaplen);
 		if (ret < 0)
 			return ret;
 	}
@@ -332,7 +403,7 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
 		resp->err_value = set_pdump_rxtx_cbs(cli_req);
 	}
 
-	strlcpy(mp_resp.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
+	rte_strscpy(mp_resp.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
 	mp_resp.len_param = sizeof(*resp);
 	mp_resp.num_fds = 0;
 	if (rte_mp_reply(&mp_resp, peer) < 0) {
@@ -347,8 +418,18 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
 int
 rte_pdump_init(void)
 {
+	const struct rte_memzone *mz;
 	int ret;
 
+	mz = rte_memzone_reserve(MZ_RTE_PDUMP_STATS, sizeof(*pdump_stats),
+				 rte_socket_id(), 0);
+	if (mz == NULL) {
+		PDUMP_LOG(ERR, "cannot allocate pdump statistics\n");
+		rte_errno = ENOMEM;
+		return -1;
+	}
+	pdump_stats = mz->addr;
+
 	ret = rte_mp_action_register(PDUMP_MP, pdump_server);
 	if (ret && rte_errno != ENOTSUP)
 		return -1;
@@ -392,14 +473,21 @@ pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp)
 static int
 pdump_validate_flags(uint32_t flags)
 {
-	if (flags != RTE_PDUMP_FLAG_RX && flags != RTE_PDUMP_FLAG_TX &&
-		flags != RTE_PDUMP_FLAG_RXTX) {
+	if ((flags & RTE_PDUMP_FLAG_RXTX) == 0) {
 		PDUMP_LOG(ERR,
 			"invalid flags, should be either rx/tx/rxtx\n");
 		rte_errno = EINVAL;
 		return -1;
 	}
 
+	/* mask off the flags we know about */
+	if (flags & ~(RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG)) {
+		PDUMP_LOG(ERR,
+			  "unknown flags: %#x\n", flags);
+		rte_errno = ENOTSUP;
+		return -1;
+	}
+
 	return 0;
 }
 
@@ -426,12 +514,12 @@ pdump_validate_port(uint16_t port, char *name)
 }
 
 static int
-pdump_prepare_client_request(char *device, uint16_t queue,
-				uint32_t flags,
-				uint16_t operation,
-				struct rte_ring *ring,
-				struct rte_mempool *mp,
-				void *filter)
+pdump_prepare_client_request(const char *device, uint16_t queue,
+			     uint32_t flags, uint32_t snaplen,
+			     uint16_t operation,
+			     struct rte_ring *ring,
+			     struct rte_mempool *mp,
+			     const struct rte_bpf_prm *prm)
 {
 	int ret = -1;
 	struct rte_mp_msg mp_req, *mp_rep;
@@ -440,26 +528,22 @@ pdump_prepare_client_request(char *device, uint16_t queue,
 	struct pdump_request *req = (struct pdump_request *)mp_req.param;
 	struct pdump_response *resp;
 
-	req->ver = 1;
-	req->flags = flags;
+	memset(req, 0, sizeof(*req));
+
+	req->ver = (flags & RTE_PDUMP_FLAG_PCAPNG) ? V2 : V1;
+	req->flags = flags & RTE_PDUMP_FLAG_RXTX;
 	req->op = operation;
+	req->queue = queue;
+	rte_strscpy(req->device, device, sizeof(req->device));
+
 	if ((operation & ENABLE) != 0) {
-		strlcpy(req->data.en_v1.device, device,
-			sizeof(req->data.en_v1.device));
-		req->data.en_v1.queue = queue;
-		req->data.en_v1.ring = ring;
-		req->data.en_v1.mp = mp;
-		req->data.en_v1.filter = filter;
-	} else {
-		strlcpy(req->data.dis_v1.device, device,
-			sizeof(req->data.dis_v1.device));
-		req->data.dis_v1.queue = queue;
-		req->data.dis_v1.ring = NULL;
-		req->data.dis_v1.mp = NULL;
-		req->data.dis_v1.filter = NULL;
+		req->ring = ring;
+		req->mp = mp;
+		req->prm = prm;
+		req->snaplen = snaplen;
 	}
 
-	strlcpy(mp_req.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
+	rte_strscpy(mp_req.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
 	mp_req.len_param = sizeof(*req);
 	mp_req.num_fds = 0;
 	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0) {
@@ -477,11 +561,17 @@ pdump_prepare_client_request(char *device, uint16_t queue,
 	return ret;
 }
 
-int
-rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
-			struct rte_ring *ring,
-			struct rte_mempool *mp,
-			void *filter)
+/*
+ * There are two versions of this function, because although the original
+ * API left a place holder for a future filter, it never checked the value.
+ * Therefore the API can't depend on the application passing a
+ * non-bogus value.
+ */
+static int
+pdump_enable(uint16_t port, uint16_t queue,
+	     uint32_t flags, uint32_t snaplen,
+	     struct rte_ring *ring, struct rte_mempool *mp,
+	     const struct rte_bpf_prm *prm)
 {
 	int ret;
 	char name[RTE_DEV_NAME_MAX_LEN];
@@ -496,20 +586,42 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(name, queue, flags,
-						ENABLE, ring, mp, filter);
+	if (snaplen == 0)
+		snaplen = UINT32_MAX;
 
-	return ret;
+	return pdump_prepare_client_request(name, queue, flags, snaplen,
+					    ENABLE, ring, mp, prm);
 }
 
 int
-rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
-				uint32_t flags,
-				struct rte_ring *ring,
-				struct rte_mempool *mp,
-				void *filter)
+rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
+		 struct rte_ring *ring,
+		 struct rte_mempool *mp,
+		 void *filter __rte_unused)
 {
-	int ret = 0;
+	return pdump_enable(port, queue, flags, 0,
+			    ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf(uint16_t port, uint16_t queue,
+		     uint32_t flags, uint32_t snaplen,
+		     struct rte_ring *ring,
+		     struct rte_mempool *mp,
+		     const struct rte_bpf_prm *prm)
+{
+	return pdump_enable(port, queue, flags, snaplen,
+			    ring, mp, prm);
+}
+
+static int
+pdump_enable_by_deviceid(const char *device_id, uint16_t queue,
+			 uint32_t flags, uint32_t snaplen,
+			 struct rte_ring *ring,
+			 struct rte_mempool *mp,
+			 const struct rte_bpf_prm *prm)
+{
+	int ret;
 
 	ret = pdump_validate_ring_mp(ring, mp);
 	if (ret < 0)
@@ -518,10 +630,30 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(device_id, queue, flags,
-						ENABLE, ring, mp, filter);
+	return pdump_prepare_client_request(device_id, queue, flags, snaplen,
+					    ENABLE, ring, mp, prm);
+}
 
-	return ret;
+int
+rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
+			     uint32_t flags,
+			     struct rte_ring *ring,
+			     struct rte_mempool *mp,
+			     void *filter __rte_unused)
+{
+	return pdump_enable_by_deviceid(device_id, queue, flags, 0,
+					ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+				 uint32_t flags, uint32_t snaplen,
+				 struct rte_ring *ring,
+				 struct rte_mempool *mp,
+				 const struct rte_bpf_prm *prm)
+{
+	return pdump_enable_by_deviceid(device_id, queue, flags, snaplen,
+					ring, mp, prm);
 }
 
 int
@@ -537,8 +669,8 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags)
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(name, queue, flags,
-						DISABLE, NULL, NULL, NULL);
+	ret = pdump_prepare_client_request(name, queue, flags, 0,
+					   DISABLE, NULL, NULL, NULL);
 
 	return ret;
 }
@@ -553,8 +685,68 @@ rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(device_id, queue, flags,
-						DISABLE, NULL, NULL, NULL);
+	ret = pdump_prepare_client_request(device_id, queue, flags, 0,
+					   DISABLE, NULL, NULL, NULL);
 
 	return ret;
 }
+
+static void
+pdump_sum_stats(uint16_t port, uint16_t nq,
+		struct rte_pdump_stats stats[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
+		struct rte_pdump_stats *total)
+{
+	uint64_t *sum = (uint64_t *)total;
+	unsigned int i;
+	uint64_t val;
+	uint16_t qid;
+
+	for (qid = 0; qid < nq; qid++) {
+		const uint64_t *perq = (const uint64_t *)&stats[port][qid];
+
+		for (i = 0; i < sizeof(*total) / sizeof(uint64_t); i++) {
+			val = __atomic_load_n(&perq[i], __ATOMIC_RELAXED);
+			sum[i] += val;
+		}
+	}
+}
+
+int
+rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats)
+{
+	struct rte_eth_dev_info dev_info;
+	const struct rte_memzone *mz;
+	int ret;
+
+	memset(stats, 0, sizeof(*stats));
+	ret = rte_eth_dev_info_get(port, &dev_info);
+	if (ret != 0) {
+		PDUMP_LOG(ERR,
+			  "Error during getting device (port %u) info: %s\n",
+			  port, strerror(-ret));
+		return ret;
+	}
+
+	if (pdump_stats == NULL) {
+		if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+			/* rte_pdump_init was not called */
+			PDUMP_LOG(ERR, "pdump stats not initialized\n");
+			rte_errno = EINVAL;
+			return -1;
+		}
+
+		/* secondary process looks up the memzone */
+		mz = rte_memzone_lookup(MZ_RTE_PDUMP_STATS);
+		if (mz == NULL) {
+			/* rte_pdump_init was not called in primary process?? */
+			PDUMP_LOG(ERR, "can not find pdump stats\n");
+			rte_errno = EINVAL;
+			return -1;
+		}
+		pdump_stats = mz->addr;
+	}
+
+	pdump_sum_stats(port, dev_info.nb_rx_queues, pdump_stats->rx, stats);
+	pdump_sum_stats(port, dev_info.nb_tx_queues, pdump_stats->tx, stats);
+	return 0;
+}
diff --git a/lib/pdump/rte_pdump.h b/lib/pdump/rte_pdump.h
index 6b00fc17aeb2..6efa0274f2ce 100644
--- a/lib/pdump/rte_pdump.h
+++ b/lib/pdump/rte_pdump.h
@@ -15,6 +15,7 @@
 #include <stdint.h>
 #include <rte_mempool.h>
 #include <rte_ring.h>
+#include <rte_bpf.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -26,7 +27,9 @@ enum {
 	RTE_PDUMP_FLAG_RX = 1,  /* receive direction */
 	RTE_PDUMP_FLAG_TX = 2,  /* transmit direction */
 	/* both receive and transmit directions */
-	RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX)
+	RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX),
+
+	RTE_PDUMP_FLAG_PCAPNG = 4, /* format for pcapng */
 };
 
 /**
@@ -68,7 +71,7 @@ rte_pdump_uninit(void);
  * @param mp
  *  mempool on to which original packets will be mirrored or duplicated.
  * @param filter
- *  place holder for packet filtering.
+ *  Unused; should be NULL.
  *
  * @return
  *    0 on success, -1 on error, rte_errno is set accordingly.
@@ -80,6 +83,41 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
 		struct rte_mempool *mp,
 		void *filter);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on a given port and queue with filtering.
+ *
+ * @param port_id
+ *  The Ethernet port on which packet capturing should be enabled.
+ * @param queue
+ *  The queue on the Ethernet port which packet capturing
+ *  should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ *  queues of a given port.
+ * @param flags
+ *  Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ *  The upper limit on bytes to copy.
+ *  Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ *  The ring on which captured packets will be enqueued for user.
+ * @param mp
+ *  The mempool on to which original packets will be mirrored or duplicated.
+ * @param prm
+ *  BPF program used to filter packets (can be NULL)
+ *
+ * @return
+ *    0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf(uint16_t port_id, uint16_t queue,
+		     uint32_t flags, uint32_t snaplen,
+		     struct rte_ring *ring,
+		     struct rte_mempool *mp,
+		     const struct rte_bpf_prm *prm);
+
 /**
  * Disables packet capturing on given port and queue.
  *
@@ -118,7 +156,7 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags);
  * @param mp
  *  mempool on to which original packets will be mirrored or duplicated.
  * @param filter
- *  place holder for packet filtering.
+ *  Unused, should be NULL.
  *
  * @return
  *    0 on success, -1 on error, rte_errno is set accordingly.
@@ -131,6 +169,43 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
 				struct rte_mempool *mp,
 				void *filter);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given device id and queue with filtering.
+ * The device_id can be the name or the PCI address of the device.
+ *
+ * @param device_id
+ *  Device id on which packet capturing should be enabled.
+ * @param queue
+ *  The queue of the Ethernet port on which packet capturing
+ *  should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ *  queues of a given port.
+ * @param flags
+ *  Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ *  The upper limit on bytes to copy.
+ *  Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ *  The ring on which captured packets will be enqueued for user.
+ * @param mp
+ *  The mempool on to which original packets will be mirrored or duplicated.
+ * @param filter
+ *  BPF program used to filter packets (can be NULL).
+ *
+ * @return
+ *    0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+				 uint32_t flags, uint32_t snaplen,
+				 struct rte_ring *ring,
+				 struct rte_mempool *mp,
+				 const struct rte_bpf_prm *filter);
+
+
 /**
  * Disables packet capturing on given device_id and queue.
  * device_id can be name or pci address of device.
@@ -153,6 +228,38 @@ int
 rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
 				uint32_t flags);
 
+
+/**
+ * A structure used to retrieve statistics from packet capture.
+ * The statistics are sum of both receive and transmit queues.
+ */
+struct rte_pdump_stats {
+	uint64_t accepted; /**< Number of packets accepted by filter. */
+	uint64_t filtered; /**< Number of packets rejected by filter. */
+	uint64_t nombuf;   /**< Number of mbuf allocation failures. */
+	uint64_t ringfull; /**< Number of missed packets due to ring full. */
+
+	uint64_t reserved[4]; /**< Reserved and pad to cache line */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Retrieve the packet capture statistics for a queue.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param stats
+ *   A pointer to structure of type *rte_pdump_stats* to be filled in.
+ * @return
+ *   Zero if successful. -1 on error and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_pdump_stats(uint16_t port_id, struct rte_pdump_stats *stats);
+
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/pdump/version.map b/lib/pdump/version.map
index f0a9d12c9a9e..ce5502d9cdf4 100644
--- a/lib/pdump/version.map
+++ b/lib/pdump/version.map
@@ -10,3 +10,11 @@ DPDK_22 {
 
 	local: *;
 };
+
+EXPERIMENTAL {
+	global:
+
+	rte_pdump_enable_bpf;
+	rte_pdump_enable_bpf_by_deviceid;
+	rte_pdump_stats;
+};
-- 
2.30.2
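
For reference, a minimal usage sketch of the new capture and stats APIs
(port number, ring/pool names and sizes are illustrative, not part of
the patch; needs <inttypes.h>, rte_ring.h, rte_mbuf.h and rte_pdump.h,
and omits error handling):

	/* capture all queues of port 0 in pcapng format, no BPF filter */
	struct rte_ring *ring = rte_ring_create("pdump_ring", 4096,
						rte_socket_id(), 0);
	struct rte_mempool *pool = rte_pktmbuf_pool_create("pdump_pool",
						8192, 256, 0,
						RTE_MBUF_DEFAULT_BUF_SIZE,
						rte_socket_id());
	struct rte_pdump_stats stats;

	if (rte_pdump_enable_bpf(0, UINT16_MAX,
				 RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG,
				 UINT32_MAX, ring, pool, NULL) == 0 &&
	    rte_pdump_stats(0, &stats) == 0)
		printf("accepted %" PRIu64 ", ring full drops %" PRIu64 "\n",
		       stats.accepted, stats.ringfull);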


^ permalink raw reply	[relevance 1%]

* Re: [dpdk-dev] [PATCH v2 1/5] hash: add new toeplitz hash implementation
  @ 2021-10-15 16:58  3%   ` Stephen Hemminger
  2021-10-18 10:40  3%     ` Ananyev, Konstantin
  2021-10-18 11:08  0%     ` Medvedkin, Vladimir
  0 siblings, 2 replies; 200+ results
From: Stephen Hemminger @ 2021-10-15 16:58 UTC (permalink / raw)
  To: Vladimir Medvedkin
  Cc: dev, yipeng1.wang, sameh.gobriel, bruce.richardson, konstantin.ananyev

On Fri, 15 Oct 2021 10:30:02 +0100
Vladimir Medvedkin <vladimir.medvedkin@intel.com> wrote:

> +			m[i * 8 + j] = (rss_key[i] << j)|
> +				(uint8_t)((uint16_t)(rss_key[i + 1]) >>
> +				(8 - j));
> +		}

This ends up being harder than necessary to read. Maybe split it into
multiple statements and/or use a temporary variable.
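
Something along these lines would be easier to follow (the variable name
is just an illustration):

	uint8_t carry;

	/* take the top (8 - j) bits of the next key byte */
	carry = (uint8_t)((uint16_t)rss_key[i + 1] >> (8 - j));
	m[i * 8 + j] = (rss_key[i] << j) | carry;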

> +RTE_INIT(rte_thash_gfni_init)
> +{
> +	rte_thash_gfni_supported = 0;

Not necessary; in C, globals are initialized to zero by default.

By removing that, the constructor can be placed entirely behind the #ifdef.
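
i.e. something like the sketch below, where CC_X86_64_GFNI_SUPPORT is
only an assumed name for whatever build flag guards the GFNI code in
this patch:

	#ifdef CC_X86_64_GFNI_SUPPORT
	RTE_INIT(rte_thash_gfni_init)
	{
		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_GFNI))
			rte_thash_gfni_supported = 1;
	}
	#endif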

> +__rte_internal
> +static inline __m512i
> +__rte_thash_gfni(const uint64_t *mtrx, const uint8_t *tuple,
> +	const uint8_t *secondary_tuple, int len)
> +{
> +	__m512i permute_idx = _mm512_set_epi8(7, 6, 5, 4, 7, 6, 5, 4,
> +						6, 5, 4, 3, 6, 5, 4, 3,
> +						5, 4, 3, 2, 5, 4, 3, 2,
> +						4, 3, 2, 1, 4, 3, 2, 1,
> +						3, 2, 1, 0, 3, 2, 1, 0,
> +						2, 1, 0, -1, 2, 1, 0, -1,
> +						1, 0, -1, -2, 1, 0, -1, -2,
> +						0, -1, -2, -3, 0, -1, -2, -3);

NAK

Please don't put the implementation in an inline function. This makes it
harder to support (API/ABI) and blocks other architectures from
implementing the same thing with different instructions.
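
I.e. keep only a declaration in the public header and move the GFNI code
into an x86-specific C file, so other architectures can provide the same
symbol. A sketch (signature illustrative, adapted from the quoted code):

	/* rte_thash_gfni.h */
	uint32_t
	rte_thash_gfni(const uint64_t *mtrx, const uint8_t *tuple, int len);

	/* the GFNI implementation then lives in an x86-only .c file,
	 * with a scalar fallback for other architectures.
	 */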

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v5 2/4] mempool: add non-IO flag
  @ 2021-10-15 16:02  3%     ` Dmitry Kozlyuk
    1 sibling, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-15 16:02 UTC (permalink / raw)
  To: dev
  Cc: Matan Azrad, Andrew Rybchenko, Maryam Tahhan, Reshma Pattan,
	Olivier Matz

Mempool is a generic allocator that is not necessarily used
for device IO operations, and its memory is not necessarily used for DMA.
Add the MEMPOOL_F_NON_IO flag to mark such mempools automatically:
a) if their objects are not contiguous;
b) if IOVA is not available for any object.
Other components can inspect this flag
in order to optimize their memory management.

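For example, a driver could skip DMA mapping for such pools; a sketch of
a possible consumer-side check, not part of this patch:

	/* e.g. in a device's mempool registration hook */
	if (mp->flags & MEMPOOL_F_NON_IO)
		return 0; /* no DMA mapping needed for this mempool */
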
Discussion: https://mails.dpdk.org/archives/dev/2021-August/216654.html

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 app/proc-info/main.c                   |   4 +-
 app/test/test_mempool.c                | 112 +++++++++++++++++++++++++
 doc/guides/rel_notes/release_21_11.rst |   3 +
 lib/mempool/rte_mempool.c              |  10 +++
 lib/mempool/rte_mempool.h              |   2 +
 5 files changed, 130 insertions(+), 1 deletion(-)

diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index a8e928fa9f..6054cb3d88 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -1296,6 +1296,7 @@ show_mempool(char *name)
 				"\t  -- SP put (%c), SC get (%c)\n"
 				"\t  -- Pool created (%c)\n"
-				"\t  -- No IOVA config (%c)\n",
+				"\t  -- No IOVA config (%c)\n"
+				"\t  -- Not used for IO (%c)\n",
 				ptr->name,
 				ptr->socket_id,
 				(flags & MEMPOOL_F_NO_SPREAD) ? 'y' : 'n',
@@ -1303,7 +1304,8 @@ show_mempool(char *name)
 				(flags & MEMPOOL_F_SP_PUT) ? 'y' : 'n',
 				(flags & MEMPOOL_F_SC_GET) ? 'y' : 'n',
 				(flags & MEMPOOL_F_POOL_CREATED) ? 'y' : 'n',
-				(flags & MEMPOOL_F_NO_IOVA_CONTIG) ? 'y' : 'n');
+				(flags & MEMPOOL_F_NO_IOVA_CONTIG) ? 'y' : 'n',
+				(flags & MEMPOOL_F_NON_IO) ? 'y' : 'n');
 			printf("  - Size %u Cache %u element %u\n"
 				"  - header %u trailer %u\n"
 				"  - private data size %u\n",
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index c39c83256e..caf9c46a29 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -12,6 +12,7 @@
 #include <sys/queue.h>
 
 #include <rte_common.h>
+#include <rte_eal_paging.h>
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_errno.h>
@@ -729,6 +730,109 @@ test_mempool_events_safety(void)
 #pragma pop_macro("RTE_TEST_TRACE_FAILURE")
 }
 
+#pragma push_macro("RTE_TEST_TRACE_FAILURE")
+#undef RTE_TEST_TRACE_FAILURE
+#define RTE_TEST_TRACE_FAILURE(...) do { \
+		ret = TEST_FAILED; \
+		goto exit; \
+	} while (0)
+
+static int
+test_mempool_flag_non_io_set_when_no_iova_contig_set(void)
+{
+	struct rte_mempool *mp = NULL;
+	int ret;
+
+	mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+				      MEMPOOL_ELT_SIZE, 0, 0,
+				      SOCKET_ID_ANY, MEMPOOL_F_NO_IOVA_CONTIG);
+	RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+				 rte_strerror(rte_errno));
+	rte_mempool_set_ops_byname(mp, rte_mbuf_best_mempool_ops(), NULL);
+	ret = rte_mempool_populate_default(mp);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(mp->flags & MEMPOOL_F_NON_IO,
+			"NON_IO flag is not set when NO_IOVA_CONTIG is set");
+	ret = TEST_SUCCESS;
+exit:
+	rte_mempool_free(mp);
+	return ret;
+}
+
+static int
+test_mempool_flag_non_io_unset_when_populated_with_valid_iova(void)
+{
+	const struct rte_memzone *mz;
+	void *virt;
+	rte_iova_t iova;
+	size_t page_size = RTE_PGSIZE_2M;
+	struct rte_mempool *mp;
+	int ret;
+
+	mz = rte_memzone_reserve("test_mempool", 3 * page_size, SOCKET_ID_ANY,
+				 RTE_MEMZONE_IOVA_CONTIG);
+	RTE_TEST_ASSERT_NOT_NULL(mz, "Cannot allocate memory");
+	virt = mz->addr;
+	iova = rte_mem_virt2iova(virt);
+	RTE_TEST_ASSERT_NOT_EQUAL(iova,  RTE_BAD_IOVA, "Cannot get IOVA");
+	mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+				      MEMPOOL_ELT_SIZE, 0, 0,
+				      SOCKET_ID_ANY, 0);
+	RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+				 rte_strerror(rte_errno));
+
+	ret = rte_mempool_populate_iova(mp, RTE_PTR_ADD(virt, 1 * page_size),
+					RTE_BAD_IOVA, page_size, NULL, NULL);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(mp->flags & MEMPOOL_F_NON_IO,
+			"NON_IO flag is not set when mempool is populated with only RTE_BAD_IOVA");
+
+	ret = rte_mempool_populate_iova(mp, virt, iova, page_size, NULL, NULL);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(!(mp->flags & MEMPOOL_F_NON_IO),
+			"NON_IO flag is not unset when mempool is populated with valid IOVA");
+
+	ret = rte_mempool_populate_iova(mp, RTE_PTR_ADD(virt, 2 * page_size),
+					RTE_BAD_IOVA, page_size, NULL, NULL);
+	RTE_TEST_ASSERT(ret > 0, "Failed to populate mempool: %s",
+			rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(!(mp->flags & MEMPOOL_F_NON_IO),
+			"NON_IO flag is set even when some objects have valid IOVA");
+	ret = TEST_SUCCESS;
+
+exit:
+	rte_mempool_free(mp);
+	rte_memzone_free(mz);
+	return ret;
+}
+
+static int
+test_mempool_flag_non_io_unset_by_default(void)
+{
+	struct rte_mempool *mp;
+	int ret;
+
+	mp = rte_mempool_create_empty("empty", MEMPOOL_SIZE,
+				      MEMPOOL_ELT_SIZE, 0, 0,
+				      SOCKET_ID_ANY, 0);
+	RTE_TEST_ASSERT_NOT_NULL(mp, "Cannot create mempool: %s",
+				 rte_strerror(rte_errno));
+	ret = rte_mempool_populate_default(mp);
+	RTE_TEST_ASSERT_EQUAL(ret, (int)mp->size, "Failed to populate mempool: %s",
+			      rte_strerror(rte_errno));
+	RTE_TEST_ASSERT(!(mp->flags & MEMPOOL_F_NON_IO),
+			"NON_IO flag is set by default");
+	ret = TEST_SUCCESS;
+exit:
+	rte_mempool_free(mp);
+	return ret;
+}
+
+#pragma pop_macro("RTE_TEST_TRACE_FAILURE")
+
 static int
 test_mempool(void)
 {
@@ -914,6 +1018,14 @@ test_mempool(void)
 	if (test_mempool_events_safety() < 0)
 		GOTO_ERR(ret, err);
 
+	/* test NON_IO flag inference */
+	if (test_mempool_flag_non_io_unset_by_default() < 0)
+		GOTO_ERR(ret, err);
+	if (test_mempool_flag_non_io_set_when_no_iova_contig_set() < 0)
+		GOTO_ERR(ret, err);
+	if (test_mempool_flag_non_io_unset_when_populated_with_valid_iova() < 0)
+		GOTO_ERR(ret, err);
+
 	rte_mempool_list_dump(stdout);
 
 	ret = 0;
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 4c56cdfeaa..39a8a3d950 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -229,6 +229,9 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* mempool: Added ``MEMPOOL_F_NON_IO`` flag to give a hint to DPDK components
+  that objects from this pool will not be used for device IO (e.g. DMA).
+
 
 ABI Changes
 -----------
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 8810d08ab5..7d7d97d85d 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -372,6 +372,10 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next);
 	mp->nb_mem_chunks++;
 
+	/* At least some objects in the pool can now be used for IO. */
+	if (iova != RTE_BAD_IOVA)
+		mp->flags &= ~MEMPOOL_F_NON_IO;
+
 	/* Report the mempool as ready only when fully populated. */
 	if (mp->populated_size >= mp->size)
 		mempool_event_callback_invoke(RTE_MEMPOOL_EVENT_READY, mp);
@@ -851,6 +855,12 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		return NULL;
 	}
 
+	/*
+	 * No objects in the pool can be used for IO until it's populated
+	 * with at least some objects with valid IOVA.
+	 */
+	flags |= MEMPOOL_F_NON_IO;
+
 	/* "no cache align" imply "no spread" */
 	if (flags & MEMPOOL_F_NO_CACHE_ALIGN)
 		flags |= MEMPOOL_F_NO_SPREAD;
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 3285626712..408d916a9c 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -257,6 +257,8 @@ struct rte_mempool {
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
+/** Internal: no object from the pool can be used for device IO (DMA). */
+#define MEMPOOL_F_NON_IO         0x0040
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v14 0/5] Add PIE support for HQoS library
  2021-10-15  8:16  3% ` [dpdk-dev] [PATCH v14 " Liguzinski, WojciechX
@ 2021-10-15 13:56  0%   ` Dumitrescu, Cristian
  2021-10-19  8:26  0%     ` Liguzinski, WojciechX
  2021-10-19  8:18  3%   ` [dpdk-dev] [PATCH v15 " Liguzinski, WojciechX
  1 sibling, 1 reply; 200+ results
From: Dumitrescu, Cristian @ 2021-10-15 13:56 UTC (permalink / raw)
  To: Liguzinski, WojciechX, dev, Singh, Jasvinder; +Cc: Ajmera, Megha



> -----Original Message-----
> From: Liguzinski, WojciechX <wojciechx.liguzinski@intel.com>
> Sent: Friday, October 15, 2021 9:16 AM
> To: dev@dpdk.org; Singh, Jasvinder <jasvinder.singh@intel.com>;
> Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
> Cc: Ajmera, Megha <megha.ajmera@intel.com>
> Subject: [PATCH v14 0/5] Add PIE support for HQoS library
> 
> DPDK sched library is equipped with mechanism that secures it from the
> bufferbloat problem
> which is a situation when excess buffers in the network cause high latency
> and latency
> variation. Currently, it supports RED for active queue management (which is
> designed
> to control the queue length but it does not control latency directly and is now
> being
> obsoleted). 

Please remove the statement that RED is obsolete, as it is not true. Please refer only to the benefits of the new algorithm, without any generic negative statements about other algorithms that are not supported by data. Thank you!

However, more advanced queue management is required to
> address this problem
> and provide desirable quality of service to users.
> 
> This solution (RFC) proposes usage of new algorithm called "PIE"
> (Proportional Integral
> controller Enhanced) that can effectively and directly control queuing latency
> to address
> the bufferbloat problem.
> 
> The implementation of mentioned functionality includes modification of
> existing and
> adding a new set of data structures to the library, adding PIE related APIs.
> This affects structures in public API/ABI. That is why deprecation notice is
> going
> to be prepared and sent.
> 
> Liguzinski, WojciechX (5):
>   sched: add PIE based congestion management
>   example/qos_sched: add PIE support
>   example/ip_pipeline: add PIE support
>   doc/guides/prog_guide: added PIE
>   app/test: add tests for PIE
> 
>  app/test/meson.build                         |    4 +
>  app/test/test_pie.c                          | 1065 ++++++++++++++++++
>  config/rte_config.h                          |    1 -
>  doc/guides/prog_guide/glossary.rst           |    3 +
>  doc/guides/prog_guide/qos_framework.rst      |   60 +-
>  doc/guides/prog_guide/traffic_management.rst |   13 +-
>  drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
>  examples/ip_pipeline/tmgr.c                  |  142 +--
>  examples/qos_sched/app_thread.c              |    1 -
>  examples/qos_sched/cfg_file.c                |  111 +-
>  examples/qos_sched/cfg_file.h                |    5 +
>  examples/qos_sched/init.c                    |   27 +-
>  examples/qos_sched/main.h                    |    3 +
>  examples/qos_sched/profile.cfg               |  196 ++--
>  lib/sched/meson.build                        |   10 +-
>  lib/sched/rte_pie.c                          |   86 ++
>  lib/sched/rte_pie.h                          |  398 +++++++
>  lib/sched/rte_sched.c                        |  240 ++--
>  lib/sched/rte_sched.h                        |   63 +-
>  lib/sched/version.map                        |    3 +
>  20 files changed, 2161 insertions(+), 276 deletions(-)
>  create mode 100644 app/test/test_pie.c
>  create mode 100644 lib/sched/rte_pie.c
>  create mode 100644 lib/sched/rte_pie.h
> 
> --
> 2.25.1


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3] test/hash: fix buffer overflow
  @ 2021-10-15 13:02  3%     ` Medvedkin, Vladimir
  2021-10-19  7:02  3%       ` David Marchand
  0 siblings, 1 reply; 200+ results
From: Medvedkin, Vladimir @ 2021-10-15 13:02 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, Wang, Yipeng1, Gobriel, Sameh, Bruce Richardson, dpdk stable

Hi David,

On 15/10/2021 11:33, David Marchand wrote:
> On Thu, Oct 14, 2021 at 7:55 PM Vladimir Medvedkin
> <vladimir.medvedkin@intel.com> wrote:
>> @@ -1607,6 +1611,17 @@ static struct rte_hash_parameters hash_params_ex = {
>>   };
>>
>>   /*
>> + * Wrapper function around rte_jhash_32b.
>> + * It is required because rte_jhash_32b() accepts the length
>> + * as size of 4-byte units.
>> + */
>> +static inline uint32_t
>> +test_jhash_32b(const void *k, uint32_t length, uint32_t initval)
>> +{
>> +       return rte_jhash_32b(k, length >> 2, initval);
>> +}
> 
> I am confused.
> Does it mean that rte_jhash_32b is not compliant with rte_hash_create API?
> 

I think so too, because despite the fact that the ABI is the same, the 
API remains different with respect to the length argument.
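
To illustrate the mismatch (a sketch):

	/* rte_hash_create() callbacks receive the key length in bytes... */
	uint32_t h1 = rte_jhash(key, 16, 0);        /* 16 = bytes */
	/* ...while rte_jhash_32b() expects it in 32-bit words */
	uint32_t h2 = rte_jhash_32b(key32, 4, 0);   /* 4 = words, i.e. 16 bytes */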

> 

-- 
Regards,
Vladimir

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] Re: [PATCH v2 1/1] devtools: add relative path support for ABI compatibility check
  @ 2021-10-15 10:02  4%     ` Feifei Wang
  0 siblings, 0 replies; 200+ results
From: Feifei Wang @ 2021-10-15 10:02 UTC (permalink / raw)
  To: Feifei Wang, Bruce Richardson, thomas, david.marchand
  Cc: dev, nd, Phil Yang, Juraj Linkeš, Ruifeng Wang, nd

Hi,

Sorry to disturb you. Are there any more comments on this patch, or can it be applied?
Thanks very much.

Best Regards
Feifei

> -----Original Message-----
> From: Feifei Wang <feifei.wang2@arm.com>
> Sent: Wednesday, August 11, 2021 2:17 PM
> To: Bruce Richardson <bruce.richardson@intel.com>
> Cc: dev@dpdk.org; nd <nd@arm.com>; Phil Yang <Phil.Yang@arm.com>;
> Feifei Wang <Feifei.Wang2@arm.com>; Juraj Linkeš
> <juraj.linkes@pantheon.tech>; Ruifeng Wang <Ruifeng.Wang@arm.com>
> Subject: [PATCH v2 1/1] devtools: add relative path support for ABI compatibility
> check
> 
> From: Phil Yang <phil.yang@arm.com>
> 
> Because the DPDK guide does not require an absolute path for the ABI
> compatibility check, users may set 'DPDK_ABI_REF_DIR' to a relative
> path:
> 
> ~/dpdk/devtools$ DPDK_ABI_REF_VERSION=v19.11
> DPDK_ABI_REF_DIR=build-gcc-shared ./test-meson-builds.sh
> 
> And if the DESTDIR is not an absolute path, ninja complains:
> + install_target build-gcc-shared/v19.11/build
> + build-gcc-shared/v19.11/build-gcc-shared
> + rm -rf build-gcc-shared/v19.11/build-gcc-shared
> + echo 'DESTDIR=build-gcc-shared/v19.11/build-gcc-shared ninja -C build-gcc-
> shared/v19.11/build install'
> + DESTDIR=build-gcc-shared/v19.11/build-gcc-shared
> + ninja -C build-gcc-shared/v19.11/build install
> ...
> ValueError: dst_dir must be absolute, got build-gcc-shared/v19.11/build-gcc-
> shared/usr/local/share/dpdk/
> examples/bbdev_app
> ...
> Error: install directory 'build-gcc-shared/v19.11/build-gcc-shared' does not
> exist.
> 
> To fix this, add relative path support using 'readlink -f'.
> 
> Signed-off-by: Phil Yang <phil.yang@arm.com>
> Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> Reviewed-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>  devtools/test-meson-builds.sh | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
> index 9ec8e2bc7e..8ddde95276 100755
> --- a/devtools/test-meson-builds.sh
> +++ b/devtools/test-meson-builds.sh
> @@ -168,7 +168,8 @@ build () # <directory> <target cc | cross file> <ABI
> check> [meson options]
>  	config $srcdir $builds_dir/$targetdir $cross --werror $*
>  	compile $builds_dir/$targetdir
>  	if [ -n "$DPDK_ABI_REF_VERSION" -a "$abicheck" = ABI ] ; then
> -		abirefdir=${DPDK_ABI_REF_DIR:-
> reference}/$DPDK_ABI_REF_VERSION
> +		abirefdir=$(readlink -f \
> +			${DPDK_ABI_REF_DIR:-
> reference}/$DPDK_ABI_REF_VERSION)
>  		if [ ! -d $abirefdir/$targetdir ]; then
>  			# clone current sources
>  			if [ ! -d $abirefdir/src ]; then
> --
> 2.25.1


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v4 2/4] mempool: add non-IO flag
  @ 2021-10-15  9:01  0%     ` Andrew Rybchenko
  0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-15  9:01 UTC (permalink / raw)
  To: Dmitry Kozlyuk, dev; +Cc: Matan Azrad, Olivier Matz

On 10/13/21 2:01 PM, Dmitry Kozlyuk wrote:
> Mempool is a generic allocator that is not necessarily used for device
> IO operations and its memory for DMA. Add MEMPOOL_F_NON_IO flag to mark
> such mempools automatically if their objects are not contiguous
> or IOVA are not available. Components can inspect this flag
> in order to optimize their memory management.
> Discussion: https://mails.dpdk.org/archives/dev/2021-August/216654.html
> 
> Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>

See review notes below. With review notes processed:

Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

[snip]

> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index f643a61f44..74e0e6f495 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -226,6 +226,9 @@ API Changes
>    the crypto/security operation. This field will be used to communicate
>    events such as soft expiry with IPsec in lookaside mode.
>  
> +* mempool: Added ``MEMPOOL_F_NON_IO`` flag to give a hint to DPDK components
> +  that objects from this pool will not be used for device IO (e.g. DMA).
> +
>  
>  ABI Changes
>  -----------
> diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
> index 51c0ba2931..2204f140b3 100644
> --- a/lib/mempool/rte_mempool.c
> +++ b/lib/mempool/rte_mempool.c
> @@ -371,6 +371,8 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>  
>  	STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next);
>  	mp->nb_mem_chunks++;
> +	if (iova == RTE_BAD_IOVA)
> +		mp->flags |= MEMPOOL_F_NON_IO;

As I understand it, rte_mempool_populate_iova() may be called
a few times for one mempool. The flag must be set only if all
invocations are done with RTE_BAD_IOVA. So, it should be
set by default and simply cleared when iova != RTE_BAD_IOVA
happens.

Yes, it is a corner case. Maybe it makes sense to
cover it with a unit test as well.
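
I.e. something like (a sketch of the suggestion):

	/* rte_mempool_create_empty(): assume non-IO until proven otherwise */
	flags |= MEMPOOL_F_NON_IO;

	/* rte_mempool_populate_iova(): any valid IOVA clears the flag */
	if (iova != RTE_BAD_IOVA)
		mp->flags &= ~MEMPOOL_F_NON_IO;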

>  
>  	/* Report the mempool as ready only when fully populated. */
>  	if (mp->populated_size >= mp->size)
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index 663123042f..029b62a650 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -262,6 +262,8 @@ struct rte_mempool {
>  #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
>  #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
>  #define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
> +#define MEMPOOL_F_NON_IO         0x0040
> +		/**< Internal: pool is not usable for device IO (DMA). */

Please, put the documentation before the define.
/** Internal: pool is not usable for device IO (DMA). */
#define MEMPOOL_F_NON_IO         0x0040

>  
>  /**
>   * @internal When debug is enabled, store some statistics.
> @@ -991,6 +993,9 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
>   *     "single-consumer". Otherwise, it is "multi-consumers".
>   *   - MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't
>   *     necessarily be contiguous in IO memory.
> + *   - MEMPOOL_F_NON_IO: If set, the mempool is considered to be
> + *     never used for device IO, i.e. for DMA operations.
> + *     It's a hint to other components and does not affect the mempool behavior.

I tend to say that it should not be here if the flag is
internal.

>   * @return
>   *   The pointer to the new allocated mempool, on success. NULL on error
>   *   with rte_errno set appropriately. Possible rte_errno values include:
> 


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v14 0/5] Add PIE support for HQoS library
  @ 2021-10-15  8:16  3% ` Liguzinski, WojciechX
  2021-10-15 13:56  0%   ` Dumitrescu, Cristian
  2021-10-19  8:18  3%   ` [dpdk-dev] [PATCH v15 " Liguzinski, WojciechX
  0 siblings, 2 replies; 200+ results
From: Liguzinski, WojciechX @ 2021-10-15  8:16 UTC (permalink / raw)
  To: dev, jasvinder.singh, cristian.dumitrescu; +Cc: megha.ajmera

The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat
problem, a situation in which excess buffering in the network causes high latency and
latency variation. Currently, it supports RED for active queue management (RED is designed
to control the queue length, but it does not control latency directly and is now being
obsoleted). However, more advanced queue management is required to address this problem
and provide the desired quality of service to users.

This solution (RFC) proposes usage of a new algorithm called "PIE" (Proportional Integral
controller Enhanced) that can effectively and directly control queuing latency to address
the bufferbloat problem.

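For reference, the core of PIE is a periodic update of the drop
probability from the current and previous estimates of queuing delay
(per RFC 8033; variable names illustrative):

	/* target = latency target, alpha/beta = tuning gains */
	p += alpha * (qdelay - target) + beta * (qdelay - qdelay_old);
	qdelay_old = qdelay;
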
The implementation of the mentioned functionality includes modifications of existing data
structures and the addition of a new set of data structures to the library, as well as new
PIE-related APIs. This affects structures in the public API/ABI. That is why a deprecation
notice is going to be prepared and sent.

Liguzinski, WojciechX (5):
  sched: add PIE based congestion management
  example/qos_sched: add PIE support
  example/ip_pipeline: add PIE support
  doc/guides/prog_guide: added PIE
  app/test: add tests for PIE

 app/test/meson.build                         |    4 +
 app/test/test_pie.c                          | 1065 ++++++++++++++++++
 config/rte_config.h                          |    1 -
 doc/guides/prog_guide/glossary.rst           |    3 +
 doc/guides/prog_guide/qos_framework.rst      |   60 +-
 doc/guides/prog_guide/traffic_management.rst |   13 +-
 drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
 examples/ip_pipeline/tmgr.c                  |  142 +--
 examples/qos_sched/app_thread.c              |    1 -
 examples/qos_sched/cfg_file.c                |  111 +-
 examples/qos_sched/cfg_file.h                |    5 +
 examples/qos_sched/init.c                    |   27 +-
 examples/qos_sched/main.h                    |    3 +
 examples/qos_sched/profile.cfg               |  196 ++--
 lib/sched/meson.build                        |   10 +-
 lib/sched/rte_pie.c                          |   86 ++
 lib/sched/rte_pie.h                          |  398 +++++++
 lib/sched/rte_sched.c                        |  240 ++--
 lib/sched/rte_sched.h                        |   63 +-
 lib/sched/version.map                        |    3 +
 20 files changed, 2161 insertions(+), 276 deletions(-)
 create mode 100644 app/test/test_pie.c
 create mode 100644 lib/sched/rte_pie.c
 create mode 100644 lib/sched/rte_pie.h

-- 
2.25.1


^ permalink raw reply	[relevance 3%]

Results 2801-3000 of ~18000
-- links below jump to the message on this page --
2020-04-28 23:58     [dpdk-dev] [PATCH v3 0/8] eal: cleanup resources on shutdown Stephen Hemminger
2021-11-13  0:28  3% ` [PATCH v4 0/5] cleanup more stuff " Stephen Hemminger
2021-11-13  3:32  3% ` [PATCH v5 0/5] cleanup DPDK resources via eal_cleanup Stephen Hemminger
2021-11-13 17:22  3% ` [PATCH v6 0/5] cleanup more resources on eal_cleanup Stephen Hemminger
2021-01-12  1:04     [dpdk-dev] [PATCH] eal/rwlock: add note about writer starvation Stephen Hemminger
2021-02-12  0:21     ` [dpdk-dev] [PATCH v2] " Honnappa Nagarahalli
2021-05-12 19:10       ` Thomas Monjalon
2021-11-08 10:18  0%     ` Thomas Monjalon
2021-03-10 23:24     [dpdk-dev] [PATCH] doc: propose correction rte_bsf64 return type declaration Tyler Retzlaff
2021-10-26  7:45     ` [dpdk-dev] [PATCH v2] doc: propose correction rte_{bsf, fls} inline functions type use Morten Brørup
2021-11-11  4:15       ` Tyler Retzlaff
2021-11-11 11:54  3%     ` Thomas Monjalon
2021-11-11 12:41  0%       ` Morten Brørup
2021-03-18  6:34     [dpdk-dev] [PATCH 1/6] baseband: introduce NXP LA12xx driver Hemant Agrawal
2021-10-17  6:53     ` [dpdk-dev] [PATCH v11 0/8] baseband: add " nipun.gupta
2021-10-17  6:53  4%   ` [dpdk-dev] [PATCH v11 1/8] bbdev: add device info related to data endianness nipun.gupta
2021-06-01  1:56     [dpdk-dev] [PATCH v1 0/2] relative path support for ABI compatibility check Feifei Wang
2021-08-11  6:17     ` [dpdk-dev] [PATCH v2 0/1] " Feifei Wang
2021-08-11  6:17       ` [dpdk-dev] [PATCH v2 1/1] devtools: add " Feifei Wang
2021-10-15 10:02  4%     ` [dpdk-dev] Re: " Feifei Wang
2021-06-23 17:31     [dpdk-dev] [PATCH] doc: note KNI alternatives and deprecation plan Ferruh Yigit
2021-11-23 12:08     ` [PATCH v2 1/2] doc: note KNI alternatives Ferruh Yigit
2021-11-23 12:08  5%   ` [PATCH v2 2/2] doc: announce KNI deprecation Ferruh Yigit
2021-11-24 17:16     ` [PATCH v3 1/2] doc: note KNI alternatives Ferruh Yigit
2021-11-24 17:16  5%   ` [PATCH v3 2/2] doc: announce KNI deprecation Ferruh Yigit
2021-08-03  8:26     [dpdk-dev] [RFC v2 1/3] eventdev: allow for event devices requiring maintenance Mattias Rönnblom
2021-10-26 17:31     ` [dpdk-dev] [PATCH " Mattias Rönnblom
2021-10-29 14:38       ` Jerin Jacob
2021-10-29 15:03         ` Mattias Rönnblom
2021-10-29 15:17           ` Jerin Jacob
2021-11-01  9:26  3%         ` Mattias Rönnblom
2021-08-26 14:57     [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
2021-10-18 19:37  4% ` [dpdk-dev] [PATCH v3 " Harman Kalra
2021-10-18 19:37       ` [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement get set APIs Harman Kalra
2021-10-18 22:56         ` Stephen Hemminger
2021-10-19  8:32           ` [dpdk-dev] [EXT] " Harman Kalra
2021-10-20 15:30  3%         ` Dmitry Kozlyuk
2021-10-21  9:16  0%           ` Harman Kalra
2021-10-21 12:33  0%             ` Dmitry Kozlyuk
2021-10-18 19:37  1%   ` [dpdk-dev] [PATCH v3 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
2021-10-19 18:35  4% ` [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal Harman Kalra
2021-10-19 18:35  1%   ` [dpdk-dev] [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
2021-10-19 21:27  4%     ` Dmitry Kozlyuk
2021-10-20  9:25  3%       ` [dpdk-dev] [EXT] " Harman Kalra
2021-10-22 20:49  4% ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
2021-10-24 20:04  4%   ` [dpdk-dev] [PATCH v6 0/9] " David Marchand
2021-10-25 13:04  0%   ` [dpdk-dev] [PATCH v5 0/6] " Raslan Darawsheh
2021-10-25 13:09  0%     ` David Marchand
2021-10-25 13:34  4%   ` [dpdk-dev] [PATCH v7 0/9] " David Marchand
2021-10-25 14:27  4%   ` [dpdk-dev] [PATCH v8 " David Marchand
2021-10-25 14:32  0%     ` Raslan Darawsheh
2021-10-25 19:24  0%     ` David Marchand
2021-08-29 12:51     [dpdk-dev] [PATCH 0/8] cryptodev: hide internal strutures Akhil Goyal
2021-10-11 12:43     ` [dpdk-dev] [PATCH v2 0/5] cryptodev: hide internal structures Akhil Goyal
2021-10-11 12:43       ` [dpdk-dev] [PATCH v2 3/5] cryptodev: move inline APIs into separate structure Akhil Goyal
2021-10-11 14:45         ` Zhang, Roy Fan
2021-10-18  7:02  0%       ` Akhil Goyal
2021-10-18 14:41       ` [dpdk-dev] [PATCH v3 0/7] cryptodev: hide internal structures Akhil Goyal
2021-10-18 14:41  2%     ` [dpdk-dev] [PATCH v3 3/7] cryptodev: move inline APIs into separate structure Akhil Goyal
2021-10-19 16:00  0%       ` Zhang, Roy Fan
2021-10-18 14:42  3%     ` [dpdk-dev] [PATCH v3 6/7] cryptodev: update fast path APIs to use new flat array Akhil Goyal
2021-10-19 12:28  0%       ` Ananyev, Konstantin
2021-10-20 11:27         ` [dpdk-dev] [PATCH v4 0/8] cryptodev: hide internal structures Akhil Goyal
2021-10-20 11:27  2%       ` [dpdk-dev] [PATCH v4 3/8] cryptodev: move inline APIs into separate structure Akhil Goyal
2021-10-20 11:27  3%       ` [dpdk-dev] [PATCH v4 7/8] cryptodev: update fast path APIs to use new flat array Akhil Goyal
2021-10-20 11:27  7%       ` [dpdk-dev] [PATCH v4 8/8] cryptodev: move device specific structures Akhil Goyal
2021-08-30 17:19     [dpdk-dev] [PATCH v3] ethdev: add namespace Ferruh Yigit
2021-10-18 15:43  1% ` [dpdk-dev] [PATCH v4] " Ferruh Yigit
2021-10-20 19:23  1%   ` [dpdk-dev] [PATCH v5] " Ferruh Yigit
2021-10-22  2:02  1%     ` [dpdk-dev] [PATCH v6] " Ferruh Yigit
2021-10-22 11:03  1%       ` [dpdk-dev] [PATCH v7] " Ferruh Yigit
2021-09-01  5:30     [dpdk-dev] [PATCH 0/2] *** support IOMMU for DMA device *** Xuan Ding
2021-10-11  7:59     ` [dpdk-dev] [PATCH v7 0/2] Support IOMMU for DMA device Xuan Ding
2021-10-21 12:33  0%   ` Maxime Coquelin
2021-09-03  0:47     [dpdk-dev] [PATCH 0/5] Packet capture framework enhancements Stephen Hemminger
2021-10-15 18:28     ` [dpdk-dev] [PATCH v13 00/12] Packet capture framework update Stephen Hemminger
2021-10-15 18:28  1%   ` [dpdk-dev] [PATCH v13 06/12] pdump: support pcapng and filtering Stephen Hemminger
2021-10-15 18:29  1%   ` [dpdk-dev] [PATCH v13 11/12] doc: changes for new pcapng and dumpcap utility Stephen Hemminger
2021-10-15 20:11     ` [dpdk-dev] [PATCH v14 00/12] Packet capture framework update Stephen Hemminger
2021-10-15 20:11  1%   ` [dpdk-dev] [PATCH v14 06/12] pdump: support pcapng and filtering Stephen Hemminger
2021-10-15 20:11  1%   ` [dpdk-dev] [PATCH v14 11/12] doc: changes for new pcapng and dumpcap utility Stephen Hemminger
2021-10-20 21:42     ` [dpdk-dev] [PATCH v15 00/12] Packet capture framework update Stephen Hemminger
2021-10-20 21:42  1%   ` [dpdk-dev] [PATCH v15 06/12] pdump: support pcapng and filtering Stephen Hemminger
2021-10-21 14:16  0%     ` Kinsella, Ray
2021-10-27  6:34  0%     ` Wang, Yinan
2021-10-20 21:42  1%   ` [dpdk-dev] [PATCH v15 11/12] doc: changes for new pcapng and dumpcap utility Stephen Hemminger
2021-09-06 16:55     [dpdk-dev] [RFC PATCH v2] raw/ptdma: introduce ptdma driver Selwin Sebastian
2021-09-06 17:17     ` David Marchand
2021-10-27 14:59  0%   ` Thomas Monjalon
2021-10-28 14:54  0%     ` Sebastian, Selwin
2021-09-09 16:40     [dpdk-dev] [PATCH] port: eventdev port api promoted Rahul Shah
2021-09-10  7:36     ` David Marchand
2021-09-10 13:40       ` Kinsella, Ray
2021-10-13 12:12         ` Thomas Monjalon
2021-10-20  9:55  3%       ` Kinsella, Ray
2021-09-09 17:56     [dpdk-dev] [PATCH 00/18] comment spelling errors Stephen Hemminger
2021-11-12  0:02     ` [PATCH v4 00/18] fix docbook and " Stephen Hemminger
2021-11-12  0:02  4%   ` [PATCH v4 08/18] eal: fix typos in comments Stephen Hemminger
2021-11-12 15:22  0%     ` Kinsella, Ray
2021-09-10  2:23     [dpdk-dev] [PATCH 0/8] Removal of PCI bus ABIs Chenbo Xia
2021-10-14  7:07     ` [dpdk-dev] [PATCH v2 0/7] " Thomas Monjalon
2021-10-14  8:07       ` Xia, Chenbo
2021-10-14  8:25         ` Thomas Monjalon
2021-10-27 12:03  4%       ` Xia, Chenbo
2021-09-29 21:48     [dpdk-dev] [PATCH 0/3] mbuf: offload flags namespace Olivier Matz
2021-10-15 19:24     ` [dpdk-dev] [PATCH v2 0/4] " Olivier Matz
2021-10-15 19:24  1%   ` [dpdk-dev] [PATCH v2 4/4] mbuf: add rte prefix to offload flags Olivier Matz
2021-10-04 13:29     [dpdk-dev] [PATCH v2] ci: update machine meson option to platform Juraj Linkeš
2021-10-11 13:40     ` [dpdk-dev] [PATCH v3] " Juraj Linkeš
2021-10-14 12:26       ` Aaron Conole
2021-10-25 15:42  0%     ` Thomas Monjalon
2021-10-05 20:15     [dpdk-dev] [PATCH v4 0/2] cmdline: reduce ABI Dmitry Kozlyuk
2021-10-07 22:10     ` [dpdk-dev] [PATCH v5 " Dmitry Kozlyuk
2021-10-22 21:24  4%   ` Thomas Monjalon
2021-10-06  6:49     [dpdk-dev] [PATCH v3 01/14] eventdev: make driver interface as internal pbhagavatula
2021-10-15 19:02     ` [dpdk-dev] [PATCH v4 " pbhagavatula
2021-10-15 19:02  2%   ` [dpdk-dev] [PATCH v4 04/14] eventdev: move inline APIs into separate structure pbhagavatula
2021-10-15 19:02  9%   ` [dpdk-dev] [PATCH v4 14/14] eventdev: mark trace variables as internal pbhagavatula
2021-10-17  5:58  0%     ` Jerin Jacob
2021-10-18 15:06  0%       ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
2021-10-19  7:01  3%         ` David Marchand
2021-10-18 23:35       ` [dpdk-dev] [PATCH v5 01/14] eventdev: make driver interface " pbhagavatula
2021-10-18 23:35  6%     ` [dpdk-dev] [PATCH v5 04/14] eventdev: move inline APIs into separate structure pbhagavatula
2021-10-18 23:36  5%     ` [dpdk-dev] [PATCH v5 10/14] eventdev: rearrange fields in timer object pbhagavatula
2021-10-18 23:36  4%     ` [dpdk-dev] [PATCH v5 11/14] eventdev: move timer adapters memory to hugepage pbhagavatula
2021-10-20 20:24  0%       ` Carrillo, Erik G
2021-10-18 23:36  4%     ` [dpdk-dev] [PATCH v5 12/14] eventdev: promote event vector API to stable pbhagavatula
2021-10-08 20:45     [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END enumerators Akhil Goyal
2021-10-18  5:22     ` [dpdk-dev] [PATCH v3 1/2] security: hide internal API Akhil Goyal
2021-10-18  5:22  3%   ` [dpdk-dev] [PATCH v3 2/2] security: add reserved bitfields Akhil Goyal
2021-10-18 15:39  0%     ` Akhil Goyal
2021-10-08 21:28     [dpdk-dev] [PATCH] lpm: fix buffer overflow Vladimir Medvedkin
2021-10-20 19:55  3% ` David Marchand
2021-10-21 17:15  0%   ` Medvedkin, Vladimir
2021-10-08 22:40     [dpdk-dev] [PATCH v15 0/9] eal: Add EAL API for threading Narcisa Ana Maria Vasile
2021-10-09  7:41     ` [dpdk-dev] [PATCH v16 " Narcisa Ana Maria Vasile
2021-10-09  7:41       ` [dpdk-dev] [PATCH v16 8/9] eal: implement functions for thread barrier management Narcisa Ana Maria Vasile
2021-10-12 16:32         ` Thomas Monjalon
2021-11-09  2:07  3%       ` Narcisa Ana Maria Vasile
2021-11-10  3:13  0%         ` Narcisa Ana Maria Vasile
2021-11-10  3:01  3%   ` [dpdk-dev] [PATCH v17 00/13] eal: Add EAL API for threading Narcisa Ana Maria Vasile
2021-11-11  1:33  3%     ` [PATCH v18 0/8] " Narcisa Ana Maria Vasile
2021-10-12  0:04     [dpdk-dev] [PATCH v3 0/4] net/mlx5: implicit mempool registration Dmitry Kozlyuk
2021-10-13 11:01     ` [dpdk-dev] [PATCH v4 " Dmitry Kozlyuk
2021-10-13 11:01       ` [dpdk-dev] [PATCH v4 2/4] mempool: add non-IO flag Dmitry Kozlyuk
2021-10-15  9:01  0%     ` Andrew Rybchenko
2021-10-15 16:02       ` [dpdk-dev] [PATCH v5 0/4] net/mlx5: implicit mempool registration Dmitry Kozlyuk
2021-10-15 16:02  3%     ` [dpdk-dev] [PATCH v5 2/4] mempool: add non-IO flag Dmitry Kozlyuk
2021-10-16 20:00         ` [dpdk-dev] [PATCH v6 0/4] net/mlx5: implicit mempool registration Dmitry Kozlyuk
2021-10-16 20:00  3%       ` [dpdk-dev] [PATCH v6 2/4] mempool: add non-IO flag Dmitry Kozlyuk
2021-10-18 10:01           ` [dpdk-dev] [PATCH v7 0/4] net/mlx5: implicit mempool registration Dmitry Kozlyuk
2021-10-18 10:01  3%         ` [dpdk-dev] [PATCH v7 2/4] mempool: add non-IO flag Dmitry Kozlyuk
2021-10-18 14:40             ` [dpdk-dev] [PATCH v8 0/4] net/mlx5: implicit mempool registration Dmitry Kozlyuk
2021-10-18 14:40  3%           ` [dpdk-dev] [PATCH v8 2/4] mempool: add non-IO flag Dmitry Kozlyuk
2021-10-18 22:43               ` [dpdk-dev] [PATCH v9 0/4] net/mlx5: implicit mempool registration Dmitry Kozlyuk
2021-10-18 22:43  3%             ` [dpdk-dev] [PATCH v9 2/4] mempool: add non-IO flag Dmitry Kozlyuk
2021-10-13  1:52     [dpdk-dev] [PATCH v4 2/2] app/test: delete cmdline free function zhihongx.peng
2021-10-18 13:58  4% ` [dpdk-dev] [PATCH v5] lib/cmdline: release cl when cmdline exit zhihongx.peng
2021-10-20  9:22  0%   ` Peng, ZhihongX
2021-10-13 19:00     [dpdk-dev] [PATCH v4 00/15] crypto: add raw vector support in DPAAx Hemant Agrawal
2021-10-17 16:16     ` [dpdk-dev] [PATCH v5 " Hemant Agrawal
2021-10-17 16:16  4%   ` [dpdk-dev] [PATCH v5 02/15] crypto: add total raw buffer length Hemant Agrawal
2021-10-17 16:16  4%   ` [dpdk-dev] [PATCH v5 03/15] crypto: add dest_sgl in raw vector APIs Hemant Agrawal
2021-10-13 19:22     [dpdk-dev] [PATCH v2 1/7] security: rework session framework Akhil Goyal
2021-10-18 21:34     ` [dpdk-dev] [PATCH v3 0/8] crypto/security session framework rework Akhil Goyal
2021-10-18 21:34  1%   ` [dpdk-dev] [PATCH v3 1/8] security: rework session framework Akhil Goyal
2021-10-18 21:34  1%   ` [dpdk-dev] [PATCH v3 6/8] cryptodev: " Akhil Goyal
2021-10-20 19:27  0%     ` Ananyev, Konstantin
2021-10-21  6:53  0%       ` Akhil Goyal
2021-10-21 10:38  0%         ` Ananyev, Konstantin
2021-10-20 15:45       ` [dpdk-dev] [PATCH v3 0/8] crypto/security session framework rework Power, Ciara
2021-10-20 16:41  3%     ` Akhil Goyal
2021-10-20 16:48  0%       ` Akhil Goyal
2021-10-20 18:04  0%         ` Akhil Goyal
2021-10-21  8:43  0%           ` Zhang, Roy Fan
2021-10-13 19:27     [dpdk-dev] [PATCH v2] test/hash: fix buffer overflow Vladimir Medvedkin
2021-10-14 17:48     ` [dpdk-dev] [PATCH v3] " Vladimir Medvedkin
2021-10-15  9:33       ` David Marchand
2021-10-15 13:02  3%     ` Medvedkin, Vladimir
2021-10-19  7:02  3%       ` David Marchand
2021-10-19 15:57  0%         ` Medvedkin, Vladimir
2021-10-14 15:33     [dpdk-dev] [PATCH v13 0/5] Add PIE support for HQoS library Liguzinski, WojciechX
2021-10-15  8:16  3% ` [dpdk-dev] [PATCH v14 " Liguzinski, WojciechX
2021-10-15 13:56  0%   ` Dumitrescu, Cristian
2021-10-19  8:26  0%     ` Liguzinski, WojciechX
2021-10-19  8:18  3%   ` [dpdk-dev] [PATCH v15 " Liguzinski, WojciechX
2021-10-19 12:18  0%     ` Dumitrescu, Cristian
2021-10-19 12:45  3%     ` [dpdk-dev] [PATCH v16 " Liguzinski, WojciechX
2021-10-20  7:49  3%       ` [dpdk-dev] [PATCH v17 " Liguzinski, WojciechX
2021-10-25 11:32  3%         ` [dpdk-dev] [PATCH v18 " Liguzinski, WojciechX
2021-10-26  8:24  3%           ` Liu, Yu Y
2021-10-26  8:33  0%             ` Thomas Monjalon
2021-10-26 10:02  0%               ` Dumitrescu, Cristian
2021-10-28 10:17  3%           ` [dpdk-dev] [PATCH v19 " Liguzinski, WojciechX
2021-11-02 23:57  3%             ` [dpdk-dev] [PATCH v20 " Liguzinski, WojciechX
2021-11-03 17:52  0%               ` Thomas Monjalon
2021-11-04  8:29  0%                 ` Liguzinski, WojciechX
2021-11-04 10:40  3%               ` [dpdk-dev] [PATCH v21 0/3] " Liguzinski, WojciechX
2021-11-04 10:49  3%                 ` [dpdk-dev] [PATCH v22 " Liguzinski, WojciechX
2021-11-04 11:03  3%                   ` [dpdk-dev] [PATCH v23 " Liguzinski, WojciechX
2021-11-04 14:55  3%                   ` [dpdk-dev] [PATCH v24 " Thomas Monjalon
2021-10-14 20:55     [dpdk-dev] [PATCH] ring: fix size of name array in ring structure Honnappa Nagarahalli
2021-10-18 14:54  0% ` Honnappa Nagarahalli
2021-10-19  7:01  0%   ` Tu, Lijuan
2021-10-20 23:06  0% ` Ananyev, Konstantin
2021-10-21  7:35  0%   ` David Marchand
2021-10-15  9:30     [dpdk-dev] [PATCH v2 0/5] optimized Toeplitz hash implementation Vladimir Medvedkin
2021-10-15  9:30     ` [dpdk-dev] [PATCH v2 1/5] hash: add new toeplitz " Vladimir Medvedkin
2021-10-15 16:58  3%   ` Stephen Hemminger
2021-10-18 10:40  3%     ` Ananyev, Konstantin
2021-10-19  1:15  0%       ` Stephen Hemminger
2021-10-19 15:42  0%         ` Medvedkin, Vladimir
2021-10-18 11:08  0%     ` Medvedkin, Vladimir
2021-10-18 10:26  3% [dpdk-dev] Minutes of Technical Board Meeting, 2021-Oct-06 Kevin Traynor
2021-10-18 14:49     [dpdk-dev] [PATCH 0/6] mempool: cleanup namespace Andrew Rybchenko
2021-10-18 14:49     ` [dpdk-dev] [PATCH 5/6] mempool: add namespace to driver register macro Andrew Rybchenko
2021-10-19  8:49       ` David Marchand
2021-10-19  9:04  3%     ` Andrew Rybchenko
2021-10-19  9:23  0%       ` Andrew Rybchenko
2021-10-19  9:27  0%       ` David Marchand
2021-10-19  9:38  0%         ` Andrew Rybchenko
2021-10-19  9:42  0%         ` Thomas Monjalon
     [not found]     <0211007112750.25526-1-konstantin.ananyev@intel.com>
2021-10-13 13:36     ` [dpdk-dev] [PATCH v6 0/6] hide eth dev related structures Konstantin Ananyev
2021-10-13 20:16       ` Ferruh Yigit
2021-10-18 16:04  0%     ` Ali Alnubani
2021-10-18 16:47  0%       ` Ferruh Yigit
2021-10-18 23:47  0%         ` Ajit Khaparde
2021-10-19 18:14     [dpdk-dev] [RFC PATCH 0/1] Dataplane Workload Accelerator library jerinj
2021-10-25  7:35     ` Mattias Rönnblom
2021-10-25  9:03       ` Jerin Jacob
2021-10-29 11:57         ` Mattias Rönnblom
2021-10-29 15:51  2%       ` Jerin Jacob
2021-10-31  9:18  4%         ` Mattias Rönnblom
2021-10-31 14:01  4%           ` Jerin Jacob
2021-10-31 19:34  0%             ` Thomas Monjalon
2021-10-31 21:13  2%               ` Jerin Jacob
2021-10-31 21:55  0%                 ` Thomas Monjalon
2021-10-31 22:19  0%                   ` Jerin Jacob
2021-10-25 21:40  4% [dpdk-dev] [dpdk-announce] release candidate 21.11-rc1 Thomas Monjalon
2021-10-28  7:10  0% ` Jiang, YuX
2021-11-01 11:53  0%   ` Jiang, YuX
2021-11-05 21:51  0% ` Thinh Tran
2021-11-08 10:50  0% ` Pei Zhang
2021-10-26 15:56     [dpdk-dev] [PATCH v3 1/3] config/x86: add support for AMD platform Aman Kumar
2021-10-27  7:28     ` [dpdk-dev] [PATCH v4 1/2] " Aman Kumar
2021-10-27  7:28       ` [dpdk-dev] [PATCH v4 2/2] lib/eal: add temporal store memcpy " Aman Kumar
2021-10-27  8:13         ` Thomas Monjalon
2021-10-27 11:03  3%       ` Van Haaren, Harry
2021-10-27 11:41  0%         ` Mattias Rönnblom
2021-10-27 12:15               ` Van Haaren, Harry
2021-10-27 12:22                 ` Ananyev, Konstantin
2021-10-27 13:34                   ` Aman Kumar
2021-10-27 14:10  2%                 ` Van Haaren, Harry
2021-10-27 14:31  0%                   ` Thomas Monjalon
2021-10-29 16:01  0%                     ` Song, Keesang
2021-10-27 17:43  2% [dpdk-dev] [Bug 842] [dpdk-21.11 rc1] FIPS tests are failing bugzilla
2021-10-28  8:35  3% [dpdk-dev] [PATCH] ethdev: promote device removal check function as stable Thomas Monjalon
2021-10-28  8:38  0% ` Kinsella, Ray
2021-10-28  8:56  0%   ` Andrew Rybchenko
2021-11-04 10:45  0%     ` Ferruh Yigit
2021-10-28 14:15     [dpdk-dev] [PATCH v2] vhost: mark vDPA driver API as internal Maxime Coquelin
2021-10-29 16:15  3% ` [dpdk-dev] [dpdk-techboard] " Thomas Monjalon
2021-10-28 21:01  4% [dpdk-dev] Windows community call: MoM 2021-10-27 Dmitry Kozlyuk
2021-10-29 13:48     [dpdk-dev] Overriding rte_config.h Ben Magistro
2021-11-01 15:03     ` Bruce Richardson
2021-11-02 11:20       ` Ananyev, Konstantin
2021-11-02 12:07         ` Bruce Richardson
2021-11-02 12:24  3%       ` Ananyev, Konstantin
2021-11-02 14:19  3%         ` Bruce Richardson
2021-11-02 15:00  0%           ` Ananyev, Konstantin
2021-11-03 14:38  0%             ` Ben Magistro
2021-11-04 11:03  0%               ` Ananyev, Konstantin
2021-11-02  9:56  4% [dpdk-dev] [PATCH v3] vhost: mark vDPA driver API as internal Maxime Coquelin
2021-11-02 10:47  4% [dpdk-dev] [PATCH] vhost: rename driver callbacks struct Maxime Coquelin
2021-11-03  8:16  0% ` Xia, Chenbo
2021-11-02 19:03 14% [dpdk-dev] [PATCH] ip_frag: increase default value for config parameter Konstantin Ananyev
2021-11-08 22:08  0% ` Thomas Monjalon
2021-11-03  5:00     [dpdk-dev] [PATCH] doc: remove deprecation notice for vhost Chenbo Xia
2021-11-03  5:25  3% ` Xia, Chenbo
2021-11-03  7:03  0%   ` David Marchand
2021-11-03 17:50  5% [dpdk-dev] [PATCH] doc: remove deprecation notice for interrupt Harman Kalra
2021-11-04 19:54  4% [dpdk-dev] Minutes of Technical Board Meeting, 2021-Nov-03 Maxime Coquelin
2021-11-08 11:51     [dpdk-dev] [PATCH v3] ip_frag: hide internal structures Konstantin Ananyev
2021-11-08 13:55     ` [dpdk-dev] [PATCH v4 0/2] ip_frag cleanup patches Konstantin Ananyev
2021-11-08 13:55  3%   ` [dpdk-dev] [PATCH v4 2/2] ip_frag: add namespace Konstantin Ananyev
2021-11-09 12:32  3%     ` [dpdk-dev] [PATCH v5] " Konstantin Ananyev
2021-11-10 16:48     [PATCH 0/5] Extend optional libraries list David Marchand
2021-11-10 16:48  4% ` [PATCH 1/5] ci: test build with minimum configuration David Marchand
2021-11-16  0:24  4% ethdev: hide internal structures Tyler Retzlaff
2021-11-16  9:32  0% ` Ferruh Yigit
2021-11-16 17:54  4%   ` Tyler Retzlaff
2021-11-16 20:07  4%     ` Ferruh Yigit
2021-11-16 20:44  0%       ` Tyler Retzlaff
2021-11-16 10:32  3% ` Ananyev, Konstantin
2021-11-16 19:10  0%   ` Tyler Retzlaff
2021-11-16 21:25  0%     ` Stephen Hemminger
2021-11-16 22:58  3%       ` Tyler Retzlaff
2021-11-16 23:22  0%         ` Stephen Hemminger
2021-11-17 22:05  0%           ` Tyler Retzlaff
2021-11-18 14:46     [PATCH v1 0/3] Fix typo's and capitalise PMD Sean Morrissey
2021-11-18 14:46  1% ` [PATCH v1 1/3] fix PMD wording typo Sean Morrissey
2021-11-22 10:50     ` [PATCH v2 0/3] Fix typo's and capitalise PMD Sean Morrissey
2021-11-22 10:50  1%   ` [PATCH v2 1/3] fix PMD wording typo Sean Morrissey
2021-11-18 19:28     [PATCH v1] gpudev: return EINVAL if invalid input pointer for free and unregister eagostini
2021-11-18 20:19     ` Tyler Retzlaff
2021-11-19  9:34       ` Ferruh Yigit
2021-11-19  9:56         ` Thomas Monjalon
2021-11-24 17:24  3%       ` Tyler Retzlaff
2021-11-24 18:04  0%         ` Bruce Richardson
2021-11-22 17:00 12% [PATCH v1] doc: update release notes for 21.11 John McNamara
2021-11-22 17:05  0% ` Ajit Khaparde
2021-11-23  7:59     [PATCH] ethdev: deprecate header fields and metadata flow actions Viacheslav Ovsiienko
2021-11-24 15:37     ` [PATCH v3] " Viacheslav Ovsiienko
2021-11-25 12:31  4%   ` Ferruh Yigit
2021-11-25 12:50  0%     ` Thomas Monjalon
2021-11-24 13:00  4% Minutes of Technical Board Meeting, 2021-11-17 Olivier Matz
2021-11-26 20:34  4% DPDK 21.11 released! David Marchand
2021-11-29 13:16 11% [PATCH] version: 22.03-rc0 David Marchand
2021-11-30 15:35  0% ` Thomas Monjalon
2021-11-30 19:51  3%   ` David Marchand

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).